Recently bookmarked papers, with concept tags:

  • Coupled-wire constructions use bosonization to analytically tackle the strong interactions underlying fractional topological states of matter. We give an introduction to this technique, discuss its strengths and weaknesses in comparison to other approaches, and provide an overview of the main achievements of coupled-wire constructions.
    Weyl node · Fractional quantum Hall state · Quantum Hall states · Laughlin wavefunction · Chirality · Multidimensional Array · Hamiltonian · Luttinger liquid · Quantum wire · Quantum Hall Effect...
  • We examine an extension of the Standard Model that addresses the dark matter puzzle and generates Dirac neutrino masses through the radiative seesaw mechanism. The new field content includes a scalar field that plays an important role in setting the relic abundance of dark matter. We analyze the phenomenology in the light of direct, indirect, and collider searches of dark matter. In this framework, the dark matter candidate is a Dirac particle that is a mixture of new singlet-doublet fields with mass $m_{\chi_1^0}\lesssim 1.1\,\text{TeV}$. We find that the allowed parameter space of this model is broader than that of the well-known Majorana dark matter scenario.
    Dark matter · Neutrino mass · Relic abundance · Neutrino · Standard Model · Spin-independent cross section · Dirac neutrino · Dark matter particle · Annihilation cross section · Spin independent...
  • Generative adversarial networks have been very successful in generative modeling; however, they remain relatively hard to optimize compared to standard deep neural networks. In this paper, we try to gain insight into the optimization of GANs by looking at the game vector field resulting from the concatenation of the gradients of both players. Based on this point of view, we propose visualization techniques that allow us to make the following empirical observations. First, the training of GANs suffers from rotational behavior around locally stable stationary points, which, as we show, corresponds to the presence of imaginary components in the eigenvalues of the Jacobian of the game. Second, GAN training seems to converge to a stable stationary point which is a saddle point for the generator loss, not a minimum, while still achieving excellent performance. This counter-intuitive yet persistent observation questions whether we actually need a Nash equilibrium to get good performance in GANs.
    Generative Adversarial Net · Dwarf nova · Saddle point · Optimization landscape · Optimization · Deep Neural Networks · Nash equilibrium · Neural network · Bumping · Generative model...
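
The rotational training dynamics described above can be reproduced on a toy bilinear zero-sum game. A minimal numpy sketch (not the paper's code; the objective f(x, y) = xy and the step size are illustrative choices) showing the purely imaginary eigenvalues of the game Jacobian and the resulting spiral of simultaneous gradient steps:

```python
import numpy as np

# Toy zero-sum game: player x minimizes f(x, y) = x*y, player y maximizes it.
# The "game vector field" concatenates both players' gradients:
#   v(x, y) = ( df/dx, -df/dy ) = ( y, -x )
def game_vector_field(x, y):
    return np.array([y, -x])

# The Jacobian of v is constant here; its eigenvalues are purely imaginary,
# the signature of rotation around the stationary point (0, 0).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.linalg.eigvals(J))        # [0.+1.j, 0.-1.j]

# Simultaneous gradient steps rotate around (0, 0) and slowly drift outward:
z = np.array([1.0, 0.0])
for _ in range(1000):
    z = z - 0.01 * game_vector_field(*z)
print(np.linalg.norm(z))           # slightly > 1: rotation plus outward drift
```
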
  • We combine and extend the analyses of effective scalar, vector, Majorana and Dirac fermion Higgs portal models of dark matter (DM), in which DM couples to the Standard Model (SM) Higgs boson via an operator of the form $\mathcal{O}_{\textrm{DM}}\, H^\dagger H$. For the fermion models, we take an admixture of scalar $\overline{\psi} \psi$ and pseudoscalar $\overline{\psi} i\gamma_5 \psi$ interaction terms. For each model, we apply constraints on the parameter space based on the Planck measured DM relic density and the LHC limits on the Higgs invisible branching ratio. For the first time, we perform a consistent study of the indirect detection prospects for these models based on the WMAP7/Planck observations of the cosmic microwave background, a combined analysis of 15 dwarf spheroidal galaxies by Fermi-LAT and the upcoming Cherenkov Telescope Array (CTA). We also perform a correct treatment of the momentum-dependent direct search cross section that arises from the pseudoscalar interaction term in the fermionic DM theories. We find, in line with previous studies, that current and future direct search experiments such as LUX and XENON1T can exclude much of the parameter space, and we demonstrate that a joint observation in both indirect and direct searches is possible for high mass weakly interacting massive particles. In the case of a pure pseudoscalar interaction of a fermionic DM candidate, future gamma-ray searches are the only class of experiment capable of probing the high mass range of the theory.
    Dark matter · Higgs boson · Pseudoscalar · Dark matter particle mass · Cherenkov Telescope Array · Effective field theory · Standard Model · LUX experiment · Dark matter annihilation · XENON1T...
  • Knot theory provides a powerful tool for the understanding of topological matter in biology, chemistry, and physics. Here knot theory is introduced to describe topological phases in a quantum spin system. Exactly solvable models with long-range interactions are investigated, and Majorana modes of the quantum spin system are mapped into different knots and links. The topological properties of ground states of the spin system are visualized and characterized using crossing and linking numbers, which capture the geometric topologies of knots and links. The interplay of energy bands is highlighted. In gapped phases, eigenstate curves are tangled and braided around each other, forming links. In gapless phases, the tangled eigenstate curves may form knots. Our findings provide an alternative understanding of the phases in the quantum spin system, and provide insights into one-dimensional topological phases of matter.
    Linking number · Topological order · Torus · Graph · Winding number · Knot theory · Crossing number · Hamiltonian · Unknot · Trefoil knot...
  • Cramer's Transactional Interpretation (TI) is applied to the "Quantum Liar Experiment" (QLE). It is shown how some apparently paradoxical features can be explained naturally, albeit nonlocally (since TI is an explicitly nonlocal interpretation, at least from the vantage point of ordinary spacetime). At the same time, it is proposed that in order to preserve the elegance and economy of the interpretation, it may be necessary to consider offer and confirmation waves as propagating in a "higher space" of possibilities.
    Transactional interpretation · Absorptivity · Paradoxism · Absorbance · Excited state · Time-reversal symmetry · Entanglement · Interference · Interaction-free measurement · Real space...
  • A modified version of Young's experiment by Shahriar Afshar indirectly reveals the presence of a fully articulated interference pattern prior to the post-selection of a particle in a "which-slit" basis. While this experiment does not constitute a violation of Bohr's Complementarity Principle as claimed by Afshar, both he and many of his critics incorrectly assume that a commonly used relationship between visibility parameter V and "which-way" parameter K has crucial relevance to his experiment. It is argued here that this relationship does not apply to this experimental situation and that it is wrong to make any use of it in support of claims for or against the bearing of this experiment on Complementarity.
    Interference · Complementarity · Afshar experiment · Vibration · Intensity · Interferometers · Degree of freedom · Unitarity · Particle detector · Recombination...
  • When two macromolecules come very near in a fluid, the molecules that surround them, having finite volume, are less likely to get in between. This leads to a pressure difference manifesting as an entropic attraction, called depletion force. Here we calculate the density profile of liquid molecules surrounding a disordered linear macromolecule, and analytically determine the position dependence of the depletion force between two such molecules. We then verify our formulas with realistic molecular dynamics simulations. Our result can be regarded as an extension of the classical Asakura-Oosawa formula.
    Disorder · Liquids · Polymers · Molecular dynamics simulation · Hard sphere · Impact parameter · DNA · Colloid · Telescopes · Protein...
  • We introduce the Backwards Quantum Propagation of Phase errors (Baqprop) principle, a central theme upon which we construct multiple universal optimization heuristics for training both parametrized quantum circuits and classical deep neural networks on a quantum computer. Baqprop encodes error information in relative phases of a quantum wavefunction defined over the space of network parameters; it can be thought of as the unification of the phase kickback principle of quantum computation and of the backpropagation algorithm from classical deep learning. We propose two core heuristics which leverage Baqprop for quantum-enhanced optimization of network parameters: Quantum Dynamical Descent (QDD) and Momentum Measurement Gradient Descent (MoMGrad). QDD uses simulated quantum coherent dynamics for parameter optimization, allowing for quantum tunneling through the hypothesis space landscape. MoMGrad leverages Baqprop to estimate gradients and thereby perform gradient descent on the parameter landscape; it can be thought of as the quantum-classical analogue of QDD. In addition to these core optimization strategies, we propose various methods for parallelization, regularization, and meta-learning as augmentations to MoMGrad and QDD. We introduce several quantum-coherent adaptations of canonical classical feedforward neural networks, and study how Baqprop can be used to optimize such networks. We develop multiple applications of parametric circuit learning for quantum data, and show how to perform Baqprop in each case. One such application allows for the training of hybrid quantum-classical neural-circuit networks, via the seamless integration of Baqprop with classical backpropagation. Finally, for a representative subset of these proposed applications, we demonstrate the training of these networks via numerical simulations of implementations of QDD and MoMGrad.
    Optimization · Hamiltonian · Qubit · Backpropagation · Wavefunction · Neural network · Expectation Value · Architecture · Deep learning · Quantum computer...
  • We prove a microlocal lower bound on the mass of high energy eigenfunctions of the Laplacian on compact surfaces of negative curvature, and more generally on surfaces with Anosov geodesic flows. This implies controllability for the Schr\"odinger equation by any nonempty open set, and shows that every semiclassical measure has full support. We also prove exponential energy decay for solutions to the damped wave equation on such surfaces, for any nontrivial damping coefficient. These results extend previous works [arXiv:1705.05019], [arXiv:1712.02692], which considered the setting of surfaces of constant negative curvature. The proofs use the strategy of [arXiv:1705.05019], [arXiv:1712.02692] and rely on the fractal uncertainty principle of [arXiv:1612.09040]. However, in the variable curvature case the stable/unstable foliations are not smooth, so we can no longer associate to these foliations a pseudodifferential calculus of the type used in [arXiv:1504.06589]. Instead, our argument uses Egorov's Theorem up to local Ehrenfest time and the hyperbolic parametrix of [arXiv:0706.3242], together with the $C^{1+}$ regularity of the stable/unstable foliations.
    Manifold · Uncertainty principle · Fractal · Foliation · Geodesic · Eigenfunction · Curvature · Graph · Quantization · Antiderivative...
  • Starting from the most general scalar-tensor theory with second order field equations in four dimensions, we establish the unique action that will allow for the existence of a consistent self-tuning mechanism on FLRW backgrounds, and show how it can be understood as a combination of just four base Lagrangians with an intriguing geometric structure dependent on the Ricci scalar, the Einstein tensor, the double dual of the Riemann tensor and the Gauss-Bonnet combination. Spacetime curvature can be screened from the net cosmological constant at any given moment because we allow the scalar field to break Poincar\'e invariance on the self-tuning vacua, thereby evading the Weinberg no-go theorem. We show how the four arbitrary functions of the scalar field combine in an elegant way opening up the possibility of obtaining non-trivial cosmological solutions.
  • Optimal transport (OT) theory can be informally described using the words of the French mathematician Gaspard Monge (1746-1818): A worker with a shovel in hand has to move a large pile of sand lying on a construction site. The goal of the worker is to erect with all that sand a target pile with a prescribed shape (for example, that of a giant sand castle). Naturally, the worker wishes to minimize her total effort, quantified for instance as the total distance or time spent carrying shovelfuls of sand. Mathematicians interested in OT cast that problem as that of comparing two probability distributions, two different piles of sand of the same volume. They consider all of the many possible ways to morph, transport or reshape the first pile into the second, and associate a "global" cost to every such transport, using the "local" consideration of how much it costs to move a grain of sand from one place to another. Recent years have witnessed the spread of OT in several fields, thanks to the emergence of approximate solvers that can scale to sizes and dimensions that are relevant to data sciences. Thanks to this newfound scalability, OT is being increasingly used to unlock various problems in imaging sciences (such as color or texture processing), computer vision and graphics (for shape manipulation) or machine learning (for regression, classification and density fitting). This short book reviews OT with a bias toward numerical methods and their applications in data sciences, and sheds light on the theoretical properties of OT that make it particularly useful for some of these applications.
    Optimal transport · Data science · Regression · Image Processing · Classification · Numerical methods · Machine learning · Theory · Dimensions · Probability...
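
The scalable approximate solvers mentioned above are typified by Sinkhorn iterations for entropically regularized OT. A minimal sketch, assuming two discrete histograms a and b on a 1-D grid and a squared-distance cost matrix C (all names illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropic-regularized OT between histograms a and b with cost matrix C.
    Returns a transport plan P whose marginals are (approximately) a and b."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):              # alternating marginal rescalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # P = diag(u) K diag(v)

# Two small "piles of sand" on [0, 1]:
x = np.linspace(0, 1, 50)
a = np.exp(-((x - 0.2) / 0.1) ** 2); a /= a.sum()
b = np.exp(-((x - 0.7) / 0.1) ** 2); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2       # cost of moving a grain of sand
P = sinkhorn(a, b, C)
print("approximate transport cost:", (P * C).sum())
```
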
  • Quantum computation is traditionally expressed in terms of quantum bits, or qubits. In this work, we instead consider three-level qutrits. Past work with qutrits has demonstrated only constant factor improvements, owing to the $\log_2(3)$ binary-to-ternary compression factor. We present a novel technique using qutrits to achieve a logarithmic depth (runtime) decomposition of the Generalized Toffoli gate using no ancilla--a significant improvement over linear depth for the best qubit-only equivalent. Our circuit construction also features a 70x improvement in two-qudit gate count over the qubit-only equivalent decomposition. This results in circuit cost reductions for important algorithms like quantum neurons and Grover search. We develop an open-source circuit simulator for qutrits, along with realistic near-term noise models which account for the cost of operating qutrits. Simulation results for these noise models indicate over 90% mean reliability (fidelity) for our circuit construction, versus under 30% for the qubit-only baseline. These results suggest that qutrits offer a promising path towards scaling quantum computation.
    Qubit · Ancilla · Quantum computation · Toffoli gate · Quantum circuit · Trapped ion · Quantum algorithms · Scheduling · Quantum gates · Optimization...
  • This book is an invitation to discover advanced topics in category theory through concrete, real-world examples. It aims to give a tour: a gentle, quick introduction to guide later exploration. The tour takes place over seven sketches, each pairing an evocative application, such as databases, electric circuits, or dynamical systems, with the exploration of a categorical structure, such as adjoint functors, enriched categories, or toposes. No prior knowledge of category theory is assumed. A feedback form for typos, comments, questions, and suggestions is available here: https://docs.google.com/document/d/160G9OFcP5DWT8Stn7TxdVx83DJnnf7d5GML0_FOD5Wg/edit
    Category theory · Enriched category · Sketch · Dynamical systems...
  • In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
    Graph · Wavelet · Wavelet transform · Dilation · Bipartite network · Graph theory · Eigenfunction · Random walk · Harmonic analysis · Filter bank...
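
The graph spectral domain discussed above is built from the eigendecomposition of a graph Laplacian: eigenvectors play the role of complex exponentials and eigenvalues the role of frequencies. A minimal filtering sketch under that standard construction (the 4-vertex graph and the exponential low-pass filter are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # small undirected graph
L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                 # lam: "frequencies", U: GFT basis

x = np.array([1.0, -2.0, 3.0, 0.5])        # a signal on the 4 vertices
x_hat = U.T @ x                            # forward graph Fourier transform
h = np.exp(-2.0 * lam)                     # a low-pass spectral filter h(lambda)
x_filtered = U @ (h * x_hat)               # filter, then inverse transform
print(x_filtered)
```
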
  • Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
    Graph · Citation network · Classification · Graph Convolution · Adjacency matrix · Neural network · Hyperparameter · Logistic regression · Regularization · Feature vector...
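
In the paper's linear model, the collapsed network reduces to repeatedly applying a fixed normalized-adjacency filter to the features before a linear classifier. A minimal sketch of that filtering step (numpy; matrix and variable names are illustrative):

```python
import numpy as np

def sgc_features(A, X, K=2):
    """Collapse a K-layer GCN without nonlinearities into one fixed filter:
    returns S^K X, where S is the self-loop-renormalized adjacency matrix
    (a low-pass filter on the graph)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
    S = d_inv_sqrt @ A_tilde @ d_inv_sqrt       # symmetric normalization
    for _ in range(K):
        X = S @ X                               # repeated smoothing, no weights
    return X                                    # feed to any linear classifier

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
Z = sgc_features(A, X, K=2)   # then e.g. logistic regression on Z
```
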
  • This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embeddings of graphs in vector spaces that enable efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.
    Graph · Attention · Embedding · Graph embedding · Vector space · Neural network · Software · Hidden layer · Classification · Optimization...
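
One simple form a cross-graph attention-based matching mechanism can take is a soft-nearest-neighbor comparison between the two graphs' node states (a hedged sketch of the general idea, not the paper's exact architecture):

```python
import numpy as np

def cross_graph_match(H1, H2):
    """Each node of graph 1 attends over all nodes of graph 2 and receives a
    "match" vector measuring its difference from the attention-weighted
    closest node; this vector feeds the node-state update."""
    scores = H1 @ H2.T                                   # pairwise similarities
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)            # attention over graph 2
    return H1 - alpha @ H2                               # difference from soft match

H1 = np.random.randn(5, 16)   # node states of graph 1
H2 = np.random.randn(7, 16)   # node states of graph 2
print(cross_graph_match(H1, H2).shape)   # (5, 16)
```
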
  • Stochastic blockmodels (SBM) and their variants, e.g., mixed-membership and overlapping stochastic blockmodels, are latent variable based generative models for graphs. They have proven to be successful for various tasks, such as discovering the community structure and link prediction on graph-structured data. Recently, graph neural networks, e.g., graph convolutional networks, have also emerged as a promising approach to learn powerful representations (embeddings) for the nodes in the graph, by exploiting graph properties such as locality and invariance. In this work, we unify these two directions by developing a \emph{sparse} variational autoencoder for graphs, that retains the interpretability of SBMs, while also enjoying the excellent predictive performance of graph neural nets. Moreover, our framework is accompanied by a fast recognition model that enables fast inference of the node embeddings (which are of independent interest for inference in SBM and its variants). Although we develop this framework for a particular type of SBM, namely the \emph{overlapping} stochastic blockmodel, the proposed framework can be adapted readily for other types of SBMs. Experimental results on several benchmarks demonstrate encouraging results on link prediction while learning an interpretable latent structure that can be used for community discovery.
    Graph · Inference · Link prediction · Embedding · Autoencoder · Generative model · Latent variable · Bit array · Neural network · Monte Carlo Markov chain...
  • Alternatives to the cold, collisionless dark matter (DM) paradigm in which DM behaves as a collisional fluid generically suppress small-scale structure. Herein we use the observed population of Milky Way (MW) satellite galaxies to constrain the collisional nature of DM, focusing on DM--baryon scattering. We first derive conservative analytic upper limits on the velocity-independent DM--baryon scattering cross section by translating the upper bound on the lowest mass of halos inferred to host satellites into a characteristic cutoff scale in the linear matter power spectrum. We then confirm and improve these results through a detailed probabilistic inference of the MW satellite population that marginalizes over relevant astrophysical uncertainties. This yields $95\%$ confidence upper limits on the DM--baryon scattering cross section of $6\times10^{-30}\ \rm{cm}^2$ ($10^{-27}\ \rm{cm}^2$) for DM particle masses $m_\chi$ of $10\ \rm{keV}$ ($10\ \rm{GeV}$); these limits scale as $m_\chi^{1/4}$ for $m_\chi \ll 1\ \rm{GeV}$ and $m_\chi$ for $m_\chi \gg 1\ \rm{GeV}$. This analysis improves upon cosmological bounds derived from cosmic-microwave-background anisotropy measurements by more than three orders of magnitude over a wide range of DM masses, excluding regions of parameter space previously unexplored by other methods, including direct-detection experiments. Our work reveals a mapping between DM--baryon scattering and other alternative DM models, and we discuss the implications of our results for warm and fuzzy DM scenarios.
    Dark matter · Dark Matter-Baryon Scattering · Milky Way satellite · Milky Way · Dark matter subhalo · Dark matter particle mass · Scattering cross section · Warm dark matter · Fuzzy dark matter · Cosmic microwave background...
  • A number of anomalous results in short-baseline oscillation experiments may hint at the existence of one or more light sterile neutrino states in the eV mass range and have triggered a wave of new experimental efforts to search for a definite signature of oscillations between active and sterile neutrino states. The present paper aims to provide a comprehensive review of the status of light sterile neutrino searches in mid-2019: we discuss not only the basic experimental approaches and sensitivities of reactor, source, atmospheric, and accelerator neutrino oscillation experiments but also the complementary bounds arising from direct neutrino mass experiments and cosmological observations. Moreover, we review current results from global oscillation analyses that include the constraints set by running reactor and atmospheric neutrino experiments. These permit tighter bounds to be set on the active-sterile oscillation parameters but are as yet unable to provide a definite conclusion on the existence of eV-scale sterile neutrinos.
    Neutrino · Sterile neutrino · Neutrino experiment anomaly · Neutrino mass · Active neutrino · Cosmic microwave background · Muon · Inverse beta decay · Antineutrino · Liquid Scintillator Neutrino Detector...
  • We investigate the abundance, small-scale clustering and galaxy-galaxy lensing signal of galaxies in the Baryon Oscillation Spectroscopic Survey (BOSS). To this end, we present new measurements of the redshift and stellar mass dependence of the lensing properties of the galaxy sample. We analyse to what extent models assuming the Planck18 cosmology fit to the number density and clustering can accurately predict the small-scale lensing signal. In qualitative agreement with previous BOSS studies at redshift $z \sim 0.5$ and with results from the Sloan Digital Sky Survey, we find that the expected signal at small scales ($0.1 < r_{\rm p} < 3 \, h^{-1} \mathrm{Mpc}$) is higher by $\sim 25\%$ than what is measured. Here, we show that this result is persistent over the redshift range $0.1 < z < 0.7$ and for galaxies of different stellar masses. If interpreted as evidence for cosmological parameters different from the Planck CMB findings, our results imply $S_8 = \sigma_8 \sqrt{\Omega_{\rm m} / 0.3} = 0.744 \pm 0.015$, whereas $S_8 = 0.832 \pm 0.013$ for Planck18. However, in addition to being in tension with CMB results, such a change in cosmology alone does not accurately predict the lensing amplitude at larger scales. Instead, other often neglected systematics like baryonic feedback or assembly bias are likely contributing to the small-scale lensing discrepancy. We show that either effect alone, though, is unlikely to completely resolve the tension. Ultimately, a combination of the two effects together with a moderate change in cosmological parameters might be needed.
    Galaxy · Lensing signal · Stellar mass · Baryon Oscillation Spectroscopic Survey · Milky Way · Halo assembly bias · Cosmology · Cosmological parameters · Galaxy galaxy lensing · Galaxy assembly...
  • Topological insulators are a new class of materials that have attracted significant attention in contemporary condensed matter physics. They are different from regular insulators, and they display novel quantum properties that also involve the idea of `topology', an area of mathematics. Some of the fundamental ideas behind topological insulators, particularly in low-dimensional condensed matter systems such as poly-acetylene chains, can be understood using a simple one-dimensional toy model popularly known as the Su-Schrieffer-Heeger model or the SSH model. This model can also be used as an introduction to the topological insulators of higher dimensions. Here we give a concise description of the SSH model along with a brief review of the background physics and attempt to understand the ideas of topological invariants, edge states, and bulk-boundary correspondence using the model.
    Hamiltonian · Topological insulator · Winding number · Brillouin zone · Insulators · Unit cell · Edge excitations · Topological invariant · Band gap · Fermi energy...
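
The winding number invariant mentioned above can be computed directly from the SSH Bloch Hamiltonian. A minimal sketch, assuming the common convention in which the off-diagonal element is h(k) = v + w exp(ik), with intra-cell hopping v and inter-cell hopping w:

```python
import numpy as np

def ssh_winding(v, w, n_k=2001):
    """Winding number of h(k) = v + w*exp(ik) around the origin as k sweeps
    the Brillouin zone; it counts how often the curve encircles zero."""
    k = np.linspace(-np.pi, np.pi, n_k)
    h = v + w * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))
    return round((phase[-1] - phase[0]) / (2 * np.pi))

print(ssh_winding(v=1.0, w=0.5))  # 0 -> trivial phase, no edge states
print(ssh_winding(v=0.5, w=1.0))  # 1 -> topological phase with edge states
```
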
  • This paper presents a field-programmable gate array (FPGA) design of a segmentation algorithm based on convolutional neural network (CNN) that can process light detection and ranging (LiDAR) data in real-time. For autonomous vehicles, drivable region segmentation is an essential step that sets up the static constraints for planning tasks. Traditional drivable region segmentation algorithms are mostly developed on camera data, so their performance is susceptible to the light conditions and the qualities of road markings. LiDAR sensors can obtain the 3D geometry information of the vehicle surroundings with high precision. However, it is a computational challenge to process a large amount of LiDAR data in real-time. In this paper, a convolutional neural network model is proposed and trained to perform semantic segmentation using data from the LiDAR sensor. An efficient hardware architecture is proposed and implemented on an FPGA that can process each LiDAR scan in 17.59 ms, which is much faster than the previous works. Evaluated using Ford and KITTI road detection benchmarks, the proposed solution achieves both high accuracy in performance and real-time processing in speed.
    Convolutional neural network · Autonomous vehicles · Semantic segmentation · Architecture · Multidimensional Array · Algorithms · Precision · Geometry · Field...
  • The advent of deep learning has given rise to neural scene representations - learned mathematical models of a 3D environment. However, many of these representations do not explicitly reason about geometry and thus do not account for the underlying 3D structure of the scene. In contrast, geometric deep learning has explored 3D-structure-aware representations of scene geometry, but requires explicit 3D supervision. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D observations, without access to depth or geometry. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
    Deep learning · Feature vector · Latent space · Architecture · Training set · Point cloud · Generative model · Neural rendering · Ground truth · Optimization...
  • An introductory overview of current research developments regarding solitons and fractional boundary charges in graphene nanoribbons is presented. Graphene nanoribbons and polyacetylene have chiral symmetry and share numerous similar properties, e.g., the bulk-edge correspondence between the Zak phase and the existence of edge states, along with the presence of chiral boundary states, which are important for charge fractionalization. In polyacetylene, a fermion mass potential in the Dirac equation produces an excitation gap, and a twist in this scalar potential produces a zero-energy chiral soliton. Similarly, in a gapful armchair graphene nanoribbon, a distortion in the chiral gauge field can produce soliton states. In polyacetylene, a soliton is bound to a domain wall connecting two different dimerized phases. In graphene nanoribbons, a domain-wall soliton connects two topological zigzag edges with different chiralities. However, such a soliton does not display spin-charge separation. The existence of a soliton in finite-length polyacetylene can induce formation of fractional charges on the opposite ends. In contrast, for gapful graphene nanoribbons, the antiferromagnetic coupling between the opposite zigzag edges induces integer boundary charges. The presence of disorder in graphene nanoribbons partly mitigates the antiferromagnetic coupling effect. Hence, the average edge charge of gap states with energies within a small interval is e/2, with significant charge fluctuations. However, midgap states exhibit a well-defined charge fractionalization between the opposite zigzag edges in the weak-disorder regime. Numerous occupied soliton states in a disorder-free and doped zigzag graphene nanoribbon form a solitonic phase.
    Soliton · Polyacetylene · Disorder · Graphene · Graphene nano-ribbons · Domain wall · Fractional charge · Edge excitations · Chiral symmetry · Antiferromagnetic...
  • We measure the intergalactic medium (IGM) opacity in the Ly$\alpha$ as well as in the Ly$\beta$ forest along $19$ quasar sightlines between $5.5\lesssim z_{\rm abs}\lesssim 6.1$, probing the end stages of the reionization epoch. Owing to its lower oscillator strength the Ly$\beta$ transition is sensitive to different gas temperatures and densities than Ly$\alpha$, providing additional constraints on the ionization and thermal state of the IGM. A comparison of our measurements to different inhomogeneous reionization models, derived from post-processing the Nyx cosmological hydrodynamical simulation to include spatial fluctuations in the ultraviolet background (UVB) or the gas temperature field, as well as to a uniform reionization model with varying thermal states of the IGM, leads to two primary conclusions: First, we find that including the effects of spectral noise is key for a proper data-to-model comparison. Noise effectively reduces the sensitivity to high opacity regions, and thus even stronger spatial inhomogeneities are required to match the observed scatter in the observations than previously inferred. Second, we find that models which come close to reproducing the distribution of Ly$\alpha$ effective optical depths nevertheless underpredict the Ly$\beta$ opacity at the same spatial locations. The origin of this disagreement is not entirely clear but models with an inversion in the temperature-density relation of the IGM just after reionization is completed match our measurements best, although they still do not fully capture the observations at $z\gtrsim 5.8$.
    Intergalactic medium · Opacity · Reionization · Quasar · Ultraviolet background · Reionization models · Temperature-density relation · Effective optical depth · Mean transmitted flux · Epoch of reionization...
  • Precise knowledge about the size of a crowd, its density, and its flow can provide valuable information for safety and security applications, event planning, architectural design, and the analysis of consumer behavior. Creating a powerful machine learning model for such applications requires a large, highly accurate, and reliable dataset. Unfortunately, the existing crowd counting and density estimation benchmark datasets are not only limited in size but also lack detailed annotation, which is in general too time consuming to produce. This paper addresses this issue through a content-aware technique that uses a combination of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter, and brute-force nearest neighbor search. The results show that by simply replacing the commonly used density map generators with the proposed method, a higher level of accuracy can be achieved using the existing state-of-the-art models.
    Counting · Ground truth · Nearest-neighbor site · Nearest neighbor search · Machine learning · Intensity · Universal Conductance Fluctuations · Regression · Security · Neural network...
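
For context, the commonly used density map generators that the paper proposes to replace typically place a unit impulse at each annotated head position and blur it with a two-dimensional Gaussian, so the map integrates to the person count. A minimal sketch of that conventional baseline (scipy; fixed rather than content-aware kernel width):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(head_points, shape, sigma=4.0):
    """Conventional ground-truth generator for crowd counting: a delta at
    each annotated head, blurred by a 2-D Gaussian of fixed width."""
    D = np.zeros(shape, dtype=float)
    for x, y in head_points:            # (col, row) pixel annotations
        D[int(y), int(x)] += 1.0
    return gaussian_filter(D, sigma)

heads = [(30, 40), (32, 44), (100, 80)]
D = density_map(heads, shape=(120, 160))
print(D.sum())   # ~3.0: the predicted count is the integral of the map
```
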
  • Given a locally injective real function $g$ on the vertex set $V$ of a finite simple graph $G=(V,E)$, we prove the Poincar\'e-Hopf formula $f_G(t) = 1 + t \sum_{x \in V} f_{S_g(x)}(t)$, where $S_g(x) = \{ y \in S(x) : g(y) < g(x) \}$ and $f_G(t) = 1 + f_0 t + \cdots + f_d t^{d+1}$ is the f-function encoding the f-vector of a graph $G$, in which $f_k$ counts the number of $k$-dimensional cliques (complete subgraphs) in $G$. The corresponding computation of $f$ reduces the problem recursively to $n$ tasks on graphs of half the size. For $t=-1$, the parametric Poincar\'e-Hopf formula reduces to the classical Poincar\'e-Hopf result $\chi(G) = \sum_{x} i_g(x)$, with integer indices $i_g(x) = 1 - \chi(S_g(x))$ and Euler characteristic $\chi$. In the new Poincar\'e-Hopf formula, the indices are integer polynomials, and the curvatures $K_x(t)$, expressed as index expectations $K_x(t) = \mathrm{E}[i_x(t)]$, are polynomials with rational coefficients. Integrating the Poincar\'e-Hopf formula over probability spaces of functions $g$ gives Gauss-Bonnet formulas like $f_G(t) = 1 + \sum_{x} F_{S(x)}(t)$, where $F_G$ is the antiderivative of $f_G$. A similar computation is done for the generating function $f_{G,H}(t,s) = \sum_{k,l} f_{k,l}(G,H)\, s^k t^l$ of the f-intersection matrix $f_{k,l}(G,H)$ counting the number of intersections of $k$-simplices in $G$ with $l$-simplices in $H$. Also here, the computation is reduced to $4n^2$ computations on graphs of half the size: $f_{G,H}(t,s) = \sum_{v,w} \left[ f_{B_g(v),B_g(w)}(t,s) - f_{B_g(v),S_g(w)}(t,s) - f_{S_g(v),B_g(w)}(t,s) + f_{S_g(v),S_g(w)}(t,s) \right]$, where $B_g(v) = S_g(v) + \{v\}$ is the unit ball of $v$.
    Graph · Star S2 · Euler characteristic · Curvature · Simple graph · Manifold · Counting · Intersection number · Sectional curvature · Gauss-Bonnet theorem...
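
The parametric Poincare-Hopf identity can be verified by brute force on a small graph. A sketch using networkx clique enumeration (the wheel graph and the particular injective g are arbitrary test inputs):

```python
import networkx as nx
import numpy as np

def f_poly(G):
    """f_G(t) = 1 + f_0 t + ... + f_d t^(d+1), where f_k counts the
    complete subgraphs of G with k+1 vertices."""
    coeffs = [1.0]
    for clique in nx.enumerate_all_cliques(G):   # all non-empty cliques
        k = len(clique)                          # k vertices -> t^k term
        while len(coeffs) <= k:
            coeffs.append(0.0)
        coeffs[k] += 1.0
    return np.polynomial.Polynomial(coeffs)

def poincare_hopf_rhs(G, g):
    """1 + t * sum_x f_{S_g(x)}(t), where S_g(x) is the subgraph induced
    on the neighbors y of x with g(y) < g(x)."""
    t = np.polynomial.Polynomial([0.0, 1.0])
    total = np.polynomial.Polynomial([1.0])
    for x in G.nodes:
        Sx = G.subgraph([y for y in G.neighbors(x) if g[y] < g[x]])
        total += t * f_poly(Sx)
    return total

G = nx.wheel_graph(6)                        # hub joined to a 5-cycle
g = {v: i for i, v in enumerate(G.nodes)}    # any injective g works
print(f_poly(G).coef)                        # [ 1.  6. 10.  5.]
print(poincare_hopf_rhs(G, g).coef)          # identical coefficients
```
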
  • Developing countries suffer from traffic congestion, poorly planned road/rail networks, and lack of access to public transportation facilities. This results in increased fuel consumption, pollution, monetary losses, massive delays, and reduced productivity, and it also has a negative impact on commuters' feelings and moods. Availability of real-time transit information - obtained by tracking public transportation vehicles' locations using GPS devices - helps in estimating a passenger's waiting time and addressing the above issues. However, such a solution is expensive for developing countries. This paper aims at designing and implementing a crowd-sourced, mobile phone-based solution to estimate the expected waiting time of a passenger in public transit systems, to predict the remaining time to get on/off a vehicle, and to construct a real-time public transit schedule. Trans-Sense has been evaluated using real data collected over more than 800 hours, on a daily basis, by different Android phones, and using different light rail transit lines at different time spans. The results show that Trans-Sense can achieve an average recall and precision of 95.35% and 90.1%, respectively, in discriminating light rail stations. Moreover, the empirical distributions governing the different time delays affecting a passenger's total trip time enable predicting the arrival time of a passenger at her destination with an accuracy of 91.81%. In addition, the system estimates the station dimensions with an accuracy of 95.71%.
    Scheduling · Cluster analysis · Mobile phone · Timing of arrival · Time delay · Architecture · Activity patterns · Google.com · Gaussian distribution · Ground truth...
  • Detecting the transportation mode of a user is important for a wide range of applications. While a number of recent systems addressed the transportation mode detection problem using the ubiquitous mobile phones, these studies leverage GPS, the inertial sensors, and/or information from multiple cell towers. However, these different phone sensors have high energy consumption, are limited to a small subset of phones (e.g. high-end phones or phones that support neighbouring cell tower information), cannot work in certain areas (e.g. inside tunnels for GPS), and/or work only from the user side. In this paper, we present a transportation mode detection system, MonoSense, that leverages the phone serving cell information only. The basic idea is that the phone speed can be correlated with features extracted from both the serving cell tower ID and the received signal strength from it. To achieve high detection accuracy with this limited information, MonoSense leverages diversity along multiple axes to extract novel features. Specifically, MonoSense extracts features from both the time and frequency domain information available from the serving cell tower over different sliding window sizes. More importantly, we show also that both the logarithmic and linear RSS scales can provide different information about the movement of a phone, further enriching the feature space and leading to higher accuracy. Evaluation of MonoSense using 135 hours of cellular traces covering 485 km and collected by four users using different Android phones shows that it can achieve an average precision and recall of 89.26% and 89.84% respectively in differentiating between the stationary, walking, and driving modes using only the serving cell tower information, highlighting MonoSense's ability to enable a wide set of intelligent transportation applications.
    Architecture · Classification · Transmittance · Fast Fourier transform · Market · Feature space · Feature extraction · Galaxy · Line of sight · Operator system...
  • Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.
    Graph · Architecture · Neural network · Regularization · Covariance · Translational invariance · MNIST dataset · Hierarchical clustering · Multigrid method · Wavelet...
  • Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
    Graph · Embedding · Long short term memory · Protein · Clustering coefficient · Multilayer perceptron · Random walk · Architecture · Classification · Protein interactions...
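
A minimal sketch of the sample-and-aggregate step described above, with a mean aggregator (numpy; the weight shapes, fixed sample size, and "concatenate, ReLU, l2-normalize" combine step follow the paper's mean-aggregator variant, but all names are illustrative):

```python
import numpy as np

def sage_layer(A, H, W_self, W_neigh, n_sample=5, seed=0):
    """One GraphSAGE-style layer: sample a fixed-size neighborhood, average
    its features, and combine the result with the node's own features."""
    rng = np.random.default_rng(seed)
    rows = []
    for v in range(A.shape[0]):
        nbrs = np.flatnonzero(A[v])
        if len(nbrs) > n_sample:                 # fixed-size neighborhood sample
            nbrs = rng.choice(nbrs, n_sample, replace=False)
        agg = H[nbrs].mean(axis=0) if len(nbrs) else np.zeros(H.shape[1])
        h = np.concatenate([H[v] @ W_self, agg @ W_neigh])
        rows.append(np.maximum(h, 0.0))          # ReLU
    H_new = np.asarray(rows)
    return H_new / (np.linalg.norm(H_new, axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(1)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)   # undirected, no self-loops
H = rng.standard_normal((10, 8))
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((8, 4))
print(sage_layer(A, H, W1, W2).shape)            # (10, 8)
```
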
  • In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
    Graph · Convolutional neural network · Architecture · Graph theory · Deep learning · Social network · Classification · Chebyshev polynomials · Word vectors · Word embedding...
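
The fast localized filters rest on the Chebyshev recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x): a K-term filter touches only K-hop neighborhoods and needs no full eigendecomposition (the demo below computes the largest eigenvalue exactly only for the spectrum rescaling; in practice it is estimated or bounded). A minimal sketch:

```python
import numpy as np

def cheb_filter(L, X, theta):
    """Apply g_theta(L) X = sum_k theta[k] * T_k(L_tilde) X, where L_tilde
    is the Laplacian rescaled so its spectrum lies in [-1, 1]."""
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = (2.0 / lam_max) * L - np.eye(L.shape[0])
    T_prev, T_cur = X, L_tilde @ X             # T_0 X and T_1 X
    out = theta[0] * T_prev + theta[1] * T_cur
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * (L_tilde @ T_cur) - T_prev
        out = out + theta[k] * T_cur
    return out

A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
X = np.random.randn(4, 2)                      # two input feature channels
print(cheb_filter(L, X, theta=[0.5, -0.2, 0.1]))   # theta is learned in practice
```
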
  • We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
    Graph · Architecture · Neural network · Classification · Convolutional neural network · Citation network · Long short term memory · Regularization · Opacity · Recurrent neural network...
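
A minimal single-head sketch of the masked self-attention described above (numpy with dense loops for clarity; the LeakyReLU slope of 0.2 follows common GAT implementations, and all array names are illustrative):

```python
import numpy as np

def gat_head(A, H, W, a):
    """One attention head: score each edge with a shared vector over the
    concatenated transformed features, mask to existing edges (plus
    self-loops), softmax per neighborhood, then aggregate."""
    Z = H @ W                                     # transformed node features
    n = Z.shape[0]
    scores = np.full((n, n), -np.inf)             # -inf = masked: no edge
    for i in range(n):
        for j in range(n):
            if A[i, j] or i == j:
                e = float(a @ np.concatenate([Z[i], Z[j]]))
                scores[i, j] = e if e > 0 else 0.2 * e   # LeakyReLU
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # attention coefficients
    return np.maximum(alpha @ Z, 0.0)             # aggregate + nonlinearity

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
H = np.random.randn(3, 5)
W = np.random.randn(5, 4)
a = np.random.randn(8)        # scores the concatenation of two 4-vectors
print(gat_head(A, H, W, a))
```
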
  • We show that Fermi repulsion can lead to cored density profiles in dwarf galaxies for sub-keV fermionic dark matter. We treat the dark matter as a quasi-degenerate self-gravitating Fermi gas and calculate its density profile assuming hydrostatic equilibrium. We find that suitable dwarf galaxy cores larger than 130 pc can be achieved for fermion dark matter with mass in the range 70 eV - 400 eV. While in conventional dark matter scenarios such sub-keV thermal dark matter would be excluded by free streaming bounds, the constraints are ameliorated in models with dark matter at lower temperature than conventional thermal scenarios, such as the Flooded Dark Matter model that we have previously considered. Modifying the arguments of Tremaine and Gunn, we derive a conservative lower bound on the mass of fermionic dark matter of 70 eV and a stronger lower bound from Lyman-$\alpha$ clouds of about 470 eV, leading to slightly smaller cores than have been observed. We comment on this result and how the tension is relaxed in dark matter scenarios with non-thermal momentum distributions.
    Dark matter · Dwarf galaxy · Fornax Dwarf spheroidal galaxy · Dark matter particle mass · Degree of freedom · Degenerate Fermi gas · Dark matter model · Cored dark matter density profile · Standard Model · Fermionic dark matter...
  • Machine learning is developing rapidly: it has achieved many theoretical breakthroughs and is widely applied in various fields. Optimization, as an important part of machine learning, has attracted much attention from researchers. With the exponential growth of data volumes and the increase of model complexity, optimization methods in machine learning face more and more challenges. A great deal of work on solving optimization problems or improving optimization methods in machine learning has been proposed in succession. A systematic retrospective and summary of optimization methods from the perspective of machine learning is of great significance, since it can offer guidance for the development of both optimization and machine learning research. In this paper, we first describe the optimization problems in machine learning. Then, we introduce the principles and progress of commonly used optimization methods. Next, we summarize the applications and developments of optimization methods in some popular machine learning fields. Finally, we explore and give some challenges and open problems for optimization in machine learning.
    Optimization · Machine learning · Stochastic gradient descent · Inference · Deep Neural Networks · Hamiltonian Monte Carlo · Recurrent neural network · Reinforcement learning · Meta learning · Monte Carlo Markov chain...
  • The policy gradient theorem describes the gradient of the expected discounted return with respect to an agent's policy parameters. However, most policy gradient methods do not use the discount factor in the manner originally prescribed, and therefore do not optimize the discounted objective. It has been an open question in RL as to which objective, if any, they optimize instead. We show that the direction followed by these methods is not the gradient of any objective, and reclassify them as semi-gradient methods with respect to the undiscounted objective. Further, we show that they are not guaranteed to converge to a locally optimal policy, and construct a counterexample where they will converge to the globally pessimal policy with respect to both the discounted and undiscounted objectives.
    Scheduling · Sigmoid function · Hyperparameter · Instability · Stochastic gradient descent · Function approximation · Machine learning · Neural network · Objective · Action...
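
The distinction the paper draws hinges on the gamma^t factor that the policy gradient theorem places on the time-t term of the discounted objective, and that most implementations drop. A schematic REINFORCE-style estimator showing both variants (illustrative toy code, not the paper's construction):

```python
import numpy as np

gamma = 0.99

def policy_gradient(grads_logpi, rewards, discounted=True):
    """Gradient estimate for one trajectory. grads_logpi[t] is
    grad_theta log pi(a_t | s_t). With discounted=True the time-t term is
    weighted by gamma**t, as the policy gradient theorem prescribes;
    discounted=False reproduces the common implementation that omits the
    factor, which the paper reclassifies as a semi-gradient."""
    T = len(rewards)
    g = np.zeros_like(grads_logpi[0])
    for t in range(T):
        G_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        weight = gamma ** t if discounted else 1.0
        g = g + weight * G_t * grads_logpi[t]
    return g

grads = [np.random.randn(3) for _ in range(5)]   # toy 3-parameter policy
rewards = [0.0, 1.0, 0.0, 0.0, 1.0]
print(policy_gradient(grads, rewards, discounted=True))
print(policy_gradient(grads, rewards, discounted=False))  # a different direction
```
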
  • Regular physics is unsatisfactory in that it fails to take into consideration phenomena relating to mind and meaning, whereas on the other side of the cultural divide such constructs have been studied in detail. This paper discusses a possible synthesis of the two perspectives. Crucial is the way systems realising mental function can develop step by step on the basis of the scaffolding mechanisms of Hoffmeyer, in a way that can be clarified by consideration of the phenomenon of language. Taking into account such constructs, aspects of which are apparent even with simple systems such as acoustically excited water (as with cymatics), potentially opens up a window into a world of mentality excluded from conventional physics as a result of the primary focus of the latter on the matter-like aspect of reality.
    Primary focus · Language
  • In three dimensional turbulence there is on average a cascade of kinetic energy from the largest to the smallest scales of the flow. While the dominant idea is that the cascade occurs through the physical process of vortex stretching, evidence for this is debated. In the framework of the Karman-Howarth equation for the two point turbulent kinetic energy, we derive a new result for the average flux of kinetic energy between two points in the flow that reveals the role of vortex stretching. However, the result shows that vortex stretching is in fact not the main contributor to the average energy cascade; the main contributor is the self-amplification of the strain-rate field. We emphasize the need to correctly distinguish and not conflate the roles of vortex stretching and strain-self amplification in order to correctly understand the physics of the cascade, and also resolve a paradox regarding the differing role of vortex stretching on the mechanisms of the energy cascade and energy dissipation rate. Direct numerical simulations are used to confirm the results, as well as provide further results and insights on vortex stretching and strain-self amplification at different scales in the flow. Interestingly, the results imply that while vortex stretching plays a sub-leading role in the average cascade, it may play a leading order role during large fluctuations of the energy cascade about its average behavior.
    Vortex stretching · Dissipation · Turbulence · Vorticity · Direct numerical simulation · Kinematics · Turbulent flow · Kármán-Howarth equation · Statistics · Compressibility...
  • Unstable shear layers in environmental and industrial flows roll up into a series of vortices, which often form complex nonlinear merging patterns like pairs and triplets. These patterns crucially determine the subsequent turbulence, mixing and scalar transport. We show that the late-time, highly nonlinear merging patterns are predictable from the linearized initial state. The initial asymmetry between consecutive wavelengths of the vertical velocity field provides an effective measure of the strength and geometry of vortex merging. The predictions of this measure are substantiated using direct numerical simulations. We also show that this measure has significant implications in determining the route to turbulence and the ensuing turbulence characteristics.
    Turbulence · Direct numerical simulation · Kelvin-Helmholtz instability · Vortex sheet · Numerical simulation · Instability · Arithmetic · Planetary atmospheres · Kinematics · Infinitesimal...
  • It is well known that when a laser is reflected from a rough surface or transmitted through a diffusive medium, a speckle pattern will be formed at a given observation plane. Speckle is commonly produced by laser beams with a homogeneous intensity, for which well-known relations have been derived, relating the speckle size to the area of illumination. Here we investigate the speckle generated by higher-order Laguerre-Gaussian (LG) modes, characterized by a non-uniform intensity distribution of concentric rings. We show that the ring structure of the LG modes does not play any role in the speckle size, which happens to be the same as that obtained for a homogeneous intensity distribution. This allows us to provide a simple expression that relates the speckle size to the spot size of the LG modes. Our findings will be of great relevance in many speckle-based applications.
    Intensity · Lasers · Spatial light modulators · Glass · Topological quantum number · Orbital angular momentum of light · Charge coupled device · Interference · Gaussian beam · L3...
  • Embedded vortices in turbulent wall-bounded flow over a flat plate, generated by a passive rectangular vane-type vortex generator with variable angle $\beta$ to the incoming flow in a low-Reynolds number flow ($Re=2600$ based on the inlet grid mesh size $L=0.039\;$m and free stream velocity $U_{\infty} = 1.0\;$m s$^{-1}$), have been studied with respect to helical symmetry. The studies were carried out in a low-speed closed-circuit wind tunnel utilizing Stereoscopic Particle Image Velocimetry (SPIV). The vortices have been shown to possess helical symmetry, allowing the flow to be described in a simple fashion. Iso-contour maps of axial vorticity revealed a dominant primary vortex and a weaker secondary one for $20^{\circ} \leq \beta \leq 40^{\circ}$. For angles outside of this range, the helical symmetry was impaired due to the emergence of additional flow effects. A model describing the flow has been utilized, showing strong concurrence with the measurements, even though the model is decoupled from external flow processes that could perturb the helical symmetry. The pitch, vortex core size, circulation and the advection velocity of the vortex all vary linearly with the device angle $\beta$. This is important for flow control, since one thereby can determine the axial velocity induced by the helical vortex as well as the swirl redistributing the axial velocity component for a given device angle $\beta$. This also simplifies theoretical studies, e.g., to understand and predict the stability of the vortex and to model the flow numerically.
    Vortex generator · External flow · Vorticity · Concurrence · Wind tunnel · Velocimetry · Reynolds number · Free streaming of particles · Symmetry · Velocity...
  • We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that bypasses the need for Taylor's Hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: 1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); 2) The measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; 3) The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet -- the non-equilibrium developing region and the outermost parts of the developed jet. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
    Turbulence · Statistics · Intensity · Eddy · Taylor microscale · Curvature · Lasers · Turbulent jet · Turbulent flow · Infinitesimal...
  • We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
    Architecture · Computational linguistics · Inference · Generalized Likelihood Uncertainty Estimation · Language...
  • Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.
    Information retrieval · Information and communication technologies · Inference · Latent variable · Google.com · Architecture · Training set · Application programming interface · Statistics · Ranking...
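
The Inverse Cloze Task used to pre-train the retriever can be sketched as a simple data construction: a random sentence becomes the pseudo-question and the remaining sentences of its passage become the pseudo-evidence (a minimal sketch of the idea; training against in-batch negatives is omitted):

```python
import random

def inverse_cloze_examples(passages, seed=0):
    """Build (pseudo-question, pseudo-evidence) pairs: for each passage,
    hold out one sentence as the query and keep the rest as its context."""
    rng = random.Random(seed)
    pairs = []
    for sentences in passages:                  # each passage: list of sentences
        i = rng.randrange(len(sentences))
        query = sentences[i]
        context = " ".join(sentences[:i] + sentences[i + 1:])
        pairs.append((query, context))
    return pairs

passages = [["The Eiffel Tower is in Paris.",
             "It was completed in 1889.",
             "It is 330 metres tall."]]
print(inverse_cloze_examples(passages))
```
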
  • Runko is a new open-source plasma simulation framework implemented in C++ and Python. It is designed to function as an easy-to-extend general toolbox for simulating astrophysical plasmas with different theoretical and numerical models. Computationally intensive low-level "kernels" are written in modern C++14 taking advantage of polymorphic classes, multiple inheritance, and template metaprogramming. High-level functionality is operated with Python3 scripts. This hybrid program design ensures fast code and ease of use. The framework has a modular object-oriented design that allows the user to easily add new numerical algorithms to the system. The code can be run on various computing platforms ranging from laptops (shared-memory systems) to massively parallel supercomputer architectures (distributed-memory systems). The framework also supports heterogeneous multi-physics simulations in which different physical solvers can be combined and run simultaneously. Here we report on the first results from the framework's relativistic particle-in-cell (PIC) module. Using the PIC module, we simulate decaying relativistic kinetic turbulence in suddenly stirred magnetically-dominated pair plasma. We show that the resulting particle distribution can be separated into a thermal part that forms the turbulent cascade and into a separate decoupled non-thermal particle population that acts as an energy sink for the system.
    Particle-in-cell · Turbulence · Rank · Magnetohydrodynamics · Lorentz factor · Charged particle · Eddy · Caching · Particle mass · Particle velocity...
  • When light from a distant source object, like a galaxy or a supernova, travels towards us, it is deflected by massive objects that lie on its path. When the mass density of the deflecting object exceeds a certain threshold, multiple, highly distorted images of the source are observed. This strong gravitational lensing effect has so far been treated as a model-fitting problem. Using the observed multiple images as constraints yields a self-consistent model of the deflecting mass density and the source object. As several models meet the constraints equally well, we develop a lens characterisation that separates data-based information from model assumptions. The observed multiple images allow us to determine local properties of the deflecting mass distribution on any mass scale from one simple set of equations. Their solution is unique and free of model-dependent degeneracies. The reconstruction of source objects can be performed completely model-independently, enabling us to study galaxy evolution without a lens-model bias. Our approach reduces the lens and source description to its data-based evidence that all models agree upon, simplifies an automated treatment of large datasets, and allows for an extrapolation to a global description resembling model-based descriptions.
    Galaxy · Time delay · Gravitational lensing · Supernova · Cluster of galaxies · Angular diameter distance · Strong gravitational lensing · Quadrupole · Quasar · Line of sight...
  • Recent advancements in the imaging of low-surface-brightness objects revealed numerous ultra-diffuse galaxies in the local Universe. These peculiar objects are unusually extended and faint: their effective radii are comparable to the Milky Way, but their surface brightnesses are lower than that of dwarf galaxies. Their ambiguous properties motivate two potential formation scenarios: the "failed" Milky Way and the dwarf galaxy scenario. In this paper, for the first time, we employ X-ray observations to test these formation scenarios on a sample of isolated, low-surface-brightness galaxies. Since hot gas X-ray luminosities correlate with the dark matter halo mass, "failed" Milky Way-type galaxies, which reside in massive dark matter halos, are expected to have significantly higher X-ray luminosities than dwarf galaxies, which reside in low-mass dark matter halos. We perform X-ray photometry on a subset of low-surface-brightness galaxies identified in the Hyper Suprime-Cam Subaru survey, utilizing the XMM-Newton XXL North survey. We find that none of the individual galaxies show significant X-ray emission. By co-adding the signal of individual galaxies, the stacked galaxies remain undetected and we set an X-ray luminosity upper limit of ${L_{\rm{0.3-1.2keV}}\leq6.2 \times 10^{37} (d/65 \rm{Mpc})^2 \ \rm{erg \ s^{-1}}}$ for an average isolated low-surface-brightness galaxy. This upper limit is about 40 times lower than that expected in a galaxy with a massive dark matter halo, implying that the majority of isolated low-surface-brightness galaxies reside in dwarf-size dark matter halos.
    Galaxy · Ultra-diffuse galaxy-like object · Dark matter halo · Milky Way · X-ray luminosity · Low surface brightness galaxy · Virial mass · Dwarf galaxy · Hot gas · Luminosity...