Recently bookmarked papers

with concepts:
  • Non-equilibrium ionization effects are important in cosmological hydrodynamical simulations but are computationally expensive. We study the effect of non-equilibrium ionization evolution and UV ionizing backgrounds (UVB) generated with different quasar spectral energy distributions (SEDs) on the derived physical conditions of the intergalactic medium (IGM) at $2\leq z \leq 6$ using our post-processing tool 'Code for Ionization and Temperature Evolution' (CITE). CITE produces results that match self-consistent simulations well, at much lower computational cost. HeII reionization progresses more rapidly in the non-equilibrium model than in the equilibrium models. The redshift of HeII reionization strongly depends on the quasar SED and occurs earlier for UVB models with flatter quasar SEDs. During this epoch the temperature of the IGM at mean density, $T_0(z)$, has a maximum while the slope of the effective equation of state, $\gamma(z)$, has a minimum, but the two occur at different redshifts. $T_0$ is higher in non-equilibrium models using UVBs obtained with flatter quasar SEDs. The observed median HeII effective optical depth evolution and its scatter are well reproduced in equilibrium and non-equilibrium models, even under the uniform-UVB assumption. For a given thermal history, the redshift dependence of the HI photo-ionization rate derived from the observed HI effective optical depth ($\tau_{\rm eff,HI}$) differs between equilibrium and non-equilibrium models. This may lead to different requirements on the evolution of the ionizing emissivities of sources. We show that, in the absence of strong differential pressure smoothing effects, it is possible to recover the non-equilibrium $T_0$ and $\gamma$ from equilibrium models generated by rescaling photo-heating rates while producing the same $\tau_{\rm eff,HI}$.
    Ionization, Quasar, Intergalactic medium, Reionization, Spectral energy distribution, Temperature-density relation, Statistics, G2, Effective optical depth, Pressure smoothing, ...
  • As quantum computers have become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have fewer than 100 qubits, quantum computer hardware is widely expected to grow in terms of qubit count, quality, and connectivity. Our article aims to explain the principles of quantum programming, which are quite different from classical programming, with straightforward algebra that makes understanding the underlying quantum mechanics optional (but still fascinating). We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion; we show how they are implemented on IBM's quantum computer; and in each case we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists and engineers to quantum algorithms and provides a blueprint for their implementations. (A minimal state-vector sketch follows below.)
    Quantum algorithms, Qubit, Quantum computation, Quantum programming, Computer programming, Quantum mechanics, Algorithms, Surveys, Algebra, ...
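Since the article stresses that plain algebra suffices, here is a minimal NumPy sketch (ours, not from the paper) of the kind of state-vector algebra it describes: preparing a Bell state with a Hadamard and a CNOT, then reading off measurement probabilities as squared amplitudes.

```python
import numpy as np

# Single-qubit basis state and gates as plain linear algebra.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)               # control = qubit 0

# Prepare a Bell state: apply H to qubit 0 of |00>, then CNOT.
state = np.kron(H @ ket0, ket0)    # (H|0>) tensor |0>
state = CNOT @ state
print(state)                       # [0.707, 0, 0, 0.707] -> (|00>+|11>)/sqrt(2)

# Measurement probabilities are squared amplitudes.
print(np.abs(state) ** 2)          # [0.5, 0, 0, 0.5]
```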
  • We study a suite of extremely high-resolution cosmological FIRE simulations of dwarf galaxies ($M_{\rm halo} \lesssim 10^{10}\,M_{\odot}$), run to $z=0$ with $30\,M_{\odot}$ resolution, sufficient (for the first time) to resolve the internal structure of individual supernova remnants within the cooling radius. Every halo with $M_{\rm halo} \gtrsim 10^{8.6} M_{\odot}$ is populated by a resolved {\em stellar} galaxy, suggesting very low-mass dwarfs may be ubiquitous in the field. Our ultra-faint dwarfs (UFDs; $M_{\ast}<10^{5}\,M_{\odot}$) have their star formation truncated early ($z\gtrsim2$), likely by reionization, while classical dwarfs ($M_{\ast}>10^{5} M_{\odot}$) continue forming stars to $z<0.5$. The systems have bursty star formation (SF) histories, forming most of their stars in periods of elevated SF strongly clustered in both space and time. This allows our dwarf with $M_{\ast}/M_{\rm halo} > 10^{-4}$ to form a dark matter core $>200$pc, while lower-mass UFDs exhibit cusps down to $\lesssim100$pc, as expected from energetic arguments. Our dwarfs with $M_{\ast}>10^{4}\,M_{\odot}$ have half-mass radii ($R_{\rm 1/2}$) in agreement with Local Group (LG) dwarfs; dynamical mass vs. $R_{1/2}$ and the degree of rotational support also resemble observations. The lowest-mass UFDs are below surface brightness limits of current surveys but are potentially visible in next-generation surveys (e.g. LSST). The stellar metallicities are lower than in LG dwarfs; this may reflect pre-enrichment of the LG by the massive hosts or Pop-III stars. Consistency with lower resolution studies implies that our simulations are numerically robust (for a given physical model).
    Ultra-faint dwarf spheroidal galaxy, Galaxy, Star formation, Local group, Milky Way, Star, Dwarf galaxy, Surface brightness, FIRE simulations, Stellar mass, ...
  • The computation of Feynman integrals is often the bottleneck of multi-loop calculations. We propose and implement a new method to efficiently evaluate such integrals in the physical region through the numerical integration of a suitable set of differential equations, where the initial conditions are provided in the unphysical region via the sector decomposition method. We present numerical results for a set of two-loop integrals, where the non-planar ones complete the master integrals for $gg\to\gamma\gamma$ and $q\bar{q}\to\gamma\gamma$ scattering mediated by the top quark.
    Path integral, Kinematics, Precision, Loop integral, Propagator, Top quark, Loop amplitude, Loop momentum, Partial differential equation, Laurent series, ...
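A toy sketch of the workflow described above (ours; the paper treats coupled systems of master integrals): fix the initial condition in the unphysical region, where direct evaluation is easy (sector decomposition, in the real case), and numerically transport the integral into the physical region along a contour whose small imaginary part mimics the $+i0$ prescription. Here $I(s)=\log(1-s)$, which obeys $dI/ds=-1/(1-s)$, stands in for a master integral.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Initial condition in the unphysical region (s < 0); endpoint in the
# physical region (s > 1), shifted slightly into the upper half plane.
s0, s1 = -1.0 + 0j, 2.0 + 1e-3j
I0 = np.log(1 - s0)

def rhs(t, I):
    s = s0 + t * (s1 - s0)          # straight contour in the complex s plane
    return [(-1.0 / (1.0 - s)) * (s1 - s0)]

sol = solve_ivp(rhs, (0.0, 1.0), [I0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])                  # value transported into the physical region
print(np.log(1 - s1))                # exact result, including the imaginary part
```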
  • Star clusters stand at the intersection of much of modern astrophysics: the interstellar medium, gravitational dynamics, stellar evolution, and cosmology. Here we review observations and theoretical models for the formation, evolution, and eventual disruption of star clusters. Current literature suggests a picture of this life cycle with several phases: (1) Clusters form in hierarchically-structured, accreting molecular clouds that convert gas into stars at a low rate per dynamical time until feedback disperses the gas. (2) The densest parts of the hierarchy resist gas removal long enough to reach high star formation efficiency, becoming dynamically-relaxed and well-mixed. These remain bound after gas removal. (3) In the first $\sim 100$ Myr after gas removal, clusters disperse moderately fast, through a combination of mass loss and tidal shocks by dense molecular structures in the star-forming environment. (4) After $\sim 100$ Myr, clusters lose mass via two-body relaxation and shocks by giant molecular clouds, processes that preferentially affect low-mass clusters and cause a turnover in the cluster mass function to appear on $\sim 1-10$ Gyr timescales. (5) Even after dispersal, some clusters remain coherent and thus detectable in chemical or action space for multiple galactic orbits. In the next decade a new generation of space- and AO-assisted ground-based telescopes will enable us to test and refine this picture.
    Star, Star cluster, Star formation, Milky Way, Giant Molecular Cloud, Galaxy, Of stars, Globular cluster, Open cluster, Abundance, ...
  • This work presents an approach (fitCMD) designed to obtain a comprehensive set of astrophysical parameters from colour-magnitude diagrams (CMDs) of star clusters. Based on initial mass function (IMF) properties taken from isochrones, fitCMD searches for the values of total (or cluster) stellar mass, age, global metallicity, foreground reddening, distance modulus, and magnitude-dependent photometric completeness that produce the artificial CMD that best reproduces the observed one; photometric scatter is also taken into account in the artificial CMDs. Inclusion of photometric completeness proves to be an important feature of fitCMD, something that becomes apparent especially when luminosity functions are considered. These parameters are used to build a synthetic CMD that also includes photometric scatter, and residual minimization between the observed and synthetic CMDs leads to the best-fit parameters. When tested against artificial star clusters, fitCMD proves efficient both in terms of computational time and its ability to recover the input values.
    Star cluster, Completeness, Initial mass function, Metallicity, Reddening, Stellar mass, Star, Distance modulus, Of stars, Hertzsprung-Russell diagram, ...
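A toy sketch of how such an artificial CMD can be built (the mass-luminosity, mass-colour, and completeness relations below are invented for illustration; fitCMD itself uses real isochrones and then minimizes residuals against the observed CMD):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw stellar masses from a Salpeter IMF, dN/dm ~ m^(-2.35), by inverse transform.
def sample_imf(n, m_lo=0.5, m_hi=8.0, alpha=2.35):
    u = rng.uniform(size=n)
    a, b = m_lo ** (1 - alpha), m_hi ** (1 - alpha)
    return (a + u * (b - a)) ** (1 / (1 - alpha))

masses = sample_imf(5000)

# Toy main-sequence "isochrone": magnitude and colour as functions of mass.
M_abs = 4.8 - 6.5 * np.log10(masses)     # hypothetical mass-luminosity relation
colour = 0.6 - 0.5 * np.log10(masses)    # hypothetical mass-colour relation

# Distance modulus, reddening, and photometric scatter growing toward faint mags.
dist_mod, ebv = 10.0, 0.1
mag = M_abs + dist_mod + 3.1 * ebv
mag += rng.normal(0.0, 0.02 + 0.01 * np.clip(mag - 12.0, 0.0, None))
colour += ebv + rng.normal(0.0, 0.02, size=colour.size)

# Magnitude-dependent completeness: keep each star with probability c(mag).
completeness = 1.0 / (1.0 + np.exp((mag - 19.0) / 0.5))  # 50% complete at mag 19
keep = rng.uniform(size=mag.size) < completeness
print(f"{keep.sum()} of {mag.size} stars survive completeness")
```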
  • Ultraluminous X-ray sources (ULXs) represent a class of binary systems that are more luminous than any black hole in our Galaxy. The nature of these objects remained unclear for a long time. The most popular models for the ULXs involve either intermediate mass black holes (IMBHs) or stellar-mass black holes accreting at super-Eddington rates. In the last few years our understanding of these objects has improved significantly, strongly favoring the super-Eddington accretion model. Both the X-ray and optical spectra provide evidence for strong outflows coming from a supercritical accretion disk. Another surprising result was the discovery of pulsations in four ULXs, which implies that these systems must host neutron stars. Besides the presence of pulsations, there is no sharp difference between ultraluminous pulsars and normal ULXs. This fact implies that a significant number of known ULXs might eventually turn out to be neutron stars.
    Ultraluminous X-ray, Neutron star, Intermediate-mass black hole, Astronomical X-ray source, Pulsar, Luminosity, Inclination, Star, Black hole, Accretion, ...
  • The aim of this work is to identify the main impact factors affecting variations in the geomorphology of the Mariana Trench, the deepest place on Earth, located in the west Pacific Ocean: steepness angle and structure of the sediment compression. The Mariana Trench presents a complex ecosystem with highly interconnected factors: geology (sediment thickness and tectonics, including the four plates that the trench crosses: Philippine, Pacific, Mariana, Caroline) and bathymetry (coordinates, slope angle, depth values at the observation points). To study such a complex system, an objective method combining various approaches (statistics, R, GIS, descriptive analysis and graphical plotting) was applied. The methodology comprises the following components: the R programming language for writing code, statistical analysis, mathematical algorithms for data processing and diagram visualization, and GIS for digitizing bathymetric profiles and spatial analysis. The statistical analysis of the data taken from the bathymetric profiles was applied to environmental factors, i.e. coordinates, depths, geological properties (sediment thickness), slope angles, etc. Finally, factor analysis was performed with R libraries to analyze the impact factors of the Mariana Trench ecosystem (a Python sketch of this step follows below). Euler-Venn diagrams highlighted similarities between the four tectonic plates and the environmental factors. The results revealed distinct correlations between the environmental factors (sediment thickness, slope steepness, depth values at the observation points, geographic location of the profiles) affecting Mariana Trench morphology. The research demonstrated that coding in R provides powerful and highly effective statistical tools and factor-analysis algorithms for studying ocean trench formation.
    Impact Factor, Ecosystems, Statistics, Programming Language, Earth, Complex systems, Algorithms, Language, Objective, ...
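The paper's factor analysis is done in R; an equivalent step in Python (mock data, and column names that are ours rather than the paper's) might look like:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Mock bathymetric-profile table; values and columns are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "depth_m":          rng.normal(9000, 1500, 200),
    "slope_deg":        rng.normal(4, 1.5, 200),
    "sediment_thick_m": rng.normal(400, 120, 200),
    "latitude":         rng.normal(15, 3, 200),
})

X = StandardScaler().fit_transform(df)            # standardize the variables
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Loadings show how strongly each environmental variable ties to each factor.
loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["factor1", "factor2"])
print(loadings)
```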
  • We investigate how Einstein rings and magnified arcs are affected by small-mass dark-matter haloes placed along the line-of-sight to gravitational lens systems. By comparing the gravitational signature of line-of-sight haloes with that of substructures within the lensing galaxy, we derive a mass-redshift relation that allows us to rescale the detection threshold (i.e. lowest detectable mass) for substructures to a detection threshold for line-of-sight haloes at any redshift. We then quantify the line-of-sight contribution to the total number density of low-mass objects that can be detected through strong gravitational lensing. Finally, we assess the degeneracy between substructures and line-of-sight haloes of different mass and redshift to provide a statistical interpretation of current and future detections, with the aim of distinguishing between CDM and WDM. We find that line-of-sight haloes statistically dominate with respect to substructures, by an amount that strongly depends on the source and lens redshifts, and on the chosen dark matter model. Substructures represent about 30 percent of the total number of perturbers for low lens and source redshifts (as for the SLACS lenses), but less than 10 per cent for high redshift systems. We also find that for data with high enough signal-to-noise ratio and angular resolution, the non-linear effects arising from a double-lens-plane configuration are such that one is able to observationally recover the line-of-sight halo redshift with an absolute error precision of 0.15 at the 68 per cent confidence level.
    Line of sight, Dark matter subhalo, Navarro-Frenk-White profile, Deflection angle, Virial mass, Concentration-mass relation, Cold dark matter, Mass function, Warm dark matter, Surface brightness, ...
  • We have studied the preheating phase for a class of plateau inflationary models, considering the four-legs interaction term $(1/2)g^2\phi^2\chi^2$ between the inflaton $(\phi)$ and the reheating field $(\chi)$. We specifically focus on the effects of a parameter $\phi_*$ that controls the inflationary dynamics and the shape of the inflaton potential. For $\phi_* < M_p$, the departure of the inflaton potential from the usual power-law behavior $\phi^n$ significantly modifies the microscopic behavior of the preheating dynamics. We analyze and compare the efficiency of production, thermalization and the final equation of state of the system for the models under consideration with $n=2,4,6$ for two different values of $\phi_*$. Most importantly, as we increase $n$ or decrease $\phi_*$, preheating occurs very efficiently, with the final equation of state being that of radiation, $w=1/3$; for $n=2$, the final equation of state turns out to be $w\simeq 0.2$. However, a complete decay of the inflaton could not be achieved with the four-legs interaction for any model under consideration. Therefore, in order to complete the reheating process, we perform a perturbative analysis of the second stage of the reheating phase. Taking the end product of the preheating phase as the initial condition, we solve the homogeneous Boltzmann equations for both fields, supplemented by the constraints coming from the subsequent entropy conservation. In so doing, we can calculate the reheating temperature, which is otherwise ill-defined right after the end of preheating. The temperature can be uniquely fixed for a given inflaton decay constant and the CMB temperature. We also compare our results with the conventional reheating constraint analysis and discuss the limit on the inflaton decay constant from the field theory perspective.
    Inflaton, Preheating, Reheating, Inflation, Thermalisation, Model of inflation, Reheating temperature, Lattice calculations, Instability, Scale factor, ...
  • We specify the semiclassical no-boundary wave function of the universe without relying on a functional integral of any kind. The wave function is given as a sum of specific saddle points of the dynamical theory that satisfy conditions of regularity on geometry and field and which together yield a time neutral state that is normalizable in an appropriate inner product. This specifies a predictive framework of semiclassical quantum cosmology that is adequate to make probabilistic predictions, which are in agreement with observations in simple models. The use of holography to go beyond the semiclassical approximation is briefly discussed.
    Saddle point, Linearized gravity, Quantum cosmology, Coarse graining, Degree of freedom, Quantum mechanics, Inflation, Path integral, Quantum gravity, Cosmology, ...
  • Resonance phenomena in solids generally fall into two distinct classes, electric and magnetic, driven, respectively, by the $E$ and $H$ components of the electromagnetic wave incident on the solid. The canonical examples of the two types of resonances are the electron cyclotron resonance (CR) and the electron paramagnetic resonance (EPR), originating from the electron orbital and spin degrees of freedom, respectively. The behavior becomes considerably more interesting (and more complicated) in the presence of the spin-orbital interaction. In this case, a more general type of resonance may occur, which is driven by the electric excitation mechanism and involves the spin degrees of freedom. Such electric-dipole spin resonance (EDSR) may occur at the spin excitation frequency or at a combination of the orbital and spin frequencies, spanning a wide bandwidth. The EDSR phenomenon, first predicted by Rashba (1960), has been probed experimentally in 3D solids with different crystal symmetries, as well as in low-dimensional systems (heterojunctions, inversion layers, dislocations and impurity states). Due to its electric dipole origin, the EDSR features a relatively high intensity, which may exceed by orders of magnitude the EPR intensity. This review summarizes the work on EDSR prior to 1991, laying out the theoretical framework and discussing different experimental systems in which the EDSR-related physics can be realized and explored.
    Intensity, Electron paramagnetic resonance, Spin orbit, Hamiltonian, Semiconductor, Electron cyclotron resonance, Degree of freedom, Dislocation, Selection rule, Heterojunction, ...
  • Humans spend a remarkable fraction of waking life engaged in acts of "mental time travel". We dwell on our actions in the past and experience satisfaction or regret. More than merely autobiographical storytelling, we use these event recollections to change how we will act in similar scenarios in the future. This process endows us with a computationally important ability to link actions and consequences across long spans of time, which figures prominently in addressing the problem of long-term temporal credit assignment; in artificial intelligence (AI) this is the question of how to evaluate the utility of the actions within a long-duration behavioral sequence leading to success or failure in a task. Existing approaches to shorter-term credit assignment in AI cannot solve tasks with long delays between actions and consequences. Here, we introduce a new paradigm for reinforcement learning where agents use recall of specific memories to credit actions from the past, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire computational models in neuroscience, psychology, and behavioral economics.
    Long short term memory, Splicing, Reinforcement learning, Signal to noise ratio, Recurrent neural network, Hidden layer, Decision making, Activation function, Computational modelling, Time travel, ...
  • Using deep, high resolution optical imaging from the Next Generation Virgo Cluster Survey we study the properties of nuclear star clusters (NSCs) in a sample of nearly 400 quiescent galaxies in the core of Virgo with stellar masses $10^{5}\lesssim M_{*}/M_{\odot} \lesssim10^{12}$. The nucleation fraction reaches a peak value $f_{n}\approx90\%$ for $M_{*} \approx 10^{9} M_{\odot}$ galaxies and declines for both higher and lower masses, but nuclei populate galaxies as small as $M_{*} \approx5\times10^{5} M_{\odot}$. Comparison with literature data for nearby groups and clusters shows that at the low-mass end nucleation is more frequent in denser environments. The NSC mass function peaks at $M_{NSC}\approx7\times10^{5} M_{\odot}$, a factor 3-4 times larger than the turnover mass for globular clusters (GCs). We find a nonlinear relation between the stellar masses of NSCs and of their host galaxies, with a mean nucleus-to-galaxy mass ratio that drops to $M_{NSC}/M_{*}\approx3.6\times10^{-3}$ for $M_{*} \approx 5\times10^{9} M_{\odot}$ galaxies. Nuclei in both more and less massive galaxies are much more prominent: $M_{NSC}\propto M_{*}^{0.46}$ at the low-mass end, where nuclei are nearly 50% as massive as their hosts. We measure an intrinsic scatter in NSC masses at fixed galaxy stellar mass of 0.4 dex, which we interpret as evidence that the process of NSC growth is significantly stochastic. At low galaxy masses we find a close connection between NSCs and GC systems, including a very similar occupation distribution and comparable total masses. We discuss these results in the context of current dissipative and dissipationless models of NSC formation.
    Nuclear Star Cluster, Galaxy, Globular cluster, Stellar mass, Milky Way, Galaxy mass, Star cluster, Massive galaxies, Host galaxy, Virgo Cluster, ...
  • Binary black holes (BBHs) appear to be widespread and are able to merge through the emission of gravitational waves, as recently illustrated by LIGO. The spin of the BBHs is one of the parameters that LIGO can infer from the gravitational wave signal and can be used to constrain their production site. If BBHs are assembled in stellar clusters they are likely to interact with stars, which could occasionally lead to a tidal disruption event (TDE). When a BBH tidally disrupts a star it can accrete a significant fraction of the debris, effectively altering the spins of the BHs. Therefore, although dynamically formed BBHs are expected to have random spin orientations, tidal stellar interactions can significantly alter their birth spins both in direction and magnitude. Here we investigate how TDEs by BBHs can affect the properties of the BH members as well as exploring the characteristics of the resulting electromagnetic signatures. We conduct hydrodynamic simulations with a Lagrangian Smoothed Particle Hydrodynamics code of a wide range of representative tidal interactions. We find that both spin magnitude and orientation can be altered and temporarily aligned or anti-aligned through accretion of stellar debris, with a significant dependence on the mass ratio of the disrupted star and the BBH members. These tidal interactions feed material to the BBH at very high accretion rates, with the potential to launch a relativistic jet. The corresponding beamed emission is a beacon to an otherwise quiescent BBH.
    Black hole, Star, Binary black hole system, Laser Interferometer Gravitational-Wave Observatory, Space debris, Accretion, Tidal disruption, Gravitational wave, Accretion disk, Tidal interaction, ...
  • Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
    Architecture, Non-convexity, Neural network, Curvature, Generalization error, Optimization, Principal component analysis, Scale invariance, Attractor, High Performance Computing, ...
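A minimal sketch of the "filter normalization" idea described above, under our own simplifications (the leading axis is treated as the filter index, and a toy quadratic stands in for a network loss):

```python
import numpy as np

def filter_normalized_direction(weights, rng):
    """Random direction d with each filter rescaled to the norm of the
    corresponding filter in `weights` (the filter-normalization trick)."""
    d = rng.normal(size=weights.shape)
    for i in range(weights.shape[0]):           # per-filter rescaling
        d[i] *= np.linalg.norm(weights[i]) / (np.linalg.norm(d[i]) + 1e-10)
    return d

def loss_along_direction(loss_fn, weights, d, alphas):
    """1-D slice of the loss landscape: L(w + alpha * d)."""
    return [loss_fn(weights + a * d) for a in alphas]

# Toy quadratic loss standing in for a trained network's loss; illustration only.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))                    # e.g. 8 "filters" of 16 weights
toy_loss = lambda w: float(np.sum(w ** 2))
d = filter_normalized_direction(W, rng)
print(loss_along_direction(toy_loss, W, d, np.linspace(-1, 1, 5)))
```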
  • We perform a comprehensive study of the Higgs couplings, gauge-boson couplings to fermions and triple gauge boson vertices. We work in the framework of effective theories including the effects of the dimension-six operators contributing to these observables. We determine the presently allowed range for the coefficients of these operators via a 20 parameter global fit to the electroweak precision data, as well as electroweak diboson and Higgs production data from LHC Run 1 and 2. We quantify the improvement on the determination of the 20 Wilson coefficients by the inclusion of the Run 2 results. In particular we present a novel analysis of the ATLAS Run 2 36.1 $\rm fb^{-1}$ data on the transverse mass distribution of $W^+W^-$ and $W^\pm Z$ in the leptonic channel which allow for stronger tests of the triple gauge boson vertices. We discuss the discrete (quasi)-degeneracies existing in the parameter space of operator coefficients relevant for the Higgs couplings to fermions and gauge bosons. In particular we show how the inclusion of the incipient $tH$ data can break those degeneracies in the determination of the top-Higgs coupling. We also discuss and quantify the effect of keeping the terms quadratic in the Wilson coefficients in the analysis and we show the importance of the Higgs data to constrain some of the operators that modify the triple gauge boson couplings in the linear regime.
    Higgs boson, Standard Model, Wilson coefficients, Large Hadron Collider, Electroweak, Dimension-six operators, ATLAS Experiment at CERN, Effective Lagrangian, Electroweak precision test, LHC 8 TeV run, ...
  • We report the discovery of a Milky-Way satellite in the constellation of Antlia. The Antlia 2 dwarf galaxy is located behind the Galactic disc at a latitude of $b\sim 11^{\circ}$ and spans 1.26 degrees, which corresponds to $\sim2.9$ kpc at its distance of 130 kpc. While similar in extent to the Large Magellanic Cloud, Antlia~2 is orders of magnitude fainter with $M_V=-8.5$ mag, making it by far the lowest surface brightness system known (at $32.3$ mag/arcsec$^2$), $\sim100$ times more diffuse than the so-called ultra diffuse galaxies. The satellite was identified using a combination of astrometry, photometry and variability data from Gaia Data Release 2, and its nature confirmed with deep archival DECam imaging, which revealed a conspicuous BHB signal. We have also obtained follow-up spectroscopy using AAOmega on the AAT to measure the dwarf's systemic velocity, $290.9\pm0.5$km/s, its velocity dispersion, $5.7\pm1.1$ km/s, and mean metallicity, [Fe/H]$=-1.4$. From these properties we conclude that Antlia~2 inhabits one of the least dense Dark Matter (DM) halos probed to date. Dynamical modelling and tidal-disruption simulations suggest that a combination of a cored DM profile and strong tidal stripping may explain the observed properties of this satellite. The origin of this core may be consistent with aggressive feedback, or may even require alternatives to cold dark matter (such as ultra-light bosons).
    Star, Proper motion, RR Lyrae star, Velocity dispersion, Dark matter, Half-light radius, Milky Way, Navarro-Frenk-White profile, Luminosity, Hertzsprung-Russell diagram, ...
  • We investigate the holographic entanglement entropy of deformed conformal field theories which are dual to a cutoff AdS space. The holographic entanglement entropy evaluated on a three-dimensional Poincare AdS space with a finite cutoff can be reinterpreted as that of the dual field theory deformed by either a boost or $T \bar{T}$ deformation. For the boost case, we show that, although it trivially acts on the underlying theory, it nontrivially affects the entanglement entropy due to the length contraction. For a three-dimensional AdS, we show that the effect of the boost transformation can be reinterpreted as a rescaling of the energy scale, similar to the $T \bar{T}$ deformation. Under the boost and $T \bar{T}$ deformation, the $c$-function of the entanglement entropy exactly shows the features expected from Zamolodchikov's $c$-theorem. The deformed theory is always stationary at a UV fixed point and monotonically flows to another CFT at the IR fixed point. We also show that the holographic entanglement entropy in a Poincare cutoff AdS space exactly reproduces the result of the $T \bar{T}$ deformed theory on a two-dimensional sphere.
    Entanglement entropy, Conformal field theory, Anti de Sitter space, Renormalisation group flow, Field theory, Quantum field theory, Mutual information, Minimal surface, UV fixed point, Infrared fixed point, ...
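For reference, the textbook two-dimensional relations behind the entropic $c$-function discussion (standard results, not taken from the paper): the entanglement entropy of an interval of length $\ell$ in a CFT with central charge $c$, the entropic $c$-function built from it, and the monotonicity that realizes Zamolodchikov's $c$-theorem.

```latex
S(\ell) = \frac{c}{3}\,\log\frac{\ell}{\epsilon},
\qquad
c(\ell) \equiv 3\,\ell\,\frac{dS}{d\ell},
\qquad
\frac{dc(\ell)}{d\ell} \le 0 \ \text{along the RG flow},
\qquad
c(\ell)\big|_{\text{fixed point}} = c .
```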
  • We introduce an extension of the ELVIS project to account for the effects of the Milky Way galaxy on its subhalo population. Our simulation suite, Phat ELVIS, consists of twelve high-resolution cosmological dark matter-only (DMO) zoom simulations of Milky Way-size $\Lambda$CDM~ haloes ($M_{\rm v} = 0.7-2 \times 10^{12} \,\mathrm{M}_\odot$) along with twelve re-runs with embedded galaxy potentials grown to match the observed Milky Way disk and bulge today. The central galaxy potential destroys subhalos on orbits with small pericenters in every halo, regardless of the ratio of galaxy mass to halo mass. This has several important implications. 1) Most of the $\mathtt{Disk}$ runs have no subhaloes larger than $V_{\rm max} = 4.5$ km s$^{-1}$ within $20$ kpc and a significant lack of substructure going back $\sim 8$ Gyr, suggesting that local stream-heating signals from dark substructure will be rare. 2) The pericenter distributions of Milky Way satellites derived from $\mathit{Gaia}$ data are remarkably similar to the pericenter distributions of subhaloes in the $\mathtt{Disk}$ runs, while the DMO runs drastically over-predict galaxies with pericenters smaller than 20 kpc. 3) The enhanced destruction produces a tension opposite to that of the classic `missing satellites' problem: in order to account for ultra-faint galaxies known within $30$ kpc of the Galaxy, we must populate haloes with $V_\mathrm{peak} \simeq 7$ km s$^{-1}$ ($M \simeq 3 \times 10^{7} \,\mathrm{M}_\odot$ at infall), well below the atomic cooling limit of $V_\mathrm{peak} \simeq 16$ km s$^{-1}$ ($M \simeq 5 \times 10^{8} \,\mathrm{M}_\odot$ at infall). 4) If such tiny haloes do host ultra-faint dwarfs, this implies the existence of $\sim 1000$ satellite galaxies within 300 kpc of the Milky Way.
    Dark matter subhalo, Milky Way, Galaxy, Periapsis, Virial mass, Satellite galaxy, Exploring the Local Volume in Simulations, Faint galaxies, Milky Way satellite, N-body simulation, ...
  • We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1$\mathrm{M}_\odot$ during the first and second observing runs of the Advanced gravitational-wave detector network. During the first observing run (O1), from September $12^\mathrm{th}$, 2015 to January $19^\mathrm{th}$, 2016, gravitational waves from three binary black hole mergers were detected. The second observing run (O2), which ran from November $30^\mathrm{th}$, 2016 to August $25^\mathrm{th}$, 2017, saw the first detection of gravitational waves from a binary neutron star inspiral, in addition to the observation of gravitational waves from a total of seven binary black hole mergers, four of which we report here for the first time: GW170729, GW170809, GW170818 and GW170823. For all significant gravitational-wave events, we provide estimates of the source properties. The detected binary black holes have total masses between $18.6_{-0.7}^{+3.1}\mathrm{M}_\odot$, and $85.1_{-10.9}^{+15.6} \mathrm{M}_\odot$, and range in distance between $320_{-110}^{+120}$ Mpc and $2750_{-1320}^{+1350}$ Mpc. No neutron star - black hole mergers were detected. In addition to highly significant gravitational-wave events, we also provide a list of marginal event candidates with an estimated false alarm rate less than 1 per 30 days. From these results over the first two observing runs, which include approximately one gravitational-wave detection per 15 days of data searched, we infer merger rates at the 90% confidence intervals of 110 - 3840 $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$ for binary neutron stars and 9.7 - 101 $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$ for binary black holes, and determine a neutron star - black hole merger rate 90% upper limit of 610 $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$.
    Laser Interferometer Gravitational-Wave Observatory, Gravitational wave, GW170817, Matched filter, Binary black hole system, Inspiral, Calibration, Mass ratio, Statistics, LIGO GW151226 event, ...
  • Binary systems anchor many of the fundamental relations relied upon in asteroseismology. Masses and radii are rarely constrained better than when measured via orbital dynamics and eclipse depths. Pulsating binaries have much to offer. They are clocks, moving in space, that encode orbital motion in the Doppler-shifted pulsation frequencies. They offer twice the opportunity to obtain an asteroseismic age, which is then applicable to both stars. They enable comparative asteroseismology -- the study of two stars by their pulsation properties, whose only fundamental differences are the mass and rotation rates with which they were born. In eccentric binaries, oscillations can be excited tidally, informing our knowledge of tidal dissipation and resonant frequency locking. Eclipsing binaries offer benchmarks against which the asteroseismic scaling relations can be tested. We review these themes in light of both observational and theoretical developments recently made possible by space-based photometry.
    Star, White dwarf, Eclipses, Binary star, Asteroseismology, Companion, Red giant, Inclination, Main sequence star, Mass ratio, ...
  • The search for periodic signals from blazars has become a widely discussed topic in recent years. In the scenario that such periodic changes originate from the innermost regions of blazars, the signals bear imprints of the processes occurring near the central engine, which is mostly inaccessible to our direct view. Such signals provide insights into various aspects of blazar studies, including the disk-jet connection, magnetic field configuration and, more importantly, strong gravity near supermassive black holes and the release of gravitational waves from binary supermassive black hole systems. In this work, we report the detection of a periodic signal in the radio light curve of the blazar J1043+2408 spanning $\sim$10.5 years. We performed multiple methods of time series analysis, namely epoch folding, Lomb-Scargle periodogram, and discrete auto-correlation function (a periodogram sketch follows below). All three methods consistently reveal a repeating signal with a periodicity of $\sim560$ days. To robustly account for the red-noise processes usually dominant in blazar variability and for other possible artifacts, a large number of Monte Carlo simulations were performed. This allowed us to estimate a high significance (99.9\% local and 99.4\% global) against possible spurious detection. As possible explanations, we discuss a number of scenarios including a binary supermassive black hole system, Lense-Thirring precession and jet precession.
    Blazar, Light curve, Black hole, Quasi-Periodic Oscillations, Time Series, Supermassive black hole, Two-point correlation function, BL Lacertae, Accretion disk, Active Galactic Nuclei, ...
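One of the three methods above, the Lomb-Scargle periodogram, is nearly a one-liner with astropy (mock unevenly sampled data with an injected ~560-day signal; illustrative numbers, not the paper's data):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 3800, 250))                # days, ~10.5 yr span
flux = 1.0 + 0.3 * np.sin(2 * np.pi * t / 560.0) + rng.normal(0, 0.1, t.size)

frequency, power = LombScargle(t, flux).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period ~ {best_period:.0f} days")

# Note: against red noise, significance must come from Monte Carlo simulations
# of the noise process (as the paper does), not from a white-noise false-alarm
# probability.
```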
  • A consistent interpretation is provided for Neolithic Gobekli Tepe and Catalhoyuk as well as European Palaeolithic cave art. It appears they all display the same method for recording dates based on precession of the equinoxes, with animal symbols representing an ancient zodiac. The same constellations are used today in the West, although some of the zodiacal symbols are different. In particular, the Shaft Scene at Lascaux is found to have a similar meaning to the Vulture Stone at Gobekli Tepe. Both can be viewed as memorials of catastrophic encounters with the Taurid meteor stream, consistent with Clube and Napier's theory of coherent catastrophism. The date of the likely comet strike recorded at Lascaux is 15,150 BC to within 200 years, corresponding closely to the onset of a climate event recorded in a Greenland ice core. A survey of radiocarbon dates from Chauvet and other Palaeolithic caves is consistent with this zodiacal interpretation, with a very high level of statistical significance. Finally, the Lion Man of Hohlenstein-Stadel, circa 38,000 BC, is also consistent with this interpretation, indicating this knowledge is extremely ancient and was widespread.
    Equinox, Climate, Statistical significance, Constellations, Meteor streams, Comet, Theory, Surveys, Event, ...
  • This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.
    Graph, Intermediate representation, Optimization, Neural network, Machine learning, Architecture, Inference, Scheduling, Arithmetic, Quantization, ...
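A toy sketch of the lowering idea described above (names and data structures are invented for illustration; this is not Glow's actual IR or API): compound graph operators are rewritten into a small set of linear-algebra primitives, so a new backend only needs to implement the primitives.

```python
# Each node is (op, inputs, output_name).
def lower_node(op, inputs, name):
    if op == "FullyConnected":          # FC(x, W, b) -> Add(MatMul(x, W), b)
        return [("MatMul", [inputs[0], inputs[1]], name + ".mm"),
                ("BroadcastAdd", [name + ".mm", inputs[2]], name)]
    if op == "ReLU":                    # ReLU(x) -> Max(x, 0)
        return [("Max", [inputs[0], "zero"], name)]
    return [(op, inputs, name)]         # already a primitive; pass through

def lower(graph):
    return [n for node in graph for n in lower_node(*node)]

high_level = [("FullyConnected", ["x", "W", "b"], "fc"),
              ("ReLU", ["fc"], "y")]
for node in lower(high_level):
    print(node)
```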
  • In many reinforcement learning tasks, the goal is to learn a policy to manipulate an agent, whose design is fixed, to maximize some notion of cumulative reward. The design of the agent's physical structure is rarely optimized for the task at hand. In this work, we explore the possibility of learning a version of the agent's design that is better suited for its task, jointly with the policy. We propose a minor alteration to the OpenAI Gym framework, where we parameterize parts of an environment, and allow an agent to jointly learn to modify these environment parameters along with its policy. We demonstrate that an agent can learn a better structure of its body that is not only better suited for the task, but also facilitates policy learning. Joint learning of policy and structure may even uncover design principles that are useful for assisted-design applications. Videos of results at https://designrl.github.io/
    Reinforcement learning, Embodied Cognition, Robotics, Machine learning, Hidden layer, Orientation, Gaussian distribution, LIDAR, Architecture, Google.com, ...
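A sketch of how such an environment parameterization might be exposed through a wrapper (our invented interface, assuming the classic `gym` API; not the paper's code):

```python
import numpy as np
import gym

class DesignParamWrapper(gym.Wrapper):
    """Expose a vector of body/design parameters that the learner sets once
    per episode, so design is optimized jointly with the policy."""

    def __init__(self, env, n_design_params=4):
        super().__init__(env)
        self.design = np.zeros(n_design_params)

    def reset(self, design=None, **kwargs):
        # The learner proposes a design at episode start; the environment
        # would rebuild the agent's morphology from it.
        if design is not None:
            self.design = np.clip(design, -1.0, 1.0)
        self._apply_design(self.design)
        return self.env.reset(**kwargs)

    def _apply_design(self, design):
        # Placeholder: a real implementation would resize limbs, masses, etc.
        pass
```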
  • Recently the EDGES collaboration reported an anomalous absorption signal in the sky-averaged 21-cm spectrum around $z=17$. Such a signal may be understood as an indication for an unexpected cooling of the hydrogen gas during or prior to the so called Cosmic Dawn era. Here we explore the possibility that dark matter cooled the gas through velocity-dependent, Rutherford-like interactions. We argue that such interactions require a light mediator that is highly constrained by 5th force experiments and limits from stellar cooling. Consequently, only a hidden or the visible photon can in principle mediate such a force. Neutral hydrogen thus plays a sub-leading role and the cooling occurs via the residual free electrons and protons. We find that these two scenarios are strongly constrained by the predicted dark matter self-interactions and by limits on millicharged dark matter respectively. We conclude that the 21-cm absorption line is unlikely to be the result of gas cooling via the scattering with a dominant component of the dark matter. An order 1\% subcomponent of millicharged dark matter remains a viable explanation.
    Dark matter, Cooling, Hydrogen 21 cm line, EDGES experiment, Hidden photon, Dark matter particle mass, Fifth force, Light mediator, Absorption line, Neutral hydrogen gas, ...
  • We examine spontaneous symmetry breaking of a renormalisable U(1) x U(1) gauge theory coupled to fermions when kinetic mixing is present. We do not assume that the kinetic mixing parameter is small. A rotation plus scaling is used to remove the mixing and put the gauge kinetic terms in the canonical form. Fermion currents are also rotated in a non-orthogonal way by this basis transformation. Through suitable redefinitions the interaction is cast into a diagonal form. This framework, where mixing is absent, is used for subsequent analysis. The symmetry breaking determines the fermionic current which couples to the massless gauge boson. The strength of this coupling as well as the couplings of the massive gauge boson are extracted. This formulation is used to consider a gauged model for dark matter by identifying the massless gauge boson with the photon and the massive state to its dark counterpart. Matching the coupling of the residual symmetry with that of the photon sets a lower bound on the kinetic mixing parameter. We present analytical formulae of the couplings of the dark photon in this model and indicate some physics consequences.
    Kinetic mixing, Symmetry breaking, Hidden photon, Gauge theory, Spontaneous symmetry breaking, Dark matter, Gauge bosons, Photon, Fermion, U(1), ...
  • We consider a hidden sector model of dark matter which is charged under a hidden U(1)_X gauge symmetry. Kinetic mixing of U(1)_X with the Standard Model hypercharge U(1)_Y is allowed to provide communication between the hidden sector and the Standard Model sector. We present various limits on the kinetic mixing parameter and the hidden gauge coupling constant coming from various low energy observables, electroweak precision tests, and the right thermal relic density of the dark matter. Saturating these constraints, we show that the spin-independent elastic cross section of the dark matter off nucleons is mostly below the current experimental limits, but within the future sensitivity. Finally, we analyze the prospect of observing the hidden gauge boson through its dimuon decay channel at hadron colliders.
    Dark matter, Kinetic mixing, Standard Model, Electroweak phase transition, Hidden sector, Bosonization, Tevatron, Spin independent, Relic abundance, Dark matter particle mass, ...
  • We report results from a search for stable particles with charge > $10^{-5}$ e in bulk matter using levitated dielectric microspheres in high vacuum. No evidence for such particles was found in a total sample of 1.4 ng, providing an upper limit on the abundance per nucleon of 2.5 x $10^{-14}$ at the 95% confidence level for the material tested. These results provide the first direct search for single particles with charge < 0.1 e bound in macroscopic quantities of matter and demonstrate the ability to perform sensitive force measurements using optically levitated microspheres in vacuum.
    Lasers, Cooling, Milli-Charged Particle, Optical levitation, Dissipation, Glass, Fractional charge, Abundance, Oil, Frequency comb, ...
  • We set constraints on millicharged particles (mCPs) based on electron scattering data from MiniBooNE and the Liquid Scintillator Neutrino Detector (LSND). Both experiments are found to provide new (and leading) constraints in certain mCP mass windows: 5 - 35 MeV for LSND and 100 - 180 MeV for MiniBooNE. Furthermore, we provide projections for the ongoing SBN program, the Deep Underground Neutrino Experiment (DUNE), and the proposed Search for Hidden Particles (SHiP) experiment. Both DUNE and SHiP are capable of probing parameter space for mCP masses ranging from 5 MeV - 5 GeV that is significantly beyond the reach of existing bounds, including those from collider searches and SLAC's mQ experiment.
    DUNE experiment, Liquid Scintillator Neutrino Detector, MiniBooNE experiment, Milli-Charged Particle, Neutrino, Dark matter, Branching ratio, SHiP experiment, Electron scattering, Electron beam dump, ...
  • Relativistic millicharged particles ($\chi_q$) have been proposed in various extensions to the Standard Model of particle physics. We consider scenarios where they are produced at nuclear reactor cores and via interactions of cosmic rays with the earth's atmosphere. Millicharged particles could also be candidates for dark matter, and become relativistic through acceleration by supernova explosion shock waves. The atomic ionization cross section of $\chi_q$ with matter is derived with the equivalent photon approximation. Smoking-gun signatures with significant enhancement in the differential cross section are identified. New limits on the mass and charge of $\chi_q$ are derived, using data taken with a point-contact germanium detector with 500 g mass functioning at an energy threshold of 300~eV at the Kuo-Sheng Reactor Neutrino Laboratory.
    Dark matter, Cosmic ray, Milli-Charged Particle, Equivalent photon approximation, Differential cross section, Reactor neutrino, Ionization, Standard Model, Shock wave, Charged particle, ...
  • In this LOI we propose a dedicated experiment that would detect "milli-charged" particles produced by pp collisions at LHC Point 5. The experiment would be installed during LS2 in the vestigial drainage gallery above UXC and would not interfere with CMS operations. With 300 fb$^{-1}$ of integrated luminosity, sensitivity to a particle with charge $\mathcal{O}(10^{-3})~e$ can be achieved for masses of $\mathcal{O}(1)$ GeV, and charge $\mathcal{O}(10^{-2})~e$ for masses of $\mathcal{O}(10)$ GeV, greatly extending the parameter space explored for particles with small charge and masses above 100 MeV.
    Large Hadron Collider, CMS experiment, Photomultiplier tubes, Luminosity, Milli-Charged Particle, Dark current, Cooling, Particle detector, Integrated luminosity, Hypercharge, ...
  • Recently, a search for milli-charged particles produced at the LHC has been proposed. The experiment, named milliQan, is expected to obtain sensitivity to charges of $10^{- 1} - 10^{-3}e$ for masses in the 0.1 - 100 GeV range. The detector is composed of 3 stacks of 80 cm long plastic scintillator arrays read out by PMTs. It will be installed in an existing tunnel 33 m from the CMS interaction point at the LHC, with 17 m of rock shielding to suppress beam backgrounds. In the fall of 2017 a 1% scale "demonstrator" of the proposed detector was installed at the planned site in order to study the feasibility of the experiment, focusing on understanding various background sources such as radioactivity of materials, PMT dark current, cosmic rays, and beam induced backgrounds. The data from the demonstrator provides a unique opportunity to understand the backgrounds and to optimize the design of the full detector.
    Large Hadron Collider, Milli-Charged Particle, Photomultiplier tubes, Muon, Cosmic ray muon, Multidimensional Array, Dark current, Radioactivity, Cosmic ray, Instantaneous luminosity, ...
  • New physics that exhibits irregular tracks such as kinks, intermittent hits or decay in flight may easily be missed at hadron colliders. We demonstrate this by studying viable models of light, O(10 GeV), colored particles that decay predominantly inside the tracker. Such particles can be produced at staggering rates, and yet may not be identified or even triggered on at the LHC, unless specifically searched for. In addition, the models we study provide an explanation for the original measurement of the anomalous charged track distribution by CDF. The presence of irregular tracks in these models reconcile that measurement with the subsequent reanalysis and the null results of ATLAS and CMS. Our study clearly illustrates the need for a comprehensive study of irregular tracks at the LHC.
    Hadronization, CMS experiment, Fractionally charged particle, Tevatron, Fractional charge, Reheating, Charged particle, Ionization, Liquids, Cosmic ray, ...
  • Dark sectors, consisting of new, light, weakly-coupled particles that do not interact with the known strong, weak, or electromagnetic forces, are a particularly compelling possibility for new physics. Nature may contain numerous dark sectors, each with their own beautiful structure, distinct particles, and forces. This review summarizes the physics motivation for dark sectors and the exciting opportunities for experimental exploration. It is the summary of the Intensity Frontier subgroup "New, Light, Weakly-coupled Particles" of the Community Summer Study 2013 (Snowmass). We discuss axions, which solve the strong CP problem and are an excellent dark matter candidate, and their generalization to axion-like particles. We also review dark photons and other dark-sector particles, including sub-GeV dark matter, which are theoretically natural, provide for dark matter candidates or new dark matter interactions, and could resolve outstanding puzzles in particle and astro-particle physics. In many cases, the exploration of dark sectors can proceed with existing facilities and comparatively modest experiments. A rich, diverse, and low-cost experimental program has been identified that has the potential for one or more game-changing discoveries. These physics opportunities should be vigorously pursued in the US and elsewhere.
    Dark matter, Hidden photon, Axion, Dark sector, Chameleon, Axion-like particle, Standard Model, Light dark matter, Milli-Charged Particle, Dark energy, ...
  • We have considered the effect of the reduction of the solar neutrino flux on earth due to the deflection of a charged neutrino by the magnetic field of the solar convective zone. The antisymmetry of this magnetic field about the plane of the solar equator induces an anisotropy of the solar neutrino flux, thus creating a deficit of the neutrino flux on the earth. The deficit has been estimated in terms of solar and neutrino parameters, and the condition for a 50\% deficit has been obtained: $Q_{\nu}\,\mathrm{grad}\,H \gtrsim 10^{-18}\,e\,\mathrm{G/cm}$, where $Q_{\nu}$ is the neutrino electric charge, $\mathrm{grad}\,H$ is the gradient of the solar toroidal magnetic field, and $e$ is the electron charge. Some attractive experimental consequences of this scenario are qualitatively discussed.
    Neutrino, Neutrino flux, Solar neutrino, Convective zone, Solar Magnetic Field, Earth, Solar neutrino problem, Sun, Toroidal magnetic field, Anisotropy, ...
  • The low-energy approach to electric charge quantization predicts physics beyond the minimal standard model. A model-independent approach via effective Lagrangians is used to examine the possible new physics, which may manifest itself indirectly through family-lepton-number violating rare decays.
    Standard Model, Quantization, Hypercharge, Sterile neutrino, Effective Lagrangian, Yukawa coupling, Charge quantization, Rare decay, Weak hypercharge, Seesaw mechanism, ...
  • In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. There is however mounting evidence suggesting that these anomaly constraints are not as well motivated as the classical constraints. In light of this we discuss possible modifications of the minimal standard model which will give us complete electric charge quantisation from classical constraints alone. Because these modifications to the standard model involve the consideration of baryon number violating scalar interactions, we present a complete catalogue of the simplest ways to modify the standard model so as to introduce explicit baryon number violation. This has implications for proton decay searches and baryogenesis.
    Baryon number, Standard Model, Decay width, Baryon number violation, Proton decay, Hypercharge, Sterile neutrino, Yukawa coupling, Coupling constant, Gauge theory, ...
  • We show that the SuperKamiokande atmospheric neutrino results explain electric charge quantisation, provided that the oscillation mode is $\nu_{\mu} \to \nu_{\tau}$ and that the neutrino mass is of Majorana type.
    Atmospheric neutrino, Standard Model, Neutrino mass, Anomaly cancellation, Sterile neutrino, Dirac mass term, Gravitational anomaly, Neutrino oscillations, Neutrino, Gauge invariance, ...
  • Experimentally it has been known for a long time that the electric charges of the observed particles appear to be quantized. An approach to understanding electric charge quantization that can be used for gauge theories with explicit $U(1)$ factors -- such as the standard model and its variants -- is pedagogically reviewed and discussed in this article. This approach uses the allowed invariances of the Lagrangian and their associated anomaly cancellation equations. We demonstrate that charge may be de-quantized in the three-generation standard model with massless neutrinos, because differences in family-lepton-numbers are anomaly-free. We also review the relevant experimental limits. Our approach to charge quantization suggests that the minimal standard model should be extended so that family-lepton-number differences are explicitly broken. We briefly discuss some candidate extensions (e.g. the minimal standard model augmented by Majorana right-handed neutrinos).
    Standard Model, Quantization, Charge quantization, Hypercharge, Sterile neutrino, Anomaly cancellation, Global symmetry, Neutrino, Higgs doublet, Vacuum expectation value, ...
  • The PVLAS anomaly can be explained if there exist millicharged particles of mass $\lesssim 0.1$ eV and electric charge $\epsilon \sim 10^{-6} e$. We point out that such particles occur naturally in spontaneously broken mirror models. We argue that this interpretation of the PVLAS anomaly is not in conflict with astrophysical constraints due to the self interactions of the millicharged particles, which lead them to be trapped within stars. This conclusion also holds for a generic paraphoton model.
    PVLAS, Milli-Charged Particle, Star, Mean free path, Hidden sector, Big bang nucleosynthesis, Higgs potential, Cosmic microwave background, Yukawa coupling, Local thermal equilibrium, ...
  • In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. We argue that the anomaly conditions are not quite as well motivated as the classical constraints, since it is possible that new fermions could exist which cancel potential anomalies. For this reason we examine the classically allowed electric charges of the known fermions and we point out that the electric charge of the tau neutrino is classically allowed to be non-zero. The experimental bound on the electric charge of the tau neutrino is many orders of magnitude weaker than for any other known neutrino. We discuss possible modifications of the minimal standard model such that electric charge is quantized classically.
    Standard Model, Quantization, Neutrino, Tau neutrino, Baryon number, Hypercharge, Down quark, Vacuum expectation value, Anomaly cancellation, Majorana mass, ...
  • This is an account of the author's recollections of the turbulent days preceding the establishment of the Standard Model as an accurate description of all known elementary particles and forces.
    Renormalization, Gauge theory, Quantum chromodynamics, Gauge invariance, Chirality, Yang-Mills theory, Weak interaction, Instanton, Quantum field theory, Quantum electrodynamics, ...
  • Six major frameworks have emerged attempting to describe particle physics beyond the Standard Model. Despite their different theoretical genera, these frameworks have a number of common phenomenological features and problems. While it will be possible (and desirable) to conduct model-independent searches for new physics at the LHC, it is equally important to develop robust methods to discriminate between BSM 'look-alikes'.
    Standard Model, Extra dimensions, Supersymmetry breaking, Superpartner, Higgs boson, Dark matter, BSM physics, Planck scale, Particle mass, Global symmetry, ...
  • Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned optimizers may similarly outperform current hand-designed optimizers, especially for specific problems. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process. The resulting gradients are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance. This allows us to train neural networks to perform optimization of a specific task faster than well tuned first-order methods. Moreover, by training the optimizer against validation loss (as opposed to training loss), we are able to learn optimizers that train networks to better generalization than first order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement in test loss.
    Optimization, Statistical estimator, Neural network, Backpropagation, Scheduling, Convolutional neural network, Deep learning, Graph, Hidden layer, Architecture, ...
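The "dynamically weighting two unbiased gradient estimators" step above can be sketched as inverse-variance weighting (our simplification of the paper's scheme; the weights here are estimated from the same samples, so the combination is only approximately unbiased):

```python
import numpy as np

def combine_estimators(g_a_samples, g_b_samples):
    """Combine two noisy estimates of the same gradient, weighting each by
    its inverse (scalar proxy) variance so the lower-variance one dominates."""
    g_a, g_b = g_a_samples.mean(0), g_b_samples.mean(0)
    v_a = g_a_samples.var(0).mean() + 1e-12
    v_b = g_b_samples.var(0).mean() + 1e-12
    w = (1.0 / v_a) / (1.0 / v_a + 1.0 / v_b)
    return w * g_a + (1.0 - w) * g_b

rng = np.random.default_rng(0)
true_g = np.ones(3)
g_a = true_g + rng.normal(0, 0.1, (32, 3))   # low-variance estimator
g_b = true_g + rng.normal(0, 1.0, (32, 3))   # high-variance estimator
print(combine_estimators(g_a, g_b))          # close to true_g, dominated by g_a
```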
  • Discovery of atomistic systems with desirable properties is a major challenge in chemistry and material science. Here we introduce a novel, autoregressive, convolutional deep neural network architecture that generates molecular equilibrium structures by sequentially placing atoms in three-dimensional space. The model estimates the joint probability over molecular configurations with tractable conditional probabilities which only depend on distances between atoms and their nuclear charges. It combines concepts from state-of-the-art atomistic neural networks with auto-regressive generative models for images and speech. We demonstrate that the architecture is capable of generating molecules close to equilibrium for constitutional isomers of C$_7$O$_2$H$_{10}$.
    Architecture, Deep Neural Networks, Neural network, Generative model, Machine learning, Quantum chemistry, Molecular structure, Optimization, Recurrent neural network, Graph, ...
  • We present the results from our COS circumgalactic medium (CGM) compendium (CCC), a survey of the CGM at z<1 using HI-selected absorbers with 15<log N(HI)<19. We focus here on 82 partial Lyman limit systems (pLLSs, 16.2<log N(HI)<17.2) and 29 LLSs (17.2<log N(HI)<19). Using Bayesian techniques and Markov-chain Monte Carlo sampling of a grid of photoionization models, we derive the posterior probability distribution functions (PDFs) for the metallicity of each absorber in CCC. We show that the combined pLLS metallicity PDF at z<1 has two main peaks at [X/H]=-1.7 and -0.4, with a strong dip at [X/H]=-1. The metallicity PDF of the LLSs might be more complicated than a unimodal or bimodal distribution. The pLLSs and LLSs probe a similar range of metallicities -3<[X/H]<+0.4, but the fraction of very metal-poor absorbers with [X/H]<-1.4 is much larger for the pLLSs than the LLSs. In contrast, absorbers with log N(HI)>19 have mostly -1<[X/H]<0 at z<1. The metal-enriched gas probed by pLLSs and LLSs confirms that galaxies have been enriching their CGM over billions of years. Surprisingly, despite this enrichment, there is also abundant metal-poor CGM gas (41-59% of the pLLSs have [X/H]<-1.4), representing a reservoir of near-pristine gas around z<1 galaxies. We compare our empirical results to recent cosmological zoom simulations, finding some discrepancies, including an overabundance of metal-enriched CGM gas in simulations.
    Metallicity, Lyman Limit System, Galaxy, Ionization, Monte Carlo Markov chain, FIRE simulations, Cosmic Origins Spectrograph, Cumulative distribution functions, EAGLE simulation project, Photoionization, ...
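The per-absorber inference described above reduces to sampling a low-dimensional posterior over a photoionization-model grid; a self-contained Metropolis sketch with a mock 1-D grid and likelihood (illustrative numbers only, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(7)

# Mock grid: predicted ion column density as a function of metallicity [X/H],
# compared against one "observed" value with Gaussian errors.
grid_Z = np.linspace(-3.0, 0.4, 35)
grid_logN = 13.0 + 0.9 * grid_Z
obs_logN, sigma = 12.0, 0.15

def log_like(Z):
    if not (grid_Z[0] <= Z <= grid_Z[-1]):
        return -np.inf                           # flat prior on the grid range
    model = np.interp(Z, grid_Z, grid_logN)      # interpolate the model grid
    return -0.5 * ((model - obs_logN) / sigma) ** 2

# Simple Metropolis sampler for the posterior PDF of [X/H].
Z, chain = -1.0, []
for _ in range(20000):
    Z_new = Z + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_like(Z_new) - log_like(Z):
        Z = Z_new
    chain.append(Z)

post = np.array(chain[2000:])                    # discard burn-in
print(f"[X/H] = {post.mean():.2f} +/- {post.std():.2f}")
```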