Recently bookmarked papers

with concepts:
  • We provide an assessment of the energy dependence of key measurements within the scope of the machine parameters for a U.S. based Electron-Ion Collider (EIC) outlined in the EIC White Paper. We first examine the importance of the physics underlying these measurements in the context of the outstanding questions in nuclear science. We then demonstrate, through detailed simulations of the measurements, that the likelihood of transformational scientific insights is greatly enhanced by making the energy range and reach of the EIC as large as practically feasible.
    Electron-Ion-Collider, Measurement, Energy, Simulations, Likelihood, ...
  • Dark matter may be in the form of non-baryonic structures such as compact subhalos and boson stars. Structures weighing between asteroid and solar masses may be discovered via gravitational microlensing, an astronomical probe that has in the past helped constrain the population of primordial black holes and baryonic MACHOs. We investigate the non-trivial effect of the size of and density distribution within these structures on the microlensing signal, and constrain their populations using the EROS-2 and OGLE-IV surveys. Structures larger than a solar radius are generally constrained more weakly than point-like lenses, but stronger constraints may be obtained for structures with mass distributions that give rise to caustic crossings or produce larger magnifications.
    Dark matter, Gravitational microlensing, Star, Dark matter subhalo, Impact parameter, Einstein radius, Asteroids, Solar mass, Solar radius, Mass distribution, ...
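    A minimal sketch (not from the paper) of the standard point-lens magnification curve that the extended structures above modify; the event parameters u0, t0 and tE are illustrative assumptions.
```python
import numpy as np

def point_lens_magnification(u):
    """Standard point-source, point-lens magnification A(u)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Illustrative event parameters (assumptions, not fitted values)
u0, t0, tE = 0.3, 0.0, 30.0              # impact parameter, peak time, Einstein time [days]
t = np.linspace(-60.0, 60.0, 241)        # observation times [days]
u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
A = point_lens_magnification(u)
print(f"peak magnification ~ {A.max():.2f}")
# Extended lenses (subhalos, boson stars) replace A(u) by a profile-dependent magnification;
# lenses much larger than their Einstein radius generally magnify less, as noted above.
```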
  • Superluminous supernovae radiate up to 100 times more energy than normal supernovae. The origin of this energy and the nature of their stellar progenitors are poorly understood. We identify neutral iron lines in the spectrum of one such transient, SN 2006gy, and show that they require a large mass of iron (>~0.3 Msun) expanding at 1500 km/s. We demonstrate that a model of a standard Type Ia supernova hitting a shell of circumstellar material produces a light curve and late-time iron-dominated spectrum that match SN 2006gy. In such a scenario, common envelope evolution of the progenitor system can synchronize envelope ejection and supernova explosion and may explain these bright transients.
    SN 2006gy, Ejecta, Photometry, Supernova, Light curve, Luminosity, Ionization, Supernova Type Ia, Spectral energy distribution, White dwarf, ...
  • The time delay measured between the images of gravitationally lensed quasars probes a combination of the angular diameter distance to the source-lens system and the mass density profile of the lens. Observational campaigns to measure such systems have reported a determination of the Hubble parameter $H_0$ that shows significant tension with independent determinations based on the cosmic microwave background (CMB) and large scale structure (LSS). We show that lens mass models with a cored component coexisting with a stellar cusp probe a degenerate direction in the lens model parameter space, corresponding to an approximate mass sheet transformation. This family of lens models has not been considered by the cosmographic analyses. Once this component is added to the model, the cosmographic error budget becomes dependent on stellar kinematics uncertainties. We propose that a dark matter core coexisting with a stellar cusp could bring the lensing measurements of $H_0$ into accord with the CMB/LSS value.
    Time delay, Cosmic microwave background, Large scale structure, Mass-sheet degeneracy, Quasar, Navarro-Frenk-White profile, Dark matter, Kinematics, Deflection angle, Stellar kinematics, ...
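    For reference, the standard mass-sheet transformation behind this degeneracy (a generic relation, not specific to the cored-plus-cusp models above) rescales the convergence as $\kappa_\lambda(\theta) = \lambda\,\kappa(\theta) + (1-\lambda)$; image positions and flux ratios are unchanged, while predicted time delays rescale as $\Delta t \to \lambda\,\Delta t$, so the Hubble constant inferred from a fixed measured delay rescales as $H_0 \to \lambda\,H_0$.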
  • We determine the Hubble constant $H_0$ precisely (2.3\% uncertainty) in a manner independent of cosmological model through Gaussian process regression, using strong lensing and supernova data. Strong gravitational lensing of a variable source can provide a time-delay distance $D_{\Delta t}$ and angular diameter distance to the lens $D_{\rm{d}}$. These absolute distances can anchor Type Ia supernovae, which give an excellent constraint on the shape of the distance-redshift relation. Updating our previous results to use the H0LiCOW program's milestone dataset consisting of six lenses, four of which have both $D_{\Delta t}$ and $D_{\rm{d}}$ measurements, we obtain $H_0=72.8_{-1.7}^{+1.6}\rm{\ km/s/Mpc}$ for a flat universe and $H_0=77.3_{-3.0}^{+2.2}\rm{\ km/s/Mpc}$ for a non-flat universe. We carry out several consistency checks on the data and find no statistically significant tensions, though a noticeable redshift dependence persists in a particular systematic manner that we investigate. Speculating on the possibility that this trend of derived Hubble constant with lens distance is physical, we show how this can arise through modified gravity light propagation, which would also impact the weak lensing $\sigma_8$ tension.
    Supernova, Time delay, Strong gravitational lensing, Modified gravity, Cosmology, Cold dark matter, Angular diameter distance, Hubble constant, Cosmic microwave background, Cosmological model, ...
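    A minimal sketch (synthetic data, not the H0LiCOW lenses or the supernova compilation) of the idea of model-independent Gaussian process regression of the distance-redshift relation, from which $H_0$ follows as $c / D'(z=0)$; the fiducial $H_0$, error level, and kernel choices below are assumptions.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

c = 299792.458                      # speed of light [km/s]
H0_true = 72.0                      # fiducial value used only to generate mock data
rng = np.random.default_rng(0)

# Mock calibrated distances with 2% errors (low-z Taylor expansion of D(z), an assumption)
z = np.sort(rng.uniform(0.01, 0.4, 40))
D = (c / H0_true) * (z + 0.25 * z**2)              # Mpc
D_obs = D + rng.normal(0.0, 0.02 * D)
y = D_obs / 1000.0                                 # work in Gpc so GP amplitudes are O(1)

gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(0.3),
                              alpha=(0.02 * D / 1000.0) ** 2)
gp.fit(z[:, None], y)

# Slope of the GP mean at z -> 0 gives c/H0 (numerical derivative)
eps = 1e-3
dDdz = 1000.0 * (gp.predict([[eps]]) - gp.predict([[0.0]]))[0] / eps
print(f"recovered H0 ~ {c / dDdz:.1f} km/s/Mpc (fiducial {H0_true})")
```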
  • We introduce LATIS, the Ly$\alpha$ Tomography IMACS Survey, a spectroscopic survey at Magellan designed to map the z=2.2-2.8 intergalactic medium (IGM) in three dimensions by observing the Ly$\alpha$ forest in the spectra of galaxies and QSOs. Within an area of 1.7 deg${}^2$, we will observe approximately half of $\gtrsim L^*$ galaxies at z=2.2-3.2 for typically 12 hours, providing a dense network of sightlines piercing the IGM with an average transverse separation of 2.5 $h^{-1}$ comoving Mpc (1 physical Mpc). At these scales, the opacity of the IGM is expected to be closely related to the dark matter density, and LATIS will therefore map the density field in the $z \sim 2.5$ universe at $\sim$Mpc resolution over the largest volume to date. Ultimately LATIS will produce approximately 3800 spectra of z=2.2-3.2 galaxies that probe the IGM within a volume of $4 \times 10^6 h^{-3}$ Mpc${}^3$, large enough to contain a representative sample of structures from protoclusters to large voids. Observations are already complete over one-third of the survey area. In this paper, we describe the survey design and execution. We present the largest IGM tomographic maps at comparable resolution yet made. We show that the recovered matter overdensities are broadly consistent with cosmological expectations based on realistic mock surveys, that they correspond to galaxy overdensities, and that we can recover structures identified using other tracers. LATIS is conducted in Canada-France-Hawaii Telescope Legacy Survey fields, including COSMOS. Coupling the LATIS tomographic maps with the rich data sets collected in these fields will enable novel studies of environment-dependent galaxy evolution and the galaxy-IGM connection at cosmic noon.
    Galaxy, Quasar, Intergalactic medium, Lyman-alpha forest, Lyman break galaxy, Protoclusters, COSMOS survey, Canada-France-Hawaii Telescope Legacy Survey, Milky Way, Cosmic void, ...
  • In this paper, we consider the nonlinear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered to reconstruct the estimator from the random noisy data. In this statistical learning setting, we derive the rates of convergence for the regularized solution under certain assumptions on the nonlinear forward operator and the prior assumptions. We discuss estimates of the reconstruction error using the approach of reproducing kernel Hilbert spaces.
    Inverse problems, Tikhonov regularization, Effective dimension, Statistical estimator, Regularization, Regularization scheme, Positive semi definite, Square-integrable function, Covariance, Model selection, ...
  • We study the linear ill-posed inverse problem with noisy data in the statistical learning setting. Approximate reconstructions from random noisy data are sought with general regularization schemes in Hilbert scale. We discuss the rates of convergence for the regularized solution under the prior assumptions and a certain link condition. We express the error in terms of certain distance functions. For regression functions with smoothness given in terms of source conditions the error bound can then be explicitly established.
    Regularization, Regularization scheme, Inverse problems, Effective dimension, Regression, Tikhonov regularization, Positive semi definite, Covariance, Self-adjoint operator, Unbounded operator, ...
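    A minimal numerical sketch of Tikhonov regularization for a discretized linear ill-posed problem, with a first-difference penalty playing the role of a Hilbert-scale smoothness norm; the Gaussian blur operator, noise level, and grid size are assumptions, and this deterministic toy omits the statistical-learning setting of the two papers above.
```python
import numpy as np

n = 200
grid = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((grid - 0.4) / 0.1) ** 2)            # unknown "true" solution

# Smoothing (ill-posed) forward operator: a discretized Gaussian convolution
A = np.exp(-((grid[:, None] - grid[None, :]) / 0.03) ** 2)
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
y = A @ x_true + rng.normal(0.0, 1e-3, n)              # noisy data

L = np.diff(np.eye(n), axis=0)                          # first-difference smoothness penalty

def tikhonov(A, y, L, alpha):
    """Minimize ||A x - y||^2 + alpha ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ y)

for alpha in (1e-6, 1e-4, 1e-2):
    x_alpha = tikhonov(A, y, L, alpha)
    print(f"alpha = {alpha:.0e}   reconstruction error = {np.linalg.norm(x_alpha - x_true):.3f}")
```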
  • A grand challenge of 21st-century cosmology is to accurately estimate the cosmological parameters of our Universe. A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe. Galaxy surveys provide the means to map out cosmic large-scale structure in three dimensions. Information about galaxy locations is typically summarized in a "single" function of scale, such as the galaxy correlation function or power spectrum. We show that it is possible to estimate these cosmological parameters directly from the distribution of matter. This paper presents the application of deep 3D convolutional networks to a volumetric representation of dark-matter simulations, as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum-likelihood point estimates using "cosmological models". This opens the way to estimating the parameters of our Universe with higher accuracy.
    Cosmological parameters, Cosmology, N-body simulation, Machine learning, Statistical estimator, Dark Matter Density Profile, Galaxy, Two-point correlation function, Large scale structure, Orthonormal basis, ...
  • Deep learning is a promising tool to determine the physical model that describes our universe. To handle the considerable computational cost of this problem, we present CosmoFlow: a highly scalable deep learning application built on top of the TensorFlow framework. CosmoFlow uses efficient implementations of 3D convolution and pooling primitives, together with improvements in threading for many element-wise operations, to improve training performance on Intel Xeon Phi processors. We also utilize the Cray PE Machine Learning Plugin for efficient scaling to multiple nodes. We demonstrate fully synchronous data-parallel training on 8192 nodes of Cori with 77% parallel efficiency, achieving 3.5 Pflop/s sustained performance. To our knowledge, this is the first large-scale science application of the TensorFlow framework at supercomputer scale with fully synchronous training. These enhancements enable us to process large 3D dark matter distributions and predict the cosmological parameters $\Omega_M$, $\sigma_8$ and $n_s$ with unprecedented accuracy.
    Deep learning, Machine learning, Cosmological parameters, Deep Neural Networks, High Performance Computing, Training set, Caching, Convolutional neural network, Python, Cosmology, ...
  • The Quijote simulations are a set of 43100 full N-body simulations spanning more than 7000 cosmological models in the $\{\Omega_{\rm m}, \Omega_{\rm b}, h, n_s, \sigma_8, M_\nu, w \}$ hyperplane. At a single redshift the simulations contain more than 8.5 trillion particles over a combined volume of 43100 $(h^{-1}{\rm Gpc})^3$. Billions of dark matter halos and cosmic voids have been identified in the simulations, whose runs required more than 35 million core hours. The Quijote simulations have been designed for two main purposes: 1) to quantify the information content of cosmological observables, and 2) to provide enough data to train machine learning algorithms. In this paper we describe the simulations and show a few of their applications. We also release the petabyte of data generated, comprising hundreds of thousands of simulation snapshots at multiple redshifts, halo and void catalogs, together with millions of summary statistics such as power spectra, bispectra, correlation functions, marked power spectra, and estimated probability density functions.
    Statistics, Cosmic void, Cosmological parameters, Neutrino, Cosmology, Massive neutrino, Cold dark matter, Two-point correlation function, Redshift space, Machine learning, ...
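    A minimal sketch (a white-noise field, not a Quijote snapshot) of how one of the released summary statistics, the power spectrum, can be measured from a density grid by binning FFT mode amplitudes in |k|; the grid size and box length are assumptions.
```python
import numpy as np

ngrid, box = 64, 1000.0                              # cells per side, box size in Mpc/h (assumed)
rng = np.random.default_rng(2)
delta = rng.normal(size=(ngrid, ngrid, ngrid))       # stand-in overdensity field (white noise)

delta_k = np.fft.rfftn(delta)
kf = 2.0 * np.pi / box                               # fundamental mode
kx = np.fft.fftfreq(ngrid, d=1.0 / ngrid) * kf
kz = np.fft.rfftfreq(ngrid, d=1.0 / ngrid) * kf
kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)

power = np.abs(delta_k) ** 2 * box**3 / ngrid**6     # standard FFT power-spectrum normalization
bins = np.arange(kf, kf * ngrid / 2, kf)
which = np.digitize(kmag.ravel(), bins)
pk = np.array([power.ravel()[which == i].mean() for i in range(1, len(bins))])
k_centers = 0.5 * (bins[:-1] + bins[1:])
print(k_centers[:3], pk[:3])                         # for white noise, P(k) ~ box^3 / ngrid^3
```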
  • We perform an analysis of the Cosmic Web as a complex network, built on a $\Lambda$CDM cosmological simulation. For each node, which in this case is a dark matter halo formed in the simulation, we compute 10 network metrics that characterize the role and position of the node in the network. The relation of these metrics to the topological affiliation of the halo, i.e. the type of large-scale structure to which it belongs, is then investigated. In particular, the correlation coefficients between network metrics and topology classes are computed. We have applied different machine learning methods to test the predictive power of the obtained network metrics and to check whether one could use network analysis as a tool for establishing the topology of the large-scale structure of the Universe. Results of such predictions, combined in the confusion matrix, show that it is not possible to give a good prediction of the topology of the Cosmic Web (the average score is $\approx$ 70 $\%$) based only on the coordinates and velocities of the nodes (halos), yet network metrics can give a hint about the topological landscape of the matter distribution.
    Cosmic web, Large scale structure, Clustering coefficient, Classification, Centrality of collision, Machine learning, Cosmic void, Complex network, Simulations of structure formation, Galaxy, ...
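    A minimal sketch (uniform random points, not simulation halos) of the kind of halo graph and per-node network metrics described above, using a fixed linking length; the linking length, box size, and choice of metrics are assumptions, and the topology labels the paper uses to train classifiers are omitted.
```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
positions = rng.uniform(0.0, 100.0, size=(500, 3))    # stand-in halo positions [Mpc/h]
linking_length = 8.0                                   # assumed linking length [Mpc/h]

tree = cKDTree(positions)
graph = nx.Graph()
graph.add_nodes_from(range(len(positions)))
graph.add_edges_from(tree.query_pairs(r=linking_length))

# Two of the per-node network metrics of the kind fed to the classifiers in the paper
degree = dict(graph.degree())
clustering = nx.clustering(graph)
features = np.array([[degree[i], clustering[i]] for i in graph.nodes()])
print("mean degree:", features[:, 0].mean(), " mean clustering coefficient:", features[:, 1].mean())
```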
  • We propose a light-weight deep convolutional neural network (CNN) to estimate the cosmological parameters from simulated 3-dimensional dark matter distributions with high accuracy. The training set is based on 465 realizations of a cubic box with a side length of $256\ h^{-1}\ \rm Mpc$, sampled with $128^3$ particles interpolated over a cubic grid of $128^3$ voxels. These volumes have cosmological parameters varying within the flat $\Lambda$CDM parameter space of $0.16 \leq \Omega_m \leq 0.46$ and $2.0 \leq 10^9 A_s \leq 2.3$. The neural network takes as input cubes with $32^3$ voxels and has three convolution layers and three dense layers, together with batch normalization and pooling layers. In the final predictions from the network we find a $2.5\%$ bias on the amplitude $\sigma_8$ that cannot easily be resolved by continued training. We correct this bias to obtain unprecedented accuracy in the cosmological parameter estimation, with statistical uncertainties of $\delta \Omega_m$=0.0015 and $\delta \sigma_8$=0.0029, which are better than the results of previous CNN works by an order of magnitude. Compared with a 2-point analysis method using clustering regions of 0-130 and 10-130 $h^{-1}$ Mpc, the CNN constraints on $\Omega_m$/$\sigma_8$ are 3.5/2.3 and 19/11 times more precise, respectively. Finally, we conduct preliminary checks of the error-tolerance abilities of the neural network, and find that it exhibits robustness against smoothing, masking, random noise, global variation, rotation, reflection, and simulation resolution. Those effects are well understood in typical clustering analyses, but had not been tested before for the CNN approach.
    Neural network, Convolutional neural network, Cosmological parameters, Large scale structure, Cosmology, Architecture, Deep learning, Statistical error, Galaxy, Two-point correlation function, ...
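    A minimal PyTorch sketch of a network with the shape described above (a 32^3-voxel input, three convolution layers, three dense layers, batch normalization and pooling); the channel and unit counts are illustrative assumptions, not the paper's exact architecture.
```python
import torch
import torch.nn as nn

class CosmoCNN(nn.Module):
    """Toy 3D CNN mapping a 32^3 density cube to two parameters (e.g. Omega_m, sigma_8)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                                   # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                                   # 16 -> 8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.MaxPool3d(2),                                   # 8 -> 4
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),                                  # Omega_m, sigma_8
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CosmoCNN()
cube = torch.randn(4, 1, 32, 32, 32)         # a batch of 4 toy density cubes
print(model(cube).shape)                      # torch.Size([4, 2])
```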
  • Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed the non-Gaussian features in the Universe known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ a large ensemble of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, to evolve trillions of galaxies over billions of years even with the simplest physics is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, the 2-point correlation, and the 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study demonstrates, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.
    Second order Lagrangian perturbation theory, Structure formation, Transfer function, Origin of the universe, Cosmological parameters, Zeldovich approximation, Matter power spectrum, Ground truth, Deep learning, Perturbation theory, ...
  • The nature of dark matter remains uncertain despite several decades of dedicated experimental searches. The lack of tangible evidence for its non-gravitational interactions with ordinary matter gives good motivation for exploring new avenues of inferring its properties through purely gravitational probes. In particular, addressing its small-scale distribution could provide valuable new insights into its particle nature, either confirming the predictions of the cold dark matter hypothesis or favouring models with a suppressed small-scale matter power spectrum. In this work a machine learning technique for constraining the abundance of DM subhalos through the analysis of the galactic stellar field is proposed. While the study is the first of its kind and hence applied only to simplified synthetic datasets, the obtained results show promising potential for addressing the amount of DM substructure present within the Milky Way. Using accurate astrometric observations, which became available only recently and are expected to improve rapidly in the near future, there is good hope of reaching the sensitivity needed for detecting DM subhalos with masses down to $10^7 M_\odot$.
    Dark matter subhalo, Dark matter, Star, Of stars, Milky Way, Velocity dispersion, Machine learning, Cold dark matter, Image Processing, Dark matter particle, ...
  • Rapid gamma-ray flares pose an astrophysical puzzle, requiring mechanisms both to accelerate energetic particles and to produce fast observed variability. These dual requirements may be satisfied by collisionless relativistic magnetic reconnection. On the one hand, relativistic reconnection can energize gamma-ray emitting electrons. On the other, as previous kinetic simulations have shown, the reconnection acceleration mechanism preferentially focuses high-energy particles -- and their emitted photons -- into beams, which may create rapid blips in flux as they cross a telescope's line of sight. Using a series of 2D pair-plasma particle-in-cell simulations, we explicitly demonstrate the critical role played by radiative cooling in mediating the observable signatures of this `kinetic beaming' effect. Only in our efficiently cooled simulations do we measure kinetic beaming beyond one light crossing time of the reconnection layer. We find a correlation between the cooling strength and the photon energy range across which persistent kinetic beaming occurs: stronger cooling coincides with a wider range of beamed photon energies. We also apply our results to rapid gamma-ray flares in flat-spectrum radio quasars, suggesting that a paradigm of radiatively efficient kinetic beaming constrains relevant emission models. In particular, beaming-produced variability may be more easily realized in two-zone (e.g. spine-sheath) setups, with Compton seed photons originating in the jet itself, rather than in one-zone external Compton scenarios.
    Cooling, Inverse Compton, Radiative cooling, Magnetic reconnection, Flat spectrum radio quasar, Lorentz factor, Gamma-ray flares, Blazar, Plasmoid, Torus, ...
  • DUNE, with its cutting-edge technology, is designed to study neutrino science and proton decay physics. The facility can be further exploited to study ground-breaking questions such as the origin of matter, the unification of forces, and dark matter detection. In this work we explore the potential of DUNE for capturing sub-GeV dark matter in the viable dark matter parameter space. The sub-GeV dark matter scenario requires a light mediator that couples the hidden sector to the standard model, and the choice of mediator determines the channels by which dark matter candidates can be produced. Here three channels, $\pi^{0}/\eta$ decay, proton bremsstrahlung and parton-level production, are considered for the production of dark matter with the 120 GeV proton beam facility at Fermilab. To overcome the neutrino background we use the beam-dump mode to produce a pure dark matter beam. To explore new regions of the dark matter parameter space at DUNE, the elastic scattering of the dark matter beam off electrons and nucleons is studied. The resulting dark matter yields at DUNE show a significant improvement over existing dark matter probes such as BaBar, E137, LSND, MiniBooNE and T2K.
    Dark matter, DUNE experiment, Hidden photon, Standard Model, Dark matter particle, Proton beam, Beam dump, Light dark matter, Hidden sector, Annihilation cross section, ...
  • It is well known that helical magnetic fields undergo a so-called inverse cascade by which their correlation length grows due to the conservation of magnetic helicity in classical ideal magnetohydrodynamics (MHD). At high energies above approximately $10$ MeV, however, classical MHD is necessarily extended to chiral MHD and then the conserved quantity is $\langle\mathcal{H}\rangle + 2 \langle\mu_5\rangle / \lambda$ with $\langle\mathcal{H}\rangle$ being the mean magnetic helicity and $\langle\mu_5\rangle$ being the mean chiral chemical potential of charged fermions. Here, $\lambda$ is a (phenomenological) chiral feedback parameter. In this paper, we study the evolution of the chiral MHD system with the initial condition of nonzero $\langle\mathcal{H}\rangle$ and vanishing $\mu_5$. We present analytic derivations for the time evolution of $\langle\mathcal{H}\rangle$ and $\langle\mu_5\rangle$ that we compare to a series of laminar and turbulent three-dimensional direct numerical simulations. We find that the late-time evolution of $\langle\mathcal{H}\rangle$ depends on the magnetic and kinetic Reynolds numbers ${\rm Re}_{_\mathrm{M}}$ and ${\rm Re}_{_\mathrm{K}}$. For a high ${\rm Re}_{_\mathrm{M}}$ and ${\rm Re}_{_\mathrm{K}}$ where turbulence occurs, $\langle\mathcal{H}\rangle$ eventually evolves in the same way as in classical ideal MHD where the inverse correlation length of the helical magnetic field scales with time $t$ as $k_\mathrm{p} \propto t^{-2/3}$. For a low Reynolds numbers where the velocity field is negligible, the scaling is changed to $k_\mathrm{p} \propto t^{-1/2}\mathrm{ln}\left(t/t_\mathrm{log}\right)$. After being rapidly generated, $\langle\mu_5\rangle$ always decays together with $k_\mathrm{p}$, i.e. $\langle\mu_5\rangle \approx k_\mathrm{p}$, with a time evolution that depends on whether the system is in the limit of low or high Reynolds numbers.
    Magnetic helicity, Magnetic energy, Inverse cascade, Chiral MagnetoHydroDynamics, Reynolds number, Direct numerical simulation, Helical magnetic field, Chiral magnetic effect, Turbulence, Chiral chemical potential, ...
  • Toy models for quantum evolution in the presence of closed timelike curves (CTCs) have gained attention in the recent literature due to the strange effects they predict. The circuits that give rise to these effects appear quite abstract and contrived, as they require non-trivial interactions between the future and past which lead to infinitely recursive equations. We consider the special case in which there is no interaction inside the CTC, referred to as an open timelike curve (OTC), for which the only local effect is to increase the time elapsed by a clock carried by the system. Remarkably, circuits with access to OTCs are shown to violate Heisenberg's uncertainty principle, allowing perfect state discrimination and perfect cloning of coherent states. The model is extended to wave-packets and smoothly recovers standard quantum mechanics in an appropriate physical limit. The analogy with general relativistic time-dilation suggests that OTCs provide a novel alternative to existing proposals for the behaviour of quantum systems under gravity.
    Uncertainty principle, Quantum mechanics, Coherent state, Quadrature, Entanglement, Wave packet, Time dilation, Paradoxism, Time travel, Squeezed coherent state, ...
  • We propose a renormalizable theory based on the $SU(3)_C \times SU(3)_L \times U(1)_X$ gauge symmetry, supplemented by the spontaneously broken $U(1)_{L_g}$ global lepton number symmetry and the $S_3 \times Z_2$ discrete group, which successfully describes the observed SM fermion mass and mixing hierarchy. In our model the top and exotic quarks get tree-level masses, whereas the bottom, charm and strange quarks as well as the tau and muon leptons obtain their masses from a tree-level universal seesaw mechanism thanks to their mixing with charged exotic vector-like fermions. The masses of the first-generation SM charged fermions are generated from a radiative seesaw mechanism at one-loop level. The light active neutrino masses are produced from a loop-level radiative seesaw mechanism. Furthermore, our model can successfully accommodate the electron and muon anomalous magnetic dipole moments.
    Neutrino mass, Seesaw mechanism, Fermion mass, Muon, Standard Model fermion, Charged lepton, Lepton number, Vector-like fermions, Quark mass, Gauge symmetry, ...
  • We discuss a few tests of the ER=EPR proposal. We consider certain conceptual issues as well as explicit physical examples that could be experimentally realized. In particular, we discuss the role of the Bell bounds, the large N limit, as well as the consistency of certain theoretical assumptions underlying the ER=EPR proposal. As explicit tests of the ER=EPR proposal we consider limits coming from the entropy-energy relation and certain limits coming from measurements of the speed of light as well as measurements of effective weights of entangled states. We also discuss various caveats of such experimental tests of the ER=EPR proposal.
    Wormhole, Entangled state, AdS/CFT correspondence, Speed of light, Black hole, Maximally entangled states, Entanglement entropy, Entropy, Entanglement, Earth, ...
  • The newly confirmed pentaquark state $P_c(4312)$ has been treated as a weakly bound $(\Sigma_c\bar{D})$ state by a well-established chiral constituent quark model and by a dynamical calculation on quark degrees of freedom in which the quark exchange effect is accounted for. The obtained mass of $4308$ MeV agrees with the data. In this work, selected strong decays of the $P_c(4312)$ state are studied with the obtained wave function. It is shown that the $\Lambda_c\bar{D}^*$ channel overwhelmingly dominates the decay width, and that the branching ratios of the $p\,\eta_c$ and $p\,J/\psi$ decays are both less than 1%.
    Constituent quark, Pentaquark, Degree of freedom, Decay width, Branching ratio, Isospin, Bound state, LHCb experiment, Deuteron, Light quark, ...
  • Origins of contemporary $B$-physics. Mesons with beauty and charm. Stable tetraquarks? Flavor and the problem of identity. Top matters. Electroweak symmetry breaking and the Higgs sector. Future instruments.
    Tetraquark, Heavy quark, Standard Model, Large Hadron Collider, Diquark, Electroweak symmetry breaking, Top quark, Higgs boson, Fermion mass, Quark mass, ...
  • In the diquark-diantiquark composition, we study the masses of hidden-charm tetraquark systems ($cq\bar{c}\bar{q}$, $cs\bar{c}\bar{s}$ and $cs\bar{c}\bar{q}$) using a linear confinement potential. In this study, we have factorized the four-body system into three subsequent two-body systems. To remove the degeneracy in the S- and P-wave masses of mesons and tetraquark states, the spin-spin, spin-orbit and tensor components of the confined one-gluon exchange interactions are employed. In this attempt, we have been able to assign the $\psi(4230)$ as a pure $cq\bar{c}\bar{q}$ tetraquark state, and the $\psi(4360)$ and $\psi(4390)$ as pure $cs\bar{c}\bar{q}$ tetraquark states. According to our analysis, the $\psi(4260)$ is an admixture of the $^1P_1$ and $^5P_1$ $cq\bar{c}\bar{q}$ tetraquark states. Additionally, we have been able to predict the radiative decay width $\Gamma_{(\psi \rightarrow J/\psi \gamma)}$, the leptonic decay width $\Gamma_{e^+e^-}$ and the hadronic decays of the $1^{--}$ tetraquark states.
    Tetraquark, Decay width, Diquark, Y(4260), Radiative decay, P-wave, Spin orbit, Confinement, Confinement potential, Vector meson dominance, ...
  • Constraints on primordial black holes (PBHs) in the range $10^{-18} M_{\odot}$ to $10^{3} M_{\odot}$ are reevaluated for a general class of extended mass functions. Whereas previous work has assumed that PBHs are produced with a single mass, a range of masses is expected even for production from a single mechanism; the constraints therefore change relative to the previous literature. Although tightly constrained in the majority of cases, it is shown that, even under conservative assumptions, primordial black holes in the mass range $10^{-10} M_{\odot}$ to $10^{-8} M_{\odot}$ could still constitute the entirety of the dark matter. This stresses both the importance of a comprehensive reevaluation of all respective constraints that have previously been evaluated only for a monochromatic mass function, and the need to obtain more constraints in the allowed mass range.
    Primordial black hole, Mass function, Dark matter, Black hole, Primordial black hole mass, PBH abundance, Solar mass, Gravitational collapse, Laser Interferometer Gravitational-Wave Observatory, Planck scale, ...
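    A minimal sketch of the standard way a monochromatic constraint curve f_max(M) is carried over to an extended mass function, via the integral criterion $f_{\rm PBH}\int dM\, \tilde\psi(M)/f_{\rm max}(M) \leq 1$; both the lognormal mass function and the toy constraint curve below are illustrative assumptions, not the constraints reevaluated in the paper.
```python
import numpy as np

def f_allowed(psi, f_max, masses):
    """Largest total PBH dark-matter fraction allowed for a mass function psi(M),
    using the extended-mass-function criterion f * integral[ psi(M)/f_max(M) dM ] <= 1
    (psi is normalized to unity here)."""
    psi_norm = psi / np.trapz(psi, masses)
    return min(1.0, 1.0 / np.trapz(psi_norm / f_max, masses))

masses = np.logspace(-12, -6, 400)                              # in solar masses
mc, sigma = 1e-9, 0.5                                           # lognormal parameters (assumed)
psi = np.exp(-0.5 * (np.log(masses / mc) / sigma) ** 2) / masses
# Toy monochromatic constraint curve; NOT a real observational limit
f_max = np.minimum(1.0, 10.0 * (masses / 1e-9) ** 0.5)

print(f"allowed f_PBH for this mass function: {f_allowed(psi, f_max, masses):.2f}")
```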
  • The microphysics of ~GeV cosmic ray (CR) transport on galactic scales remain deeply uncertain, with almost all studies adopting simple prescriptions (e.g. constant-diffusivity). We explore different physically-motivated, anisotropic, dynamical CR transport scalings in high-resolution cosmological FIRE simulations of dwarf and ~$L_{\ast}$ galaxies where scattering rates vary with local plasma properties motivated by extrinsic turbulence (ET) or self-confinement (SC) scenarios, with varying assumptions about e.g. turbulent power spectra on un-resolved scales, Alfven-wave damping, etc. We self-consistently predict observables including $\gamma$-rays ($L_{\gamma}$), grammage, residence times, and CR energy densities to constrain the models. We demonstrate many non-linear dynamical effects (not captured in simpler models) tend to enhance confinement. For example, in multi-phase media, even allowing arbitrary fast transport in neutral gas does not substantially reduce CR residence times (or $L_{\gamma}$), as transport is rate-limited by the ionized WIM and 'inner CGM' gaseous halo ($10^{4}-10^{6}$ K gas within 10-30 kpc), and $L_{\gamma}$ can be dominated by trapping in small 'patches.' Most physical ET models contribute negligible scattering of ~1-10 GeV CRs, but it is crucial to account for anisotropy and damping (especially of fast modes) or else scattering rates would violate observations. We show that the most widely-assumed scalings for SC models produce excessive confinement by factors >100 in the WIM and inner CGM, where turbulent and Landau damping dominate. This suggests either a breakdown of quasi-linear theory used to derive the CR transport parameters in SC, or that other novel damping mechanisms dominate in intermediate-density ionized gas.
    Cosmic ray, Milky Way, Galaxy, Diffusion coefficient, Confinement, Turbulence, Supernova, Damping rate, Spherical collapse model, Cosmic ray flux, ...
  • We revisit certain natural algebraic transformations on the space of 3D topological quantum field theories (TQFTs) called "Galois conjugations." Using a notion of multiboundary entanglement entropy (MEE) defined for TQFTs on compact 3-manifolds with disjoint boundaries, we give these abstract transformations additional physical meaning. In the process, we prove a theorem on the invariance of MEE along orbits of the Galois action in the case of arbitrary Abelian theories defined on any link complement in $S^3$. We then give a generalization to non-Abelian TQFTs living on certain infinite classes of torus link complements. Along the way, we find an interplay between the modular data of non-Abelian TQFTs, the topology of the ambient spacetime, and the Galois action. These results are suggestive of a deeper connection between entanglement and fusion.
    Topological field theory, Entanglement entropy, Torus, Galois group, Chern-Simons theory, Reduced density matrix, Manifold, Fusion rules, Opacity, Anyon, ...
  • Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.
    Knowledge base, Google.com, Rank, Computational linguistics, Training set, Natural language, Long short term memory, Ranking, Information retrieval, Sentence representations, ...
  • Leadership supercomputers feature a diversity of storage, from node-local persistent memory and NVMe SSDs to network-interconnected flash memory and HDD. Memory mapping files on different tiers of storage provides a uniform interface in applications. However, system-wide services like mmap are optimized for generality and lack flexibility for enabling application-specific optimizations. In this work, we present Umap to enable user-space page management that can be easily adapted to access patterns in applications and storage characteristics. Umap uses the userfaultfd mechanism to handle page faults in multi-threaded applications efficiently. By providing a data object abstraction layer, Umap is extensible to support various backing stores. The design of Umap supports dynamic load balancing and I/O decoupling for scalable performance. Umap also uses application hints to improve the selection of caching, prefetching, and eviction policies. We evaluate Umap in five benchmarks and real applications on two systems. Our results show that leveraging application knowledge for page management could substantially improve performance. On average, Umap achieved 1.25 to 2.5 times improvement using the adapted configurations compared to the system service.
    Optimization, Caching, Small-scale dynamo, Object, Networks, ...
  • It is common to express cosmological measurements in units of $h^{-1}{\rm Mpc}$. Here, we review some of the complications that originate from this practice. A crucial problem caused by these units is related to the normalization of the matter power spectrum, which is commonly characterized in terms of the linear-theory rms mass fluctuation in spheres of radius $8\,h^{-1}{\rm Mpc}$, $\sigma_8$. This parameter does not correctly capture the impact of $h$ on the amplitude of density fluctuations. We show that the use of $\sigma_8$ has caused critical misconceptions for both the so-called $\sigma_8$ tension regarding the consistency between low-redshift probes and cosmic microwave background data, and the way in which growth-rate estimates inferred from redshift-space distortions are commonly expressed. We propose to abandon the use of $h^{-1}{\rm Mpc}$ units in cosmology and to characterize the amplitude of the matter power spectrum in terms of $\sigma_{12}$, defined as the mass fluctuation in spheres of radius $12\,{\rm Mpc}$, whose value is similar to the standard $\sigma_8$ for $h\sim 0.67$.
    Matter power spectrum, Cosmology, Redshift-space distortion, Baryon Oscillation Spectroscopic Survey, Sigma8, Planck mission, Baryon acoustic oscillations, Dark Energy Survey, Cosmic microwave background, Observational cosmology, ...
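    A minimal sketch of the definition behind $\sigma_8$ and the proposed $\sigma_{12}$: the linear-theory rms mass fluctuation in top-hat spheres, $\sigma^2(R) = \frac{1}{2\pi^2}\int dk\, k^2 P(k) W^2(kR)$ with $W(x)=3(\sin x - x\cos x)/x^3$; the power-law toy spectrum below is an assumption, not a real linear power spectrum.
```python
import numpy as np

def tophat_window(x):
    """Fourier transform of a spherical top-hat: W(x) = 3 (sin x - x cos x) / x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(k, pk, R):
    """rms linear mass fluctuation in spheres of radius R (R in the same units as 1/k)."""
    integrand = k**2 * pk * tophat_window(k * R) ** 2 / (2.0 * np.pi**2)
    return np.sqrt(np.trapz(integrand, k))

k = np.logspace(-4, 2, 4000)                        # wavenumber in 1/Mpc (toy convention)
pk = 2.0e4 * k / (1.0 + (k / 0.02) ** 3)            # toy power spectrum, not a real P(k)

h = 0.67
print("sigma(8 Mpc/h) =", sigma_R(k, pk, 8.0 / h))  # the conventional sigma_8 sphere
print("sigma(12 Mpc)  =", sigma_R(k, pk, 12.0))     # the proposed sigma_12 sphere
# The two are nearly equal for h ~ 0.67, illustrating why sigma_12 ~ sigma_8 there.
```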
  • We update the standard model (SM) predictions of $R(D^*)$ using the latest results on the decay distributions in $B \to D^* \ell \nu_{\ell}$ ($\ell = \mu, e$) by Belle collaboration, while extracting $|V_{cb}|$ at the same time. Depending on the inputs used in the analysis, we define various fit scenarios. Although the central values of the predicted $R(D^*)$ in all the scenarios have reduced from its earlier predictions in 2017, the results are consistent with each other within the uncertainties. In this analysis, our prediction of $R(D^*)$ is consistent with the respective world average at $\sim 3\sigma$. We have also predicted several angular observables associated with $B \to D^* \tau \nu_{\tau}$ decays. We note that the predicted $F_L(D^*)$ is consistent with the corresponding measurement at 2$\sigma$. Utilizing these new results, we fit the Wilson coefficients appearing beyond the standard model of particle physics (BSM). To see the trend of SM predictions, we have utilized the recently published preliminary results on the form-factors at non-zero recoil by the lattice groups like Fermilab-MILC and JLQCD and predicted the observables in $B \to D^* \ell \nu_{\ell}$, and $B \to D^* \tau \nu_{\tau}$ decays.
    Standard Model, Wilson coefficients, Fermilab, Form factor, BSM physics, Measurement, ...
  • Nuclear $\beta$ decays as well as the decay of the neutron are well-established low-energy probes of physics beyond the Standard Model (SM). In particular, with the axial-vector coupling of the nucleon $g_A$ determined from lattice QCD, the comparison between experiment and SM prediction is commonly used to derive constraints on right-handed currents. Further, in addition to the CKM element $V_{us}$ from kaon decays, $V_{ud}$ from $\beta$ decays is a critical input for the test of CKM unitarity. Here, we point out that the available information on $\beta$ decays can be re-interpreted as a stringent test of lepton flavor universality (LFU). In fact, we find that the ratio of $V_{us}$ from kaon decays over $V_{us}$ from $\beta$ decays (assuming CKM unitarity) is extremely sensitive to LFU violation (LFUV) in $W$-$\mu$-$\nu$ couplings thanks to a CKM enhancement by $(V_{ud}/V_{us})^2\sim 20$. From this perspective, recent hints for the violation of CKM unitarity can be viewed as further evidence for LFUV, fitting into the existing picture exhibited by semi-leptonic $B$ decays and the anomalous magnetic moments of muon and electron. Finally, we comment on the future sensitivity that can be reached with this LFU violating observable and discuss complementary probes of LFU that may reach a similar level of precision, such as $\Gamma(\pi\to\mu\nu)/\Gamma(\pi\to e\nu)$ at the PEN experiment or even direct measurements of $W\to\mu\nu$ at an FCC-ee.
    Cabibbo-Kobayashi-Maskawa matrix, Unitarity, Standard Model, Radiative correction, Muon, Kaon decay, Lattice QCD, Neutron decay, FCC-ee experiment, Fermi coupling constant, ...
  • We study the bottom baryon $\Lambda_b(6146)^0$, newly discovered by the LHCb Collaboration. By adopting an interpolating current of $(L_{\rho}, L_{\lambda})=(0,2)$ type and $D$-wave nature with spin-parity quantum numbers $J^P=\frac{3}{2}^+$ for this heavy bottom baryon, we calculate its mass and residue. The obtained mass, $m_{\Lambda_b}=(6144\pm 68)$ MeV, agrees nicely with the experimental data. The obtained value of the residue of this state can serve as a main input in the investigation of various decays of this state. We also calculate the spectroscopic parameters of the charmed partner of this state, namely $\Lambda_c(2860)^+$, and compare the obtained results with existing theoretical predictions as well as experimental data. The results indicate that the state $\Lambda_b(6146)^0$ and its charmed partner $\Lambda_c(2860)^+$ can be considered as $1D$-wave baryons with $J^P=\frac{3}{2}^+$.
    LHCb experiment, Coupling constant, Heavy quark, QCD sum rules, Two-point correlation function, Propagator, Borel parameter, Degree of freedom, Charmed baryons, Operator product expansion, ...
  • We give a systematic study of ${\bf B}_c\to {\bf B}_n V$ decays, where ${\bf B}_c$ and ${\bf B}_n$ correspond to the anti-triplet charmed and octet baryons, respectively, while $V$ stands for the vector mesons. We calculate the color-symmetric contributions to the decays from the effective Hamiltonian with the factorization approach and extract the anti-symmetric ones based on the experimental measurements and $SU(3)_F$ flavor symmetry. We find that most of the existing experimental data for ${\bf B}_c\to {\bf B}_n V$ are consistent with our fitting results. We present all the branching ratios of the Cabibbo-allowed, singly Cabibbo-suppressed and doubly Cabibbo-suppressed decays of ${\bf B}_c\to {\bf B}_n V$. The decay parameters for the daughter baryons and mesons in ${\bf B}_c\to {\bf B}_n V$ are also evaluated. In particular, we point out that the Cabibbo-allowed decays $\Lambda_c^+ \to \Lambda^0 \rho^+$ and $\Xi_c^0 \to \Xi^- \rho^+$ as well as the singly Cabibbo-suppressed ones $\Lambda_c^+ \to \Lambda^0 K^{*+}$, $\Xi_c^+ \to \Sigma^+ \phi$ and $\Xi_c^0\to \Xi^- K^{*+}$ have large branching ratios and decay parameters with small uncertainties, which can be tested by experimental searches at the charm facilities.
    Branching ratio, Vector meson, Effective Hamiltonian, Charmed baryons, Decay width, Form factor, Flavour symmetry, Weak decay, Bag Model of Quark Confinement, Baryon decays, ...
  • We have systematically investigated the mass spectrum and rearrangement decay properties of exotic tetraquark states with four different flavors using a color-magnetic interaction model. Their masses are estimated by assuming that the $X(4140)$ is a $cs\bar{c}\bar{s}$ tetraquark state, and their decay widths are obtained by assuming that the Hamiltonian for the decay is a constant. According to the adopted method, we find that the most stable states are probably the isoscalar $bs\bar{u}\bar{d}$ and $cs\bar{u}\bar{d}$ with $J^P=0^+$ and $1^+$. The width for most unstable tetraquarks is of the order of tens of MeV, but that for the unstable $cu\bar{s}\bar{d}$ and $cs\bar{u}\bar{d}$ can be around 100 MeV. For the $X(5568)$, our method cannot give a consistent mass and width if it is a $bu\bar{s}\bar{d}$ tetraquark state. For the $I(J^P)=0(0^+),0(1^+)$ double-heavy $T_{bc}=bc\bar{u}\bar{d}$ states, the widths can be several MeV. We also discuss the relation between the tetraquark structure and the total width with the help of defined $K$ factors. We find that: (1) one cannot uniquely determine the structure of a four-quark state just from the decay ratios between its rearrangement channels; (2) it is possible to judge the relative stabilities of tetraquark states just from the calculable $K$ factors for the quark-quark interactions; and (3) the range of the total width of a tetraquark state can be estimated just with the masses of the involved states.
    Tetraquark, Decay width, Phase space, Hamiltonian, Decay channels, Wavefunction, Quark mass, Diquark, Decay mode, LHCb experiment, ...
  • We construct black hole geometries in AdS$_3$ with non-trivial values of the KdV charges. The black holes are holographically dual to the quantum KdV Generalized Gibbs Ensemble in 2d CFT. They satisfy the thermodynamic identity and are thus saddle-point configurations of the Euclidean gravity path integral. We discuss the holographic calculation of the KdV generalized partition function and show that for certain values of the chemical potentials new geometries, not the conventional BTZ ones, are the leading saddles.
    Korteweg-de Vries equation, Conformal field theory, Hamiltonian, Anti de Sitter space, Black hole, Generalized Gibbs ensemble, Saddle point, Path integral, Virasoro algebra, Charged black hole, ...
  • In the 1980's, work by Coleman and by Giddings and Strominger linked the physics of spacetime wormholes to `baby universes' and an ensemble of theories. We revisit such ideas, using features associated with a negative cosmological constant and asymptotically AdS boundaries to strengthen the results, introduce a change in perspective, and connect with recent replica wormhole discussions of the Page curve. A key new feature is an emphasis on the role of null states. We explore this structure in detail in simple topological models of the bulk that allow us to compute the full spectrum of associated boundary theories. The dimension of the asymptotically AdS Hilbert space turns out to become a random variable $Z$, whose value can be less than the naive number $k$ of independent states in the theory. For $k>Z$, consistency arises from an exact degeneracy in the inner product defined by the gravitational path integral, so that many a priori independent states differ only by a null state. We argue that a similar property must hold in any consistent gravitational path integral. We also comment on other aspects of extrapolations to more complicated models, and on possible implications for the black hole information problem in the individual members of the above ensemble.
    Path integral, Wormhole, Black hole, Anti de Sitter space, Entropy, Hartle-Hawking state, Conformal field theory, Manifold, Perturbation theory, Partition function, ...
  • We perform an analysis of the $b\to c\tau\nu$ data, including $R(D^{(*)})$, $R(J/\psi)$, $P_\tau(D^{*})$ and $F_L^{D^*}$, within and beyond the Standard Model (SM). We fit the $B\to D^{(*)}$ hadronic form factors in the HQET parametrization to the lattice and the light-cone sum rule (LCSR) results, applying the unitarity bounds derived in the analysis. We then investigate the model-independent and the leptoquark model explanations of the $b\to c\tau\nu$ anomalies. Specifically, we consider the one-operator, the two-operator new physics (NP) scenarios and the NP models with a single leptoquark which can address the $b\to c\tau\nu$ anomalies. We also give predictions for various observables including $R(D^{(*)})$ in the SM and the NP scenarios/leptoquark models based on the present form factor study and the analysis of NP.
    Form factor, Wilson coefficients, Light-cone sum rules, Unitarity, Heavy Quark Effective Theory, Two-point correlation function, BSM physics, Leptoquark, Lattice QCD, Higgs boson, ...
  • We analyze theoretically the $D^+\to \nu e^+ \rho \bar K$ and $D^+\to \nu e^+ \bar K^* \pi$ decays to assess their feasibility for testing the double-pole nature of the axial-vector resonance $K_1(1270)$ predicted by the unitary extensions of chiral perturbation theory (UChPT). Indeed, within UChPT the $K_1(1270)$ is dynamically generated from the interaction of a vector and a pseudoscalar meson, and two poles are obtained for the quantum numbers of this resonance. The lower-mass pole couples dominantly to $K^*\pi$ and the higher-mass pole to $\rho K$; therefore we can expect that different reactions, weighing these channels differently in the production mechanisms, enhance one or the other pole. We show that the different final $VP$ channels in $D^+\to \nu e^+ V P$ weigh the two poles differently, and this is reflected in the shape of the final vector-pseudoscalar invariant mass distributions. Therefore, we conclude that these decays are suitable for distinguishing experimentally the predicted double pole of the $K_1(1270)$ resonance.
    Invariant mass, Mass distribution, Scattering amplitude, Vector meson, Pseudoscalar, Final state interactions, Pseudoscalar meson, Chiral perturbation theory, Semileptonic decay, Isospin, ...
  • In the Shannon lecture at the 2019 International Symposium on Information Theory (ISIT), Arıkan proposed to employ a one-to-one convolutional transform as a pre-coding step before the polar transform. The codes resulting from this concatenation are called polarization-adjusted convolutional (PAC) codes. In this scheme, a pair of polar mapper and demapper as pre- and post-processing devices are deployed around a memoryless channel, which provides polarized information to an outer decoder, leading to improved error correction performance of the outer code. In this paper, the implementations of list decoding and Fano decoding for PAC codes are first investigated. Then, in order to reduce the complexity of sequential decoding of PAC/polar codes, we propose (i) an adaptive heuristic metric, (ii) tree search constraints for backtracking to avoid exploration of unlikely sub-paths, and (iii) tree search strategies consistent with the pattern of error occurrence in polar codes. These contribute to reducing the average decoding time complexity by up to 85\%, with only a relatively small degradation in error correction performance. Additionally, as an important ingredient in Fano decoding of PAC/polar codes, an efficient computation method for the intermediate LLRs and partial sums is provided. This method is necessary for backtracking and avoids storing the intermediate information or restarting the decoding process.
    Maximum likelihood, Information theory, Viterbi algorithm, Software, Internet of Things, Tree traversal, Graph, Scale factor, Binary tree, Decision making, ...
  • Integrated Information Theory is one of the leading models of consciousness. It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state. In this contribution, we propound the mathematical structure of the theory, separating the essentials from auxiliary formal tools. We provide a definition of a generalized IIT which has IIT 3.0 of Tononi et al., as well as the Quantum IIT introduced by Zanardi et al., as special cases. This provides an axiomatic definition of the theory which may serve as the starting point for future formal investigations and as an introduction suitable for researchers with a formal background.
    Intensity, Information theory, Metric space, Uniform distribution, Graph, Earth, Conditional Independence, Mixed states, Embedding, Category theory, ...
  • The rateless and information-additive properties of fountain codes make them attractive for use in broadcast/multicast applications, especially in radio environments where channel characteristics vary with time and bandwidth is expensive. Conventional schemes using a combination of ARQ (Automatic Repeat reQuest) and FEC (Forward Error Correction) suffer from serious drawbacks such as feedback implosion at the transmitter, the need to know the channel characteristics a priori so that the FEC scheme is designed to be effective, and the fact that a reverse channel is needed to request retransmissions if the FEC fails. This paper considers the assessment of fountain codes over radio channels. The performance of fountain codes, in terms of the associated overheads, over radio channels of the type experienced in GPRS (General Packet Radio Service) is presented. The work is then extended to assessing the performance of fountain codes in combination with the GPRS channel coding schemes in a radio environment.
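    A minimal sketch of the rateless idea behind fountain codes: each encoded symbol is the XOR of a random subset of source blocks, and the receiver peels off symbols whose neighbours are all but one known. The ideal-soliton degree distribution used here is a textbook simplification, not the codes assessed in the paper.
```python
import random

def sample_degree(n, rng):
    """Ideal soliton distribution: P(1) = 1/n, P(d) = 1/(d(d-1)) for d = 2..n."""
    r, cum = rng.random(), 1.0 / n
    if r < cum:
        return 1
    for d in range(2, n + 1):
        cum += 1.0 / (d * (d - 1))
        if r < cum:
            return d
    return n

def encode_symbol(blocks, rng):
    """One rateless encoded symbol: XOR of a random subset of source blocks."""
    idx = rng.sample(range(len(blocks)), sample_degree(len(blocks), rng))
    value = 0
    for i in idx:
        value ^= blocks[i]
    return set(idx), value

def peel_decode(symbols, n_blocks):
    """Peeling (belief-propagation) decoder for the XOR code above."""
    decoded, progress = {}, True
    while progress and len(decoded) < n_blocks:
        progress = False
        for idx, val in symbols:
            unknown = idx - set(decoded)
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idx - {i}:
                    val ^= decoded[j]
                decoded[i] = val
                progress = True
    return decoded

rng = random.Random(0)
blocks = [rng.randrange(256) for _ in range(8)]              # 8 one-byte source blocks
received = [encode_symbol(blocks, rng) for _ in range(24)]   # whichever symbols got through
recovered = peel_decode(received, len(blocks))
assert all(recovered[i] == blocks[i] for i in recovered)     # recovered blocks are exact
print(f"recovered {len(recovered)} of {len(blocks)} blocks from {len(received)} symbols")
```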
  • We present algorithms for classification of linear codes over finite fields, based on canonical augmentation and on lattice point enumeration. We apply these algorithms to obtain classification results over fields with 2, 3 and 4 elements. We validate a correct implementation of the algorithms with known classification results from the literature, which we partially extend to larger ranges of parameters.
    Automorphism, Software, Galois field, Empty Lattice Approximation, Counting, Homomorphism, Group action, Isometry, Linear optimization, Column vector, ...
  • Creating sound zones has been an active research field since the idea was first proposed. So far, most sound zone control methods rely either on an optimization of physical metrics such as acoustic contrast and signal distortion or on a mode decomposition of the desired sound field. Using these types of methods, approximately 15 dB of acoustic contrast between the reproduced sound field in the target zone and its leakage to other zone(s) has been reported in practical set-ups, but this is typically not high enough to satisfy the people inside the zones. In this paper, we propose a sound zone control method that shapes the leakage errors so that they are as inaudible as possible for a given acoustic contrast. The shaping of the leakage errors is performed by taking the time-varying input signal characteristics and the human auditory system into account when the loudspeaker control filters are calculated. We show how this can be performed using variable span trade-off filters known from signal enhancement, and we show how these filters can also be used to trade off signal distortion in the target zone against acoustic contrast. Numerical validations under anechoic and reverberant environments were conducted, and the proposed method was evaluated via physical metrics including acoustic contrast and signal distortion as well as perceptual metrics such as the short-time objective intelligibility (STOI). The results confirm that, compared to existing non-adaptive sound zone control methods, a perceptual improvement can be obtained by the proposed signal-adaptive and perceptually optimized variable span trade-off (AP-VAST) control method.
    Multidimensional Array, Particle-mesh method, Optimization, Nonnegative, Statistics, Quantization, MATLAB, Rank, Personal Computer, Interference, ...
  • This paper is a tutorial on eigenvalue and generalized eigenvalue problems. We first introduce the eigenvalue problem, eigen-decomposition (spectral decomposition), and the generalized eigenvalue problem. Then, we mention the optimization problems which lead to eigenvalue and generalized eigenvalue problems. We also provide examples from machine learning, including principal component analysis, kernel supervised principal component analysis, and Fisher discriminant analysis, which result in eigenvalue and generalized eigenvalue problems. Finally, we introduce the solutions to both eigenvalue and generalized eigenvalue problems.
    Optimization, Principal component analysis, Machine learning, Rank, Spectral decomposition, Covariance matrix, Positive semi definite, Centering matrix, Eigenvalue, Eigenvector, ...
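    A minimal sketch of the two problem types the tutorial covers: an ordinary eigendecomposition (PCA of a covariance matrix) and a generalized eigenproblem $A v = \lambda B v$ (as in Fisher discriminant analysis), solved on synthetic data; the matrices below are stand-ins, not the tutorial's worked examples.
```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # synthetic correlated data

# Ordinary eigenvalue problem: PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)                         # ascending eigenvalues
components = evecs[:, ::-1]                                # principal axes, largest variance first
print("top two explained variances:", evals[::-1][:2])

# Generalized eigenvalue problem A v = lambda B v (e.g., scatter matrices in Fisher LDA)
A = cov                                                    # stand-in "between-class" matrix
B = np.eye(5) + 0.1 * cov                                  # stand-in SPD "within-class" matrix
gvals, gvecs = eigh(A, B)                                  # scipy solves the generalized problem
print("largest generalized eigenvalue:", gvals[-1])
```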
  • We examine the stacked thermal Sunyaev-Zel'dovich (SZ) signals for a sample of galaxy cluster candidates from the Spitzer-HETDEX Exploratory Large Area (SHELA) Survey, which are identified in combined optical and infrared SHELA data using the redMaPPer algorithm. We separate the clusters into three richness bins, with average photometric redshifts ranging from 0.70 to 0.80. The richest bin shows a clear temperature decrement at 148 GHz in the Atacama Cosmology Telescope data, which we attribute to the SZ effect. All richness bins show an increment at 220 GHz, which we attribute to dust emission from cluster galaxies. We correct for dust emission using stacked profiles from Herschel Stripe 82 data, and allow for synchrotron emission using stacked profiles created by binning source fluxes from NVSS data. We see dust emission in all three richness bins, but can only confidently detect the SZ decrement in the highest richness bin, finding $M_{500}$ = $8.7^{+1.7}_{-1.3} \times 10^{13} M_\odot$. Neglecting the correction for dust depresses the inferred mass by 26 percent, indicating a partial fill-in of the SZ decrement from thermal dust and synchrotron emission by the cluster member galaxies. We compare our corrected SZ masses to two redMaPPer mass-richness scaling relations and find that the SZ mass is lower than predicted by the richness. We discuss possible explanations for this discrepancy, and note that the SHELA richnesses may differ from previous richness measurements due to the inclusion of IR data in redMaPPer.
    Atacama Cosmology Telescope, Synchrotron, Dust emission, Synchrotron radiation, Cosmic microwave background, National Radio Astronomy Observatory VLA Sky Survey, Spectral energy distribution, Galaxy, Covariance, Sunyaev-Zel'dovich effect, ...
  • We present the star formation histories of 39 galaxies with high quality rest-frame optical spectra at 0.5<z<1.3 selected to have strong Balmer absorption lines and/or Balmer break, and compare to a sample of spectroscopically selected quiescent galaxies at the same redshift. Photometric selection identifies a majority of objects that have clear evidence for a recent short-lived burst of star formation within the last 1.5 Gyr, i.e. "post-starburst" galaxies, however we show that good quality continuum spectra are required to obtain physical parameters such as burst mass fraction and burst age. Dust attenuation appears to be the primary cause for misidentification of post-starburst galaxies, leading to contamination in spectroscopic samples where only the [OII] emission line is available, as well as a small fraction of objects lost from photometric samples. The 31 confirmed post-starburst galaxies have formed 40-90% of their stellar mass in the last 1-1.5 Gyr. We use the derived star formation histories to find that the post-starburst galaxies are visible photometrically for 0.5-1 Gyr. This allows us to update a previous analysis to suggest that 25-50% of the growth of the red sequence at z~1 could be caused by a starburst followed by rapid quenching. We use the inferred maximum historical star formation rates of several 100-1000 Msun/yr and updated visibility times to confirm that sub-mm galaxies are likely progenitors of post-starburst galaxies. The short quenching timescales of 100-200 Myr are consistent with cosmological hydrodynamic models in which rapid quenching is caused by the mechanical expulsion of gas due to an AGN.
    Galaxy, Starburst galaxy, Quenching, Star formation, Star formation histories, Photometry, Star formation rate, Of stars, Absorption line, Stellar mass, ...
  • We propose a novel method to search for axion-like particles (ALPs) at particle accelerator experiments. ALPs produced at the target via the Primakoff effect subsequently enter a region with a magnetic field, where they are converted to photons that are then detected. Dubbed Particle Accelerator helioScopes for Slim Axion-like-particle deTection (PASSAT), our proposal uses the principle of the axion helioscope but replaces ALPs produced in the Sun with those produced in a target material. Since we rely on ALP-photon conversions, our proposal probes light (slim) ALPs that are otherwise inaccessible to lab-based experiments relying on ALP decay, and complements astrophysical probes that are more model-dependent. We first reinterpret existing data from the NOMAD experiment in light of PASSAT, and constrain the parameter space for ALPs lighter than ~100 eV and ALP-photon couplings larger than ~$10^{-4}$ GeV$^{-1}$. As benchmarks of feasible low-cost experiments improving over the NOMAD limits, we study the possibility of reusing the magnets of the CAST and the proposed BabyIAXO experiments and placing them at the proposed BDF facility at CERN, along with some new detectors. We find that these realizations of PASSAT allow for a direct probe of the parameter space for ALPs lighter than ~100 eV and ALP-photon couplings larger than ~4 x $10^{-6}$ GeV$^{-1}$, regions that have not yet been probed by experiments with lab-produced ALPs. In contrast to other proposals aiming at detecting one- or two-photon-only events in hadronic beam dump environments, which rely heavily on Monte Carlo simulations, the background in our proposal can be directly measured in situ, its suppression optimized, and the irreducible background statistically subtracted. Sensitivity studies with other beams will be the subject of future work. The measurements suggested here represent an additional physics case for the BDF beyond those already proposed.
    Axion-like particle, Particle Accelerator helioScopes for Slim Axion-like-particle deTection, Beam Dump Facility, CERN Axion Solar Telescope, Beam dump, Muon, Axion-photon coupling, SHiP experiment, Primakoff effect, Proton beam, ...
  • We propose a new dark matter detection strategy using the graphene-based Josephson junction microwave single photon detector, a "state-of-the-art" technology. It was recently demonstrated in the laboratory that the device is sensitive even to sub-meV energy deposited at $\pi$-bond electrons. Therefore, the associated detectors are, for the first time, capable of sensing (warm) dark matter of sub-keV mass scattering off electrons, more specifically of $m_\chi \gtrsim 0.1$ keV. We investigate detection prospects with a mg-scale prototypical detector and a g-scale detector, calculating the scattering event rate between dark matter and free electrons in the graphene with Pauli blocking factors included. We expect not only that the proposed detector serves as a complementary probe of superlight dark matter but that due to the extremely low energy threshold it can achieve higher experimental sensitivities than those of other proposed experiments featured by a low threshold, with the same target mass.
    Dark matter, Graphene, Dark matter candidate, Josephson junctions, Light mediator, Dark matter detectors, Dark matter particle, Laboratory dark matter search, Scattering cross section, Infrared limit, ...