Recently bookmarked papers, with concepts:
  • arXiv:2111.15009
    Milky Way dwarf spheroidal galaxies (dSphs) are among the best candidates to search for signals of dark matter annihilation with Imaging Atmospheric Cherenkov Telescopes, given their high mass-to-light ratios and the fact that they are free of astrophysical gamma-ray emitting sources. Since 2011, MAGIC has performed a multi-year observation program in search of Weakly Interacting Massive Particles (WIMPs) in dSphs. Results on the observations of the Segue 1 and Ursa Major II dSphs have already been published and include some of the most stringent upper limits (ULs) on the velocity-averaged cross-section $\langle \sigma_{\mathrm{ann}} v \rangle$ of WIMP annihilation from observations of dSphs. In this work, we report on the analyses of 52.1 h of data on the Draco dSph and 49.5 h on the Coma Berenices dSph, observed with the MAGIC telescopes in 2018 and 2019, respectively. No hint of a signal has been detected from either of these targets, and new constraints on the $\langle \sigma_{\mathrm{ann}} v \rangle$ of WIMP candidates have been derived. In order to improve the sensitivity of the search and reduce the effect of the systematic uncertainties due to the $J$-factor estimates, we have combined the data of all dSphs observed with the MAGIC telescopes. Using 354.3 h of good-quality dSph data, 95% CL ULs on $\langle \sigma_{\mathrm{ann}} v \rangle$ have been obtained for 9 annihilation channels. For most of the channels, these results reach values of the order of $10^{-24}\,$cm$^3$/s at ${\sim}1$ TeV and are the most stringent limits obtained with the MAGIC telescopes so far.
    Dark matter, MAGIC telescope, Draco Dwarf Spheroidal galaxy, Coma Berenices, Systematic error, Weakly interacting massive particle, Dark matter particle mass, Cherenkov telescope, Monte Carlo method, Programming...
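    For context on the entry above: in such searches the expected gamma-ray flux factorizes into a particle-physics term and the astrophysical $J$-factor, which for self-conjugate dark matter reads
    $$ \frac{d\Phi}{dE} = \frac{\langle \sigma_{\mathrm{ann}} v \rangle}{8\pi m_{\chi}^{2}} \frac{dN_{\gamma}}{dE}\, J, \qquad J = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_{\chi}^{2}\, dl, $$
    so the systematic uncertainty on the $J$-factor estimates mentioned in the abstract enters the $\langle \sigma_{\mathrm{ann}} v \rangle$ limits directly.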
  • We present new constraints on the masses of the halos hosting the Milky Way and Andromeda galaxies derived using graph neural networks. Our models, trained on thousands of state-of-the-art hydrodynamic simulations of the CAMELS project, only make use of the positions, velocities and stellar masses of the galaxies belonging to the halos, and are able to perform likelihood-free inference on halo masses while accounting for both cosmological and astrophysical uncertainties. Our constraints are in agreement with estimates from other traditional methods.
    Milky Way, Andromeda galaxy, Galaxy, Graph Neural Network, Stellar mass, Cosmology and Astrophysics with MachinE Learning Simulations, Virial mass, Graph, Inference, Local group...
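    The core idea of the entry above can be sketched in a few lines: galaxies are nodes of a graph carrying phase-space and stellar-mass features, neighbour information is aggregated, and permutation-invariant pooling yields a halo-mass estimate. The toy sketch below uses untrained random weights and plain NumPy; it is purely illustrative, not the CAMELS-trained model of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_gal, d_feat, d_hid = 32, 7, 16           # 3 positions + 3 velocities + log M*
        x = rng.normal(size=(n_gal, d_feat))       # toy node features

        # adjacency: k nearest neighbours in position space (first 3 features)
        k = 5
        dist = np.linalg.norm(x[:, None, :3] - x[None, :, :3], axis=-1)
        neigh = np.argsort(dist, axis=1)[:, 1:k + 1]   # drop self (index 0)

        W_msg = rng.normal(size=(d_feat, d_hid)) / np.sqrt(d_feat)
        W_out = rng.normal(size=(d_hid, 1)) / np.sqrt(d_hid)

        h = np.maximum(x @ W_msg, 0.0)               # per-node embedding (ReLU)
        h = h[neigh].mean(axis=1)                    # aggregate neighbour messages
        log_mhalo = (h.mean(axis=0) @ W_out).item()  # graph-level pooling -> scalar
        print(f"toy log halo-mass estimate: {log_mhalo:.3f}")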
  • We investigate the Minimal Theory of Massive Gravity (MTMG) in the light of different observational data sets which are in tension within the $\Lambda$CDM cosmology. In particular, we analyze the MTMG model, for the first time, with the Planck-CMB data, and how these precise measurements affect the free parameters of the theory. The MTMG model can affect the CMB power spectrum at large angular scales and cause a suppression of the amplitude of the matter power spectrum. We find that on adding Planck-CMB data, the graviton has a small, positive, but non-zero mass at the 68\% confidence level, and from this perspective, we show that the tension between redshift-space distortion measurements and Planck-CMB data in the parameter space $S_8 - \Omega_m$ can be resolved within the MTMG scenario. Through a robust and accurate analysis, we find that the $H_0$ tension between the CMB and the local distance-ladder measurements still remains, but can be reduced to $\sim3.5\sigma$ within the MTMG theory. The MTMG is very well consistent with the CMB observations, and undoubtedly, it can serve as a viable candidate amongst other modified gravity theories.
    Redshift-space distortion, Baryon acoustic oscillations, Planck mission, General relativity, Graviton, Massive gravity, KiDS survey, Cosmology, Modified gravity, Theories of gravity...
  • We assess the dominant low-redshift anisotropic signatures in the distance-redshift relation and redshift drift signals. We adopt general-relativistic irrotational dust models allowing for gravitational radiation -- the `quiet universe models' -- which are extensions of the silent universe models. Using cosmological simulations evolved with numerical relativity, we confirm that the quiet universe model is a good description on scales larger than those of collapsing structures. With this result, we reduce the number of degrees of freedom in the fully general luminosity distance and redshift drift cosmographies by a factor of $\sim 2$ and $\sim 2.5$, respectively, for the most simplified case. We predict a dominant dipolar signature in the distance-redshift relation for low-redshift data, with direction along the gradient of the large-scale density field. Further, we predict a dominant quadrupole in the anisotropy of the redshift drift signal, which is sourced by the electric Weyl curvature tensor. The signals we predict in this work should be tested with present and near-future cosmological surveys.
    Sheared, Weyl tensor, Cosmography, Anisotropy, Curvature tensor, Gravitational radiation, Luminosity distance, Hubble flow, Einstein-de Sitter universe, Vorticity...
  • Big-bang nucleosynthesis (BBN), today a pillar of modern cosmology, began with the trailblazing 1948 paper of Alpher, Bethe and Gamow. In it, they proposed non-equilibrium nuclear processes in the early Universe ($t \sim 1000\,$sec) and an early radiation-dominated phase to explain the abundances of all the chemical elements. Their model was fundamentally flawed, but it initiated a complex and interesting path to the modern theory of BBN, which explains the abundances of only the lightest chemical elements (mostly $^4$He), and to the discovery of the cosmic microwave background (CMB). The purpose of this paper is to clarify the basic physics of BBN, adding some new insights, and to describe how the modern theory developed. I finish with a discussion of two misunderstandings about BBN that still persist, and the tale of the pre-discovery predictions of the temperature of the CMB and the missed opportunity it turned out to be.
    Big bang nucleosynthesis, Cosmic microwave background, Nucleosynthesis, Neutron capture, Coulomb barrier, Entropy, CMB temperature, Cosmology, Nuclear statistical equilibrium, Big Bang...
  • We consider a dynamical model for dark energy based on an ultralight mass scalar field with very large-scale inhomogeneities. This model may cause observable impacts on the anisotropic properties of the cosmic microwave background (CMB) intensity and luminosity distance. We formulate the model as the cosmological perturbations of the superhorizon scales, focusing on the local region of our universe. Moreover, we investigated the characteristic properties of the late-time evolution of inhomogeneous dark energy. Our numerical solutions show that the model can mimic the standard $\Lambda$CDM cosmology while including spatially dependent dark energy with flexible ranges of the model parameters. We put a constraint on the amplitude of these inhomogeneities of the dark energy on very large scales with the observations of the CMB anisotropies. We also discuss their influence on the estimation of the luminosity distance.
    Dark energy, Scalar field, Luminosity distance, Scale factor, Cosmic microwave background, Lambda-CDM model, Anisotropy, Cosmology, Supernova Type Ia, Metric perturbation...
  • We calculate the relativistic corrections to hydrostatic X-ray masses for galaxy clusters in Kottler spacetime, which is the spherically symmetric solution to Einstein's equations in General relativity endowed with a cosmological constant. The hydrostatic masses for clusters (calculated assuming Newtonian gravity) have been found to be underestimated compared to lensing masses, and this discrepancy is known as hydrostatic mass bias. Since the relativistic hydrostatic X-ray masses are automatically lower than lensing masses, under the edifice of Kottler metric, we check if the hydrostatic mass bias problem gets alleviated using this {\it ansatz}. We consider a sample of 18 galaxy clusters for this pilot test. We find that the ratio of X-ray to lensing mass is close to unity even in Kottler spacetime. Therefore, the effect of relativistic corrections to hydrostatic X-ray masses for galaxy clusters is negligible.
    Cluster of galaxies, Weak lensing mass estimate, Hydrostatic mass, Hydrostatics, Hydrostatic mass bias, Cosmological constant, General relativity, Relativistic correction, Galaxy mass, Virial cluster mass...
  • arXiv:2111.13805
    Lensing Without Borders is a cross-survey collaboration created to assess the consistency of galaxy-galaxy lensing signals ($\Delta\Sigma$) across different data-sets and to carry out end-to-end tests of systematic errors. We perform a blind comparison of the amplitude of $\Delta\Sigma$ using lens samples from BOSS and six independent lensing surveys. We find good agreement between empirically estimated and reported systematic errors, which agree to better than 2.3$\sigma$ in four lens bins and three radial ranges. For lenses with $z_{\rm L}>0.43$ and considering statistical errors, we detect a 3-4$\sigma$ correlation between lensing amplitude and survey depth. This correlation could arise from the increasing impact at higher redshift of unrecognised galaxy blends on shear calibration and imperfections in photometric redshift calibration. At $z_{\rm L}>0.54$ amplitudes may additionally correlate with foreground stellar density. The amplitude of these trends is within survey-defined systematic error budgets, which are designed to include known shear and redshift calibration uncertainty. Using a fully empirical and conservative method, we do not find evidence for large unknown systematics. Systematic errors greater than 15% (25%) are ruled out in three lens bins at 68% (95%) confidence at $z<0.54$. Differences with respect to predictions based on clustering are observed to be at the 20-30% level. Our results therefore suggest that lensing systematics alone are unlikely to fully explain the "lensing is low" effect at $z<0.54$. This analysis demonstrates the power of cross-survey comparisons and provides a promising path for identifying and reducing systematics in future lensing analyses.
    Galaxy, Systematic error, Baryon Oscillation Spectroscopic Survey, Sheared, Hyper Suprime-Cam, Sloan Digital Sky Survey, Lensing survey, Calibration, Photometric redshift, Lensing signal...
  • We explore a new approach for extracting reionization-era contributions to the kinetic Sunyaev-Zel'dovich (kSZ) effect. Our method utilizes the cross-power spectrum between filtered and squared maps of the cosmic microwave background (CMB) and photometric galaxy surveys during the Epoch of Reionization (EoR). This kSZ$^2$-galaxy cross-power spectrum statistic has been successfully detected at lower redshifts ($z \lesssim 1.5$). Here we extend this method to $z \gtrsim 6$ as a potential means to extract signatures of patchy reionization. We model the expected signal across multiple photometric redshift bins using semi-numeric simulations of the reionization process. In principle, the cross-correlation statistic robustly extracts reionization-era contributions to the kSZ signal, while its redshift evolution yields valuable information regarding the timing of reionization. Specifically, the model cross-correlation signal near $\ell \sim 1,000$ peaks during the early stages of the EoR, when about 20% of the volume of the universe is ionized. Detectable $\ell$ modes mainly reflect squeezed triangle configurations of the related bispectrum, quantifying correlations between the galaxy overdensity field on large scales and the smaller-scale kSZ power. We forecast the prospects for detecting this signal using future wide-field samples of Lyman-break galaxies from the Roman Space Telescope and next-generation CMB surveys including the Simons Observatory, CMB-S4, and CMB-HD. We find that a roughly 13$\sigma$ detection is possible for CMB-HD and Roman after summing over all $\ell$ modes. We discuss the possibilities for improving this approach and related statistics, with the aim of moving beyond simple detections to measure the scale and redshift dependence of the cross-correlation signals.
    Cosmic microwave background, Galaxy, Reionization, Signal to noise ratio, Epoch of reionization, Milky Way, Cross-correlation, Redshift bins, Statistics, Ionization fraction...
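    A schematic of the "filtered and squared" estimator described above, assuming healpy is available; cmb_map, gal_map and the filter fl are hypothetical placeholders for a cleaned CMB map, a galaxy overdensity map, and the chosen harmonic-space filter.

        import numpy as np
        import healpy as hp

        nside = 128
        lmax = 2 * nside
        rng = np.random.default_rng(0)
        cmb_map = rng.standard_normal(hp.nside2npix(nside))  # stand-in CMB map
        gal_map = rng.standard_normal(hp.nside2npix(nside))  # stand-in overdensity map

        # 1) filter the CMB map in harmonic space
        alm = hp.map2alm(cmb_map, lmax=lmax)
        fl = np.ones(lmax + 1)                    # trivial filter; replace as needed
        t_f = hp.alm2map(hp.almxfl(alm, fl), nside)

        # 2) square the filtered map, 3) cross-correlate with the galaxy map
        ksz2 = t_f**2
        cl_cross = hp.anafast(ksz2 - ksz2.mean(), gal_map, lmax=lmax)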
  • The power spectrum of the nonlinearly evolved large-scale mass distribution recovers only a minority of the information available on the mass fluctuation amplitude. We investigate the recovery of this information in 2D "slabs" of the mass distribution averaged over $\approx100$~$h^{-1}$Mpc along the line of sight, as might be obtained from photometric redshift surveys. We demonstrate a Hamiltonian Monte Carlo (HMC) method to reconstruct the non-Gaussian mass distribution in slabs, under the assumption that the projected field is a point-transformed Gaussian random field, Poisson-sampled by galaxies. When applied to the \textit{Quijote} $N$-body suite at $z=1$ and at a transverse resolution of 2~$h^{-1}$Mpc, the method recovers $\sim 30$ times more information than the 2D power spectrum in the well-sampled limit, recovering the Gaussian limit on information. At a more realistic galaxy sampling density of $0.01$~$h^3$Mpc$^{-3}$, shot noise reduces the information gain to a factor of five improvement over the power spectrum at resolutions of 4~$h^{-1}$Mpc or smaller.
    Galaxy, Hamiltonian Monte Carlo, Mass distribution, Cosmology, Poisson sampling, Milky Way, Random Field, Line of sight, Photometric redshift, Redshift survey...
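    A toy version of the forward model that the HMC reconstruction above inverts, under the stated assumption of a point-transformed (here lognormal) Gaussian field Poisson-sampled by galaxies; grid size and parameters are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)
        n, nbar, sigma_g = 128, 0.05, 0.8            # cells/side, mean counts, field rms

        g = sigma_g * rng.standard_normal((n, n))    # Gaussian field (white, for brevity)
        delta = np.exp(g - 0.5 * sigma_g**2) - 1.0   # lognormal point transform, <delta> = 0
        counts = rng.poisson(nbar * (1.0 + delta))   # Poisson-sampled galaxy counts

        # The HMC step would sample the posterior of g given counts, combining the
        # Poisson likelihood with the Gaussian prior on g.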
  • We show that the entropy of strings that wind around the Euclidean time circle is proportional to the Noether charge associated with translations along the T-dual time direction. We consider an effective target-space field theory which includes a large class of terms in the action with various modes, interactions and $\alpha'$ corrections. The entropy and the Noether charge are shown to depend only on the values of fields at the boundary of space. The classical entropy, which is proportional to the inverse of Newton's constant, is then calculated by evaluating the appropriate boundary term for various geometries with and without a horizon. We verify, in our framework, that for higher-curvature pure gravity theories, the Wald entropy of static neutral black hole solutions is equal to the entropy derived from the Gibbons-Hawking boundary term. We then proceed to discuss horizonless geometries which contain, due to the back-reaction of the strings and branes, a second boundary in addition to the asymptotic boundary. Near this ``punctured'' boundary, the time-time component of the metric and the derivatives of its logarithm approach zero. Assuming that there are such non-singular solutions, we identify the entropy of the strings and branes in this geometry with the entropy of the solution to all orders in $\alpha'$. If the asymptotic region of an $\alpha'$-corrected neutral black hole is connected through the bulk to a puncture, then the black hole entropy is equal to the entropy of the strings and branes. Later, we discuss configurations similar to the charged black p-brane solutions of Horowitz and Strominger, with the second boundary, and show that, to leading order in the $\alpha'$ expansion, the classical entropy of the strings and branes is exactly equal to the Bekenstein-Hawking entropy. This result is extended to a configuration that asymptotes to AdS.
    Entropy, Black hole, Horizon, Bekenstein-Hawking entropy, Effective field theory, Manifold, Dilaton, Anti de Sitter space, String theory, Radiative Recombination...
  • We present a learning-based framework, recurrent transformer network (RTN), to restore heavily degraded old films. Instead of performing frame-wise restoration, our method is based on the hidden knowledge learned from adjacent frames that contain abundant information about the occlusion, which is beneficial to restore challenging artifacts of each frame while ensuring temporal coherency. Moreover, contrasting the representation of the current frame and the hidden knowledge makes it possible to infer the scratch position in an unsupervised manner, and such defect localization generalizes well to real-world degradations. To better resolve mixed degradation and compensate for the flow estimation error during frame alignment, we propose to leverage more expressive transformer blocks for spatial restoration. Experiments on both synthetic dataset and real-world old films demonstrate the significant superiority of the proposed RTN over existing solutions. In addition, the same framework can effectively propagate the color from keyframes to the whole video, ultimately yielding compelling restored films. The implementation and model will be released at https://github.com/raywzy/Bringing-Old-Films-Back-to-Life.
    Transformer, Hidden state, Convolution Neural Network, Attention, Recurrent neural network, Synthetic Data, Architecture, Super-resolution, Ground truth, Optimization...
  • Hadwiger's Conjecture from 1943 states that every graph with no $K_{t}$ minor is $(t-1)$-colorable; it remains wide open for all $t\ge 7$. For positive integers $t$ and $s$, let $\mathcal{K}_t^{-s}$ denote the family of graphs obtained from the complete graph $K_t$ by removing $s$ edges. We say that a graph $G$ has no $\mathcal{K}_t^{-s}$ minor if it has no $H$ minor for every $H\in \mathcal{K}_t^{-s}$. Jakobsen in 1971 proved that every graph with no $\mathcal{K}_7^{-2}$ minor is $6$-colorable. In this paper we consider the next step and prove that every graph with no $\mathcal{K}_8^{-4}$ minor is $7$-colorable. Our result implies that $H$-Hadwiger's Conjecture, suggested by Paul Seymour in 2017, is true for every graph $H$ on eight vertices such that the complement of $H$ has maximum degree at least four, a perfect matching, a triangle and a cycle of length four. Our proof utilizes an extremal function for $\mathcal{K}_8^{-4}$ minors obtained in this paper, generalized Kempe chains of contraction-critical graphs by Rolek and the second author, and the method for finding $K_7$ minors from three different $K_5$ subgraphs by Kawarabayashi and Toft; this method was first developed by Robertson, Seymour and Thomas in 1993 to prove Hadwiger's Conjecture for $t=6$.
    Graph
  • Motivated by the famous Hadwiger's Conjecture, we study the properties of $8$-contraction-critical graphs with no $K_7$ minor; we prove that every $8$-contraction-critical graph with no $K_7$ minor has at most one vertex of degree $8$, where a graph $G$ is $8$-contraction-critical if $G$ is not $7$-colorable but every proper minor of $G$ is $7$-colorable. This is one step in our effort to prove that every graph with no $K_7$ minor is $7$-colorable, which remains open.
    Graph
  • In this article, energy decay is established for the damped wave equation on compact Riemannian manifolds where the damping coefficient is allowed to depend on time. It is shown that the energy of solutions decays at an exponential rate if the damping coefficient is sufficiently regular and satisfies a time dependent analogue of the classical geometric control condition. This is shown via a positive commutator estimate where the escape function is explicitly constructed in terms of the damping coefficient.
    Wave equation, Energy, Riemannian manifold, Commutator...
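    For reference, the energy controlled in the entry above is the standard one for the damped wave equation $\partial_t^2 u - \Delta_g u + a(t,x)\,\partial_t u = 0$ on a compact Riemannian manifold $(M,g)$,
    $$ E(t) = \frac{1}{2} \int_M \left( |\partial_t u|^2 + |\nabla_g u|^2 \right) dV_g, $$
    and exponential decay means $E(t) \le C e^{-ct} E(0)$ for all solutions, with constants independent of the initial data.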
  • This expository note gives a digest version of Hörmander's propagation of singularities theorem for the wave equation.
    Wave equation, Sobolev space, Regularization, Dilute magnetic semiconductors, Attention, Bounded operator, Integral curve, Commutant, Propagator, Fourier integral operator...
  • Standard two-parameter compressions of the infinite-dimensional dark energy model space show crippling limitations even with current SN-Ia data. Firstly, they cannot cope with rapid evolution: the best-fit to the latest SN-Ia data shows late and very rapid evolution to $w_0 = -2.85$. However, all of the standard parametrisations (incorrectly) claim that this best-fit is ruled out at more than $2\sigma$, primarily because they track it well only at very low redshifts, $z < 0.2$. Further, they incorrectly rule out the observationally acceptable region $w \ll -1$ for $z > 1$. Secondly, the parametrisations give wildly different estimates for the redshift of acceleration, which vary from $z_{\rm acc}=0.14$ to $z_{\rm acc}=0.59$. Although these failings are largely cured by including higher-order terms (3 or 4 parameters), this results in new degeneracies which open up large regions of previously ruled-out parameter space. Finally, we test the parametrisations against a suite of theoretical quintessence models. The widely used linear expansion in $z$ is generally the worst, with errors of up to 10% at $z=1$ and 20% at $z > 2$. All of this casts serious doubt on the usefulness of the standard two-parameter compressions in the coming era of high-precision dark energy cosmology and emphasises the need for decorrelated compressions with at least three parameters.
    Compressibility, Dark energy, Supernova Type Ia, Quintessence, Cosmology, Equation of state, Confidence interval, Equation of state of dark energy, Fitness model, Supergravity...
  • The emergence of the bulk Hilbert space is a mysterious concept in holography. In the double scaled SYK model, slicing open the chord diagrams of arXiv:1811.02584 explicitly defines the bulk Hilbert space. The bulk Hilbert space resembles that of a lattice field theory where the length of the lattice is dynamical and determined by the chord number. Under an explicit bulk-to-boundary map, states of fixed chord number map to particular entangled 2-sided states with a corresponding size. This bulk reconstruction is well-defined even when quantum gravity effects are important. Acting on the double scaled Hilbert space is a Type II$_1$ algebra of observables $\mathcal{A}$, which includes the Hamiltonian and matter operators. We identify a subalgebra of $\mathcal{A}$ that becomes the JT gravitational algebra in the quantum Schwarzian limit, including the SL(2,R) symmetry generators.
    Sachdev-Ye-Kitaev model, Hamiltonian, Wormhole, Scaling limit, Lattice (order), Toronto Face Database, Partition function, Wavefunction, Entropy, Maximally entangled states...
  • With the rapid increase of fast radio burst (FRB) detections within the past few years, there is now a catalogue being developed for all-sky extragalactic dispersion measure (DM) observations, in addition to the existing collection of all-sky extragalactic Faraday rotation measurements (RMs) of radio galaxies. We present a method of reconstructing all-sky information of the Galactic magnetic field component parallel to the line of sight, $B_{\parallel}$, using simulated observations of the RM and DM along lines of sight to radio galaxies and FRB populations, respectively. This technique is capable of distinguishing between different input Galactic magnetic field and thermal electron density models. Significant extragalactic contributions to the DM are the predominant impediment in accurately reconstructing the Galactic DM and $\left<B_{\parallel}\right>$ skies. We look at ways to improve the reconstruction by applying a filtering algorithm on the simulated DM lines of sight, and we derive generalized corrections for DM observations at $|b| > 10$ deg that help to disentangle Galactic and extragalactic DM contributions. Overall, we are able to reconstruct both large-scale Galactic structure and local features in the Milky Way's magnetic field from the assumed models. We discuss the application of this technique to future FRB observations and address possible differences between our simulated model and observed data, namely: adjusting the priors of the inference model, an unevenly distributed population of FRBs on the sky, and localized extragalactic DM structures.
    Dispersion measure, Fast Radio Bursts, Line of sight, Galactic magnetic field, Radio galaxy, Inference, Pulsar, Canadian Hydrogen Intensity Mapping Experiment, Galactic structure, Milky Way...
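    For context, the reconstruction above rests on the standard relation between the two line-of-sight integrals: the electron-density-weighted mean parallel field toward a source is
    $$ \left\langle B_{\parallel} \right\rangle \simeq 1.232\,\mu\mathrm{G} \left( \frac{\mathrm{RM}}{\mathrm{rad\,m^{-2}}} \right) \left( \frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}} \right)^{-1}, $$
    which is why disentangling the Galactic and extragalactic DM contributions is the limiting step for the $\left<B_{\parallel}\right>$ sky.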
  • Fake news detection is a critical yet challenging problem in Natural Language Processing (NLP). The rapid rise of social networking platforms has not only yielded a vast increase in information accessibility but has also accelerated the spread of fake news. Thus, the effect of fake news has been growing, sometimes extending to the offline world and threatening public safety. Given the massive amount of Web content, automatic fake news detection is a practical NLP problem useful to all online content providers, in order to reduce the human time and effort to detect and prevent the spread of fake news. In this paper, we describe the challenges involved in fake news detection and also describe related tasks. We systematically review and compare the task formulations, datasets and NLP solutions that have been developed for this task, and also discuss their potential and limitations. Based on our insights, we outline promising research directions, including more fine-grained, detailed, fair, and practical detection models. We also highlight the difference between fake news detection and other related tasks, and the importance of NLP solutions for fake news detection.
    Computational linguistics, Long short term memory, Attention, Fact checking, Convolution Neural Network, Machine learning, Neural network, Twitter, Facebook, Network model...
  • Homotopy type theory is a formal language for doing abstract homotopy theory --- the study of identifications. But in unmodified homotopy type theory, there is no way to say that these identifications come from identifying the path-connected points of a space. In other words, we can do abstract homotopy theory, but not algebraic topology. Shulman's Real Cohesive HoTT remedies this issue by introducing a system of modalities that relate the spatial structure of types to their homotopical structure. In this paper, we develop a theory of modal fibrations for a general modality, and apply it in particular to the shape modality of Real Cohesion. We then give examples of modal fibrations in Real Cohesive HoTT, and develop the theory of covering spaces.
    Fibration, Cohesion, Hopf fibration, Factorization system, Bundle, Vector space, Algebraic topology, Automorphism, Special orthogonal, Orientation...
  • Warm dark matter (WDM) can potentially explain small-scale observations that currently challenge the cold dark matter (CDM) model, as warm particles suppress structure formation due to free-streaming effects. Observing the small-scale matter distribution provides a valuable way to distinguish between CDM and WDM. In this work, we use observations from the Dark Energy Survey and Pan-STARRS1, which observe 270 Milky Way satellites after completeness corrections. We test WDM models by comparing the number of satellites in the Milky Way with predictions derived from the Semi-Analytical SubHalo Inference ModelIng (SASHIMI) code, which we develop based on the extended Press-Schechter formalism and a prescription for subhalos' tidal evolution. We robustly rule out WDM with masses lighter than 4.4 keV at the 95% confidence level for a Milky-Way halo mass of $10^{12} M_\odot$. The limits are a weak function of the (yet uncertain) Milky-Way halo mass, and vary as $m_{\rm WDM}>3.6$-$5.1$ keV for $(0.6$-$2.0) \times 10^{12} M_\odot$. For sterile neutrinos, which form a subclass of WDM, we obtain the constraint $m_{\nu_s}>11.6$ keV for a Milky-Way halo mass of $10^{12} M_{\odot}$. These results, based on SASHIMI, neither rely on any assumptions about galaxy-formation physics nor are limited by numerical resolution. The models therefore offer a robust and fast way to constrain WDM models. By applying a satellite-forming condition, however, we can rule out WDM masses lighter than 9.0 keV for a Milky-Way halo mass of $10^{12} M_\odot$.
    Dark matter subhalo, Warm dark matter, Virial mass, Cold dark matter, Milky Way halo, WDM particles, Galaxy Formation, WDM particle mass, Halo accretion history, Extended Press-Schechter formalism...
  • The second catalogue of Planck Sunyaev-Zeldovich (SZ) sources, hereafter PSZ2, represents the largest galaxy cluster sample selected by means of its SZ signature in a full-sky survey. Using telescopes at the Canary Island observatories, we conducted the long-term observational program 128-MULTIPLE-16/15B (hereafter LP15), a large and complete optical follow-up campaign of all the unidentified PSZ2 sources in the northern sky, with declinations above $-15^\circ$ and no correspondence in the first Planck catalogue PSZ1. This paper is the third and last in the series of LP15 results, after Streblyanska et al. (2019) and Aguado-Barahona et al. (2019), and presents all the spectroscopic observations of the full program. We complement these LP15 spectroscopic results with Sloan Digital Sky Survey (SDSS) archival data and other observations from a previous program (ITP13-08), and present a catalog of 388 clusters and groups of galaxies, including estimates of their velocity dispersion. The majority of them (356) are the optical counterpart of a PSZ2 source. A subset of 297 of those clusters is used to construct the $M_{\rm SZ}-M_{\rm dyn}$ scaling relation, based on the estimated SZ mass from Planck measurements and our dynamical mass estimates. We discuss and correct for different statistical and physical biases in the estimation of the masses, such as the Eddington bias when estimating $M_{\rm SZ}$, and the aperture and number of galaxies used to calculate $M_{\rm dyn}$. The SZ-to-dynamical mass ratio for those 297 PSZ2 clusters is $(1-B) = 0.80\pm0.04$ (stat) $\pm 0.05$ (sys), with only marginal evidence for a possible mass dependence of this factor. Our value is consistent with previous results in the literature, but presents a significantly smaller uncertainty due to the use of the largest sample size for this type of study.
    Velocity dispersion, Scaling law, Programming, Cluster of galaxies, Sloan Digital Sky Survey, Galaxy, Optical identification, Regression, Intrinsic scatter, Signal to noise ratio...
  • LUCI is a general-purpose spectral line-fitting pipeline which natively integrates machine learning algorithms to initialize fit functions. LUCI currently uses point estimates obtained from a convolutional neural network (CNN) to inform optimization algorithms; this methodology has shown great promise by reducing computation time and the chance of falling into a local minimum when using convex optimization methods. In this update to LUCI, we expand upon the CNN developed in Rhea et al. 2020 so that it outputs Gaussian posterior distributions of the fit parameters of interest (the velocity and broadening) rather than simple point estimates. Moreover, these posteriors are then used to inform the priors in a Bayesian inference scheme, either emcee or dynesty. The code is publicly available at https://github.com/crhea93/LUCI.
    Convolution Neural Network, Optimization, Bayesian approach, Canada-France-Hawaii Telescope, Machine learning, Spectral line, Standard deviation, Programming, Velocity dispersion, Gaussian distribution...
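    A minimal sketch of the scheme described above, with a hypothetical CNN posterior (mean and width chosen arbitrarily here) serving as the Gaussian prior in an emcee fit of a single Gaussian line; names and numbers are illustrative, not LUCI's actual API.

        import numpy as np
        import emcee

        rng = np.random.default_rng(2)
        x = np.linspace(-50.0, 50.0, 200)           # velocity axis [km/s]
        y_obs = np.exp(-0.5 * ((x - 3.0) / 12.0) ** 2) + 0.05 * rng.standard_normal(x.size)

        mu_cnn = np.array([2.5, 11.0])    # CNN posterior mean: (velocity, broadening)
        sig_cnn = np.array([2.0, 2.0])    # CNN posterior width

        def log_prob(theta):
            v, w = theta
            if w <= 0.0:
                return -np.inf
            lp = -0.5 * np.sum(((theta - mu_cnn) / sig_cnn) ** 2)  # CNN-informed prior
            model = np.exp(-0.5 * ((x - v) / w) ** 2)
            return lp - 0.5 * np.sum((y_obs - model) ** 2) / 0.05**2

        nwalkers, ndim = 16, 2
        p0 = mu_cnn + 1e-2 * rng.standard_normal((nwalkers, ndim))
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
        sampler.run_mcmc(p0, 500)
        print(sampler.get_chain(flat=True, discard=100).mean(axis=0))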
  • arXiv:2108.10347
    Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the 3D large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmology. In particular, we study the imprints of dynamic and geometric distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11000 voids and estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on $f/b$ and $D_M H$, where $f$ is the linear growth rate of density fluctuations, $b$ the galaxy bias, $D_M$ the comoving angular diameter distance, and $H$ the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of $f/b$ and 0.5% on $D_M H$ in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter $w$ for dark energy will be measured with a precision of about 10%, consistent with previous more approximate forecasts.
    Cosmic void, Galaxy, Cosmology, Redshift bins, Euclid mission, Redshift space, Nuisance parameter, Redshift-space distortion, Cross-correlation function, Calibration...
  • New early dark energy (NEDE) makes the cosmic microwave background consistent with a higher value of the Hubble constant inferred from supernovae observations. It is an improvement over the old early dark energy model (EDE) because it explains naturally the decay of the extra energy component in terms of a vacuum first-order phase transition that is triggered by a subdominant scalar field at zero temperature. With hot NEDE, we introduce a new mechanism to trigger the phase transition. It relies on thermal corrections that subside as a subdominant radiation fluid in a dark gauge sector cools. We explore the phenomenology of hot NEDE and identify the strong supercooled regime as the scenario favored by phenomenology. In a second step, we propose different microscopic embeddings of hot NEDE. This includes the (non-)Abelian dark matter model, which has the potential to also resolve the LSS tension through interactions with the dark radiation fluid. We also address the coincidence problem generically present in EDE models by relating NEDE to the mass generation of neutrinos via the inverse seesaw mechanism. We finally propose a more complete dark sector model, which embeds the NEDE field in a larger symmetry group and discuss the possibility that the hot NEDE field is central for spontaneously breaking lepton number symmetry.
    Phase transitions, Dark sector, Early dark energy, Dark Radiation, Sterile neutrino, Bubble wall, Neutrino mass, Supercooling, Cosmic microwave background, Scalar field...
  • Recently, Active Galactic Nuclei (AGNs) have been proposed as standardizable candles, thanks to an observed non-linear relation between their X-ray and optical-ultraviolet (UV) luminosities, which provides an independent measurement of their distances. In this paper, we use these observables for the first time to estimate the parameters of $f(R)$ gravity models (specifically the Hu-Sawicki and the exponential models) together with the cosmological parameters. The importance of this type of modified gravity theory lies in the fact that it can explain the late-time accelerated expansion of the universe without the inclusion of a dark energy component. We have also included other observational data in the analyses, such as estimates of the Hubble parameter $H(z)$ from cosmic chronometers, the Pantheon Type Ia supernovae compilation, and Baryon Acoustic Oscillation measurements. Our results show that the allowed parameter space is restricted when both AGN and BAO data are added to the CC and SnIa data, with the BAO data set being the most restrictive one. We can also conclude that even though our results are consistent with those from the $\Lambda$CDM model, small deviations from General Relativity, which can be successfully described by the $f(R)$ models studied in this paper, are also allowed by the considered data sets.
    Baryon acoustic oscillations, Active Galactic Nuclei, Lambda-CDM model, Quasar, Cosmological model, Cold dark matter, Hubble parameter, Luminosity, Friedmann equations, Galaxy...
  • This invited white paper, submitted to the National Science Foundation in January of 2020, discusses the current challenges faced by the United States astronomical instrumentation community in the era of extremely large telescopes. Some details may have changed since submission, but the basic tenets are still very much valid. The paper summarizes the technical, funding, and personnel challenges the US community faces, provides an informal census of current instrumentation groups in the US, and compares the state of affairs in the US with that of the European community, which builds astronomical instruments from consortia of large hard-money funded instrument centers in a coordinated fashion. With the recent release of the Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020), it is clear that strong community support exists for this next generation of large telescopes in the US. Is the US ready? Is there sufficient talent, facilities, and resources in the community today to meet the challenge of developing the complex suite of instruments envisioned for two US ELTs? These questions are addressed, along with thoughts on how the National Science Foundation can help build a more viable and stable instrumentation program in the US. These thoughts are intended to serve as a starting point for a broader discussion, with the end goal being a plan that puts the US astronomical instrumentation community on solid footing and poised to take on the challenges presented by the ambitious goals we have set in the era of ELTs.
    Engineering, Telescopes, Observatories, European Extremely Large Telescope, Equipment and apparatus, Programming, Giant Magellan Telescope, European Southern Observatory, Spectrographs, Software...
  • The unknown physical nature of dark energy motivates, in cosmology, the study of modifications of the gravity theory at large distances. One such type of modification comprises the gravity theories generally termed $f(R)$. In this paper we use observational data to both constrain and test the Starobinsky $f(R)$ model \cite{Starobinsky2007}, using updated measurements of the dynamics of the expansion of the universe, $H(z)$, and of the growth rate of cosmic structures, $[f\sigma_8](z)$, where the distinction between the concordance $\Lambda$CDM model and modified gravity models $f(R)$ becomes clearer. We use MCMC likelihood analyses to explore the parameter space of the $f(R)$ model using $H(z)$ and $[f\sigma_8](z)$ data, both individually and jointly, and further examine which of the models best fits the joint data. To further test the Starobinsky model, we use a method proposed by Linder \cite{Linder2017}, where the data from the observables are jointly binned in redshift space. This allows us to further explore the model's parameter that best fits the data in comparison to the $\Lambda$CDM model. The joint analysis of $H(z)$ and $[f\sigma_8](z)$ shows that the $n=2$ Starobinsky $f(R)$ model fits the observational data well. In the end, we confirm that this joint analysis is able to break the degeneracy between modified gravity models, as proposed in the original work \cite{Starobinsky2007}. Our results indicate that the Starobinsky $f(R)$ model provides a good fit to the currently available data for a set of values of its parameters, being, therefore, a possible alternative to the $\Lambda$CDM model.
    Lambda-CDM model, Expansion of the Universe, Dark energy, Modified gravity, Cosmology, General relativity, Solar system, Hubble parameter, Sigma8, Cosmological model...
  • Observational astrophysics consists of making inferences about the Universe by comparing data and models. The credible intervals placed on model parameters are often as important as the maximum a posteriori probability values, as the intervals indicate concordance or discordance between models and with measurements from other data. Intermediate statistics (e.g. the power spectrum) are usually measured and inferences made by fitting models to these rather than the raw data, assuming that the likelihood for these statistics has multivariate Gaussian form. The covariance matrix used to calculate the likelihood is often estimated from simulations, such that it is itself a random variable. This is a standard problem in Bayesian statistics, which requires a prior to be placed on the true model parameters and covariance matrix, influencing the joint posterior distribution. As an alternative to the commonly-used Independence-Jeffreys prior, we introduce a prior that leads to a posterior that has approximately frequentist matching coverage. This is achieved by matching the covariance of the posterior to that of the distribution of true values of the parameters around the maximum likelihood values in repeated trials, under certain assumptions. Using this prior, credible intervals derived from a Bayesian analysis can be interpreted approximately as confidence intervals, containing the truth a certain proportion of the time for repeated trials. Linking frequentist and Bayesian approaches that have previously appeared in the astronomical literature, this offers a consistent and conservative approach for credible intervals quoted on model parameters for problems where the covariance matrix is itself an estimate.
    Covariance, Covariance matrix, Frequentist approach, Credible interval, Bayesian, Bayesian approach, Fisher information matrix, Gaussian distribution, Confidence interval, Coverage probability...
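    For context, a standard frequentist patch that the prior above generalizes: when the covariance $\hat{\Sigma}$ of a length-$p$ data vector is estimated from $n_s$ independent simulations, an unbiased estimate of the inverse covariance carries the Hartlap factor,
    $$ \Sigma^{-1} \approx \frac{n_s - p - 2}{n_s - 1}\, \hat{\Sigma}^{-1}, \qquad n_s > p + 2, $$
    which shrinks the precision matrix and correspondingly widens the quoted intervals.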
  • Next-generation cosmological surveys will observe larger cosmic volumes than ever before, enabling us to access information on the primordial Universe, as well as on relativistic effects. In a companion paper, we applied a Fisher analysis to forecast the expected precision on $f_{\rm NL}$ and the detectability of the lensing magnification and Doppler contributions to the power spectrum. Here we assess the bias on the best-fit values of $f_{\rm NL}$ and other parameters, from neglecting these light-cone effects. We consider forthcoming 21cm intensity mapping surveys (SKAO) and optical galaxy surveys (DESI and Euclid), both individually and combined together. We conclude that lensing magnification at higher redshifts must be included in the modelling of spectroscopic surveys. If lensing is neglected in the analysis, this produces a bias of more than 1$\sigma$ - not only on $f_{\rm NL}$, but also on the standard cosmological parameters.
    Cosmological parameters, Light cones, Large scale structure survey, Spectroscopic survey, Euclid mission, Fisher information matrix, 21 cm tomography, Dark Energy Spectroscopic Instrument, Covariance, Companion...
  • It has been recognized that the observables of large-scale structure (LSS) are susceptible to long-wavelength density and tidal fluctuations whose wavelengths exceed the accessible scale of a finite-volume observation, referred to as the super-sample modes. The super-sample modes modulate the growth and expansion rate of local structures, thus affecting the cosmological information encoded in the statistics of galaxy clustering data. In this paper, based on the Lagrangian perturbation theory, we develop a new formalism to systematically compute the response of a biased tracer of LSS, which is expressed perturbatively in terms of the matter density field of sub-survey modes, to the super-sample modes at the field level. The formalism presented here reproduces the power spectrum responses that have been previously derived, and provides an alternative way to compute statistical quantities with super-sample modes. As an application, we consider the statistics of the intrinsic alignments of galaxies and halos, and derive the field response of the galaxy/halo shape bias to the super-sample modes. Possible impacts of the long-mode contributions on the covariance of the three-dimensional power spectra of the intrinsic alignment are also discussed, and the signal-to-noise ratios are estimated.
    Ellipticity, Covariance, Milky Way, Redshift space, Galaxy, Galaxy density field, Large scale structure, Signal to noise ratio, Intrinsic alignment, B-modes...
  • In general relativity (GR), the internal dynamics of a self-gravitating system under free-fall in an external gravitational field should not depend on the external field strength. Recent work has claimed a statistical detection of an `external field effect' (EFE) using galaxy rotation curve data. We show that large uncertainties in rotation curve analyses and inaccuracies in published simulation-based external field estimates compromise the significance of the claimed EFE detection. We further show analytically that a qualitatively similar statistical signal is, in fact, expected in a $\Lambda$-cold dark matter ($\Lambda$CDM) universe without any violation of the strong equivalence principle. Rather, such a signal arises simply because of the inherent correlations between galaxy clustering strength and intrinsic galaxy properties. We explicitly demonstrate the effect in a baryonified mock catalog of a $\Lambda$CDM universe. Although the detection of an EFE-like signal is not, by itself, evidence for physics beyond GR, our work shows that the $\textit{sign}$ of the EFE-like correlation between the external field strength and the shape of the radial acceleration relation can be used to probe new physics: e.g., in MOND, the predicted sign is opposite to that in our $\Lambda$CDM mocks.
    Galaxy, Cold dark matter, Rotation Curve, Spitzer Photometry and Accurate Rotation Curves, Mock catalog, General relativity, Milky Way, Modified Newtonian Dynamics, Gravitational fields, Virial mass...
  • Astronomical observations reveal a major deficiency in our understanding of physics $-$ the detectable mass is insufficient to explain the observed motions in a huge variety of systems given our current understanding of gravity, Einstein's General theory of Relativity (GR). This missing gravity problem may indicate a breakdown of GR at low accelerations, as postulated by Milgromian dynamics (MOND). We review the MOND theory and its consequences, including in a cosmological context where we advocate a hybrid approach involving light sterile neutrinos to address MOND's cluster-scale issues. We then test the novel predictions of MOND using evidence from galaxies, galaxy groups, galaxy clusters, and the large-scale structure of the Universe. We also consider whether the standard cosmological paradigm ($\Lambda$CDM) can explain the observations and review several previously published highly significant falsifications of it. Our overall assessment considers both the extent to which the data agree with each theory and how much flexibility each has when accommodating the data, with the gold standard being a clear $a~priori$ prediction not informed by the data in question. Our conclusion is that MOND is favoured by a wealth of data across a huge range of astrophysical scales, ranging from the kpc scales of galactic bars to the Gpc scale of the local supervoid and the Hubble tension, which is alleviated in MOND through enhanced cosmic variance. We also consider several future tests, mostly on scales much smaller than galaxies.
    Galaxy, Rotation Curve, Milky Way, Local group, Andromeda galaxy, General relativity, Cluster of galaxies, Disk galaxy, Sun, Dark matter...
  • We perform a general test of the $\Lambda{\rm CDM}$ and $w {\rm CDM}$ cosmological models by comparing constraints on the geometry of the expansion history to those on the growth of structure. Specifically, we split the total matter energy density, $\Omega_M$, and (for $w {\rm CDM}$) dark energy equation of state, $w$, into two parameters each: one that captures the geometry, and another that captures the growth. We constrain our split models using current cosmological data, including type Ia supernovae, baryon acoustic oscillations, redshift space distortions, gravitational lensing, and cosmic microwave background (CMB) anisotropies. We focus on two tasks: (i) constraining deviations from the standard model, captured by the parameters $\Delta\Omega_M \equiv \Omega_M^{\rm grow}-\Omega_M^{\rm geom}$ and $\Delta w \equiv w^{\rm grow}-w^{\rm geom}$, and (ii) investigating whether the $S_8$ tension between the CMB and weak lensing can be translated into a tension between geometry and growth, i.e. $\Delta\Omega_M \neq 0$, $\Delta w \neq 0$. In both the split $\Lambda{\rm CDM}$ and $w {\rm CDM}$ cases, our results from combining all data are consistent with $\Delta\Omega_M = 0$ and $\Delta w = 0$. If we omit BAO/RSD data and constrain the split $w {\rm CDM}$ cosmology, we find the data prefers $\Delta w<0$ at $3.6\sigma$ significance and $\Delta\Omega_M>0$ at $4.2\sigma$ evidence. We also find that for both CMB and weak lensing, $\Delta\Omega_M$ and $S_8$ are correlated, with CMB showing a slightly stronger correlation. The general broadening of the contours in our extended model does alleviate the $S_8$ tension, but the allowed nonzero values of $\Delta\Omega_M$ do not encompass the $S_8$ values that would point toward a mismatch between geometry and growth as the origin of the tension.
    Cosmic microwave background, Baryon acoustic oscillations, Redshift-space distortion, Weak lensing, Supernova Type Ia, Standard cosmological model, Cosmological parameters, Cosmological model, Standard Model, Matter power spectrum...
  • We present component-separated maps of the primary cosmic microwave background/kinematic Sunyaev-Zel'dovich (SZ) amplitude and the thermal SZ Compton-$y$ parameter, created using data from the South Pole Telescope (SPT) and the Planck satellite. These maps, which cover the $\sim$2500 square degrees of the Southern sky imaged by the SPT-SZ survey, represent a significant improvement over previous such products available in this region by virtue of their higher angular resolution (1.25 arcminutes for our highest resolution Compton-$y$ maps) and lower noise at small angular scales. In this work we detail the construction of these maps using linear combination techniques, including our method for limiting the correlation of our lowest-noise Compton-$y$ map products with the cosmic infrared background. We perform a range of validation tests on these data products to test our sky modeling and combination algorithms, and we find good performance in all of these tests. Recognizing the potential utility of these data products for a wide range of astrophysical and cosmological analyses, including studies of the gas properties of galaxies, groups, and clusters, we make these products publicly available at http://pole.uchicago.edu/public/data/sptsz_ymap and on the NASA/LAMBDA website.
    Cosmic infrared background, Spectral energy distribution, Atacama Cosmology Telescope, Covariance, Transfer function, SPT-SZ survey, Covariance matrix, Internal linear combination, Planck mission, Full width at half maximum...
  • We review the origins, motivations, and implications for cosmology and black holes, of our proposal that "dark energy" is not a quantum vacuum energy, but rather arises from a Weyl scaling invariant nonderivative component of the gravitational action.
    Black hole, Dark energy, Vacuum energy, Scale invariance, Cosmology, Horizon, Event horizon, Einstein field equations, Proper time, Apparent horizon...
  • The information content of the minimum spanning tree (MST), used to capture higher-order statistics and information from the cosmic web, is compared to that of the power spectrum for a $\nu\Lambda$CDM model. The measurements are made in redshift space using haloes from the Quijote simulation of mass $\geq 3.2\times 10^{13}\,h^{-1}{\rm M}_{\odot}$ in a box of length $L_{\rm box}=1\,h^{-1}{\rm Gpc}$. The power spectrum multipoles (monopole and quadrupole) are computed for Fourier modes in the range $0.006 < k < 0.5\, h{\rm Mpc}^{-1}$. For comparison, the MST is measured with a minimum length scale of $l_{\min}\simeq13\,h^{-1}{\rm Mpc}$. Combining the MST and power spectrum allows many of the individual degeneracies to be broken; on its own, the MST provides tighter constraints on the sum of neutrino masses $M_{\nu}$ and the cosmological parameters $h$, $n_{\rm s}$, and $\Omega_{\rm b}$, but the power spectrum alone provides tighter constraints on $\Omega_{\rm m}$ and $\sigma_{8}$. Combined, we find constraints on all parameters that improve on the power spectrum alone by a factor of two or greater (for $M_{\nu}$ there is a factor of four improvement). These improvements appear to be driven by the MST's sensitivity to small-scale clustering, where the effect of neutrino free-streaming becomes relevant, and to high-order statistical information in the cosmic web. The MST is shown to be a powerful tool for cosmology and neutrino mass studies, and could therefore play a pivotal role in ongoing and future galaxy redshift surveys (such as DES, DESI, \emph{Euclid}, and Rubin-LSST).
    Minimum spanning tree, Statistics, Neutrino mass, Statistical estimator, Cosmology, Fisher information matrix, Cosmic web, Neutrino, Cosmological parameters, Galaxy...
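    A minimal example of the statistic above: the MST of a point set (uniform random points standing in for the haloes) built with SciPy, whose edge lengths feed the MST summary statistics; box size and point count are illustrative only.

        import numpy as np
        from scipy.spatial import distance_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(4)
        pts = rng.uniform(0.0, 1000.0, size=(500, 3))   # toy box, h^-1 Mpc units

        mst = minimum_spanning_tree(distance_matrix(pts, pts))
        edges = mst.data                                 # the N-1 MST edge lengths
        print(edges.size, edges.mean(), edges.max())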
  • We use the galaxy rotation curves in the SPARC database to compare 9 different dark matter and modified gravity models on an equal footing, paying special attention to the stellar mass-to-light ratios. We compare three non-interacting dark matter models, a self-interacting DM (SIDM) model, two hadronically interacting DM (HIDM) models, and three modified Newtonian dynamics type models: MOND, the Radial Acceleration Relation (RAR) and a maximal-disk model. The models with DM-gas interactions generate a disky component in the dark matter, which significantly improves the fits to the rotation curves compared to all other models except an Einasto halo; the MOND-type models give significantly worse fits.
    Galaxy, Rotation Curve, Self-interacting dark matter, Modified Newtonian Dynamics, Spitzer Photometry and Accurate Rotation Curves, Mass to light ratio, Milky Way, Navarro-Frenk-White profile, Dark matter halo, Cold dark matter...
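    For reference, the RAR model compared above is commonly written with a single acceleration scale $g_\dagger \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}$ relating the observed and baryonic centripetal accelerations:
    $$ g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}. $$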
  • We derive structure-formation limits on dark matter (DM) composed of keV-scale axion-like particles (ALPs), produced via freeze-in through interactions with photons and Standard Model (SM) fermions. We employ Lyman-alpha (Ly-$\alpha$) forest data sets as well as the observed number of Milky Way (MW) subhalos. We compare results obtained using Maxwell-Boltzmann and quantum statistics for describing the SM bath. It should be emphasized that the presence of logarithmic divergences complicates the calculation of the production rate, which cannot be parameterized with a simple power-law behaviour. The obtained results, in combination with X-ray bounds, exclude the possibility of photophilic "frozen-in" ALP DM with mass below $\sim 19\,\mathrm{keV}$. For the photophobic ALP scenario, in which DM couples primarily to SM fermions, the ALP DM distribution function is peaked at somewhat lower momentum, and hence for such a realization we find weaker limits on the DM mass. Future facilities, such as the upcoming Vera C. Rubin Observatory, will provide measurements with which the current bounds can be significantly improved, to $\sim 80\,\mathrm{keV}$.
    Axion-like particle, Dark matter, Structure formation, Statistics, Standard Model, Milky Way, Standard Model fermion, Dark matter subhalo, Inflation, Maxwell-Boltzmann statistics...
  • The kinetic Sunyaev Zel'dovich (kSZ) and moving lens effects, secondary contributions to the cosmic microwave background (CMB), carry significant cosmological information due to their dependence on the large-scale peculiar velocity field. Previous work identified a promising means of extracting this cosmological information using a set of quadratic estimators for the radial and transverse components of the velocity field. These estimators are based on the statistically anisotropic components of the cross-correlation between the CMB and a tracer of large scale structure, such as a galaxy redshift survey. In this work, we assess the challenges to the program of velocity reconstruction posed by various foregrounds and systematics in the CMB and galaxy surveys, as well as biases in the quadratic estimators. To do so, we further develop the quadratic estimator formalism and implement a numerical code for computing properly correlated spectra for all the components of the CMB (primary/secondary blackbody components and foregrounds) and a photometric redshift survey, with associated redshift errors, to allow for accurate forecasting. We create a simulation framework for generating realizations of properly correlated CMB maps and redshift binned galaxy number counts, assuming the underlying fields are Gaussian, and use this to validate a velocity reconstruction pipeline and assess map-based systematics such as masking. We highlight the most significant challenges for velocity reconstruction, which include biases associated with: modelling errors, characterization of redshift errors, and coarse graining of cosmological fields on our past light cone. Despite these challenges, the outlook for velocity reconstruction is quite optimistic, and we use our reconstruction pipeline to confirm that these techniques will be feasible with near-term CMB experiments and photometric galaxy redshift surveys.
    Statistical estimator, Cosmic microwave background, Galaxy, Radial velocity, Milky Way, Light cones, Principal component, Large scale structure survey, Cosmic infrared background, Anisotropy...
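    The correlated-realization machinery above boils down to a per-multipole Cholesky factorization of the covariance between fields. A schematic (not the authors' code) for drawing harmonic coefficients of two correlated Gaussian fields with prescribed auto- and cross-spectra:

    ```python
    import numpy as np

    def correlated_gaussian_alms(cl_xx, cl_yy, cl_xy, seed=0):
        """Draw harmonic coefficients for two correlated Gaussian fields X and Y
        (e.g. CMB temperature and one galaxy redshift bin) with prescribed auto-
        and cross-spectra, via a per-multipole 2x2 Cholesky factor.
        Schematic only: the m = 0 reality condition is ignored for brevity."""
        rng = np.random.default_rng(seed)
        alm_x, alm_y = [], []
        for ell in range(len(cl_xx)):
            # Cholesky factor of [[Cxx, Cxy], [Cxy, Cyy]] at this multipole
            l11 = np.sqrt(cl_xx[ell])
            l21 = cl_xy[ell] / l11 if l11 > 0 else 0.0
            l22 = np.sqrt(max(cl_yy[ell] - l21 ** 2, 0.0))
            n_m = ell + 1  # m = 0 .. ell
            u = (rng.normal(size=n_m) + 1j * rng.normal(size=n_m)) / np.sqrt(2)
            v = (rng.normal(size=n_m) + 1j * rng.normal(size=n_m)) / np.sqrt(2)
            alm_x.append(l11 * u)
            alm_y.append(l21 * u + l22 * v)
        return alm_x, alm_y
    ```

    Extending to many components (primary CMB, kSZ, foregrounds, several galaxy bins) just replaces the 2x2 factor with the Cholesky of the full per-ell covariance matrix.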
  • The elliptical power-law (EPL) model of the mass in a galaxy is widely used in strong gravitational lensing analyses. However, the distribution of mass in real galaxies is more complex. We quantify the biases due to this model mismatch by simulating and then analysing mock Hubble Space Telescope imaging of lenses with mass distributions inferred from SDSS-MaNGA stellar dynamics data. We find accurate recovery of source galaxy morphology, except for a slight tendency to infer sources to be more compact than their true size. The Einstein radius of the lens is also robustly recovered with 0.1% accuracy, as is the global density slope, with a 2.5% relative systematic error, compared to the 3.4% intrinsic dispersion. However, asymmetry in real lenses also leads to a spurious fitted 'external shear' with typical strength $\gamma_{\rm ext}=0.015$. Furthermore, time delays inferred from lens modelling without measurements of stellar dynamics are typically underestimated by $\sim$5%. Using such measurements from a sub-sample of 37 lenses would bias measurements of the Hubble constant $H_0$ by $\sim$9%. Although this work is based on a particular set of MaNGA galaxies, and the specific values of the detected biases may change for another set of strong lenses, our results strongly suggest that next-generation cosmography needs to use more complex lens mass models.
    Galaxy, Mass distribution, Sheared, Time delay, Systematic error, Mass profile, Mass-sheet degeneracy, Einstein radius, Deflection angle, Early-type galaxy...
  • Cross-correlating cosmic microwave background (CMB) lensing and galaxy clustering has been shown to greatly improve the constraints on the local primordial non-Gaussianity (PNG) parameter $f_{\rm NL}$ by reducing sample variance and also parameter degeneracies. To model the full use of the 3D information of galaxy clustering, we forecast $f_{\rm NL}$ measurements using the decomposition in the spherical Fourier-Bessel (SFB) basis, which can be naturally cross-correlated with 2D CMB lensing in spherical harmonics. At the same time, such a decomposition also enables us to constrain the growth rate of structure, a probe of gravity, through redshift-space distortions (RSD). As a comparison, we also consider the tomographic spherical harmonic (TSH) analysis of galaxy samples with different bin sizes. Assuming galaxy samples that mimic a few future surveys, we perform Fisher forecasts using linear modes for $f_{\rm NL}$ and the growth rate exponent $\gamma$ (the generic recipe is sketched after this entry), marginalized over standard $\Lambda$ cold dark matter ($\Lambda$CDM) cosmological parameters and two nuisance parameters that account for clustering bias and magnification bias. Compared to TSH analysis using only one bin, SFB analysis could improve $\sigma(f_{\rm NL})$ by factors of 3 to 12 thanks to large radial modes. With future wide-field and high-redshift photometric surveys like the LSST, the constraint $\sigma(f_{\rm NL}) < 1$ could be achieved using linear angular multipoles down to $\ell_{\rm min}\simeq 20$. Compared to using galaxy auto-power spectra only, joint analyses with CMB lensing could improve $\sigma(\gamma)$ by factors of 2 to 5 by reducing degeneracies with other parameters, especially the clustering bias. For future spectroscopic surveys like DESI or $\textit{Euclid}$, using linear scales, $\gamma$ could be constrained to $3\,\%$ precision assuming the GR fiducial value.
    CMB lensing, Galaxy, Galaxy clustering, Milky Way, Primordial Non-Gaussianities, Redshift-space distortion, Covariance, Cosmological parameters, Statistical estimator, Signal to noise ratio...
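    For reference, the Fisher forecasts quoted above follow the standard Gaussian recipe: for a zero-mean data vector with parameter-dependent covariance $C(\theta)$, $F_{ij} = \frac{1}{2}\mathrm{Tr}[C^{-1}\partial_i C\, C^{-1}\partial_j C]$, and the marginalized $1\sigma$ error on $\theta_i$ is $\sqrt{(F^{-1})_{ii}}$. A generic numerical sketch, not the paper's SFB-specific pipeline:

    ```python
    import numpy as np

    def fisher_matrix(cov, dcov):
        """F_ij = 0.5 Tr[C^-1 dC/dtheta_i C^-1 dC/dtheta_j] for a zero-mean
        Gaussian data vector; dcov is a list of derivative matrices dC/dtheta_i."""
        cinv = np.linalg.inv(cov)
        n = len(dcov)
        fish = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                fish[i, j] = 0.5 * np.trace(cinv @ dcov[i] @ cinv @ dcov[j])
        return fish

    # Marginalized 1-sigma errors, e.g. on f_NL and gamma, are then
    # sigma = np.sqrt(np.diag(np.linalg.inv(fish)))
    ```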
  • 2105.13539 et al.
    We present reconstructed convergence maps, or "mass maps", from the Dark Energy Survey (DES) third year (Y3) weak gravitational lensing data set. The mass maps are weighted projections of the density field (primarily dark matter) in the foreground of the observed galaxies. We use four reconstruction methods, each a maximum a posteriori estimate with a different model for the prior probability of the map: Kaiser-Squires (sketched after this entry), a null B-mode prior, a Gaussian prior, and a sparsity prior. All methods are implemented on the celestial sphere to accommodate the large sky coverage of the DES Y3 data. We compare the methods using realistic $\Lambda$CDM simulations with mock data that are closely matched to the DES Y3 data. We quantify the performance of the methods at the map level and then apply the reconstruction methods to the DES Y3 data, performing tests for systematic error effects. The maps are compared with optical foreground cosmic-web structures and are used to evaluate the lensing signal from cosmic-void profiles. The recovered dark matter map covers the largest sky fraction of any galaxy weak lensing map to date.
    B-modes, Dark Energy Survey, Sheared, Weak lensing, Galaxy, Mean squared error, Sparsity, E-modes, Systematic error, Wiener filter...
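    Of the four reconstruction methods above, Kaiser-Squires is the simplest: in the flat-sky limit it is a direct inversion of the shear in Fourier space. The DES Y3 maps use the curved-sky analogue; the following flat-sky version is only a schematic of the idea:

    ```python
    import numpy as np

    def kaiser_squires(gamma1, gamma2, pix=1.0):
        """Flat-sky Kaiser-Squires: invert a shear map to convergence with FFTs.
        Returns (kappa_E, kappa_B); the B-mode should be consistent with noise
        for a genuine lensing signal."""
        ny, nx = gamma1.shape
        lx, ly = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx, d=pix),
                             2 * np.pi * np.fft.fftfreq(ny, d=pix))
        lsq = lx**2 + ly**2
        lsq[0, 0] = 1.0  # avoid 0/0; the mean of kappa is unconstrained anyway
        # Shear and convergence are related by gamma_hat = D * kappa_hat with
        # D = (lx^2 - ly^2 + 2i lx ly) / l^2, so kappa_hat = conj(D) * gamma_hat.
        D = (lx**2 - ly**2 + 2j * lx * ly) / lsq
        kappa = np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma1 + 1j * gamma2))
        return kappa.real, kappa.imag
    ```

    The other three priors (null B-mode, Gaussian, sparsity) differ in how they regularize this inversion in the presence of a mask and shape noise.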
  • We study the spectral properties and accretion flow behavior of the ultraluminous X-ray source M82 X-1 using NuSTAR observations. We use the physical two-component advective flow (TCAF) model to fit the data and to derive the accretion flow properties of the source. From the model-fitted parameters, we find that M82 X-1 harbors an intermediate-mass black hole at its centre, with a mass that varies from $156.04^{+13.51}_{-15.30}$ to $380.96^{+28.38}_{-29.76}$ M$_\odot$ between epochs. The error-weighted average mass of the black hole is $273\pm43$ M$_\odot$ (the weighted-mean arithmetic is sketched after this entry), and it accretes at a nearly super-Eddington rate. The Compton cloud was compact, with a size of $\sim13 r_g$, and the shock compression ratio had nearly intermediate values except for epoch four. These indicate a possibly significant mass outflow from the inner region of the disk. The quasi-periodic oscillation (QPO) frequencies estimated from the model-fitted parameters reproduce the observed QPOs. The robustness of the model parameters is verified by drawing the confidence contours among them.
    Messier 82, Quasi-Periodic Oscillations, Ultraluminous X-ray, Intermediate-mass black hole, Accretion flow, Accretion, Astronomical X-ray source, M82 X-1, Nuclear Spectroscopic Telescope Array, Mass accretion rate...
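    The error-weighted average quoted above is an inverse-variance weighted mean over the epoch-by-epoch mass estimates. With symmetrized errors it reduces to a few lines; the intermediate values below are placeholders, since the abstract quotes only the extremes:

    ```python
    import numpy as np

    # Placeholder per-epoch masses (Msun) with symmetrized 1-sigma errors; the
    # abstract quotes only the extreme epochs, so the middle values are invented.
    mass = np.array([156.0, 220.0, 310.0, 381.0])
    err  = np.array([14.4, 20.0, 25.0, 29.1])

    w = 1.0 / err**2                      # inverse-variance weights
    m_avg = np.sum(w * mass) / np.sum(w)  # error-weighted average
    m_err = 1.0 / np.sqrt(np.sum(w))      # uncertainty on the weighted average
    print(f"weighted mean mass = {m_avg:.0f} +/- {m_err:.0f} Msun")
    ```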
  • A recurrent structure is a popular framework choice for the task of video super-resolution. The state-of-the-art method BasicVSR adopts bidirectional propagation with feature alignment to effectively exploit information from the entire input video. In this study, we redesign BasicVSR by proposing second-order grid propagation (a toy second-order update is sketched after this entry) and flow-guided deformable alignment. We show that by empowering the recurrent framework with the enhanced propagation and alignment, one can exploit spatiotemporal information across misaligned video frames more effectively. The new components lead to improved performance under a similar computational constraint. In particular, our model BasicVSR++ surpasses BasicVSR by 0.82 dB in PSNR with a similar number of parameters. In addition to video super-resolution, BasicVSR++ generalizes well to other video restoration tasks such as compressed video enhancement. In NTIRE 2021, BasicVSR++ won three first places and one runner-up in the Video Super-Resolution and Compressed Video Enhancement Challenges. Code and models will be released to MMEditing.
    Flow network, Instability, Architecture, Markov chain, Convolution Neural Network, Attention, Gradient flow, Object detection, Semantic segmentation, Ground truth...
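    A toy illustration of the second-order propagation idea, under the loose assumption that each recurrent step simply conditions on the hidden features of the two previous steps; BasicVSR++'s actual grid propagation and flow-guided deformable alignment are far richer:

    ```python
    import torch
    import torch.nn as nn

    class SecondOrderPropagation(nn.Module):
        """Toy second-order recurrent update in the spirit of BasicVSR++: each
        step conditions on the hidden features of the two previous steps rather
        than one. Flow-guided deformable alignment is omitted entirely."""
        def __init__(self, ch=16):
            super().__init__()
            self.update = nn.Conv2d(3 + 2 * ch, ch, 3, padding=1)

        def forward(self, frames):              # frames: (T, 3, H, W)
            t, _, h, w = frames.shape
            ch = self.update.out_channels
            h1 = frames.new_zeros(ch, h, w)     # hidden state at t-1
            h2 = frames.new_zeros(ch, h, w)     # hidden state at t-2
            feats = []
            for i in range(t):
                x = torch.cat([frames[i], h1, h2])[None]  # (1, 3+2ch, H, W)
                h1, h2 = torch.relu(self.update(x))[0], h1
                feats.append(h1)
            return torch.stack(feats)           # (T, ch, H, W)
    ```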
  • Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods. The significant performance improvement of our model comes from removing unnecessary modules from conventional residual networks (see the residual-block sketch after this entry). The performance is further improved by expanding the model size while stabilizing the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model. The proposed methods show superior performance over the state of the art on benchmark datasets, and proved their merit by winning the NTIRE 2017 Super-Resolution Challenge.
    Architecture, Convolution Neural Network, Optimization, Network model, Scale factor, Deep Neural Networks, Image Processing, Rank, Deep learning, Neural network...
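    The "unnecessary modules" EDSR removes are, most notably, the batch-normalization layers of conventional residual blocks; wide variants also scale the residual branch to keep training stable. A minimal PyTorch sketch of such a block (illustrative, not the reference implementation):

    ```python
    import torch
    import torch.nn as nn

    class EDSRResBlock(nn.Module):
        """EDSR-style residual block: conv-ReLU-conv with no batch normalization,
        plus a residual scaling factor that stabilizes training of wide models."""
        def __init__(self, channels=64, res_scale=0.1):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.res_scale = res_scale

        def forward(self, x):
            return x + self.res_scale * self.body(x)

    # x = torch.randn(1, 64, 48, 48); EDSRResBlock()(x).shape == x.shape
    ```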
  • Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window, which makes them less efficient than recurrent methods. In this work, we propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame. It divides the input into structure and detail components (a toy decomposition is sketched after this entry), which are fed to a recurrent unit composed of several proposed two-stream structure-detail blocks. In addition, a hidden state adaptation module that allows the current frame to selectively use information from the hidden state is introduced to enhance robustness to appearance change and error accumulation. Extensive ablation studies validate the effectiveness of the proposed modules. Experiments on several benchmark datasets demonstrate the superior performance of the proposed method compared to state-of-the-art video super-resolution methods.
    Hidden state, Architecture, Ablation, Video analysis, Deep learning, Motion estimation, Convolution Neural Network, Attention, Image Processing, Recurrent neural network...
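    One common way to realize a structure/detail split like the one above is a low-pass filter for the structure stream and its residual for the detail stream; the paper's exact decomposition operator may differ, so treat this as a toy stand-in:

    ```python
    import torch.nn.functional as F

    def structure_detail_split(frames, k=5):
        """Split frames (N, C, H, W) into a low-frequency 'structure' component
        and a residual 'detail' component via a blur/residual decomposition."""
        structure = F.avg_pool2d(frames, k, stride=1, padding=k // 2)
        return structure, frames - structure
    ```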
  • Video super-resolution (VSR) approaches tend to have more components than their image counterparts, as they need to exploit the additional temporal dimension. Complex designs are not uncommon. In this study, we wish to untangle the knots and reconsider the most essential components for VSR, guided by four basic functionalities: Propagation, Alignment, Aggregation, and Upsampling (a toy propagation skeleton follows this entry). By reusing existing components with minimal redesign, we show a succinct pipeline, BasicVSR, that achieves appealing improvements in speed and restoration quality in comparison to many state-of-the-art algorithms. We conduct a systematic analysis to explain how such gains can be obtained and discuss the pitfalls. We further show the extensibility of BasicVSR by presenting an information-refill mechanism and a coupled propagation scheme to facilitate information aggregation. BasicVSR and its extension, IconVSR, can serve as strong baselines for future VSR approaches.
    Statistical estimator, Attention, Ground truth, Inference, Random resistor network, Hidden state, Architecture, Information flow, MATLAB, Flow network...
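    A toy skeleton of the bidirectional propagation that BasicVSR is built around: a backward and a forward recurrent pass each carry a hidden feature along the sequence, and per-frame features are fused and upsampled. Alignment and the information-refill/coupled-propagation extensions of IconVSR are omitted:

    ```python
    import torch
    import torch.nn as nn

    class ToyBidirectionalVSR(nn.Module):
        """Skeleton of BasicVSR-style bidirectional propagation: backward and
        forward recurrent passes (alignment omitted), then per-frame fusion and
        pixel-shuffle upsampling to 4x resolution."""
        def __init__(self, ch=16):
            super().__init__()
            self.backward_net = nn.Conv2d(3 + ch, ch, 3, padding=1)
            self.forward_net = nn.Conv2d(3 + ch, ch, 3, padding=1)
            self.fuse = nn.Conv2d(2 * ch, 3 * 16, 3, padding=1)  # 16 = 4x4 upscale
            self.up = nn.PixelShuffle(4)

        def forward(self, frames):                # frames: (T, 3, H, W)
            t, _, h, w = frames.shape
            ch = self.backward_net.out_channels
            feat_b = [None] * t
            state = frames.new_zeros(ch, h, w)
            for i in range(t - 1, -1, -1):        # backward pass
                x = torch.cat([frames[i], state])[None]
                state = torch.relu(self.backward_net(x))[0]
                feat_b[i] = state
            outs, state = [], frames.new_zeros(ch, h, w)
            for i in range(t):                    # forward pass + fusion
                x = torch.cat([frames[i], state])[None]
                state = torch.relu(self.forward_net(x))[0]
                y = self.up(self.fuse(torch.cat([feat_b[i], state])[None]))
                outs.append(y[0])
            return torch.stack(outs)              # (T, 3, 4H, 4W)
    ```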
  • Most modern solar observatories deliver data products formatted as 3D spatio-temporal data cubes, often with additional, higher dimensions carrying spectral and/or polarimetric information. This multi-dimensional complexity presents a major challenge when browsing for features of interest in several dimensions simultaneously. We developed the COlor COllapsed PLOTting (COCOPLOT) software as quick-look and context-image software, to convey the spectral profile or time evolution of all spatial pixels ($x,y$) in a 3D [$n_x,n_y,n_\lambda$] or [$n_x,n_y,n_t$] data cube as a single image, using color. This avoids the need to scan through many wavelengths and to create difference and composite images when searching for signals that satisfy multiple criteria. Filters are generated for the red, green, and blue channels by selecting the values of interest to highlight in each channel and their weightings. These filters are combined with the data cube over the third-dimension axis to produce an $n_x \times n_y \times 3$ cube displayed as one true-color image (a minimal version is sketched after this entry). Use cases are presented for data from the Swedish 1-m Solar Telescope (SST) and IRIS, including H$\alpha$ solar flare data, a comparison with $k$-means clustering for identifying asymmetries in the Ca II K line, and off-limb coronal rain in IRIS C II slit-jaw images. These illustrate identification by color alone, using COCOPLOT, of locations including line-wing or line-center enhancement, broadening, wing absorption, and sites with intermittent flows or time-persistent features. COCOPLOT is publicly available in both IDL and Python.
    Software, Time Series, Multidimensional Array, Intensity, Field of view, Spectral line, Solar flare, Solar physics, Python, Standard deviation...
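    The color collapse itself is a single weighted contraction over the third axis of the cube, followed by byte-scale normalization. A minimal numpy re-implementation of the idea (the released IDL/Python package offers more options, e.g. filter design and per-channel weighting):

    ```python
    import numpy as np

    def cocoplot(cube, r_filt, g_filt, b_filt):
        """Collapse a (ny, nx, n_lambda) cube to a single true-color image by
        weighting the third axis with one filter per color channel. A minimal
        re-implementation of the idea, not the released IDL/Python package."""
        filters = np.stack([r_filt, g_filt, b_filt], axis=1)  # (n_lambda, 3)
        rgb = cube @ filters                                  # (ny, nx, 3)
        rgb -= rgb.min()
        return (255.0 * rgb / rgb.max()).astype(np.uint8)

    # Example filters: Gaussians centered on the blue wing, core, and red wing
    # of a spectral line, so hue encodes where in the profile the signal sits.
    # nl = cube.shape[-1]; wav = np.arange(nl)
    # r_filt = np.exp(-0.5 * ((wav - 0.8 * nl) / 2.0) ** 2)
    # g_filt = np.exp(-0.5 * ((wav - 0.5 * nl) / 2.0) ** 2)
    # b_filt = np.exp(-0.5 * ((wav - 0.2 * nl) / 2.0) ** 2)
    ```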