Recently bookmarked papers

with concepts:
  • Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memory-efficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.
    Architecture · Hamiltonian · Ordinary differential equations · Neural network · Deep Neural Networks · Hidden layer · Image Processing · Computational linguistics · Discretization · Regularization...
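A minimal NumPy sketch of the ODE reading of residual networks described above: a residual block is one forward-Euler step of dy/dt = f(y), and a two-state leapfrog-style update is exactly invertible, which is what lets a reversible architecture skip storing hidden activations. The layer function, width, and step size here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))  # hypothetical layer weights

def f(y):
    """Residual branch: one tanh layer (a stand-in for a conv block)."""
    return np.tanh(W @ y)

# A residual block is one forward-Euler step of dy/dt = f(y):
h = 0.1  # step size; small h aids stability in the ODE view

def residual_step(y):
    return y + h * f(y)

# A two-state (leapfrog-style) block is exactly reversible, so hidden
# activations need not be stored: the inverse recovers the inputs.
def reversible_step(y1, y2):
    y1_next = y1 + h * f(y2)
    y2_next = y2 + h * f(y1_next)
    return y1_next, y2_next

def reversible_step_inverse(y1_next, y2_next):
    y2 = y2_next - h * f(y1_next)
    y1 = y1_next - h * f(y2)
    return y1, y2

y1, y2 = rng.normal(size=16), rng.normal(size=16)
assert np.allclose((y1, y2), reversible_step_inverse(*reversible_step(y1, y2)))
```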
  • Deep neural networks have become invaluable tools for supervised machine learning, e.g., classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Important issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper we propose new forward propagation techniques inspired by systems of Ordinary Differential Equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
    Architecture · Classification · Regularization · Deep Neural Networks · Deep learning · Ordinary differential equations · Hamiltonian · Inverse problems · Optimization · Discretization...
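One stabilization strategy in this line of work constrains the layer Jacobian to have (nearly) imaginary eigenvalues, for example via antisymmetric weight matrices, so signals neither blow up nor die out with depth. A sketch under that assumption (dimensions and nonlinearity are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.normal(size=(8, 8))

# Antisymmetrize: A = (K - K^T)/2 has purely imaginary eigenvalues, so
# the linearized forward dynamics y' = A y neither grows nor decays --
# the continuous-time analogue of avoiding exploding/vanishing gradients.
A = 0.5 * (K - K.T)
print(np.allclose(np.linalg.eigvals(A).real, 0.0))  # True

# One forward-propagation layer as a discrete step of the stable ODE
# (a small step size keeps the discretization near the stability region):
h = 0.05
def layer(y):
    return y + h * np.tanh(A @ y)
```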
  • We present a proof of concept of a new galaxy group finder method, Markov graph Clustering (MCL; Van Dongen 2000), which naturally handles probabilistic linking criteria. We introduce a new figure of merit, the variation of information statistic (VI; Meila 2003), used to optimise the free parameter(s) of the MCL algorithm. We explain that the common Friends-of-Friends (FoF) method is a subset of MCL. We test MCL in real space on a realistic mock galaxy catalogue constructed from an N-body simulation using the GALFORM model. With a fixed linking length, FoF produces the best group catalogues as quantified by the VI statistic. By making the linking length sensitive to the local galaxy density, the quality of the FoF and MCL group catalogues improves significantly, with MCL being preferred over FoF due to a smaller VI value. The MCL group catalogue accurately recovers the underlying halo multiplicity function at all multiplicities. MCL provides better and more consistent group purity and halo completeness values at all multiplicities than FoF. As MCL allows for probabilistic pairwise connections, it is a promising algorithm for finding galaxy groups in photometric surveys.
    Friends of friends algorithm · Galaxy · Group of galaxies · Completeness · Inflation · Statistics · Graph clustering · Real space · Mock galaxy catalogues · Graph...
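The Markov Cluster iteration at the heart of the method alternates expansion (a random-walk step) with inflation (sharpening) on a stochastic link matrix, then reads groups off the converged matrix. A toy NumPy sketch with a hypothetical five-galaxy link matrix (not the paper's GALFORM pipeline; link weights could encode probabilistic linking criteria):

```python
import numpy as np

def mcl(adjacency, inflation=2.0, n_iter=50):
    """Minimal Markov Cluster (MCL) iteration: expansion + inflation."""
    M = adjacency + np.eye(len(adjacency))   # self-loops aid convergence
    M = M / M.sum(axis=0)                    # column-stochastic
    for _ in range(n_iter):
        M = M @ M                            # expansion: random-walk step
        M = M ** inflation                   # inflation: sharpen flows
        M = M / M.sum(axis=0)                # renormalize columns
    # Rows that retain mass act as attractors; their supports are groups.
    return {tuple(np.nonzero(row > 1e-8)[0]) for row in M if row.max() > 1e-8}

# Two obvious pairs plus an isolated galaxy:
adj = np.array([[0, 1, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [0, 0, 0, 1, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 0, 0, 0.]])
print(mcl(adj))  # e.g. {(0, 1), (2, 3), (4,)}
```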
  • Each year, countless hours of productive research time are spent brainstorming creative acronyms for surveys, simulations, codes, and conferences. We present ACRONYM, a command-line program developed specifically to assist astronomers in identifying the best acronyms for ongoing projects. The code returns all approximately-English-language words that appear within an input string of text, regardless of whether the letters occur at the beginning of the component words (in true astronomer fashion).
    Acronym · Python · Conjunction · Observatories · Simulations · Astronomy · Surveys · Language · Field · Force...
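The core of such a tool is a subsequence test: a word "appears within" the input string if its letters occur in order, not necessarily at the start of each component word. A toy reimplementation for intuition (the real ACRONYM code and its word list will differ):

```python
def is_subsequence(word, text):
    """True if word's letters appear in order within text."""
    it = iter(text.lower())
    return all(ch in it for ch in word.lower())  # `in` consumes the iterator

def candidate_acronyms(title, wordlist):
    """Dictionary words hiding inside a project title (toy version)."""
    letters = "".join(c for c in title if c.isalpha())
    return sorted(w for w in wordlist if len(w) > 2 and is_subsequence(w, letters))

words = {"scone", "stone", "craft", "cosmos"}  # stand-in for a real word list
print(candidate_acronyms("Spectroscopic Catalog Of Nearby Exoplanets", words))
# ['craft', 'scone', 'stone']
```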
  • The discovery of many giant planets in close-in orbits and the effect of planetary and stellar tides on their subsequent orbital decay have been extensively studied in the context of planetary formation and evolution theories. Planets orbiting close to their host stars undergo close encounters, atmospheric photoevaporation, orbital evolution, and tidal interactions. In many of these theoretical studies, it is assumed that the interior properties of gas giants remain static during orbital evolution. Here we present a model that allows for changes in the planetary radius as well as variations in the planetary and stellar dissipation parameters, caused by the planet's contraction and change of rotational rates under the strong tidal fields. In this semi-analytical model, giant planets experience a much slower tidally induced circularization compared to models that do not consider these instantaneous changes. We predict that the eccentricity damping time-scale increases by about an order of magnitude in the most extreme case of highly inflated planets, large eccentricities, and tidal properties calculated according to the planet's interior structural composition. This finding potentially has significant implications for interpreting the period-eccentricity distribution of known giant planets, as it may naturally explain the large number of non-circularized, close-period planets currently known. Additionally, this work may help to constrain some models of planetary interiors and contribute to a better insight into how tides affect the orbital evolution of extrasolar systems.
    Planet · Star · Eccentricity · Tides · Extrasolar planet · Orbital evolution · Dissipation · Giant planet · Gas giant · Tidal interaction...
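To see why a contracting radius slows circularization: in the standard weak-friction (Goldreich & Soter-type) scaling the damping time grows steeply as the planet shrinks, tau_e scaling as (a/R_p)^5. A toy integration under that scaling only (all parameter values are invented; this is not the paper's semi-analytical model):

```python
import numpy as np

G = 6.674e-11
M_star, M_p = 1.989e30, 1.898e27        # Sun- and Jupiter-like masses [kg]
R_J = 7.149e7                            # Jupiter radius [m]
a = 7.5e9                                # ~0.05 au [m]
Qp = 1e5                                 # modified tidal quality factor
n = np.sqrt(G * M_star / a**3)           # mean motion [1/s]

def tau_e(R_p):
    # Weak-friction circularization timescale: tau_e ~ (a/R_p)^5
    return (4.0 / 63.0) * Qp * (M_p / M_star) * (a / R_p) ** 5 / n

def evolve(e0, R0, R_final, t_end, steps=50_000):
    """Integrate de/dt = -e/tau_e while R_p contracts exponentially."""
    dt, e, t = t_end / steps, e0, 0.0
    for _ in range(steps):
        R = R_final + (R0 - R_final) * np.exp(-t / (t_end / 5.0))
        e -= e / tau_e(R) * dt
        t += dt
    return e

yr = 3.156e7
# A planet that contracts from 1.8 R_J retains more eccentricity than one
# that stays inflated, because tau_e lengthens as the radius shrinks:
print(evolve(0.3, 1.8 * R_J, R_J, 1e9 * yr))        # radius contracts
print(evolve(0.3, 1.8 * R_J, 1.8 * R_J, 1e9 * yr))  # stays inflated
```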
  • Debris disks are exoplanetary systems containing planets, minor bodies (such as asteroids and comets) and debris dust. Unseen planets are presumed to perturb the minor bodies into crossing orbits, generating small dust grains that are detected via remote sensing. Debris disks have been discovered around main sequence stars of a variety of ages (from 10 Myr to several Gyr) and stellar spectral types (from early A-type to M-type stars). As a result, they serve as excellent laboratories for understanding whether the architecture and the evolution of our Solar System are common or rare. This white paper addresses two outstanding questions in debris disk science: (1) Are debris disk minor bodies similar to asteroids and comets in our Solar System? (2) Do planets separate circumstellar material into distinct reservoirs and/or mix material during planet migration? We anticipate that SOFIA/HIRMES, JWST, and WFIRST/CGI will greatly improve our understanding of debris disk composition, enabling the astronomical community to answer these questions. However, we note that despite their observational power, these facilities will not provide large numbers of detections or detailed characterization of cold ices and silicates in the Trans-Neptunian zone. The Origins Space Telescope is needed to revolutionize our understanding of the bulk composition and mixing in exoplanetary systems.
    Debris disc · Planet · Planet formation · Asteroids · Solar system · Comet · Remote sensing · Architecture · Dust grain · Space telescopes...
  • The leading tensions to the collisionless cold dark matter (CDM) paradigm are the "small-scale controversies", discrepancies between observations at the dwarf-galactic scale and their simulational counterparts. In this work we consider methods to infer 3D morphological information on Local Group dwarf spheroidals, and test the fitness of CDM+hydrodynamics simulations to the observed galaxy shapes. We find that the subpopulation of dwarf galaxies with mass-to-light ratio $\gtrsim 100 M_\odot/L_\odot$ reflects an oblate morphology. This is discrepant with the dwarf galaxies with mass-to-light ratio $\lesssim 100 M_\odot/L_\odot$, which reflect prolate morphologies, and more importantly with simulations of CDM-sourced galaxies which are explicitly prolate. Although more simulations and data are called for, if evidence of oblate pressure-supported stellar distributions persists, we argue that an underlying oblate non-CDM dark matter halo may be required, and present this as motivation for future studies.
    Galaxy · Dwarf galaxy · Local group · Ellipticity · FIRE simulations · Velocity dispersion · Surface brightness · Dark matter · Stellar distribution · Dwarf spheroidal galaxy...
  • Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
    Compressibility · Quantization · Neural network · Mobility · Deep Neural Networks · Caching · Intensity · Optimization · Fitness model · Convolutional neural network...
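A toy version of the first two pipeline stages (the threshold, codebook size, and naive k-means below are illustrative stand-ins; the paper learns which connections matter and retrains the shared weights):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # hypothetical trained layer weights

# Stage 1 -- pruning: drop small-magnitude connections.
threshold = np.quantile(np.abs(W), 0.9)     # keep the top 10% of weights
mask = np.abs(W) >= threshold

# Stage 2 -- trained quantization / weight sharing: cluster surviving
# weights to a small codebook (2^5 = 32 centroids, i.e. 5-bit indices),
# with a few naive k-means iterations standing in for retraining.
values = W[mask]
centroids = np.linspace(values.min(), values.max(), 32)
for _ in range(10):
    assign = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
    for k in range(len(centroids)):
        if np.any(assign == k):
            centroids[k] = values[assign == k].mean()

W_compressed = np.zeros_like(W)
W_compressed[mask] = centroids[assign]

# Stage 3 (Huffman coding of the 5-bit indices) is omitted here; it helps
# because centroid usage is highly non-uniform.
print("kept:", mask.mean(), "codebook size:", len(centroids))
```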
  • We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
    Generative model · Markov chain · Multilayer perceptron · Inference · Discriminative model · Backpropagation · Minimax · Deep Boltzmann machine · Restricted Boltzmann Machines · Deep learning...
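In the paper's notation, the two-player game summarized here is the value function

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

whose unique optimum (over arbitrary functions) has G reproducing the data distribution and D(x) = 1/2 everywhere, as stated in the abstract.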
  • A major challenge in brain tumor treatment planning and quantitative evaluation is determination of the tumor extent. The noninvasive magnetic resonance imaging (MRI) technique has emerged as a front-line diagnostic tool for brain tumors without ionizing radiation. Manual segmentation of brain tumor extent from 3D MRI volumes is a very time-consuming task, and performance relies heavily on the operator's experience. In this context, a reliable, fully automatic method is necessary for efficient measurement of the tumor extent. In this study, we propose a fully automatic method for brain tumor segmentation, which is developed using U-Net based deep convolutional networks. Our method was evaluated on the Multimodal Brain Tumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade brain tumor and 54 low-grade tumor cases. Cross-validation has shown that our method can obtain promising segmentation efficiently.
    Magnetic resonance imaging · Image segmentation · Ionizing radiation · Networks · Measurement...
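For orientation, a minimal U-Net-style encoder-decoder in Keras (assumes TensorFlow is installed; the 240x240x4 input matches typical BRATS slices with four MRI modalities, but the depth, channel widths, and loss below are invented, not the study's configuration):

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input((240, 240, 4))            # 4 MRI modalities per slice
c1 = conv_block(inputs, 32)                     # encoder
p1 = layers.MaxPooling2D(2)(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D(2)(c2)
b = conv_block(p2, 128)                         # bottleneck
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 64)   # skip connection
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 32)
outputs = layers.Conv2D(5, 1, activation="softmax")(c4)  # 5 BRATS labels

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```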
  • We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
    Meta learning · Regression · Classification · Reinforcement learning · Optimization · Neural network · Architecture · Networks · Algorithms...
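The inner/outer structure is easiest to see on a linear-regression task family, where the gradient-through-a-gradient-step meta-update can be written in closed form. A toy NumPy sketch (task family, step sizes, and dimensions are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha, beta = 5, 0.1, 0.01        # feature dim, inner/outer step sizes

def grad_mse(X, y, w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sample_task():
    """Toy task family: linear regressions clustered around a common
    weight vector, so a useful meta-initialization exists."""
    w_true = np.ones(d) + 0.5 * rng.normal(size=d)
    X = rng.normal(size=(20, d))
    y = X @ w_true + 0.1 * rng.normal(size=20)
    return X[:10], y[:10], X[10:], y[10:]      # support / query split

w = np.zeros(d)                                # meta-parameters
for _ in range(2000):
    Xs, ys, Xq, yq = sample_task()
    # Inner loop: one gradient step on the support set.
    w_adapt = w - alpha * grad_mse(Xs, ys, w)
    # Outer loop: differentiate the query loss through the inner step.
    # For a linear model, d(w_adapt)/dw = I - (2*alpha/n) Xs^T Xs, so the
    # second-order meta-gradient is available analytically.
    J = np.eye(d) - (2.0 * alpha / len(ys)) * Xs.T @ Xs
    w -= beta * J.T @ grad_mse(Xq, yq, w_adapt)

# One inner step on a new task's support set should now fit its query set:
Xs, ys, Xq, yq = sample_task()
w_new = w - alpha * grad_mse(Xs, ys, w)
print(np.mean((Xq @ w - yq) ** 2), "->", np.mean((Xq @ w_new - yq) ** 2))
```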
  • Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
    Architecture · Non-convexity · Neural network · Curvature · Generalization error · Optimization · Principal component analysis · Scale invariance · Attractor · High Performance Computing...
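The "filter normalization" idea: pick a random direction in weight space, rescale it filter-by-filter to match the norms of the corresponding filters in the trained weights, then plot the loss along that direction. A sketch with a toy loss standing in for a real network (shapes and the quadratic loss are assumptions for the demo):

```python
import numpy as np

def filter_normalize(direction, theta):
    """Rescale each filter of a random direction to match the norm of
    the corresponding filter in theta (conv weights: out, in, k, k)."""
    d = direction.copy()
    for i in range(d.shape[0]):                       # per output filter
        d[i] *= np.linalg.norm(theta[i]) / (np.linalg.norm(d[i]) + 1e-10)
    return d

def loss_slice(loss_fn, theta, alphas, rng):
    """1-D loss-landscape slice L(theta + alpha * d) along a
    filter-normalized random direction d."""
    d = filter_normalize(rng.normal(size=theta.shape), theta)
    return [loss_fn(theta + a * d) for a in alphas]

rng = np.random.default_rng(0)
theta = rng.normal(size=(8, 3, 3, 3))       # one conv layer's weights
loss_fn = lambda w: float(np.sum(w ** 2))   # toy stand-in for network loss
print(loss_slice(loss_fn, theta, np.linspace(-1, 1, 5), rng))
```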
  • This report is targeted to groups who are subject matter experts in their application but deep learning novices. It contains practical advice for those interested in testing the use of deep neural networks on applications that are novel for deep learning. We suggest making your project more manageable by dividing it into phases. For each phase this report contains numerous recommendations and insights to assist novice practitioners.
    Deep learning · Deep Neural Networks
  • Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most training is performed with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improve performance. Specifically, this report shows how to examine the training/validation/test loss function for subtle clues of underfitting and overfitting, and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums. Files to help replicate the results reported here are available.
    Architecture · Regularization · Overfitting · Deep learning · Neural network · Scheduling · Deep Neural Networks · Generalization error · Optimization · Video analysis...
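One concrete recipe from this line of work is a cyclical (triangular) learning-rate schedule whose bounds come from a short LR range test; a minimal sketch (the step counts and bounds below are placeholders one would tune, and the report further couples these choices to momentum and weight decay):

```python
def triangular_lr(step, step_size=2000, lr_min=1e-4, lr_max=1e-2):
    """Cyclical (triangular) learning rate: rise from lr_min to lr_max
    and back over 2*step_size steps.  In practice the bounds come from
    an LR range test: sweep the LR upward in one short run and note
    where the loss falls fastest and where it starts to diverge."""
    cycle_pos = step % (2 * step_size)
    frac = cycle_pos / step_size
    if frac > 1.0:
        frac = 2.0 - frac
    return lr_min + (lr_max - lr_min) * frac

print(triangular_lr(0), triangular_lr(2000), triangular_lr(4000))
# 0.0001 0.01 0.0001
```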
  • Different empirical models have been developed for cloud detection. There is a growing interest in using the ground-based sky/cloud images for this purpose. Several methods exist that perform binary segmentation of clouds. In this paper, we propose to use a deep learning architecture (U-Net) to perform multi-label sky/cloud image segmentation. The proposed approach outperforms recent literature by a large margin.
    Architecture · Deep learning · Image segmentation · Remote sensing · Ground truth · Convolutional neural network · Deep Neural Networks · Electron microscopy · Satellite Image · Training set...
  • Chest X-ray is one of the most accessible medical imaging techniques for the diagnosis of multiple diseases. With the availability of ChestX-ray14, a massive dataset of chest X-ray images that provides annotations for 14 thoracic diseases, it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment with a set of deep learning models and present a cascaded deep neural network that can diagnose all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides quantitative results to answer the following research questions for the dataset: 1) What loss functions should be used for training DCNNs from scratch on the ChestX-ray14 dataset, which demonstrates high class imbalance and label co-occurrence? 2) How can cascading be used to model label dependency and to improve the accuracy of the deep learning model?
    Classification · Deep learning · Architecture · Entropy · Convolutional neural network · Deep Neural Networks · Computational linguistics · Binary classification · Neural network · Object detection...
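One common remedy for the class imbalance question raised here is a positive/negative-balanced binary cross-entropy per pathology; a sketch of that idea (this is a generic balanced loss, not necessarily the loss the paper settles on):

```python
import numpy as np

def weighted_bce(y_true, y_pred, eps=1e-7):
    """Balanced binary cross-entropy over 14 pathology labels:
    up-weight the rare positives class-by-class.

    y_true, y_pred: (batch, 14) arrays of labels and sigmoid outputs."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    n_pos = y_true.sum(axis=0)
    n_neg = len(y_true) - n_pos
    w_pos = (n_pos + n_neg) / (n_pos + eps)   # large when positives are rare
    w_neg = (n_pos + n_neg) / (n_neg + eps)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1.0 - y_true) * np.log(1.0 - y_pred))
    return loss.mean()

rng = np.random.default_rng(0)
y = (rng.random((32, 14)) < 0.1).astype(float)   # sparse positives
p = rng.random((32, 14))
print(weighted_bce(y, p))
```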
  • The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints by using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays, and establish state-of-the-art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.
    Recurrent neural network · Classification · Convolutional neural network · Long short term memory · Ground truth · Architecture · Machine learning · Inference · Neural network · Training set...
  • Supermassive Black Holes (BHs) residing in brightest cluster galaxies (BCGs) are overly massive when considering the local relationships between the BH mass and stellar bulge mass or velocity dispersion. Due to the location of these BHs within the cluster, large-scale cluster processes may aid the growth of BHs in BCGs. In this work, we study a sample of 71 galaxy clusters to explore the relationship between the BH mass, stellar bulge mass of the BCG, and the total gravitating mass of the host clusters. Due to difficulties in obtaining dynamically measured BH masses in distant galaxies, we use the Fundamental Plane relationship of BHs to infer their masses. We utilize X-ray observations taken by $Chandra$ to measure the temperature of the intra-cluster medium (ICM), which is a proxy for the total mass of the cluster. We analyze the $\rm M_{BH}-kT$ and $\rm M_{BH}-M_{Bulge}$ relationships and establish the best-fitting power laws: $\log_{10}(M_{\rm BH} /10^9 M_{\odot})=-0.35+2.08 \log_{10}(kT / 1 \rm keV)$ and $\log_{10}(\rm M_{BH}/10^9M_{\odot})= -1.09+ 1.92 \log_{10}(M_{\rm bulge}/10^{11}M_{\odot})$. Both relations are comparable with those established earlier for a sample of brightest group/cluster galaxies with dynamically measured BH masses. Although both the $\rm M_{BH}-kT$ and the $\rm M_{BH}-M_{Bulge}$ relationships exhibit large intrinsic scatter, based on Monte Carlo simulations we conclude that a dominant fraction of the scatter originates from the Fundamental Plane relationship. We split the sample into cool-core and non-cool-core clusters, but do not find statistically significant differences in the $\rm M_{BH}-kT$ relation. We speculate that the overly massive BHs in BCGs may be due to frequent mergers and cool gas inflows onto the cluster center.
    Black hole · Cool core galaxy cluster · Cluster of galaxies · Intra-cluster medium · Intrinsic scatter · Galaxy · Fundamental Plane Relation · Luminosity · Milky Way · Stellar mass...
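The two best-fit relations quoted in the abstract are easy to evaluate directly; a small helper (the example inputs are arbitrary):

```python
import numpy as np

def mbh_from_kT(kT_keV):
    """M_BH-kT relation quoted above; returns M_BH in M_sun."""
    return 1e9 * 10 ** (-0.35 + 2.08 * np.log10(kT_keV))

def mbh_from_bulge(m_bulge_msun):
    """M_BH-M_bulge relation quoted above; M_bulge in M_sun."""
    return 1e9 * 10 ** (-1.09 + 1.92 * np.log10(m_bulge_msun / 1e11))

# A 5 keV cluster and a 10^12 M_sun BCG bulge imply, respectively:
print(f"{mbh_from_kT(5.0):.2e}")       # ~1.3e+10 M_sun
print(f"{mbh_from_bulge(1e12):.2e}")   # ~6.8e+09 M_sun
```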
  • We present a new method for inferring galaxy star formation histories (SFH) using machine learning methods coupled with two cosmological hydrodynamic simulations, EAGLE and Illustris. We train Convolutional Neural Networks to learn the relationship between synthetic galaxy spectra and high resolution SFHs. To evaluate our SFH reconstruction we use Symmetric Mean Absolute Percentage Error (SMAPE), which acts as a true percentage error in the low-error regime. On dust-attenuated spectra we achieve high test accuracy (median SMAPE = $12.0\%$). Including the effects of simulated experimental noise increases the error ($13.2\%$), however this is alleviated by including multiple realisations of the noise, which increases the training set size and reduces overfitting ($11.4\%$). We also make estimates for the experimental and modelling errors. To further evaluate the generalisation properties we apply models trained on one simulation to spectra from the other, which leads to only a small increase in the error ($\sim 16\%$). We apply each trained model to SDSS DR7 spectra, and find smoother histories than in the VESPA catalogue. This new approach complements the results of existing SED fitting techniques, providing star formation histories directly motivated by the results of the latest cosmological simulations.
    Star formation histories · Galaxy · Illustris simulation · EAGLE simulation project · Spectral energy distribution · Stellar mass · Convolutional neural network · Sloan Digital Sky Survey · Star formation · Overfitting...
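The SMAPE metric used for evaluation is straightforward to implement; this is the common two-sided definition (the paper's normalization may differ slightly, but in the low-error regime it behaves like a true percentage error either way), on made-up SFH bins:

```python
import numpy as np

def smape(y_true, y_pred, eps=1e-10):
    """Symmetric mean absolute percentage error, in percent."""
    num = np.abs(y_pred - y_true)
    den = (np.abs(y_true) + np.abs(y_pred)) / 2.0 + eps
    return 100.0 * np.mean(num / den)

sfh_true = np.array([1.0, 2.0, 4.0, 2.0, 0.5])   # toy SFH bins [Msun/yr]
sfh_pred = np.array([1.1, 1.9, 4.3, 2.1, 0.45])
print(smape(sfh_true, sfh_pred))   # a several-percent error, cf. the abstract
```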
  • Nearby dwarf galaxies are local analogues of high-redshift and metal-poor stellar populations. Most of these systems ceased star formation long ago, but they retain signatures of their past that can be unraveled by detailed study of their resolved stars. Archaeological examination of dwarf galaxies with resolved stellar spectroscopy provides key insights into the first stars and galaxies, galaxy formation in the smallest dark matter halos, stellar populations in the metal-free and metal-poor universe, the nature of the first stellar explosions, and the origin of the elements. Extremely large telescopes with multi-object R=5,000-30,000 spectroscopy are needed to enable such studies for galaxies of different luminosities throughout the Local Group.
    Dwarf galaxy · Galaxy · Star · Stellar populations · Abundance · Telescopes · Of stars · Star formation · Population III · Galaxy Formation...
  • There exist a range of exciting scientific opportunities for Big Bang Nucleosynthesis (BBN) in the coming decade. BBN, a key particle astrophysics "tool" for decades, is poised to take on new capabilities to probe beyond standard model (BSM) physics. This development is being driven by experimental determination of neutrino properties, new nuclear reaction experiments, advancing supercomputing/simulation capabilities, the prospect of high-precision next-generation cosmic microwave background (CMB) observations, and the advent of 30m class telescopes.
    Big bang nucleosynthesis · Neutrino · Cosmic microwave background · Entropy · The early Universe · Quasar · Standard Model · Telescopes · Cosmology · BSM physics...
  • [arXiv:1903.09208] The expansion of the Universe is understood to have accelerated during two epochs: in its very first moments during a period of Inflation and much more recently, at $z < 1$, when Dark Energy is hypothesized to drive cosmic acceleration. The undiscovered mechanisms behind these two epochs represent some of the most important open problems in fundamental physics. The large cosmological volume at $2 < z < 5$, together with the ability to efficiently target high-$z$ galaxies with known techniques, enables large gains in the study of Inflation and Dark Energy. A future spectroscopic survey can test the Gaussianity of the initial conditions up to a factor of ~50 better than our current bounds, crossing the crucial theoretical threshold of $\sigma(f_{NL}^{\rm local})$ of order unity that separates single field and multi-field models. Simultaneously, it can measure the fraction of Dark Energy at the percent level up to $z = 5$, thus serving as an unprecedented test of the standard model and opening up a tremendous discovery space.
    Dark energy · Inflation · Cosmic microwave background · Galaxy · Spectroscopic survey · Cosmic acceleration · Large scale structure · Photometric redshift · Milky Way · Expansion of the Universe...
  • LSST will open new vistas for cosmology in the next decade, but it cannot reach its full potential without data from other telescopes. Cosmological constraints can be greatly enhanced using wide-field ($>20$ deg$^2$ total survey area), highly-multiplexed optical and near-infrared multi-object spectroscopy (MOS) on 4-15m telescopes. This could come in the form of suitably-designed large surveys and/or community access to add new targets to existing projects. First, photometric redshifts can be calibrated with high precision using cross-correlations of photometric samples against spectroscopic samples at $0 < z < 3$ that span thousands of sq. deg. Cross-correlations of faint LSST objects and lensing maps with these spectroscopic samples can also improve weak lensing cosmology by constraining intrinsic alignment systematics, and will also provide new tests of modified gravity theories. Large samples of LSST strong lens systems and supernovae can be studied most efficiently by piggybacking on spectroscopic surveys covering as much of the LSST extragalactic footprint as possible (up to $\sim20,000$ square degrees). Finally, redshifts can be measured efficiently for a high fraction of the supernovae in the LSST Deep Drilling Fields (DDFs) by targeting their hosts with wide-field spectrographs. Targeting distant galaxies, supernovae, and strong lens systems over wide areas in extended surveys with (e.g.) DESI or MSE in the northern portion of the LSST footprint or 4MOST in the south could realize many of these gains; DESI, 4MOST, Subaru/PFS, or MSE would all be well-suited for DDF surveys. The most efficient solution would be a new wide-field, highly-multiplexed spectroscopic instrument in the southern hemisphere with $>6$m aperture. In two companion white papers we present gains from deep, small-area MOS and from single-target imaging and spectroscopy.
    Large Synoptic Survey Telescope · Supernova · Dark Energy Spectroscopic Instrument · Galaxy · Cosmology · Dark energy · Cross-correlation · Weak lensing · Calibration · Telescopes...
  • Single-object imaging and spectroscopy on telescopes with apertures ranging from ~4 m to 40 m have the potential to greatly enhance the cosmological constraints that can be obtained from LSST. Two major cosmological probes will benefit greatly from LSST follow-up: accurate spectrophotometry for nearby and distant Type Ia supernovae will expand the cosmological distance lever arm by unlocking the constraining power of high-z supernovae; and cosmology with time delays of strongly-lensed supernovae and quasars will require additional high-cadence imaging to supplement LSST, adaptive optics imaging or spectroscopy for accurate lens and source positions, and IFU or slit spectroscopy to measure detailed properties of lens systems. We highlight the scientific impact of these two science drivers, and discuss how additional resources will benefit them. For both science cases, LSST will deliver a large sample of objects over both the wide and deep fields in the LSST survey, but additional data to characterize both individual systems and overall systematics will be key to ensuring robust cosmological inference to high redshifts. Community access to large amounts of natural-seeing imaging on ~2-4 m telescopes, adaptive optics imaging and spectroscopy on 8-40 m telescopes, and high-throughput single-target spectroscopy on 4-40 m telescopes will be necessary for LSST time domain cosmology to reach its full potential. In two companion white papers we present the additional gains for LSST cosmology that will come from deep and from wide-field multi-object spectroscopy.
    Large Synoptic Survey Telescope · Supernova · Telescopes · Time delay · Cosmology · Adaptive optics · Dark energy · Integral field units · Supernova Type Ia · Companion...
  • Community access to deep (i ~ 25), highly-multiplexed optical and near-infrared multi-object spectroscopy (MOS) on 8-40m telescopes would greatly improve measurements of cosmological parameters from LSST. The largest gain would come from improvements to LSST photometric redshifts, which are employed directly or indirectly for every major LSST cosmological probe; deep spectroscopic datasets will enable reduced uncertainties in the redshifts of individual objects via optimized training. Such spectroscopy will also determine the relationship of galaxy SEDs to their environments, key observables for studies of galaxy evolution. The resulting data will also constrain the impact of blending on photo-z's. Focused spectroscopic campaigns can also improve weak lensing cosmology by constraining the intrinsic alignments between the orientations of galaxies. Galaxy cluster studies can be enhanced by measuring motions of galaxies in and around clusters and by testing photo-z performance in regions of high density. Photometric redshift and intrinsic alignment studies are best-suited to instruments on large-aperture telescopes with wider fields of view (e.g., Subaru/PFS, MSE, or GMT/MANIFEST) but cluster investigations can be pursued with smaller-field instruments (e.g., Gemini/GMOS, Keck/DEIMOS, or TMT/WFOS), so deep MOS work can be distributed amongst a variety of telescopes. However, community access to large amounts of nights for surveys will still be needed to accomplish this work. In two companion white papers we present gains from shallower, wide-area MOS and from single-target imaging and spectroscopy.
    Large Synoptic Survey Telescope · Galaxy · Photometric redshift · Telescopes · Dark energy · Intrinsic alignment · Cosmology · Galactic evolution · Cluster of galaxies · Companion...
  • The ALMA Spectroscopic Survey in the Hubble Ultra Deep Field (ASPECS) provides new constraints for galaxy formation models on the molecular gas properties of galaxies. We compare results from ASPECS to predictions from two cosmological galaxy formation models: the IllustrisTNG hydrodynamical simulations and the Santa Cruz semi-analytic model (SC SAM). We explore several recipes to model the H$_2$ content of galaxies, finding them to be consistent with one another, and take into account the sensitivity limits and survey area of ASPECS. For a canonical CO-to-H$_2$ conversion factor of $\alpha_{\rm CO} = 3.6\,\rm{M}_\odot/(\rm{K}\,\rm{km/s}\,\rm{pc}^{2})$ the results of our work include: (1) the H$_2$ mass of $z>1$ galaxies predicted by the models as a function of their stellar mass is a factor of 2-3 lower than observed; (2) the models do not reproduce the number of H$_2$-rich ($M_{\rm H2} > 3\times 10^{10}\,\rm{M}_\odot$) galaxies observed by ASPECS; (3) the H$_2$ cosmic density evolution predicted by IllustrisTNG (the SC SAM) is in tension (only just agrees) with the observed cosmic density, even after accounting for the ASPECS selection function and field-to-field variance effects. The tension between models and observations at $z>1$ can be alleviated by adopting a CO-to-H$_2$ conversion factor in the range $\alpha_{\rm CO} = 2.0 - 0.8\,\rm{M}_\odot/(\rm{K}\,\rm{km/s}\,\rm{pc}^{2})$. Additional work on constraining the CO-to-H$_2$ conversion factor and CO excitation conditions of galaxies through observations and theory will be necessary to more robustly test the success of galaxy formation models.
    Galaxy · IllustrisTNG simulation · Mass function · Stellar mass · Star formation rate · Star formation · Milky Way · Galaxy Formation · Atacama Large Millimeter Array · Galaxy mass...
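The CO-to-H$_2$ conversion at the center of the tension is a one-liner, which makes the sensitivity to $\alpha_{\rm CO}$ explicit (the line luminosity below is a made-up example value):

```python
def h2_mass(L_co, alpha_co=3.6):
    """Molecular gas mass from CO line luminosity:
    M_H2 [Msun] = alpha_CO [Msun/(K km/s pc^2)] * L'_CO [K km/s pc^2].
    alpha_CO = 3.6 is the canonical value used above; the abstract notes
    the model-data tension relaxes for alpha_CO ~ 2.0-0.8."""
    return alpha_co * L_co

L_co = 1e10                          # hypothetical CO line luminosity
print(h2_mass(L_co))                 # 3.6e10 Msun: an "H2-rich" galaxy
print(h2_mass(L_co, alpha_co=0.8))   # same line flux, 4.5x smaller mass
```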
  • [KKS2000]04 (NGC1052-DF2) has become a controversial and well-studied galaxy after the claims suggesting a lack of dark matter and the presence of an anomalously bright globular cluster (GC) system around it. A precise determination of its overall star formation history (SFH) as well as a better characterisation of its GC or planetary nebulae (PN) systems are crucial aspects to: i) understand its real nature, in particular placing it within the family of ultra diffuse galaxies; ii) shed light on its possible formation, evolution, and survival in the absence of dark matter. With this purpose in mind, we expand on the knowledge of [KKS2000]04 from the analysis of OSIRIS@GTC spectroscopic data. On the one hand, we claim the possible detection of two new PNe and confirm membership of 5 GCs. On the other hand, we find that the stars shaping [KKS2000]04 are intermediate-age to old (90% of its stellar mass older than 5 Gyr, average age of 8.7 $\pm$ 0.7 Gyr) and metal-poor ([M/H] $\sim$ -1.18 $\pm$ 0.05), in general agreement with previous results. We do not find any clear hints of significant changes in its stellar content with radius. In addition, the possibility of [KKS2000]04 being a tidal dwarf galaxy with no dark matter is highly disfavoured.
    Globular cluster · Planetary nebula · Galaxy · Ultra-diffuse galaxy-like object · Dark matter · Star formation histories · Milky Way · Dwarf galaxy · Metallicity · Velocity dispersion...
  • Data traffic demand in cellular networks has been growing tremendously and has led to a congested RF environment. Accordingly, innovative approaches for spectrum sharing have been proposed and implemented to accommodate several systems within the same frequency band. Spectrum sharing between radar and communication systems is one of the important research and development areas. In this paper, we present the fundamental spectrum sharing concepts and technologies, and then provide an updated and comprehensive survey of spectrum sharing techniques that have been developed to enable some wireless communication systems to coexist in the same band as radar systems.
    Interference · Signal to noise ratio · Space research · Optimization · Positioning system · Multi-Objective Optimization · Mutual information · Scheduling · Fuzzy logic · Market...
  • We discuss the topology of Bogoliubov excitation bands from a Bose-Einstein condensate in an optical lattice. Since the Bogoliubov equation for a bosonic system is non-Hermitian, complex eigenvalues often appear and induce dynamical instability. As a function of momentum, the onset of appearance and disappearance of complex eigenvalues is an exceptional point (EP), a point where the Hamiltonian is not diagonalizable and hence the Berry connection and curvature are ill-defined, preventing the definition of topological invariants. In this paper, we propose a systematic procedure to remove EPs from the Brillouin zone by introducing an imaginary part of the momentum. We then define the Berry phase for a one-dimensional bosonic Bogoliubov system. Extending the argument for Hermitian systems, the Berry phase for an inversion-symmetric system is shown to be $Z_2$. As concrete examples, we numerically investigate two toy models and confirm the bulk-edge correspondence even in the presence of complex eigenvalues. The $Z_2$ invariant associated with particle-hole symmetry and the winding number for a time-reversal-symmetric system are also discussed.
    Berry phase · Topological invariant · Hamiltonian · Particle-hole symmetry · Time-reversal symmetry · Brillouin zone · Winding number · Bose-Einstein condensate · Chiral symmetry · Complex plane...
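For orientation, the quantity being generalized: with the para-unitary ($\sigma_3$-weighted) normalization commonly used for bosonic Bogoliubov problems (an assumption here; the paper's full construction, including the removal of EPs via complex momentum, is more involved), the Berry phase of a band is

```latex
\gamma = i \oint_{\mathrm{BZ}} \langle u_k |\, \sigma_3 \,\partial_k u_k \rangle \, dk ,
\qquad \langle u_k |\, \sigma_3 \,| u_k \rangle = 1 ,
```

and inversion symmetry pins $\gamma$ to 0 or $\pi$ (mod $2\pi$), which is the $Z_2$ classification referred to above.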
  • "Magnetic monopoles" are exotic quantum excitations in the pyrochlore U(1) spin liquid; their emergence is purely of quantum origin and has no classical analogue. We predict a topological thermal Hall effect (TTHE) of "magnetic monopoles" and present this prediction through non-Kramers doublets. We observe that, when the external magnetic field polarizes the Ising component of the local moment, internally this corresponds to the induction of an emergent dual U(1) gauge flux for the "magnetic monopoles". The motion of "magnetic monopoles" is then twisted by the induced dual gauge flux. This emergent Lorentz force on "magnetic monopoles" is the fundamental origin of TTHE. Therefore, TTHE would be direct evidence of the "monopole"-gauge coupling and the emergent U(1) gauge structure in the pyrochlore U(1) spin liquid. Our result does not depend strongly on our choice of non-Kramers doublets for our presentation, and can be well extended to Kramers doublets. Our prediction can be readily tested among the pyrochlore spin liquid candidate materials. We give a detailed discussion of the expectations for different pyrochlore magnets.
    Magnetic monopole · Pyrochlore · Gauge field · Kramers theorem · Spinon · Hall effect · Diamond cubic · Hamiltonian · Spin liquid · Berry phase...
  • We analyse velocity fluctuations in the solar wind at magneto-fluid scales in two datasets, extracted from Wind data in the period 2005-2015, that are characterised by strong or weak expansion. Expansion affects measurements of anisotropy because it breaks axisymmetry around the mean magnetic field. Indeed, the small-scale three-dimensional local anisotropy of magnetic fluctuations ($\delta B$) as measured by structure functions (SF_B) is consistent with tube-like structures for strong expansion. When passing to weak expansion, structures become ribbon-like because of the flattening of SF_B along one of the two perpendicular directions. The power-law index that is consistent with a spectral slope -5/3 for strong expansion becomes closer to -3/2 for weak expansion. This index is also characteristic of velocity fluctuations in the solar wind. We study velocity fluctuations ($\delta V$) to understand if the anisotropy of their structure functions (SF_V) also changes with the strength of expansion and if the difference with the magnetic spectral index is washed out once anisotropy is accounted for. We find that SF_V is generally flatter than SF_B. When expansion passes from strong to weak, a further flattening of the perpendicular SF_V occurs and the small-scale anisotropy switches from tube-like to ribbon-like structures. These two types of anisotropy, common to SF_V and SF_B, are associated with distinct large-scale variance anisotropies of $\delta B$ in the strong- and weak-expansion datasets. We conclude that SF_V shows anisotropic three-dimensional scaling similar to SF_B, with however systematically flatter scalings, reflecting the difference between global spectral slopes.
    Anisotropy · Velocity fluctuations · Mean field · Solar wind · Eddy · Magnetohydrodynamics · Radial velocity · Turbulence · Numerical simulation · Ellipticity...
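The basic diagnostic here is the second-order structure function, SF(l) = <|V(x + l) - V(x)|^2>. The paper computes it in three dimensions with local, field-aligned coordinates; a 1-D NumPy sketch of the scale dependence on a toy series:

```python
import numpy as np

def sf2(field, lags):
    """Second-order structure function of a series of vector samples:
    SF(l) = <|V(x + l) - V(x)|^2> along the sampling direction."""
    return np.array([
        np.mean(np.sum((field[l:] - field[:-l]) ** 2, axis=1))
        for l in lags
    ])

rng = np.random.default_rng(0)
v = np.cumsum(rng.normal(size=(4096, 3)), axis=0)   # toy random-walk "wind"
lags = np.array([1, 2, 4, 8, 16, 32])
sf = sf2(v, lags)
# Local scaling exponent (log-log slope); structure-function exponents
# are what the spectral slopes -5/3 vs -3/2 discussed above map onto:
print(np.diff(np.log(sf)) / np.diff(np.log(lags)))  # ~1 for a random walk
```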
  • The use of Large Eddy Simulations in compressible, turbulent MHD is often limited to the implementation of purely dissipative sub-grid scale models. While such models work well in the hydrodynamic case due to the universality of the spectrum, they do not fully describe the complex dynamics of MHD, where the transfer of energy between internal, kinetic and magnetic energies at small scales is less trivial. For this reason, a sub-grid scale model based on the gradients of the fields entering each non-linear term of the equations has already been proposed and studied in the literature for the momentum and induction equations. Here, we consider the full set of compressible, ideal MHD equations, with an ideal gas equation of state, and propose a generalization of the gradient model that includes the energy equation. We focus on the residuals coming from the whole set of equations, by filtering accurate high-resolution simulations of the turbulent Newtonian Kelvin-Helmholtz instability in a periodic box. We employ the same high-resolution shock-capturing methods typically used in relativistic MHD, applicable in particular to neutron star mergers. The a-priori test, i.e. the fit between the sub-filter residuals and the model, allows us to confirm that the gradient model outperforms any other in terms of accuracy of the fit and small deviations of the best-fit pre-coefficient from the expected value. Such results are validated in 2D and 3D for a range of different problems, and are shown, for the first time, to hold also for the energy evolution equation. This paper is a first step, based on a solid theoretical and numerical basis, towards the near-future extension of the sub-grid scale gradient model to relativistic MHD, and a future implementation in a full General Relativity LES.
    Large eddy simulation · Magnetic energy · Turbulence · Helicity · Kelvin-Helmholtz instability · Vorticity · MHD equations · Faraday's law of induction · Dissipation · Dynamo theory...
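A 1-D cartoon of the a-priori test described above, for the sub-filter residual of a simple product term: the gradient model predicts tau = bar(fg) - bar(f)bar(g) ~ C Delta^2 (d bar f/dx)(d bar g/dx), with C = 1/12 expected at leading order for a box filter. The toy fields and filter are assumptions; the paper does this in 2D/3D for every non-linear term of the compressible MHD equations.

```python
import numpy as np

n, width = 4096, 16                      # grid points, filter width [cells]
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
# Smooth multi-scale fields standing in for simulation data:
f = sum(np.sin(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 40))
g = sum(np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 40))

k_pad = np.zeros(n)
k_pad[:width] = 1.0 / width
k_pad = np.roll(k_pad, -(width // 2))    # center the box filter
K = np.fft.fft(k_pad)

def bar(u):                              # periodic box filter via FFT
    return np.real(np.fft.ifft(np.fft.fft(u) * K))

tau_exact = bar(f * g) - bar(f) * bar(g)
delta = width * dx
tau_model = delta**2 * np.gradient(bar(f), dx) * np.gradient(bar(g), dx)

C = np.sum(tau_exact * tau_model) / np.sum(tau_model**2)  # least-squares fit
corr = np.corrcoef(tau_exact, tau_model)[0, 1]
print(f"best-fit C = {C:.4f} (expected ~1/12 = {1/12:.4f}), corr = {corr:.2f}")
```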
  • The distribution of cosmic rays in the Galaxy at energies above a few TeV is still uncertain, and this affects the expectations for the diffuse gamma-ray flux produced by hadronic interactions of cosmic rays with the interstellar gas. We show that the TeV gamma-ray sky can provide interesting constraints. Namely, we compare the flux from the galactic plane measured by Argo-YBJ, HESS, HAWC and Milagro with the expected flux due to diffuse emission and to point-like and extended sources observed by HESS, showing that experimental data can already discriminate among different hypotheses for the cosmic-ray distribution. The constraints can be strengthened if the contribution of sources not resolved by HESS is taken into account.
    Cosmic ray · HESS telescope · Milky Way · Unresolved sources · Supernova remnant · TeV gamma rays · Diffuse emission · Luminosity · Galactic plane · Diffuse source...
  • By means of three-dimensional high-resolution hybrid simulations we study the properties of the magnetic field spectral anisotropies near and beyond ion kinetic scales. Using both a Fourier analysis and a local analysis based on multi-point 2nd-order structure function techniques, we show that the observed anisotropy is less than what is expected from standard wave normal-mode turbulence theories, although the non-linear energy transfer is still in the perpendicular direction, only advected in the parallel direction, as expected from balancing the non-linear energy transfer time and the decorrelation time. Such a result can be explained by a phenomenological model based on the formation of strong intermittent two-dimensional structures in the plane perpendicular to the local mean field, with some prescribed aspect ratio possibly depending on the scale. This model supports the idea that small-scale structures, such as reconnecting current sheets, contribute significantly to the formation of the turbulent cascade at kinetic scales.
    Anisotropy · Turbulence · Mean field · Numerical simulation · Solar wind · Local analysis · Filling fraction · Normal mode · Small scale structure · Plasma Beta...
  • TRAPPIST-1 is a nearby 0.08 $M_\odot$ M-star, which was recently found to harbor a planetary system of at least seven Earth-mass planets, all within 0.1 au. The configuration confounds theorists, as the planets are not easily explained by either in situ or migration models. In this paper we present a scenario for the formation and orbital architecture of the TRAPPIST-1 system. In our model, planet formation starts at the H$_2$O iceline, where pebble-size particles -- whose origin is the outer disk -- concentrate to trigger streaming instabilities. After their formation, planetary embryos quickly mature by pebble accretion. Planet growth stalls at Earth masses, where the planet's gravitational feedback on the disk keeps pebbles at bay. Planets are transported by Type I migration to the inner disk, where they stall at the magnetospheric cavity and end up in mean motion resonances. During disk dispersal, the cavity radius expands and the inner-most planets escape resonance. We argue that the model outlined here can also be applied to other compact systems and that the many close-in super-Earth systems are a scaled-up version of TRAPPIST-1. We also hypothesize that few close-in compact systems harbor giant planets at large distances, since such giants would have stopped the pebble flux from the outer disk.
    Planet · Accretion · Earth · Silicate · Star · Two-stream instability · Planetesimal · Giant planet · M star · Architecture...
  • Most models of volatile delivery to accreting terrestrial planets assume that the carriers for water are similar in water content to the carbonaceous chondrites in our Solar System. Here we suggest that the water content of primitive bodies in many planetary systems may actually be much higher, as carbonaceous chondrites have lost some of their original water due to heating from short-lived radioisotopes that drove parent body alteration. Using N-body simulations, we explore how planetary accretion would be different if bodies beyond the water line contained a water mass fraction consistent with chemical equilibrium calculations, and more similar to comets, as opposed to the more traditional water-depleted values. We apply this model to consider planet formation around stars of different masses and identify trends in the properties of Habitable Zone planets and planetary system architecture which could be tested by ongoing exoplanet census data collection. Comparison of such data with the model predicted trends will serve to evaluate how well the N-body simulations and the initial conditions used in studies of planetary accretion can be used to understand this stage of planet formation.
    Protoplanetary disk · Asteroids · Meteorites · Solar nebulae · Condensation · Earth · Abundance · Accretion · Luminosity · Thermalisation...
  • The recent detection of planets around very low mass stars raises the question of the formation, composition and potential habitability of these objects. We use planetary system formation models to infer the properties, in particular the radius distribution and water content, of planets that may form around stars ten times less massive than the Sun. Our planetary system formation and composition models take into account the structure and evolution of the protoplanetary disk, planetary mass growth by accretion of solids and gas, as well as planet-planet, planet-star and planet-disk interactions. We show that planets can form at small orbital periods in orbit about low-mass stars. We show that the radius distribution of the planets peaks at about 1 $R_\oplus$ and that they are, in general, volatile-rich, especially if proto-planetary discs orbiting this type of star are long-lived. Close-in planets orbiting low-mass stars similar in terms of mass and radius to the ones recently detected can be formed within the framework of the core accretion paradigm as modeled here. The properties of protoplanetary disks, and their correlation with the stellar type, are key to understanding their composition.
    Planet · Star · Low-mass stars · Protoplanetary disk · Accretion · Volatiles · Stellar mass · Planetary system formation · Earth · Luminosity...
  • Massive growth in human mobility has dramatically increased the risk and rate of pandemic spread. Macro-level descriptors of the topology of the World Airline Network (WAN) explain middle- and late-stage dynamics of pandemic spread mediated by this network, but necessarily regard early-stage variation as stochastic. We propose that much of this early-stage variation can be explained by appropriately characterizing the local topology surrounding the debut location of an outbreak. We measure for each airport the expected force of infection (AEF) which a pandemic originating at that airport would generate. We observe, for a subset of world airports, the minimum transmission rate at which a disease becomes pandemically competent at each airport. We also observe, for a larger subset, the time until a pandemically competent outbreak achieves pandemic status given its debut location. Observations are generated using a highly sophisticated metapopulation reaction-diffusion simulator under a disease model known to well replicate the 2009 influenza pandemic. The robustness of the AEF measure to model misspecification is examined by degrading the network model. AEF powerfully explains pandemic risk, showing a correlation of 0.90 to the transmission level needed to give a disease pandemic competence, and a correlation of 0.85 to the delay until an outbreak becomes a pandemic. The AEF is robust to model misspecification: for 97% of airports, removing 15% of airports from the model changes their AEF metric by less than 1%. Appropriately summarizing the size, shape, and diversity of an airport's local neighborhood in the WAN accurately explains much of the macro-level stochasticity in pandemic outcomes.
    Branching process · Degree distribution · Network model · Gaussian distribution · Null hypothesis · Confidence interval · Statistics · Transport network · Community structure · Betweenness centrality...
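The "expected force" family of measures that AEF builds on scores a seed node by the entropy of the out-degrees of all small transmission clusters around it. A rough networkx sketch of that idea (a toy reimplementation for intuition only, not the paper's AEF definition or its metapopulation simulator):

```python
import itertools
import math
import networkx as nx

def expected_force(G, seed):
    """Entropy of normalized cluster out-degrees over clusters of two
    transmissions seeded at `seed` (in the spirit of Lawyer 2016)."""
    degrees = []
    for a in G.neighbors(seed):
        infected = {seed, a}
        frontier = set(itertools.chain(G.neighbors(seed), G.neighbors(a))) - infected
        for b in frontier:
            cluster = infected | {b}
            # cluster out-degree: edges leaving the infected cluster
            d = sum(1 for u in cluster for v in G.neighbors(u) if v not in cluster)
            degrees.append(d)
    total = sum(degrees)
    return -sum(d / total * math.log(d / total) for d in degrees if d > 0)

G = nx.barabasi_albert_graph(200, 2, seed=1)
hub = max(G.nodes, key=G.degree)
print(expected_force(G, hub), expected_force(G, 199))  # hub >> late-added node
```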
  • We present a machine learning (ML) approach for the prediction of galaxies' dark matter halo masses that achieves improved performance over conventional methods. We train three ML algorithms (\texttt{XGBoost}, Random Forests, and a neural network) to predict halo masses using a set of synthetic galaxy catalogues that are built by populating dark matter haloes in N-body simulations with galaxies, and that match both the clustering and the joint distributions of properties of galaxies in the Sloan Digital Sky Survey (SDSS). We explore the correlation of different galaxy- and group-related properties with halo mass, and extract the set of nine features that contribute the most to the prediction of halo mass. We find that mass predictions from the ML algorithms are more accurate than those from halo abundance matching (\texttt{HAM}) or dynamical mass (\texttt{DYN}) estimates. Since the danger of this approach is that our training data might not accurately represent the real Universe, we explore the effect of testing the model on synthetic catalogues built with different assumptions than the ones used in the training phase. We test a variety of models with different ways of populating dark matter haloes, such as adding velocity bias for satellite galaxies. We determine that, though training and testing on different data can lead to systematic errors in predicted masses, the ML approach still yields substantially better masses than either \texttt{HAM} or \texttt{DYN}. Finally, we apply the trained model to a galaxy and group catalogue from the SDSS DR7 and present the resulting halo masses.
    Galaxy · Virial mass · Sloan Digital Sky Survey · Milky Way · Luminosity · Dark matter halo · Group of galaxies · Halo Occupation Distribution · Galaxy mass · Velocity bias...
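A scikit-learn sketch of the regression setup (the synthetic features and the "truth" relation below are fabricated stand-ins for the nine catalogue properties the paper selects; only the workflow is illustrated):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
features = rng.normal(size=(n, 9))               # e.g. luminosity, color, ...
log_mhalo = (12.0 + features @ rng.uniform(0.05, 0.4, size=9)
             + 0.15 * rng.normal(size=n))        # toy truth relation

X_tr, X_te, y_tr, y_te = train_test_split(features, log_mhalo, random_state=0)
model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("scatter [dex]:", np.std(pred - y_te))

# Ranking feature importances mirrors the paper's feature-selection step:
print(np.argsort(model.feature_importances_)[::-1][:3])
```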
  • We measured the evolution of the $K$-band luminosity function and stellar mass function for red and blue galaxies at $z<1.2$ using a sample of 353 594 $I<24$ galaxies in 8.26 square degrees of Boötes. We addressed several sources of systematic and random error in measurements of total galaxy light, photometric redshift and absolute magnitude. We have found that the $K$-band luminosity density for both red and blue galaxies increased by a factor of 1.2 from $z\sim1.1$ to $z\sim0.3$, while the most luminous red (blue) galaxies decreased in luminosity by 0.19 (0.33) mag or $\times0.83 (0.74)$. These results are consistent with $z<0.2$ studies, while our large sample size and area result in smaller Poisson and cosmic variance uncertainties than most $z>0.4$ luminosity and mass function measurements. Using an evolving relation for $K$-band mass-to-light ratios as a function of $(B-V)$ color, we found a slowly decreasing rate of growth in red galaxy stellar mass density of $\times2.3$ from $z\sim1.1$ to $z\sim0.3$, indicating a slowly decreasing rate of migration from the blue cloud to the red sequence. Unlike some studies of the stellar mass function, we find that massive red galaxies grow by a factor of $\times1.7$ from $z\sim1.1$ to $z\sim0.3$, with the rate of growth due to mergers decreasing with time. These results are comparable with measurements of merger rates and clustering, and they are also consistent with the red galaxy stellar mass growth implied by comparing $K$-band luminosity evolution with the fading of passive stellar population models.
    Galaxy · Stellar mass function · Luminosity · Luminosity function · Blue galaxies · Stellar mass · Stellar populations · Random error · Absolute magnitude · Mass function...
  • Most surveys use maximum-likelihood (ML) methods to fit models when extracting photometry from images. We show that these ML estimators systematically overestimate the flux as a function of the signal-to-noise ratio (SNR) and the number of model parameters involved in the fit. This bias is substantially worse for galaxies: while a 1% bias is expected for a 10-sigma point source, a 10-sigma galaxy with a simplified Gaussian profile suffers a 2.5% bias. This bias also behaves differently depending on how multiple bands are used in the fit: simultaneously fitting all bands leads the flux bias to become roughly evenly distributed between them, while fixing the position in `non-detection' bands (i.e. forced photometry) gives flux estimates in those bands that are biased low, compounding a bias in derived colors. We show that these effects are present in idealized simulations, outputs from the HSC fake object pipeline (SynPipe), and observations from SDSS Stripe 82. Prescriptions to correct for these biases are provided along with more detailed results related to biases in ML error estimation.
    Point spread function · Photometry · Signal to noise ratio · Covariance · Point source · Galaxy · Maximum likelihood · Sloan Digital Sky Survey · Statistical estimator · Hyper Suprime-Cam...
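The effect is easy to reproduce in a Monte Carlo: fit amplitude and centroid of a Gaussian profile to noisy realizations and compare the mean fitted flux to the truth. This is a 1-D analogue of the paper's 2-D point-source case; the grid, width, and SNR are invented for the demo.

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(-10, 10, 41)
sigma_psf, true_amp, noise = 2.0, 1.0, 0.25   # per-pixel noise -> SNR ~ 10

def profile(x, amp, x0):
    return amp * np.exp(-0.5 * ((x - x0) / sigma_psf) ** 2)

rng = np.random.default_rng(0)
fits = []
for _ in range(8000):
    y = profile(x, true_amp, 0.0) + noise * rng.normal(size=x.size)
    popt, _ = curve_fit(profile, x, y, p0=[1.0, 0.0])
    fits.append(popt[0])

bias = np.mean(fits) / true_amp - 1.0
print(f"mean ML flux bias: {100 * bias:+.2f}%")
# Positive (a few tenths of a percent in this 1-D toy): freeing the
# centroid lets the fit chase noise peaks, inflating the recovered flux.
```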
  • One of the last missing pieces in the puzzle of galaxy formation and evolution through cosmic history is a detailed picture of the role of the cold gas supply in the star-formation process. Cold gas is the fuel for star formation, and thus regulates the buildup of stellar mass, both through the amount of material present through a galaxy's gas mass fraction, and through the efficiency at which it is converted to stars. Over the last decade, important progress has been made in understanding the relative importance of these two factors along with the role of feedback, and the first measurements of the volume density of cold gas out to redshift 4 (the "cold gas history of the Universe") have been obtained. To match the precision of measurements of the star formation and black-hole accretion histories over the coming decades, a two orders of magnitude improvement in molecular line survey speeds is required compared to what is possible with current facilities. Possible pathways towards such large gains include significant upgrades to current facilities like ALMA by 2030 (and beyond), and eventually the construction of a new generation of radio-to-millimeter wavelength facilities, such as the next generation Very Large Array (ngVLA) concept.
    Star formationAccretion historyAccreting black holeVery Large ArrayAtacama Large Millimeter ArrayStellar massStarGalaxyGalaxy FormationGas...
  • We have compelling evidence for stellar-mass black holes (BHs) of ~5-80 M_sun that form through the death of massive stars. We also have compelling evidence for so-called supermassive BHs (10^5-10^10 M_sun) that are predominantly found in the centers of galaxies. We have very good reason to believe there must be BHs with masses in the gap between these ranges: the first ~10^9 M_sun BHs are observed only hundreds of millions of years after the Big Bang, and all theoretically viable paths to making supermassive BHs require a stage of "intermediate" mass. However, no BHs have yet been reliably detected in the 100-10^5 M_sun mass range. Uncovering these intermediate-mass BHs of 10^3-10^5 M_sun is within reach in the coming decade. In this white paper we highlight the crucial role that 30-m class telescopes will play in dynamically detecting intermediate-mass black holes, should they exist.
    Black holeIntermediate-mass black holeGlobular clusterKinematicsStarGalaxyAdaptive opticsProper motionIntegral field unitsAccretion...
  • In this paper we study the molecular gas content of a representative sample of 67 of the most massive early-type galaxies in the local universe, drawn uniformly from the MASSIVE survey. We present new IRAM-30m telescope observations of 30 of these galaxies, allowing us to probe the molecular gas content of the entire sample to a fixed molecular-to-stellar mass fraction of 0.1%. The total detection rate in this representative sample is 25$^{+5.9}_{-4.4}$%, and by combining the MASSIVE and ATLAS$^{\rm 3D}$ molecular gas surveys we find a joint detection rate of 22.4$^{+2.4}_{-2.1}$%. This detection rate appears to be independent of galaxy mass, size, position on the fundamental plane, and local environment. We show here for the first time that true slow rotators can host molecular gas reservoirs, but the rate at which they do so is significantly lower than for fast rotators. Objects with a higher velocity dispersion at fixed mass (a higher kinematic bulge fraction) are less likely to have detectable molecular gas and, where gas does exist, have lower molecular gas fractions. In addition, satellite galaxies in dense environments have $\approx$0.6 dex lower molecular gas-to-stellar mass ratios than isolated objects. To interpret these results we created a toy model, which we use to constrain the origin of the gas in these systems and to derive an independent estimate of the gas-rich merger rate in the low-redshift universe. These gas-rich mergers appear to dominate the supply of gas to ETGs, but stellar mass loss, hot-halo cooling and the transformation of spiral galaxies also play a secondary role.
    Early-type galaxyMASSIVE SurveyStellar massLocal UniverseCoolingVelocity dispersionGalaxy massTelescopesStellar mass lossMass ratio...
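Detection rates with asymmetric uncertainties like 25$^{+5.9}_{-4.4}$% are commonly derived from the beta-distribution quantiles of a binomial fraction (Cameron 2011). Here is a small sketch of that estimator; whether the paper used exactly this prescription is an assumption, and the 17-of-67 example is just an illustration of a ~25% rate.

```python
from scipy.stats import beta

def detection_rate_ci(k, n, level=0.683):
    """Binomial detection fraction with an equal-tailed credible interval
    from a flat-prior beta posterior (the approach of Cameron 2011)."""
    lo = beta.ppf((1.0 - level) / 2.0, k + 1, n - k + 1)
    hi = beta.ppf(1.0 - (1.0 - level) / 2.0, k + 1, n - k + 1)
    p = k / n
    return p, p - lo, hi - p

# e.g. 17 detections in a sample of 67 galaxies (~25%):
p, minus, plus = detection_rate_ci(17, 67)
print(f"{100*p:.1f} +{100*plus:.1f} / -{100*minus:.1f} %")
```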
  • In the near future, galaxy surveys will target Lyman-alpha emitting galaxies (LAEs) to unveil the nature of dark energy. It has been suggested that the observability of LAEs is coupled to the large-scale properties of the intergalactic medium. Such coupling could introduce distortions into the observed clustering of LAEs, adding a new potential difficulty to the interpretation of upcoming surveys. We present a model of LAEs that incorporates Lyman-alpha radiative transfer processes in the interstellar and intergalactic medium. The model is implemented in the GALFORM semi-analytic model of galaxy formation and evolution. We find that radiative transfer inside galaxies produces selection effects on galaxy properties. In particular, observed LAEs tend to have low metallicities and intermediate star formation rates. At low redshift we find no evidence of a correlation between the spatial distribution of LAEs and the properties of the intergalactic medium. However, at high redshift the LAEs are linked to the line-of-sight velocity and density gradient of the intergalactic medium. The strength of the coupling depends on the outflow properties of the galaxies and on redshift. This effect modifies the clustering of LAEs on large scales, adding nonlinear features. In particular, our model predicts modifications to the shape and position of the baryon acoustic oscillation peak. This work highlights the importance of including radiative transfer physics in the cosmological analysis of LAEs.
    GalaxyMilky WayLine of sightGalactic windIntergalactic mediumRadiative transferLuminosity functionLuminosityReal spaceRedshift space...
  • We present new measurements of the flux power spectrum P(k) of the $z<0.5$ HI Lyman-$\alpha$ forest, spanning scales k ~ 0.001-0.1 s/km. These results were derived from 65 far-ultraviolet quasar spectra (resolution R~18000) observed with the Cosmic Origins Spectrograph (COS) on board the Hubble Space Telescope. The analysis required careful masking of all contaminating, coincident absorption from HI and metal-line transitions of the Galactic interstellar medium and intervening absorbers, as well as proper treatment of the complex COS line-spread function. From the P(k) measurements, we estimate the HI photoionization rate ($\Gamma_{\rm HI}$) in the $z<0.5$ intergalactic medium. Our results confirm most of the previous $\Gamma_{\rm HI}$ estimates. We conclude that previous concerns about a photon underproduction crisis are now resolved, by demonstrating that the measured $\Gamma_{\rm HI}$ can be accounted for by ultraviolet emission from quasars alone. In a companion paper, we will present constraints on the thermal state of the $z<0.5$ intergalactic medium from the P(k) measurements presented here.
    Lyman-alpha forestQuasarUltraviolet backgroundCosmic Origins SpectrographLine spread functionHubble Space TelescopeRedshift binsSignal to noise ratioFluctuating Gunn-Peterson ApproximationCovariance matrix...
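At its core, a flux power spectrum like the one above is the 1-D power spectrum of the flux contrast $\delta_F = F/\langle F\rangle - 1$ along each sightline. A minimal FFT-based sketch follows; it deliberately omits the masking, metal-line removal and COS line-spread-function corrections that dominate the real analysis.

```python
import numpy as np

def flux_power_spectrum(flux, dv):
    """1-D flux power spectrum of a Lyman-alpha forest sightline.

    flux : normalized transmitted-flux samples along the sightline
    dv   : pixel width in km/s
    Returns (k, P) with k in s/km and P(k) in km/s.
    """
    delta = flux / flux.mean() - 1.0          # flux contrast delta_F
    n = delta.size
    dft = np.fft.rfft(delta)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)
    pk = dv * np.abs(dft) ** 2 / n            # |delta_k|^2 * (box length) / N^2
    return k[1:], pk[1:]                      # drop the k = 0 mode
```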
  • We present a measurement of the baryon acoustic oscillation (BAO) scale at redshift $z=2.35$ from the three-dimensional correlation of Lyman-$\alpha$ (Ly$\alpha$) forest absorption and quasars. The study uses 266,590 quasars in the redshift range $1.77<z<3.5$ from the Sloan Digital Sky Survey (SDSS) Data Release 14 (DR14). The sample includes the first two years of observations by the SDSS-IV extended Baryon Oscillation Spectroscopic Survey (eBOSS), providing new quasars and re-observations of BOSS quasars for improved statistical precision. Statistics are further improved by including Ly$\alpha$ absorption occurring in the Ly$\beta$ wavelength band of the spectra. From the measured BAO peak position along and across the line of sight, we determine the Hubble distance $D_{H}$ and the comoving angular diameter distance $D_{M}$ relative to the sound horizon at the drag epoch $r_{d}$: $D_{H}(z=2.35)/r_{d}=9.20\pm 0.36$ and $D_{M}(z=2.35)/r_{d}=36.3\pm 1.8$. These results are consistent at $1.5\sigma$ with the prediction of the best-fit flat $\Lambda$CDM cosmological model reported for the Planck (2016) analysis of CMB anisotropies. Combined with the Ly$\alpha$ auto-correlation measurement presented in a companion paper (de Sainte Agathe et al. 2019) the BAO measurements at $z=2.34$ are within $1.7\sigma$ of the predictions of this model.
    QuasarBaryon acoustic oscillationsCross-correlationTwo-point correlation functionSloan Digital Sky SurveyPlanck missioneBOSS surveyLine of sightBaryon Oscillation Spectroscopic SurveyRedshift bins...
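The quoted $D_H/r_d$ and $D_M/r_d$ can be compared against a flat $\Lambda$CDM prediction in a few lines. This sketch uses astropy's built-in Planck15 parameters and an approximate sound-horizon value as stand-ins for the paper's exact Planck (2016) best-fit model.

```python
from astropy import units as u
from astropy.constants import c
from astropy.cosmology import Planck15

z = 2.35
r_d = 147.3 * u.Mpc  # approximate sound horizon at the drag epoch (assumed)

D_H = (c / Planck15.H(z)).to(u.Mpc)              # Hubble distance c/H(z)
D_M = Planck15.comoving_transverse_distance(z)   # comoving angular diameter distance

print(f"D_H/r_d = {(D_H / r_d).value:.2f}   (measured: 9.20 +/- 0.36)")
print(f"D_M/r_d = {(D_M / r_d).value:.1f}   (measured: 36.3 +/- 1.8)")
```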
  • Intergalactic medium temperature is a powerful probe of the epoch of reionisation, as information is retained long after reionisation itself. However, mean temperatures are highly degenerate with the timing of reionisation, with the amount of heat injected during the epoch, and with the subsequent cooling rates. We post-process a suite of semi-analytic galaxy formation models to characterise how different thermal statistics of the intergalactic medium can be used to constrain reionisation. Temperature is highly correlated with the redshift of reionisation for a period of time after the gas is heated. However, as the gas cools, thermal memory of reionisation is lost, and a power-law temperature-density relation is formed, $T = T_0(1+\delta)^{\gamma-1}$ with $\gamma \approx 1.5$. Constraining our model against observations of the electron optical depth and the temperature at mean density, we find that reionisation likely finished at $z \approx 6.5 \pm 0.5$ with a soft spectral slope of $\alpha \approx 2.8 \pm 1$. We find that, in the future, the degeneracies between reionisation timing and background spectrum can be broken using the scatter in temperatures and the integrated thermal history.
    ReionizationIntergalactic mediumEpoch of reionizationMean mass densityCoolingTemperature-density relationIGM temperatureRecombinationIonizing radiationCooling timescale...
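For reference, the temperature-density relation above is straightforward to evaluate: with $\gamma \approx 1.5$, overdense gas is hotter than gas at the mean density. The $T_0$ value below is illustrative, not a result from the paper.

```python
def igm_temperature(delta, T0=1.0e4, gamma=1.5):
    """Power-law IGM temperature-density relation T = T0 * (1 + delta)**(gamma - 1).
    delta is the overdensity; T0 (in K, illustrative) is the temperature at mean density."""
    return T0 * (1.0 + delta) ** (gamma - 1.0)

# Gas at the mean density (delta = 0) and at twice the mean density (delta = 1):
print(igm_temperature(0.0), igm_temperature(1.0))   # 10000.0  ~14142 K
```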
  • Quantitative characterization of galaxy morphology is vital in enabling comparison of observations to predictions from galaxy formation theory. However, without significant overlap between the observational footprints of deep and shallow galaxy surveys, the extent to which structural measurements for large galaxy samples are robust to image quality (e.g., depth, spatial resolution) cannot be established. Deep images from the Sloan Digital Sky Survey (SDSS) Stripe 82 co-adds provide a unique solution to this problem, offering a $1.6$-$1.8$ magnitude improvement in depth with respect to SDSS Legacy images. Having spatial resolution similar to Legacy, the co-adds make it possible to examine the sensitivity of parametric morphologies to depth alone. Using the Gim2D surface-brightness decomposition software, we provide public morphology catalogs for 16,908 galaxies in the Stripe 82 $ugriz$ co-adds. Our methods and selection are completely consistent with the Simard et al. (2011) and Mendel et al. (2014) photometric decompositions. We rigorously compare measurements in the deep and shallow images. We find no systematics in total magnitudes and sizes, except for faint galaxies in the $u$-band and the brightest galaxies in each band. However, the characterization of bulge-to-total fractions is significantly improved in the deep images. Furthermore, the statistics used to determine whether single-Sérsic or two-component (e.g., bulge+disc) models are required become more bimodal in the deep images. Lastly, we show that asymmetries are enhanced in the deep images, and that the enhancement is positively correlated with the asymmetries measured in Legacy images.
    GalaxySloan Digital Sky SurveyStripe phasesSurface brightnessPoint spread functionSoftwareStatisticsIntensityMilky WayLarge scale structure survey...
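Bulge+disc decompositions like those above model each component with a Sérsic profile, $I(r) = I_e \exp(-b_n[(r/R_e)^{1/n} - 1])$. Below is a self-contained sketch of the profile, its analytic total flux, and a bulge-to-total ratio; the example component parameters are made up, and this is not the Gim2D fitting machinery itself.

```python
import numpy as np
from scipy.special import gammainc, gamma
from scipy.optimize import brentq

def b_n(n):
    """Solve for b_n such that R_e encloses half of the total light,
    i.e. the regularized incomplete gamma P(2n, b_n) = 1/2."""
    return brentq(lambda b: gammainc(2 * n, b) - 0.5, 1e-4, 2 * n + 20)

def sersic(r, Ie, Re, n):
    """Sersic surface-brightness profile I(r) = Ie*exp(-b_n*((r/Re)**(1/n) - 1))."""
    return Ie * np.exp(-b_n(n) * ((r / Re) ** (1.0 / n) - 1.0))

def sersic_total_flux(Ie, Re, n):
    """Analytic total flux: 2*pi*n*Ie*Re^2*exp(b_n)*Gamma(2n)/b_n^(2n)."""
    b = b_n(n)
    return 2.0 * np.pi * n * Ie * Re**2 * np.exp(b) * gamma(2 * n) / b ** (2 * n)

# Bulge-to-total ratio for a hypothetical n=4 bulge plus n=1 (exponential) disc:
L_bulge = sersic_total_flux(Ie=1.0, Re=1.5, n=4.0)
L_disc = sersic_total_flux(Ie=0.3, Re=4.0, n=1.0)
print(f"B/T = {L_bulge / (L_bulge + L_disc):.2f}")
```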
  • Do black holes rotate, and if so, how fast? This question is fundamental and has broad implications, but it remains open. There are significant observational challenges in current spin determinations, and future facilities offer prospects for precision measurements.
    Black holeBlack hole spinActive Galactic NucleiAccretion diskX-ray binarySpectral energy distributionOpacityInclinationSpinning Black HoleElectron scattering...