Recently bookmarked papers

with concepts:
  • The cosmological X-ray emission associated with the possible radiative decay of sterile neutrinos is composed of a collection of lines at different energies. For a given mass, each line corresponds to a given redshift. In this work, we cross-correlate such line emission with catalogs of galaxies tracing the dark matter distribution at different redshifts. We derive observational prospects by correlating the X-ray sky that will be probed by the eROSITA and Athena missions with current and near-future photometric and spectroscopic galaxy surveys. A relevant and unexplored fraction of the parameter space of sterile neutrinos can be probed by this technique.
    Dark matter, Sterile neutrino, Galaxy, Line intensity mapping, Cross-correlation, Sterile neutrino decay, X-ray spectrum, Integral field units, Redshift bins, Active Galactic Nuclei...
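
    A minimal sketch of the cross-correlation idea in this entry, assuming HEALPix maps and the public healpy package; the maps, NSIDE, and binning are placeholders, not the paper's actual eROSITA/Athena pipeline:

      import healpy as hp
      import numpy as np

      nside = 128
      npix = hp.nside2npix(nside)

      # Stand-ins: an X-ray intensity map and a galaxy overdensity map for one
      # redshift bin; a decay line of given mass picks out a specific z-shell.
      xray_map = np.random.normal(size=npix)
      galaxy_map = np.random.normal(size=npix)

      # Pseudo-C_ell cross power spectrum between the two maps; repeating this
      # per redshift bin traces the line emission across redshift.
      cl_cross = hp.anafast(xray_map, galaxy_map, lmax=3 * nside - 1)
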
  • We study the interplay of superconductivity and disorder by numerically solving the Bogoliubov-de Gennes equations on a two-dimensional lattice of size $80\times80$, which makes it possible to investigate the weak-coupling limit. In contrast with results in the strong-coupling region, we observe enhancement of superconductivity and intriguing multifractal-like features such as a broad log-normal spatial distribution of the order parameter, a parabolic singularity spectrum, and level statistics consistent with those of a disordered metal at the Anderson transition. The calculation of the superfluid density, including small phase fluctuations, reveals that, despite this intricate spatial structure, phase coherence still holds for sufficiently weak disorder. It only breaks down for stronger disorder, but before the insulating transition takes place.
    Disorder, Superconductivity, Statistics, Superconductor, Coherence length, Superfluid, Multifractality, Bardeen Cooper Schrieffer, Insulators, Singularity spectrum...
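
    A toy sketch of the Bogoliubov-de Gennes setup described above (not the paper's code): a small disordered 2D lattice with a uniform s-wave pairing amplitude, diagonalized once; the paper instead iterates the local order parameter to self-consistency on an $80\times80$ lattice:

      import numpy as np

      L, t, mu, W, delta0 = 10, 1.0, 0.0, 1.0, 0.3  # toy size and couplings
      N = L * L
      rng = np.random.default_rng(0)

      # Tight-binding Hamiltonian with periodic boundaries and box disorder.
      H0 = np.zeros((N, N))
      for x in range(L):
          for y in range(L):
              i = x * L + y
              for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                  H0[i, j] = H0[j, i] = -t
      H0 += np.diag(rng.uniform(-W, W, N) - mu)

      # BdG matrix [[H0, Delta], [Delta, -H0]] for real H0 and uniform Delta.
      Delta = delta0 * np.eye(N)
      E, U = np.linalg.eigh(np.block([[H0, Delta], [Delta, -H0]]))
      # E holds the quasiparticle spectrum; the gap is ~ delta0 at W = 0.
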
  • The construction and classification of symmetry protected topological (SPT) phases in interacting bosonic and fermionic systems have been intensively studied in the past few years. Very recently, a complete classification and construction of space group SPT phases were also proposed for interacting bosonic systems. In this paper, we attempt to generalize this classification and construction scheme into interacting fermion systems systematically. In particular, we construct and classify point group SPT phases for 2D interacting fermion systems via lower-dimensional block-state decorations. We discover several intriguing fermionic SPT states that can only be realized in interacting fermion systems (i.e., no free-fermion and bosonic SPT realizations). Moreover, we also verify the recently conjectured crystalline equivalence principle for 2D interacting fermion systems. Finally, a potential experimental realization of these new classes of point group SPT phases in 2D correlated superconductors is also addressed.
    Classification, Symmetry protected topological order, Hamiltonian, Cosmological mirror symmetry, Symmetry group, Equivalence principle, Majorana fermion, Symmetry protected topological state, Dihedral, Free fermions...
  • In this work we study the behavior of massless fermions in a graphene wormhole and in the presence of an external magnetic field. The graphene wormhole is made from two sheets of graphene that play the roles of asymptotically flat spaces connected through a carbon nanotube with a zig-zag boundary. We solve the massless Dirac equation in this geometry and analyze its wave function. We show that the energy spectra of these solutions exhibit similar behavior to Landau levels.
    Graphene, Wormhole, Topological defect, Gauge field, Curvature, General relativity, Landau level, Fermi point, Dislocation, Carbon nanotubes...
  • The splashback radius ($R_{\rm sp}$) of dark matter halos has recently been detected using weak gravitational lensing and cross-correlations with galaxies. However, different methods have been used to measure $R_{\rm sp}$ and to assess the significance of its detection. In this paper, we use simulations to study the precision and accuracy with which we can detect the splashback radius with 3D density, 3D subhalo, and weak lensing profiles. We study how well various methods and tracers recover $R_{\rm sp}$ by comparing it with the value measured directly from particle dynamics. We show that estimates of $R_{\rm sp}$ from density and subhalo profiles correspond to different percentiles of the underlying $R_{\rm sp}$ distribution of particle orbits. At low accretion rates, a second caustic appears and can bias results. Finally, we show that upcoming lensing surveys may be able to constrain the splashback-accretion rate relation directly.
    Splashback radius, Dark matter subhalo, Mass accretion rate, Weak lensing, Accretion, Dark matter halo, Phase space caustic, Virial mass, Planck mission, Dark Energy Spectroscopic Instrument...
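
    A minimal sketch of the common "steepest slope" definition of $R_{\rm sp}$, the radius where $d\log\rho/d\log r$ is most negative; the toy profile below (Einasto form times a steepening transition, plus an infalling term) stands in for the simulation profiles used in the paper:

      import numpy as np

      r = np.logspace(-1, 1, 500)                       # r in units of R200m
      rho_ein = np.exp(-(2 / 0.18) * (r ** 0.18 - 1))   # Einasto, alpha = 0.18
      f_trans = (1 + (r / 1.2) ** 4) ** -1.5            # steepening near r ~ 1.2
      rho = rho_ein * f_trans + 0.05 * r ** -1.0        # plus infalling material

      logslope = np.gradient(np.log(rho), np.log(r))
      r_sp = r[np.argmin(logslope)]                     # steepest-slope radius
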
  • A non-parametric higher-order Jeans analysis method, GravSphere, was recently used to constrain the density profiles of Local Group dwarf galaxies, classifying them into those that are more likely to have an inner dark matter cusp and those that are likely to have a core (Read et al.). In this work we test this method using 31 simulated galaxies, comparable to Fornax, selected from the APOSTLE suite of cosmological hydrodynamics simulations, which include CDM and Self-Interacting Dark Matter (SIDM) cosmologies. We find that the mass profiles of the simulated dwarfs are often, but not always, well recovered by GravSphere. The less successful cases may be identified using a chi-squared diagnostic. Although the uncertainties are large in the inner regions, the inferred mass profiles are unbiased and exhibit smaller scatter than comparable Jeans methods. We find that GravSphere recovers the density profiles of simulated dwarfs below the half-light radius and down to the resolution limit of our simulations with better than 10-30 per cent accuracy, making it a promising Jeans-based approach for modelling dark matter distributions in dwarf galaxies.
    Galaxy, Anisotropy, Dark matter, Mass profile, Half-light radius, Self-interacting dark matter, Star, Anisotropy profile, APOSTLE simulation, Dwarf galaxy...
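
    For reference, a sketch of the spherical, isotropic Jeans equation that methods like GravSphere generalize, $\nu\,\sigma_r^2(r) = \int_r^\infty \nu\,G M/s^2\,ds$, with illustrative stand-in profiles (Plummer-like tracer, NFW-like mass), not the paper's APOSTLE galaxies:

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      G = 4.30e-6                                   # kpc (km/s)^2 / Msun
      r = np.logspace(-1, 2, 400)                   # kpc
      nu = (1 + (r / 0.8) ** 2) ** -2.5             # Plummer-like tracer density
      M = 1e9 * (np.log(1 + r / 2) - (r / 2) / (1 + r / 2))   # NFW-like M(<r)

      integrand = nu * G * M / r ** 2
      I = cumulative_trapezoid(integrand, r, initial=0.0)
      sigma_r = np.sqrt((I[-1] - I) / nu)           # km/s; boundary term neglected
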
  • We use a sample of galaxies with high-quality rotation curves to assess the role of the luminous component ("baryons") in the dwarf galaxy rotation curve diversity problem. As in earlier work, we find that the shape of the rotation curve correlates with baryonic surface density; high surface density galaxies have rapidly-rising rotation curves consistent with cuspy cold dark matter halos, while slowly-rising rotation curves (characteristic of galaxies with inner mass deficits or "cores") occur only in low surface density galaxies. The correlation, however, seems too weak in the dwarf galaxy regime to be the main driver of the diversity. In particular, the observed dwarf galaxy sample includes "cuspy" systems where baryons are unimportant in the inner regions and "cored" galaxies where baryons actually dominate the inner mass budget. These features are important diagnostics of the viability of various scenarios proposed to explain the diversity, such as (i) baryonic inflows and outflows; (ii) dark matter self-interactions (SIDM); (iii) variations in the baryonic acceleration through the "mass discrepancy-acceleration relation" (MDAR); or (iv) non-circular motions in gaseous discs. A reanalysis of existing data shows that MDAR does not hold in the inner regions of dwarf galaxies and thus cannot explain the diversity. Together with analytical modeling and cosmological hydrodynamical simulations, our analysis shows that each of the remaining scenarios has promising features, but none seems to fully account for the observed diversity. The origin of the dwarf galaxy rotation curve diversity and its relation to the small-scale structure of cold dark matter remains an open issue.
    Rotation Curve, Galaxy, Self-interacting dark matter, Dwarf galaxy, Dark matter, APOSTLE simulation, Mass discrepancy-acceleration relation, NIHAO simulation, Circular velocity, Star formation...
  • Recent observations have found that many $z\sim 6$ quasar fields lack galaxies. This unexpected lack of galaxies may potentially be explained by quasar radiation feedback. In this paper I present a suite of 3D radiative transfer cosmological simulations of quasar fields. I find that quasar radiation suppresses star formation in low mass galaxies, mainly by photo-dissociating their molecular hydrogen. Photo-heating also plays a role, but only after $\sim$100 Myr. However, galaxies which already have stellar mass above $10^5 M_\odot$ when the quasar turns on will not be suppressed significantly. Quasar radiative feedback suppresses the faint end of the galaxy luminosity function (LF) within $1$ pMpc, but to a far lesser degree than the field-to-field variation of the LF. My study also suggests that by using the number of bright galaxies ($M_{1500}<-16$) around quasars, we can potentially recover the underlying mass overdensity, which allows us to put reliable constraints on quasar environments.
    Quasar, Star formation, Virial mass, Star formation histories, Stellar mass, Luminosity, Galaxy, Star formation rate, Star, Luminosity function...
  • It has been suggested in recent work that the Page curve of Hawking radiation can be recovered using computations in semi-classical gravity provided one allows for "islands" in the gravity region of quantum systems coupled to gravity. The explicit computations so far have been restricted to black holes in two-dimensional Jackiw-Teitelboim gravity. In this note, we numerically construct a five-dimensional asymptotically AdS geometry whose boundary realizes a four-dimensional Hartle-Hawking state on an eternal AdS black hole in equilibrium with a bath. We also numerically find two types of extremal surfaces: ones that correspond to having or not having an island. The version of the information paradox involving the eternal black hole exists in this setup, and it is avoided by the presence of islands. Thus, recent computations exhibiting islands in two-dimensional gravity generalize to higher dimensions as well.
    Black hole, Horizon, Entropy, Entanglement, Anti de Sitter space, Planck mission, AdS black hole, Line element, Hawking radiation, Induced metric...
  • Neural network-based approaches have become widespread for abstractive text summarization. Though previously proposed models for abstractive text summarization addressed the problem of repeating the same content in the summary, they did not explicitly consider its information structure. One of the reasons these previous models failed to account for information structure in the generated summary is that standard datasets include summaries of variable lengths, resulting in problems in analyzing information flow, specifically, the manner in which the first sentence is related to the following sentences. Therefore, we use a dataset containing summaries with only three bullet points, and propose a neural network-based abstractive summarization model that considers the information structures of the generated summaries. Our experimental results show that the information structure of a summary can be controlled, thus improving the performance of the overall summarization.
    Hidden state, Neural network, Convolutional neural network, Long short term memory, Classification, Information flow, Subcategory, Keyphrase, Word embedding, Sigmoid function...
  • We present new observations of Fornax A taken at 1 GHz with the MeerKAT telescope and at 6 GHz with the Sardinia Radio Telescope (SRT). The sensitive (noise ~16 micro-Jy beam$^{-1}$), high resolution ( < 10'') MeerKAT images show that the lobes of Fornax A have a double-shell morphology, where dense filaments are embedded in a diffuse and extended cocoon. We study the spectral properties of these components by combining the MeerKAT and SRT observations with archival data between 84 MHz and 217 GHz. For the first time, we show that multiple episodes of nuclear activity must have formed the extended radio lobes. The modelling of the radio spectrum suggests that the last episode of injection of relativistic particles into the lobes started ~ 24 Myr ago and stopped approximately 12 Myr ago. More recently (~ 3 Myr ago), a less powerful and short ( < 1 Myr) phase of nuclear activity generated the central jets. Currently, the core may be in a new active phase. It appears that Fornax A is rapidly flickering. The dense environment in which Fornax A lives has led to a complex recent merger history for this galaxy, including mergers spanning a range of gas contents and mass ratios, as shown by the analysis of the galaxy's stellar- and cold-gas phases. This complex recent history may be the cause of the rapid, recurrent nuclear activity of Fornax A.
    Active Galactic Nuclei, Radio lobes, MeerKAT, NGC 1316, Milky Way, Very Large Array, Intergalactic medium, Calibration, Field of view, Host galaxy...
  • The Pan-STARRS1 (PS1) $3\pi$ survey is a comprehensive optical imaging survey of three quarters of the sky in the $grizy$ broad-band photometric filters. We present the methodology used in assembling the source classification and photometric redshift (photo-z) catalogue for PS1 $3\pi$ Data Release 1, titled Pan-STARRS1 Source Types and Redshifts with Machine learning (PS1-STRM). For both main data products, we use neural network architectures, trained on a compilation of public spectroscopic measurements that has been cross-matched with PS1 sources. We quantify the parameter space coverage of our training data set, and flag extrapolation using self-organizing maps. We perform a Monte-Carlo sampling of the photometry to estimate photo-z uncertainty. The final catalogue contains $2,902,054,648$ objects. On our validation data set, for non-extrapolated sources, we achieve an overall classification accuracy of $98.1\%$ for galaxies, $97.8\%$ for stars, and $96.6\%$ for quasars. Regarding the galaxy photo-z estimation, we attain an overall bias of $\left<\Delta z_{\mathrm{norm}}\right>=0.0005$, a standard deviation of $\sigma(\Delta z_{\mathrm{norm}})=0.0322$, a median absolute deviation of $\mathrm{MAD}(\Delta z_{\mathrm{norm}})=0.0161$, and an outlier fraction of $O=1.89\%$. The catalogue will be made available as a high-level science product via the Mikulski Archive for Space Telescopes at https://doi.org/10.17909//t9-rnk7-gr88.
    Pan-STARRS 1, Galaxy, Classification, Neural network, Photometry, Photometric redshift, Quasar, Training set, Star, Extinction...
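
    A sketch of the Monte-Carlo photometry sampling mentioned above: perturb each band flux within its error, push every realization through the trained model, and take the spread as the photo-z uncertainty. Here `model` is a stand-in with an sklearn-style predict(), not PS1-STRM itself:

      import numpy as np

      def photoz_mc(model, flux, flux_err, n_draws=200, rng=None):
          rng = rng or np.random.default_rng()
          draws = rng.normal(flux, flux_err, size=(n_draws, flux.size))
          z = model.predict(draws)       # one photo-z per noisy realization
          return z.mean(), z.std()       # point estimate and MC uncertainty
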
  • In 1965 the Nobel Foundation honored Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman for their fundamental work in quantum electrodynamics and the consequences for the physics of elementary particles. In contrast to both of his colleagues, only Richard Feynman appeared as a genius before the public. In his autobiographies he managed to connect his behavior, which contradicted several social and scientific norms, with the American myth of the "practical man". This connection led to the image of a common American with extraordinary scientific abilities and contributed extensively to enhancing the image of Feynman as a genius in public opinion. Is this image resulting from Feynman's autobiographies in accordance with historical facts? This question is the starting point for a deeper historical analysis that tries to put Feynman and his actions back into historical context. The image of a "genius" appears then as a construct resulting from the public reception of brilliant scientific research.
    Quantum electrodynamics, Contradiction, Particles, Action...
  • We define rules for cellular automata played on quasiperiodic tilings of the plane arising from the multigrid method in such a way that these cellular automata are isomorphic to Conway's Game of Life. Although these tilings are nonperiodic, determining the next state of each tile is a local computation, requiring only knowledge of the local structure of the tiling and the states of finitely many nearby tiles. As an example, we show a version of a "glider" moving through a region of a Penrose tiling. This constitutes a potential theoretical framework for a method of executing computations in non-periodically structured substrates such as quasicrystals.
    Graph, Multigrid method, Quasicrystal, Isomorphism, Multigraph, Orientation, Embedding, Complex plane, Unit cell, Phase space...
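
    The locality point above can be made concrete with a Life-like update that needs only an adjacency structure, not periodicity. A sketch, with `neighbors` mapping each tile to its adjacent tiles (for a Penrose tiling this would come from the multigrid construction, not shown):

      def life_step(alive, neighbors, birth={3}, survive={2, 3}):
          counts = {t: sum(n in alive for n in nbrs)
                    for t, nbrs in neighbors.items()}
          return {t for t, c in counts.items()
                  if c in (survive if t in alive else birth)}

      # Toy usage on a 3x3 grid graph: a horizontal blinker flips to vertical.
      grid = {(x, y): [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0)] for x in range(3) for y in range(3)}
      state = life_step({(0, 1), (1, 1), (2, 1)}, grid)   # -> {(1,0),(1,1),(1,2)}
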
  • Observational SETI has concentrated on using electromagnetism as the carrier, namely radio waves and laser radiation. Michael Hippke [2] has pointed out that it may be possible to use neutrinos or gravitational waves as signals. Gravitational waves demand command of the generation of very large amounts of energy (Jackson and Benford [3]). This paper describes a beacon that uses beamed neutrinos as the signal. Neutrinos, like gravitational waves, have the advantage of extremely low extinction in the interstellar medium. To make use of neutrinos, an advanced civilization can use a gravitational lens as a focus and amplifier. The lens can be a neutron star or a black hole. Using wave optics, one can calculate the amplification that gravitational lensing provides to a beam; along the optical axis it is exceptionally large. Even though the amplification is very large, the diameter of the beam is quite small, less than a centimeter. This implies that a large constellation of neutrino transmitters would have to enclose the local neutron star or black hole to cover the sky. This means that such a beacon would have to be built by a Kardashev Type II civilization.
    Neutrino, Gravitational wave, Neutron star, Black hole, Gravitational lensing, Constellations, Extinction, SETI project, Electromagnetism, Interstellar medium...
  • Malware attacks represent a significant part of today's security threats. Software guard extensions (SGX) are a set of hardware instructions introduced by Intel in their recent lines of processors that are intended to provide a secure execution environment for user-developed applications. To our knowledge, there has been no serious attempt yet to overcome the SGX protection by leveraging the software supply chain infrastructure, such as weaknesses in the development, build or signing servers. While SGX protection does not specifically take into consideration such threats, we show in the current paper that a simple malware attack exploiting a separation between the build and signing processes can have a serious damaging impact, practically nullifying the SGX integrity protection measures. Finally, we also suggest some possible mitigations against the attack.
    Software, Security, Caching, Operating system, Random sequential adsorption, Application programming interface, Architecture, Confidence interval, Data structures, Embedding...
  • Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program into the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the set of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side-channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.
    Opacity, Security, Compiled Code, Interactive theorem prover, Interference, Closed operator, Lower and upper, Architecture, Information flow, Acronym...
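
    A small sketch of the property mappings this generalized definition induces, with finite sets standing in for possibly infinite trace sets: given a trace relation, a source property maps to the strongest target property it guarantees, and a target property pulls back to the weakest source obligation:

      def tilde_tau(pi_S, rel):
          # Strongest target property ensured by compiling a pi_S-satisfying program.
          return {t for (s, t) in rel if s in pi_S}

      def tilde_sigma(pi_T, rel):
          # Weakest source property sufficient to guarantee pi_T after compilation.
          sources = {s for (s, _) in rel}
          return {s for s in sources
                  if all(t in pi_T for (s2, t) in rel if s2 == s)}

      # Toy relation: target traces truncate source integer values to 8 bits.
      rel = {(n, n % 256) for n in range(1000)}
      print(tilde_tau({300}, rel))   # {44}: what the target can still promise
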
  • The recent Spectre attacks have demonstrated the fundamental insecurity of current computer microarchitecture. The attacks use features like pipelining, out-of-order execution and speculation to extract arbitrary information about the memory contents of a process. A comprehensive formal microarchitectural model capable of representing the forms of out-of-order and speculative behavior that can meaningfully be implemented in a high performance pipelined architecture has not yet emerged. Such a model would be very useful, as it would allow the existence and non-existence of vulnerabilities, and the soundness of countermeasures, to be formally established. In this paper we present such a model targeting single core processors. The model is intentionally very general and provides an infrastructure to define models of real CPUs. It incorporates microarchitectural features that underpin all known Spectre vulnerabilities. We use the model to elucidate the security of existing and new vulnerabilities, as well as to formally analyze the effectiveness of proposed countermeasures. Specifically, we discover three new (potential) vulnerabilities, including a new variant of Spectre v4, a vulnerability on speculative fetching, and a vulnerability on out-of-order execution, and analyze the effectiveness of three existing countermeasures: constant time, Retpoline, and ARM's Speculative Store Bypass Safe (SSBS).
    Security, Caching, Microarchitecture, Out-of-order execution, Arithmetic, Architecture, Graph, Software, Attacker model, Transition rule...
  • Research on transient execution attacks including Spectre and Meltdown showed that exception or branch misprediction events might leave secret-dependent traces in the CPU's microarchitectural state. This observation led to a proliferation of new Spectre and Meltdown attack variants and even more ad-hoc defenses (e.g., microcode and software patches). Both the industry and academia are now focusing on finding effective defenses for known issues. However, we only have limited insight into the residual attack surface and the completeness of the proposed defenses. In this paper, we present a systematization of transient execution attacks. Our systematization uncovers 6 (new) transient execution attacks that have been overlooked and not been investigated so far: 2 new exploitable Meltdown effects: Meltdown-PK (Protection Key Bypass) on Intel, and Meltdown-BND (Bounds Check Bypass) on Intel and AMD; and 4 new Spectre mistraining strategies. We evaluate the attacks in our classification tree through proof-of-concept implementations on 3 major CPU vendors (Intel, AMD, ARM). Our systematization yields a more complete picture of the attack surface and allows for a more systematic evaluation of defenses. Through this systematic evaluation, we discover that most defenses, including deployed ones, cannot fully mitigate all attack variants.
    Caching, Software, Classification, Multidimensional Array, Microarchitecture, Operating system, Security, Architecture, Taxonomy, Optimization...
  • This retrospective paper describes the RowHammer problem in Dynamic Random Access Memory (DRAM), which was initially introduced by Kim et al. at the ISCA 2014 conference~\cite{rowhammer-isca2014}. RowHammer is a prime (and perhaps the first) example of how a circuit-level failure mechanism can cause a practical and widespread system security vulnerability. It is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. RowHammer is caused by a hardware failure mechanism called {\em DRAM disturbance errors}, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero demonstrated in 2015 that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Many other follow-up works demonstrated other practical attacks exploiting RowHammer. In this article, we comprehensively survey the scientific literature on RowHammer-based attacks as well as mitigation techniques to prevent RowHammer. We also discuss what other related vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.
    Security, Software, Google.com, Interference, Architecture, Operating system, Caching, Capacitor, Transistors, Solid state drive...
  • In this work, we reexamine sulfur chemistry occurring on and in the ice mantles of interstellar dust grains, and report the effects of two new modifications to standard astrochemical models; namely, (a) the incorporation of cosmic ray-driven radiation chemistry and (b) the assumption of fast, non-diffusive reactions for key radicals in the bulk. Results from our models of dense molecular clouds show that these changes can have a profound influence on the abundances of sulfur-bearing species in ice mantles, including a reduction in the abundance of solid-phase H$_2$S and HS, and a significant increase in the abundances of OCS, SO$_2$, as well as pure allotropes of sulfur, especially S$_8$. These pure-sulfur species - though nearly impossible to observe directly - have long been speculated to be potential sulfur reservoirs and our results represent possibly the most accurate estimates yet of their abundances in the dense ISM. Moreover, the results of these updated models are found to be in good agreement with available observational data. Finally, we examine the implications of our findings with regard to the as-yet-unknown sulfur reservoir thought to exist in dense interstellar environments.
    Mantle, Cosmic ray, Radiation chemistry, Dust grain, Interstellar medium, Sulfide, Molecular cloud, Interstellar dust, Dense clouds, Abundance ratio...
  • Very-high-energy (VHE) BL Lac spectra extending above $10 \, \rm TeV$ provide a unique opportunity for testing physics beyond the standard model of elementary particles and alternative blazar emission models. We consider the hadron beam, the photon to axion-like particle (ALP) conversion, and the Lorentz invariance violation (LIV) scenarios by analyzing their consequences and induced modifications to BL Lac spectra. In particular, we consider how different processes can provide similar spectral features (e.g. hard tails) and we discuss the ways they can be disentangled. We use HEGRA data of a high state of Markarian 501 and the HESS spectrum of the extreme BL Lac (EHBL) 1ES 0229+200. In addition, we consider two hypothetical EHBLs similar to 1ES 0229+200 located at redshifts $z=0.3$ and $z=0.5$. We observe that both the hadron beam and the photon-ALP oscillations predict a hard tail extending to energies larger than those possible in the standard scenario. Photon-ALP interaction predicts a peak in the spectra of distant BL Lacs at about $20-30 \, \rm TeV$, while LIV produces a strong peak in all BL Lac spectra around $\sim 100 \, \rm TeV$. The peculiar feature of the photon-ALP conversion model is the production of oscillations in the spectral energy distribution, so that its detection/absence can be exploited to distinguish among the considered models. The above-mentioned features of the three models may be detected by the upcoming Cherenkov Telescope Array (CTA). Thus, future observations of BL Lac spectra could eventually shed light on new physics and alternative blazar emission models, driving fundamental research towards a specific direction.
    Axion-like particle, BL Lacertae, Lorentz violation, Cherenkov Telescope Array, ALP interactions, Blazar, Active Galactic Nuclei, Flat spectrum radio quasar, Extragalactic background light, Steady state...
  • We study displaced vertex signatures of long-lived particles (LLPs) from exotic Higgs decays in the context of a Higgs-portal model and a neutral-naturalness model at the CEPC and FCC-ee. These two models feature two representative mass ranges for LLPs, which show very different behavior in their decay signatures. The Higgs-portal model contains a very light sub-GeV scalar boson stemming from a singlet scalar field appended to the Standard Model. Such a light scalar LLP decays into a pair of muons or pions, giving rise to a distinctive signature of a collimated muon-jet or pion-jet, thanks to the sub-GeV mass. On the other hand, the neutral-naturalness model, e.g., folded supersymmetry, predicts the lightest mirror glueball of mass $O(10)$ GeV, giving rise to decays with a large transverse impact parameter because of the relatively large mass. Utilizing such distinct characteristics to remove the background, we estimate the sensitivities of searches for light scalar bosons and mirror glueballs at the CEPC and FCC-ee. We find either complementary or stronger coverage compared to previous results in similar contexts.
    Long Lived Particle, Higgs boson, Glueball, Standard Model, Scalar boson, CEPC, FCC-ee experiment, Higgs portal, Light scalar, Displaced vertices...
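
    A sketch of the displaced-vertex acceptance such searches rely on: the probability that an LLP with boost $\beta\gamma$ and proper decay length $c\tau$ decays inside a radial fiducial volume $[r_1, r_2]$. The numbers are illustrative, not the paper's benchmarks:

      import numpy as np

      def p_decay_in_volume(betagamma, ctau_m, r1_m, r2_m):
          lam = betagamma * ctau_m                    # lab-frame decay length (m)
          return np.exp(-r1_m / lam) - np.exp(-r2_m / lam)

      # e.g. a glueball-like LLP with ctau = 1 m in a 5 mm - 1 m tracking volume:
      print(p_decay_in_volume(betagamma=5.0, ctau_m=1.0, r1_m=0.005, r2_m=1.0))
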
  • We study the moduli spaces of polarised irreducible symplectic manifolds. By a comparison with locally symmetric varieties of orthogonal type of dimension 20, we show that the moduli space of 2d polarised (split type) symplectic manifolds which are deformation equivalent to degree 2 Hilbert schemes of a K3 surface is of general type if d is at least 12.
    Symplectic manifold, Subgroup, Manifold, Ranking, Coprime, Period mapping, Isomorphism, Hilbert scheme, Quasiprojective variety, Holomorph...
  • The Ly-$\alpha$ forest 1D flux power spectrum is a powerful probe of several cosmological parameters. Assuming a $\Lambda$CDM cosmology including massive neutrinos, we find that the latest SDSS DR14 BOSS and eBOSS Ly-$\alpha$ forest data is in very good agreement with current weak lensing constraints on $(\Omega_m, \sigma_8)$ and has the same small level of tension with Planck. We did not identify a systematic effect in the data analysis that could explain this small tension, but we show that it can be reduced in extended cosmological models where the spectral index is not the same on the very different times and scales probed by CMB and Ly-$\alpha$ data. A particular case is that of a $\Lambda$CDM model including a running of the spectral index on top of massive neutrinos. With combined Ly-$\alpha$ and Planck data, we find a slight (3$\sigma$) preference for negative running, $\alpha_s= -0.010 \pm 0.004$ (68% CL). Neutrino mass bounds are found to be robust against different assumptions. In the $\Lambda$CDM model with running, we find $\sum m_\nu <0.11$ eV at the 95% confidence level for combined Ly-$\alpha$ and Planck (temperature and polarisation) data, or $\sum m_\nu < 0.09$ eV when adding CMB lensing and BAO data. We further provide strong and nearly model-independent bounds on the mass of thermal warm dark matter: $m_X > 10\;\mathrm{keV}$ (95% CL) from Ly-$\alpha$ data alone.
    Planck mission, Flux power spectrum, Cosmic microwave background, Warm dark matter, Neutrino mass, Baryon acoustic oscillations, Cosmological parameters, Baryon Oscillation Spectroscopic Survey, Frequentist approach, Sloan Digital Sky Survey...
  • The light travel time differences in strong gravitational lensing systems allow an independent determination of the Hubble constant. This method has been successfully applied to several lens systems. The formally most precise measurements are, however, in tension with the recent determination of $H_0$ from the Planck satellite for a spatially flat six-parameter $\Lambda$CDM cosmology. We reconsider the uncertainties of the method, concerning the mass profile of the lens galaxies, and show that the formal precision relies on the assumption that the mass profile is a perfect power law. Simple analytical arguments and numerical experiments reveal that mass-sheet-like transformations yield significant freedom in choosing the mass profile, even when exquisite Einstein rings are observed. Furthermore, the characterization of the environment of the lens does not break that degeneracy, which is not physically linked to extrinsic convergence. We present an illustrative example where the multiple imaging properties of a composite (baryons + dark matter) lens can be extremely well reproduced by a power-law model having the same velocity dispersion, but with predictions for the Hubble constant that deviate by $\sim 20\%$. Hence we conclude that the impact of degeneracies between parametrized models has been underestimated in current $H_0$ measurements from lensing, and needs to be carefully reconsidered.
    Mass-sheet degeneracy, Mass profile, Mass distribution, Time delay, Gravitational lens galaxy, Hubble constant, Galaxy, Einstein radius, Strong gravitational lensing, Gravitational lensing...
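
    The mass-sheet transformation the abstract invokes can be demonstrated in a few lines. For a point-mass lens with $\theta_E=1$, mapping $\kappa \to \lambda\kappa + (1-\lambda)$ and rescaling the source leaves the image positions unchanged while Fermat-potential differences, and hence the inferred $H_0$, scale by $\lambda$ (a toy check, not the paper's composite model):

      import numpy as np

      beta, lam = 0.3, 0.8
      theta = np.array([(beta + np.sqrt(beta**2 + 4)) / 2,   # the two images of
                        (beta - np.sqrt(beta**2 + 4)) / 2])  # a point lens

      def fermat(theta, beta, lam):
          # Transformed potential, with the source rescaled to lam*beta.
          psi = lam * np.log(np.abs(theta)) + (1 - lam) * theta**2 / 2
          return (theta - lam * beta) ** 2 / 2 - psi

      ratio = fermat(theta, beta, lam).ptp() / fermat(theta, beta, 1.0).ptp()
      print(ratio)   # = lam: identical images, delays (and H0) rescaled by lam
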
  • It is well known that measurements of $H_0$ from gravitational lens time delays scale as $H_0 \propto 1-\kappa_E$, where $\kappa_E$ is the mean convergence at the Einstein radius $R_E$, but that all available lens data other than the delays provide no direct constraints on $\kappa_E$. The properties of the radial mass distribution constrained by lens data are $R_E$ and the dimensionless quantity $x = R_E\,\alpha''(R_E)/(1-\kappa_E)$, where $\alpha''(R_E)$ is the second derivative of the deflection profile at $R_E$. Lens models with too few degrees of freedom, like power-law models with densities $\rho \propto r^{-n}$, have a one-to-one correspondence between $x$ and $\kappa_E$ (for a power-law model, $x=2(n-2)$ and $\kappa_E=(3-n)/2=(2-x)/4$). This means that highly constrained lens models with few parameters quickly lead to very precise but inaccurate estimates of $\kappa_E$ and hence $H_0$. Based on experiments with a broad range of plausible dark matter halo models, it is unlikely that any current estimates of $H_0$ from gravitational lens time delays are more accurate than $\sim 10\%$, regardless of the reported precision.
    Navarro-Frenk-White profile, Mass distribution, Time delay, Gravitational lensing, Einstein radius, Degree of freedom, Mass to light ratio, Halo model, Dark matter halo, Hubble constant...
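
    A quick check of the power-law relations quoted above (using $\theta$ for angular radius): a density $\rho \propto r^{-n}$ gives $\kappa(\theta) = \tfrac{3-n}{2}\,(\theta/\theta_E)^{1-n}$ and a deflection $\alpha(\theta) = \theta_E\,(\theta/\theta_E)^{2-n}$, so $\alpha''(\theta_E) = (2-n)(1-n)/\theta_E$ and $1-\kappa_E = (n-1)/2$. Hence $x = \theta_E\,\alpha''(\theta_E)/(1-\kappa_E) = 2(2-n)(1-n)/(n-1) = 2(n-2)$, and inverting $\kappa_E = (3-n)/2$ gives $\kappa_E = (2-x)/4$, as stated.
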
  • The vacuum expectation values of conserved currents play an essential role in the generalized hydrodynamics of integrable quantum field theories. We use analytic continuation to extend these results for the excited state expectation values in a finite volume. Our formulas are valid for diagonally scattering theories and incorporate all finite size corrections.
    Expectation Value, Excited state, Thermodynamic Bethe Ansatz, Vacuum expectation value, Finite size, Quantum field theory, Scattering theory, Rapidity, Local thermal equilibrium, Analytic continuation...
  • We exploit null vectors of the fractional Virasoro algebra of the symmetric product orbifold to compute correlation functions of twist fields in the large $N$ limit. This yields a new method to derive correlation functions in these orbifold CFTs that is purely based on the symmetry algebra. We explore various generalisations, such as subleading (torus) contributions or correlation functions of other fields than the bare twist fields. We comment on the consequences of our computation for the $\text{AdS}_3/\text{CFT}_2$ correspondence.
    Orbifold, Covering space, Twist fields, Two-point correlation function, Conformal field theory, Twisted sector, Operator product expansion, Conjugacy class, Monodromy, Worldsheet...
  • The production of heavy quarkonium in heavy ion collisions has been used as an important probe of the quark-gluon plasma (QGP). Due to the plasma screening effect, the color attraction between the heavy quark antiquark pair inside a quarkonium is significantly suppressed at high temperature and thus no bound states can exist, i.e., they "melt". In addition, a bound heavy quark antiquark pair can dissociate if enough energy is transferred to it in a dynamical process inside the plasma. So one would expect the production of quarkonium to be considerably suppressed in heavy ion collisions. However, experimental measurements have shown that a large amount of quarkonia survive the evolution inside the high temperature plasma. It is realized that the in-medium recombination of unbound heavy quark pairs into quarkonium is as crucial as the melting and dissociation. Thus, phenomenological studies have to account for static screening, dissociation and recombination in a consistent way. But recombination is less understood theoretically than the melting and dissociation. Many studies using semi-classical transport equations model the recombination effect from the consideration of detailed balance at thermal equilibrium. However, these studies cannot explain how the system of quarkonium reaches equilibrium or estimate the time scale of the thermalization. Recently, another approach based on the open quantum system formalism started being used. In this framework, one solves a quantum evolution for in-medium quarkonium. Dissociation and recombination are accounted for consistently. However, the connection between the semi-classical transport equation and the quantum evolution is not clear. In this dissertation, I will try to address the issues raised above. As a warm-up project, I will first study a similar problem: $\alpha$-$\alpha$ scattering at the $^8$Be resonance inside an $e^-e^+\gamma$ plasma. By applying pionless effective field theory and thermal field theory, I will show how the plasma screening effect modifies the $^8$Be resonance energy and width. I will discuss the need to use the open quantum system formalism when studying the time evolution of a system embedded inside a plasma. Then I will use effective field theory of QCD and the open quantum system formalism to derive a Lindblad equation for bound and unbound heavy quark antiquark pairs inside a weakly-coupled QGP. Under the Markovian approximation and the assumption of weak coupling between the system and the environment, the Lindblad equation will be shown to turn into a Boltzmann transport equation if a Wigner transform is applied to the open system density matrix. These assumptions will be justified by using the separation of scales, which is assumed in the construction of effective field theory. I will show that the scattering amplitudes that contribute to the collision terms in the Boltzmann equation are gauge invariant and infrared safe. By coupling the transport equation of quarkonium with those of open heavy flavors and solving them using Monte Carlo simulations, I will demonstrate how the system of bound and unbound heavy quark antiquark pairs reaches detailed balance and equilibrium inside the QGP. Phenomenologically, my calculations can describe the experimental data on bottomonium production. Finally I will extend the framework to study the in-medium evolution of heavy diquarks and estimate the production rate of the doubly charmed baryon $\Xi_{cc}^{++}$ in heavy ion collisions.
    Quark-gluon plasma, Quarkonium, Heavy quark, Heavy ion collision, Effective field theory, Recombination, Transport equation, Melting, Relativistic Heavy Ion Collider, Screening effect...
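
    A minimal sketch of the Lindblad dynamics underlying the open-quantum-system approach described above: a two-level bound/unbound toy with one jump operator each for dissociation and recombination, integrated with an Euler step. Rates and energies are illustrative, not derived from QCD:

      import numpy as np

      H = np.diag([-0.5, 0.5])                               # bound vs unbound
      L_dis = np.sqrt(0.3) * np.array([[0., 0.], [1., 0.]])  # bound -> unbound
      L_rec = np.sqrt(0.1) * np.array([[0., 1.], [0., 0.]])  # unbound -> bound

      def lindblad_rhs(rho):
          drho = -1j * (H @ rho - rho @ H)
          for L in (L_dis, L_rec):
              LdL = L.conj().T @ L
              drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
          return drho

      rho, dt = np.array([[1., 0.], [0., 0.]], dtype=complex), 0.01
      for _ in range(2000):
          rho = rho + dt * lindblad_rhs(rho)
      print(np.real(np.diag(rho)))   # relaxes toward detailed balance (0.25, 0.75)
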
  • We consider a hypothetical planet with the same mass $m$, radius $R$, angular momentum $\mathbf S$, oblateness $J_2$, semimajor axis $a$, eccentricity $e$, inclination $I$, and obliquity $\varepsilon$ of the Earth orbiting a main sequence star with the same mass $M_\star$ and radius $R_\star$ of the Sun at a distance $r_\bullet \simeq 1\,\mathrm{parsec}\,\left(\mathrm{pc}\right)$ from a supermassive black hole in the center of the hosting galaxy with the same mass $M_\bullet$ of, say, $\mathrm{M87}^\ast$. We preliminarily investigate some dynamical consequences of the black hole's presence in the neighbourhood of such a stellar system for the planet's possibility of sustaining complex life over the eons. In particular, we obtain general analytic expressions for the long-term rates of change, doubly averaged over both the planetary and the galactocentric orbital periods $P_\mathrm{b}$ and $P_\bullet$, of $e,\,I,\,\varepsilon$, which are the main quantities directly linked to the stellar insolation. We find that, for certain orbital configurations, the planet's perihelion distance $q=a\left(1-e\right)$ may greatly shrink leading, in some cases, even to an impact with the star. Also $I$ may notably change, with variations of the order even of tens of degrees. On the other hand, $\varepsilon$ does not seem to be particularly affected, being shifted, at most, by $\simeq 0.02\,\mathrm{deg}$ over a Myr. Our results strongly depend on the eccentricity $e_\bullet$ of the galactocentric motion.
    Planet, Supermassive black hole, Earth, Star, Eccentricity, Inclination, Sun, Axial tilt, Perihelion, Earth-like planet...
  • The Galaxy Zoo (GZ) project has provided quantitative visual morphologies for over a million galaxies, and has been part of a reinvigoration of interest in the morphologies of galaxies and what they reveal about galaxy evolution. Morphological information collected by GZ has shown itself to be a powerful tool for studying galaxy evolution, and GZ continues to collect classifications - currently serving imaging from DECaLS on its main site, and running a variety of related projects hosted by the Zooniverse, the citizen science platform which came out of the early success of GZ. I highlight some of the results from the last twelve years, with a particular emphasis on linking morphology and dynamics, look forward to future projects in the GZ family, and provide a quick-start guide for how you can easily make use of citizen science techniques to analyse your own large and complex data sets.
    Classification, Galaxy, Citizen science, Galaxy Zoo, Galactic evolution, Crowdsourcing, Spiral arm, Sloan Digital Sky Survey, Dark Energy Camera Legacy Survey, Disk galaxy...
  • The WiggleZ Dark Energy Survey measured the redshifts of over 200,000 UV-selected (NUV<22.8 mag) galaxies on the Anglo-Australian Telescope. The survey detected the baryon acoustic oscillation signal in the large scale distribution of galaxies over the redshift range 0.2<z<1.0, confirming the acceleration of the expansion of the Universe and measuring the rate of structure growth within it. Here we present the final data release of the survey: a catalogue of 225415 galaxies and individual files of the galaxy spectra. We analyse the emission-line properties of these UV-luminous Lyman-break galaxies by stacking the spectra in bins of luminosity, redshift, and stellar mass. The most luminous (-25 mag < MFUV <-22 mag) galaxies have very broad H-beta emission from active nuclei, as well as a broad second component to the [OIII] (495.9 nm, 500.7 nm) doublet lines that is blue shifted by 100 km/s, indicating the presence of gas outflows in these galaxies. The composite spectra allow us to detect and measure the temperature-sensitive [OIII] (436.3 nm) line and obtain metallicities using the direct method. The metallicities of intermediate stellar mass (8.8<log(M*/Msun)<10) WiggleZ galaxies are consistent with normal emission-line galaxies at the same masses. In contrast, the metallicities of high stellar mass (10<log(M*/Msun)<12) WiggleZ galaxies are significantly lower than for normal emission-line galaxies at the same masses. This is not an effect of evolution as the metallicities do not vary with redshift; it is most likely a property specific to the extremely UV-luminous WiggleZ galaxies.
    Galaxy, WiggleZ, Stellar mass, Metallicity, Emission line galaxy, Expansion of the Universe, Telescopes, Baryon acoustic oscillations, Luminosity, Milky Way...
  • The aim of this paper is to describe how to use regularization and renormalization to construct a perturbative quantum field theory from a Lagrangian. We first define renormalizations and Feynman measures, and show that although there need not exist a canonical Feynman measure, there is a canonical orbit of Feynman measures under renormalization. We then construct a perturbative quantum field theory from a Lagrangian and a Feynman measure, and show that it satisfies perturbative analogues of the Wightman axioms, extended to allow time-ordered composite operators over curved spacetimes.
    Renormalization, Quantum field theory, Regularization, Feynman diagrams, Isomorphism, Normal order, Hopf algebra, Symmetric algebra, Meromorphic function, Infinitesimal...
  • Observational systematics complicate comparisons with theoretical models, limiting our understanding of galaxy evolution. In particular, different empirical determinations of the stellar mass function imply distinct mappings between the galaxy and halo masses, leading to diverse galaxy evolutionary tracks. Using our state-of-the-art STatistical sEmi-Empirical modeL, STEEL, we show fully self-consistent models capable of generating galaxy growth histories that simultaneously and closely agree with the latest data on satellite richness and star-formation rates at multiple redshifts and environments. Central galaxy histories are generated using the central halo mass tracks from state-of-the-art statistical dark matter accretion histories coupled to abundance matching routines. We show that overly flat high-mass slopes in the input stellar-mass-halo-mass relations, as predicted by previous works, imply non-physical stellar mass growth histories weaker than those implied by satellite accretion alone. Our best-fit models reproduce the satellite distributions at the largest masses and highest redshifts probed, the latest data on star formation rates and its bi-modality in the local Universe, and the correct fraction of ellipticals. Our results are important to predict robust and self-consistent stellar-mass-halo-mass relations and to generate reliable galaxy mock catalogues for the next generations of extra-galactic surveys such as Euclid and LSST.
    Galaxy, Star formation rate, Stellar mass function, Stellar-to-halo mass relation, Accretion, Stellar mass, Halo abundance matching, High mass, Virial mass, Star formation...
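
    A sketch of the abundance-matching step such semi-empirical models build on: assign stellar masses to halos by equating cumulative number densities, $n_h(>M_h) = n_*(>M_*)$. The input functions below are toy forms, not STEEL's calibrated ones:

      import numpy as np

      m_h = np.logspace(10, 15, 200)                            # halo mass, Msun
      n_h = 1e-2 * (m_h / 1e12) ** -0.9 * np.exp(-m_h / 1e14)   # toy n(>M_h)

      m_s = np.logspace(7, 12.5, 400)                           # stellar mass
      n_s = 1e-2 * (m_s / 3e10) ** -0.5 * np.exp(-m_s / 1e11)   # toy n(>M_*)

      # Both n(>M) decrease with mass, so interpolate on the reversed arrays.
      m_star_of_halo = np.interp(n_h[::-1], n_s[::-1], m_s[::-1])[::-1]
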
  • The ALMA-ALPINE [CII] survey (A2C2S) aims at characterizing the properties of a sample of normal star-forming galaxies (SFGs). ALPINE, the ALMA Large Program to INvestigate [CII] at Early times, targets 118 galaxies observed in the [CII]-158$\mu$m line and far-infrared (FIR) continuum emission in the period of rapid mass assembly, right after HI reionization ended, at redshifts 4<z<6. We present the survey science goals, the observational strategy and the sample selection of the 118 galaxies observed with ALMA, with a typical beam size of about 0.7\arcsec, or $<$ 6 kpc at the median redshift of the survey. The properties of the sample are described, including spectroscopic redshifts derived from the UV rest-frame, stellar masses and star-formation rates obtained from spectral energy distribution (SED) fitting. The observed properties derived from the ALMA data are presented and discussed in terms of the overall detection rate in [CII] and FIR continuum, with the observed signal-to-noise distribution. The sample is representative of the SFG population on the main sequence at these redshifts. The overall detection rate in [CII] is 64\%. From a visual inspection of the [CII] data cubes together with the large wealth of ancillary data, we find a surprisingly wide range of galaxy types, including 40\% mergers, 20\% extended and dispersion dominated, 13\% compact and 11\% rotating discs, the remaining 16\% being too faint to be classified. This diversity indicates that a wide array of physical processes must be at work at this epoch, first and foremost galaxy merging. This paper sets a reference sample for the gas distribution in normal SFGs at 4<z<6, a key epoch in galaxy assembly, ideally suited for studies with future facilities like the James Webb Space Telescope and extremely large telescopes.
    Galaxy, Atacama Large Millimeter Array, Star formation, Spectral energy distribution, Main sequence star, Classification, Star-forming galaxy, Stellar mass, Spectroscopic redshift, Galaxy merger...
  • Here we report the discovery with the Giant Metrewave Radio Telescope of an extremely large ($\sim$115 kpc in diameter) HI ring off-centered from a massive quenched galaxy, AGC 203001. This ring does not have any bright extended optical counterpart, unlike several other known ring galaxies. Our deep $g$, $r$, and $i$ optical imaging of the HI ring, using the MegaCam instrument on the Canada-France-Hawaii Telescope, however, shows several regions with faint optical emission at a surface brightness level of $\sim$28 mag/arcsec$^2$. Such an extended HI structure is very rare with only one other case known so far -- the Leo ring. Conventionally, off-centered rings have been explained by a collision with an "intruder" galaxy leading to expanding density waves of gas and stars in the form of a ring. However, in such a scenario the impact also leads to large amounts of star formation in the ring which is not observed in the ring presented in this paper. We discuss possible scenarios for the formation of such HI dominated rings.
    Galaxy, Star formation, Canada-France-Hawaii Telescope, Quenching, Optical identification, Early-type galaxy, Leo Ring, Host galaxy, Ring galaxy, Milky Way...
  • The minimum spanning tree (MST), a graph constructed from a distribution of points, draws lines between pairs of points so that all points are linked in a single skeletal structure that contains no loops and has minimal total edge length. The MST has been used in a broad range of scientific fields such as particle physics (to distinguish classes of events in collider collisions), in astronomy (to detect mass segregation in star clusters) and cosmology (to search for filaments in the cosmic web). Its success in these fields has been driven by its sensitivity to the spatial distribution of points and the patterns within. MiSTree, a public Python package, allows a user to construct the MST in a variety of coordinate systems, including Celestial coordinates used in astronomy. The package enables the MST to be constructed quickly by initially using a k-nearest neighbour graph (kNN, rather than a matrix of pairwise distances) which is then fed to Kruskal's algorithm to construct the MST. MiSTree enables a user to measure the statistics of the MST and provides classes for binning the MST statistics (into histograms) and plotting the distributions. Applying the MST will enable the inclusion of high-order statistics information from the cosmic web which can provide additional information to improve cosmological parameter constraints. This information has not been fully exploited due to the computational cost of calculating N-point statistics. MiSTree was designed to be used in cosmology but could be used in any field which requires extracting non-Gaussian information from point distributions. The source code for MiSTree is available on GitHub at https://github.com/knaidoo29/mistree
    Minimum spanning tree, Statistics, Graph, Python, Cosmology, Cosmic web, Mass segregation, Kruskal's algorithm, Cosmological parameters, Galaxy...
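
    The kNN-then-MST construction is easy to mirror with standard tools (a sketch of the idea, not MiSTree's actual API; scipy's routine performs the Kruskal-style step on the sparse graph):

      import numpy as np
      from sklearn.neighbors import kneighbors_graph
      from scipy.sparse.csgraph import minimum_spanning_tree

      points = np.random.default_rng(1).random((1000, 2))   # toy 2D point set
      knn = kneighbors_graph(points, n_neighbors=20, mode='distance')
      mst = minimum_spanning_tree(knn)                      # sparse (N, N) tree

      edge_lengths = mst.data   # one of the MST statistics one can histogram
      print(edge_lengths.size, edge_lengths.mean())   # N-1 edges for large enough k
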
  • We give an enumeration of all positive definite primitive Z-lattices in dimension >= 3 whose genus consists of a single isometry class. This is achieved by using bounds obtained from the Smith-Minkowski-Siegel mass formula to computationally construct the square-free determinant lattices with this property, and then repeatedly calculating pre-images under a mapping first introduced by G. L. Watson. We hereby complete the classification of single-class genera in dimensions 4 and 5 and correct some mistakes in Watson's classifications in other dimensions. A list of all single-class primitive Z-lattices has been compiled and incorporated into the Catalogue of Lattices.
    Square-free, Classification, Isometry, Ranking, Generalized Riemann hypothesis, Orthogonal group, Zeta function, Number theory, Embedding, Mass...
  • We construct a tower of arithmetic generators of the bigraded polynomial ring J_{*,*}^{w, O}(D_n) of weak Jacobi modular forms invariant with respect to the full orthogonal group O(D_n) of the root lattice D_n for 2\le n\le 8. This tower corresponds to the tower of strongly reflective modular forms on the orthogonal groups of signature (2,n) which determine the Lorentzian Kac-Moody algebras related to the BCOV (Bershadsky-Cecotti-Ooguri-Vafa)-analytic torsions. We prove that the main three generators of index one of the graded ring satisfy a special system of modular differential equations. We also found a general modular differential equation for the generator of weight 0 and index 1, which generates the automorphic discriminant of the moduli space of Enriques surfaces.
    Jacobi form, Modular form, Orthogonal group, Polynomial ring, Arithmetic, Root system, Subgroup, Kac-Moody algebra, Weyl group, Analytic torsion...
  • Introduction: The tau statistic uses geolocation and, usually, symptom onset time to assess global spatiotemporal clustering from epidemiological data. We explore how computation and analysis methods may bias estimates. Methods: Following a previous review of the statistic, we tested several aspects that could affect graphical hypothesis testing of clustering or bias clustering range estimates, by comparison with a baseline analysis of an open access measles dataset: these aspects included bootstrap sampling method and confidence interval (CI) type. Correct practice of hypothesis testing of no clustering and of clustering range estimation with the tau statistic is explained. Results: Our re-analysis of the dataset found evidence against no spatiotemporal clustering, p-value $\in$ [0,0.014] (global envelope test). We developed a tau-specific modification of the Loh & Stein bootstrap sampling method, whose more precise bootstrapped tau estimates led to the clustering endpoint estimate being 20% higher than previously published (36.0m, 95% bias-corrected and accelerated (BCa) CI (14.9,46.6), vs 30m). The estimated bias reduction led to an increase in the clustering area of elevated disease odds by 44%. We argue that the BCa CI is essential for asymmetric sample bootstrap distributions of tau estimates. Discussion: Bootstrap sampling method and CI type can bias the clustering range estimated. Moderate radial biases to the range estimate are more than doubled when considered on the areal scale, to which public health resources are proportional. We advocate proper implementation of this useful statistic, ultimately to reduce inaccuracies in control policy decisions made during disease clustering analysis.
    Confidence interval, Statistics, Statistical estimator, Graph, Null hypothesis, Rank, K-function, Two-point correlation function, Software, Thermodynamic Bethe Ansatz...
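
    The abstract's point about CI type is easy to reproduce on toy data: for a skewed bootstrap distribution, the BCa interval differs appreciably from the percentile interval (a sketch with scipy, not the tau statistic or the measles data themselves):

      import numpy as np
      from scipy.stats import bootstrap

      rng = np.random.default_rng(7)
      data = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # skewed sample

      for method in ("percentile", "BCa"):
          res = bootstrap((data,), np.mean, confidence_level=0.95,
                          method=method, random_state=rng)
          print(method, res.confidence_interval)
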
  • When studying quantum field theories and lattice models, it is often useful to analytically continue the number of field or spin components from an integer to a real number. In spite of this, the precise meaning of such analytic continuations has never been fully clarified, and in particular the symmetry of these theories is obscure. We clarify these issues using Deligne categories and their associated Brauer algebras, and show that these provide logically satisfactory answers to these questions. Simple objects of the Deligne category generalize the notion of an irreducible representations, avoiding the need for such mathematically nonsensical notions as vector spaces of non-integer dimension. We develop a systematic theory of categorical symmetries, applying it in both perturbative and non-perturbative contexts. A partial list of our results is: categorical symmetries are preserved under RG flows; continuous categorical symmetries come equipped with conserved currents; CFTs with categorical symmetries are necessarily non-unitary.
    Lattice model, Quantum field theory, Renormalisation group flow, Irreducible representation, Brauer algebra, Analytic continuation, Conformal field theory, Vector space, Symmetry, Theory...
  • Noise sources are ubiquitous in Nature and give rise to a description of quantum systems in terms of stochastic Hamiltonians. Decoherence dominates the noise-averaged dynamics and leads to dephasing and the decay of coherences in the eigenbasis of the fluctuating operator. For energy-diffusion processes stemming from fluctuations of the system Hamiltonian, the characteristic decoherence time is shown to be proportional to the heat capacity. We analyze the decoherence dynamics of entangled CFTs and characterize the dynamics of the purity and logarithmic negativity, which are shown to decay monotonically as a function of time. The converse is true for the quantum Renyi entropies. From the short-time asymptotics of the purity, the decoherence rate is identified and shown to be proportional to the central charge. The fixed point characterizing long times of evolution depends on the presence of degeneracies in the energy spectrum. We show how information loss associated with decoherence can be attributed to its leakage to an auxiliary environment, and discuss what the gravity duals of decoherence dynamics in holographic CFTs look like in AdS/CFT. We find that the inner horizon region of the eternal AdS black hole is highly squeezed due to decoherence.
    Conformal field theoryHamiltonianDensity matrixEntanglementPartition functionRenyi entropyAnti de Sitter spaceSaddle pointEntropyMaster equation...
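    For orientation, the textbook noise-averaged dynamics behind such dephasing (a standard white-noise form, offered as context rather than the paper's exact setup): if an operator $V$ in the Hamiltonian fluctuates with white noise of strength $\gamma$, the averaged density matrix obeys
      $$ \frac{d\rho}{dt} = -i[H,\rho] - \frac{\gamma}{2}\,[V,[V,\rho]], $$
    so in the eigenbasis of $V$ the coherences decay as $\rho_{mn}(t) = \rho_{mn}(0)\,e^{-\frac{\gamma}{2}(v_m - v_n)^2 t}$ (in the interaction picture), which makes the monotonic decay of the purity $\mathcal{P}(t) = \mathrm{Tr}\,\rho^2(t)$ explicit.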
  • Modelling and interpreting the SEDs of galaxies has become one of the key tools at the disposal of extragalactic astronomers. Ideally, we can hope that, through a detailed study of its SED, we can infer the correct physical properties and the evolutionary history of a galaxy. In the past decade, panchromatic SED fitting, i.e. modelling the SED over the entire UV-submm wavelength regime, has seen enormous advances. Several advanced new codes have been developed, nearly all based on Bayesian inference modelling. In this review, we briefly touch upon the different ingredients necessary for panchromatic SED modelling, and discuss the methodology and some important aspects of Bayesian SED modelling. The current uncertainties and limitations of panchromatic SED modelling are discussed, and we explore some avenues by which the models and techniques could be improved in the near future.
    Spectral energy distributionGalaxyStar formation historiesRadiative transferBayesianBayesian approachStellar evolutionStellar populationsInterstellar mediumInterstellar dust...
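    A minimal sketch of the Bayesian inference step at the heart of such codes, assuming a precomputed model grid and a Gaussian flux likelihood; the single-parameter grid, band count, and all names are illustrative simplifications of real panchromatic fitting.

      import numpy as np

      rng = np.random.default_rng(1)

      # toy ingredients: a 5-band template SED and a 1D grid over one parameter
      template = np.array([1.0, 2.5, 4.0, 3.2, 1.8])  # model fluxes, arbitrary units
      params = np.linspace(0.1, 10.0, 500)            # e.g. a mass/normalisation scale
      models = params[:, None] * template             # model grid, shape (500, 5)

      # simulated observation of one galaxy with 10% flux uncertainties
      truth = 3.0
      err = 0.1 * truth * template
      obs = truth * template + rng.normal(0.0, err)

      # Gaussian likelihood on the grid with a flat prior gives the posterior
      chi2 = np.sum(((obs - models) / err) ** 2, axis=1)
      post = np.exp(-0.5 * (chi2 - chi2.min()))
      post /= np.trapz(post, params)

      p_mean = np.trapz(params * post, params)         # posterior mean
      cdf = np.cumsum(post) * (params[1] - params[0])  # approximate CDF
      p16, p84 = np.interp([0.16, 0.84], cdf, params)  # 68% credible interval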
  • MOND is a paradigm that aims to account for the mass discrepancies in the Universe without invoking `dark' components, such as `dark matter' and `dark energy'. It does so by supplanting Newtonian dynamics and General Relativity, departing from them at very low accelerations. Having in mind readers who are historians and philosophers of science, as well as physicists and astronomers, I describe in this review the main aspects of MOND -- its statement, its basic tenets, its main predictions, and the tests of these predictions -- contrasting it with the dark-matter paradigm. I then discuss possible wider ramifications of MOND, for example the potential significance of the MOND constant, $a_0$, with possible implications for the roots of MOND in cosmology. Along the way I point to parallels with several historical instances of nascent paradigms: in particular, the emergence of the Copernican world picture, of quantum physics, and of relativity, as regards their initial advent, their development, their schematic structure, and their ramifications; for example, the interplay between theories and their corollary laws, and the centrality of a new constant whose converging values are deduced from seemingly unrelated manifestations of these laws. I demonstrate how MOND has already unearthed a number of unsuspected laws of galactic dynamics (to which, indeed, $a_0$ is central), predicting them a priori and leading to their subsequent verification. I parallel the struggle of the new with the old paradigms, and the appearance of hybrid paradigms at such times of struggle. I also try to identify in the history of those established paradigms a stage that can be likened to that of MOND today.
    Modified Newtonian DynamicsDark matterGalaxyNewtonian dynamicsdeep-MOND limitGeneral relativityRotation CurveCosmologyMilky WayDisk galaxy...
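    For readers who want the schematic structure in formulas (the standard statement of the paradigm, reproduced for orientation): MOND relates the actual acceleration $a$ to the Newtonian one $a_N$ through an interpolating function $\mu$,
      $$ \mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad \mu(x) \to 1 \ \ (x \gg 1), \qquad \mu(x) \to x \ \ (x \ll 1), $$
    so in the deep-MOND limit $a = \sqrt{a_N a_0}$; for circular orbits far outside a mass $M$ this yields $v^4 = G M a_0$, i.e. asymptotically flat rotation curves and a baryonic Tully-Fisher relation, with the empirically converging value $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}$.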
  • Dust plays an important role in shaping a galaxy's spectral energy distribution (SED). It absorbs ultraviolet (UV) to near-infrared (NIR) radiation and re-emits this energy in the far-infrared (FIR). The FIR is essential to understand dust in galaxies. However, deep FIR observations require a space mission, and none is still active today. We aim to infer the FIR emission across the six Herschel bands, along with the dust luminosity, mass, and effective temperature, based on the available UV to mid-infrared (MIR) observations. We also want to estimate the uncertainties of these predictions, compare our method to energy-balance SED fitting, and determine possible limitations of the model. We propose a machine learning framework to predict the FIR fluxes from 14 UV-MIR broadband fluxes. We used a low-redshift sample built by combining DustPedia and H-ATLAS, and extracted Bayesian flux posteriors through SED fitting. We trained shallow neural networks to predict the far-infrared fluxes, their uncertainties, and the dust properties, and evaluated them on a test set using the root mean square error (RMSE) in log-space. Our results (RMSE = 0.19 dex) significantly outperform UV-MIR energy-balance SED fitting (RMSE = 0.38 dex), and are inherently unbiased. We can identify when the predictions are off, for example when the input has large uncertainties on WISE 22, or when the input does not resemble the training set. The galaxies for which we have UV-FIR observations can thus serve as a blueprint for galaxies that lack FIR data. The result is a 'virtual FIR telescope' that can be applied to large optical-MIR galaxy samples, helping to bridge the gap until the next FIR mission.
    GalaxySpectral energy distributionLuminosityWide-field Infrared Survey ExplorerNeural networkATLAS Experiment at CERNBayesianMachine learningTraining setNear-infrared...
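    A minimal sketch of the kind of shallow-network regression described, run on synthetic stand-in data; the real pipeline trains on DustPedia + H-ATLAS flux posteriors, and the architecture, sizes, and names below are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)

      # stand-in data: 14 UV-MIR log-fluxes in, 6 FIR log-fluxes out
      X = rng.normal(size=(5000, 14))
      W = rng.normal(size=(14, 6))
      Y = X @ W + 0.1 * rng.normal(size=(5000, 6))  # toy linear "truth" plus noise

      n_train = 4000
      scaler = StandardScaler().fit(X[:n_train])
      net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
      net.fit(scaler.transform(X[:n_train]), Y[:n_train])

      pred = net.predict(scaler.transform(X[n_train:]))
      rmse = np.sqrt(np.mean((pred - Y[n_train:]) ** 2))  # RMSE in log-space (dex)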
  • We review the features of Dark Matter as a particle, presenting some old and new instructive models and exploring their physical implications in the early universe and in the process of structure formation. We also present a schematic overview of Dark Matter searches and introduce the most promising candidates for the role of the Dark Matter particle.
    Dark matterWeakly interacting massive particleDark matter particleNeutrinoAxionStandard ModelSterile neutrinoCosmic rayFreeze-outStructure formation...
  • We report a B-mode power spectrum measurement from the cosmic microwave background (CMB) polarization anisotropy observations made using the SPTpol instrument on the South Pole Telescope. This work uses 500 deg$^2$ of SPTpol data, a five-fold increase over the last SPTpol B-mode release. As a result, the bandpower uncertainties have been reduced by more than a factor of two, and the measurement extends to lower multipoles: $52 < \ell < 2301$. Data from both 95 and 150 GHz are used, allowing for three cross-spectra: 95 GHz x 95 GHz, 95 GHz x 150 GHz, and 150 GHz x 150 GHz. B-mode power is detected at very high significance; we find $P(BB < 0) = 5.8 \times 10^{-71}$, corresponding to an $18.1\sigma$ detection of power. An upper limit is set on the tensor-to-scalar ratio, $r < 0.44$ at 95% confidence (the expected $1\sigma$ constraint on $r$ given the measurement uncertainties is 0.22). We find the measured B-mode power is consistent with the Planck best-fit $\Lambda$CDM model predictions. Scaling the predicted lensing B-mode power in this model by a factor $A_{\rm lens}$, the data prefer $A_{\rm lens} = 1.17 \pm 0.13$. These data are currently the most precise measurements of B-mode power at $\ell > 320$.
    Cosmic microwave backgroundBundleB-modesCalibrationSouth Pole TelescopeCosmological magnetic fieldCosmologyInflationary gravitational waveSample varianceVenus...
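    The two headline parameters enter through the conventional decomposition of the B-mode spectrum (the standard parametrization, not a quotation of the paper's likelihood):
      $$ C_\ell^{BB} = A_{\mathrm{lens}}\, C_\ell^{BB,\,\mathrm{lens}} + r\, C_\ell^{BB,\,\mathrm{tensor}}(r{=}1), $$
    so $A_{\mathrm{lens}} = 1.17 \pm 0.13$ measures the lensing B-mode amplitude relative to the Planck best-fit $\Lambda$CDM prediction, while $r < 0.44$ (95% confidence) bounds the primordial tensor contribution.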