Recently bookmarked papers

with concepts:
  • We introduce and apply a new approach to probe the response of galactic stellar haloes to the interplay between cosmological merger histories and galaxy formation physics. We perform dark-matter-only, zoomed simulations of two Milky Way-mass hosts and make targeted, controlled changes to their cosmological histories using the genetic modification technique. Populating each history's stellar halo with a semi-empirical, particle-tagging approach then enables a controlled study, with all instances converging to the same large-scale structure, dynamical mass and stellar mass at $z=0$ as their reference. These related merger scenarios alone generate an extended spread in stellar halo mass fractions (1.5 dex) comparable to the observed population. The largest scatter is achieved by growing late ($z\leq1$) major mergers that spread out existing stars to create massive, in-situ dominated stellar haloes. Enhancing the last major merger at $z\sim2$ brings more accreted stars into the inner regions, resulting in smaller scatter in the outskirts, which are predominantly built by subsequent minor events. Exploiting the flexibility of our semi-empirical approach, we show that the diversity of stellar halo masses across scenarios is reduced by allowing shallower slopes in the stellar mass--halo mass relation for dwarf galaxies, while it is conserved when central stars are born with hotter kinematics across cosmic time. The merger-dependent diversity of stellar haloes thus responds distinctly to assumptions in modelling the central and dwarf galaxies respectively, opening exciting prospects to constrain star formation and feedback at different galactic mass scales with the coming generation of deep, photometric observatories.
    Stellar halo, Virial mass, Galaxy, Stellar mass, Star, Dwarf galaxy, Galaxy Formation, Dark matter, Milky Way, Star formation...
  • We present a simple regulator-type framework designed specifically for modelling the formation of dwarf galaxies (a toy version is sketched below). We explore the sensitivity of model predictions for the stellar mass--halo mass and stellar mass--metallicity relations to different modelling choices and parameter values. Despite its simplicity, when coupled with realistic mass accretion histories of haloes from simulations and reasonable choices for model parameter values, the framework can reproduce a remarkably broad range of observed properties of dwarf galaxies over seven orders of magnitude in stellar mass. In particular, we show that the model can simultaneously match observational constraints on the stellar mass--halo mass relation, observed relations between stellar mass and gas-phase and stellar metallicities, gas mass, size, and star formation rate, as well as the general form and diversity of star formation histories (SFHs) of observed dwarf galaxies. The model can thus be used to predict photometric properties of dwarf galaxies hosted by dark matter haloes in $N$-body simulations, such as colors, surface brightnesses, and mass-to-light ratios, and to forward model observations of dwarf galaxies. We present examples of such modelling and show that the colors and surface brightness distributions of model galaxies are in good agreement with observed distributions for dwarfs in recent observational surveys. We also show that, in contrast with the common assumption, the absolute magnitude--halo mass relation is generally predicted to have a non-power-law form in the dwarf regime, and that the fraction of haloes that host detectable ultrafaint galaxies is sensitive to the reionization redshift $z_{\rm rei}$ and is predicted to be consistent with observations for $z_{\rm rei} \lesssim 9$.
    Galaxy, Dwarf galaxy, Stellar mass, Star formation, Reionization, Virial mass, Accretion, Metallicity, Star, Star formation rate...
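    A minimal, illustrative sketch in Python of a generic regulator-type ("bathtub") model of the kind described above; the inflow history, depletion time tau and mass-loading factor eta below are placeholder assumptions, not the paper's calibrated ingredients:

      import numpy as np

      def regulator(t_end_gyr, inflow, tau=2.0, eta=5.0, dt=0.01):
          """Toy gas regulator: dMg/dt = inflow(t) - (1 + eta) * SFR, with SFR = Mg / tau."""
          mg = mstar = 0.0
          for t in np.arange(0.0, t_end_gyr, dt):
              sfr = mg / tau                                  # star formation rate [Msun/Gyr]
              mg += (inflow(t) - (1.0 + eta) * sfr) * dt      # gas gains inflow, loses SF + outflow
              mstar += sfr * dt
          return mg, mstar

      # Example: constant inflow of 1e9 Msun/Gyr for 10 Gyr
      mg, mstar = regulator(10.0, lambda t: 1e9)
      print(f"M_gas = {mg:.2e} Msun, M_star = {mstar:.2e} Msun")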
  • Let $X=\mathrm{SL}_3(\mathbb{R})/\mathrm{SL}_3(\mathbb{Z})$, and $g_t=\mathrm{diag}(e^{2t}, e^{-t}, e^{-t})$. Let $\nu$ denote the push-forward of the normalized Lebesgue measure on a straight line segment in the expanding horosphere of $\{g_t\}_{t>0}$ by the map $h\mapsto h\mathrm{SL}_3(\mathbb{Z})$, $\mathrm{SL}_3(\mathbb{R})\to X$. We explicitly give necessary and sufficient Diophantine conditions on the line for equidistribution of each of the following families in $X$: (1) $g_t$-translates of $\nu$ as $t\to\infty$. (2) averages of $g_t$-translates of $\nu$ over $t\in[0,T]$ as $T\to\infty$. (3) $g_{t_i}$-translates of $\nu$ for some $t_i\to\infty$. We apply the dynamical result to show that Lebesgue-almost every point on the planar line $y=ax+b$ is not Dirichlet-improvable if and only if $(a,b)\notin\mathbb{Q}^2$.
    Subgroup, Lattice (order), Irreducible representation, Unipotent, Torus, Diophantine approximation, Irreducible component, Fundamental representation, Contraposition, Ratner's theorems...
  • It is a long-standing conjecture that any CFT with a large central charge and a large gap $\Delta_{\text{gap}}$ in the spectrum of higher-spin single-trace operators must be dual to a local effective field theory in AdS. We prove a sharp form of this conjecture by deriving numerical bounds on bulk Wilson coefficients in terms of $\Delta_{\text{gap}}$ using the conformal bootstrap. Our bounds exhibit the scaling in $\Delta_{\text{gap}}$ expected from dimensional analysis in the bulk. Our main tools are dispersive sum rules that provide a dictionary between CFT dispersion relations and S-matrix dispersion relations in appropriate limits. This dictionary allows us to apply recently-developed flat-space methods to construct positive CFT functionals. We show how AdS$_{4}$ naturally resolves the infrared divergences present in 4D flat-space bounds. Our results imply the validity of twice-subtracted dispersion relations for any S-matrix arising from the flat-space limit of AdS/CFT.
    Conformal field theory, Anti de Sitter space, S-matrix, Conformal Bootstrap, Wilson coefficients, Higher spin, Infrared divergence, Central charge, AdS/CFT correspondence, Effective field theory...
  • We show that the comma category $(\mathcal{F}\downarrow\mathbf{Grp})$ of groups under the free group functor $\mathcal{F}: \mathbf{Set} \to \mathbf{Grp}$ contains the category $\mathbf{Gph}$ of simple graphs as a full coreflective subcategory. More broadly, we generalize the embedding of topological spaces into Steven Vickers' category of topological systems to a simple technique for embedding certain categories into comma categories, then show as a straightforward application that simple graphs are coreflective in $(\mathcal{F}\downarrow\mathbf{Grp})$.
    Comma category, Simple graph, Free group, Subcategory, Embedding, Category of groups, Graph, Artin group, Boolean algebra, Attention...
  • The $T\bar{T}+\Lambda_2$ deformation of a holographic CFT on a cylinder (or torus in Euclidean signature) inhabits the stretched cosmological horizon in dS$_3$ when the deformation parameter is tuned to a particular value. I will describe how this insight allows us to compute the entropy of the stretched cosmological horizon, including the logarithmic correction.
    Conformal field theory, Horizon, Density of states, Entropy, Partition function, Ds meson, Torus, Modular invariance, Cosmological horizon, Saddle point...
  • We prove that the marked triangulation functor from the category of marked cubical sets equipped with a model structure for ($n$-trivial, saturated) comical sets to the category of marked simplicial sets equipped with a model structure for ($n$-trivial, saturated) complicial sets is a Quillen equivalence. Our proof is based on the theory of cones, previously developed by the first two authors together with Lindsey and Sattler.
    Model structure, Simplicial set, Monomorphism, Isomorphism, Fibration, Disorder, Morphism, Tensor product, Subcategory, Derived functor...
  • The X9.3 flare SOL20170906T11:55 was observed by the CsI detector aboard the first Chinese X-ray observatory, the Hard X-ray Modulation Telescope (Insight-HXMT). Using a wavelet method, we report quasi-periodic pulsations (QPPs) with a period of about 22 s during the impulsive phase. The 100-800 keV spectra evolved with the gamma-ray flux, the power-law photon index going from $\sim 1.8$ before the peak, to $\sim 2.0$ around the flare peak, and back to $\sim 1.8$. Gyrosynchrotron microwave spectral analysis reveals a gyrosynchrotron source of $36.6 \pm 0.6 \arcsec$ radius with a mean transverse magnetic field of around 608.2 Gauss, and a non-thermal ($\ge$ 10 keV) electron density of about $10^{6.7} \mathrm{cm}^{-3}$ at peak time. The magnetic field strength followed the evolution of the high-frequency radio flux. Further gyrosynchrotron source modeling implies a fairly steady source whose non-thermal electron density and transverse magnetic field evolve similarly to the higher-frequency light curves. Time-resolved spectral analysis indicates that these non-thermal electrons are accelerated by repeated magnetic reconnection, likely from a lower coronal source.
    Hard X-ray Modulation Telescope, Hard X-ray, Corona, Light curve, Spectral analysis, WIND, Wavelet, Thermal bremsstrahlung, Magnetic reconnection, Region of interest...
  • The starting point for understanding cluster properties is the putative global minimum and all the nearby local energy minima; locating them, however, is computationally expensive and challenging due to the combinatorial explosion problem. The temperature-dependent relative populations and spectroscopic properties of a molecule can be computed approximately using statistical thermodynamics. Here, we investigate the temperature- and entropy-driven isomer distribution of the fluxional Be$_6$B$_{11}^{-}$ cluster and the effect of temperature on its infrared spectroscopy and relative populations. We identify the vibrational modes of the cluster that contribute significantly to the zero-point energy. Two steps are taken to compute the temperature-dependent relative populations: first, using a genetic algorithm coupled to density functional theory, we perform an extensive and systematic exploration of the potential/free energy surface of the Be$_6$B$_{11}^{-}$ cluster to locate the putative global minimum and elucidate the low-energy structures. Second, the temperature effects on the relative populations are determined from the thermodynamic properties and Boltzmann factors (the weighting step is sketched below). The temperature-dependent relative populations show that entropy and temperature are essential for determining the global minimum. We compute the temperature-dependent total infrared spectrum as a Boltzmann-factor-weighted sum of each isomer's infrared spectrum and find that at finite temperature, the total infrared spectrum is an admixture of the spectrum of the lowest-energy structure and those of its higher-energy isomers. The methodology and results describe the thermal effects on the relative populations and the infrared spectra.
    C-symmetry, Entropy, Infrared limit, Zero-point energy, Gibbs Free Energy, Vibration, Partition function, Genetic algorithm, Density functional theory, Optimization...
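    A minimal sketch of the Boltzmann-weighting step described above; the isomer free-energy differences and spectra here are placeholders:

      import numpy as np

      KB_HARTREE = 3.1668115e-6  # Boltzmann constant [Hartree/K]

      def boltzmann_weights(delta_g, temperature):
          """Relative populations p_i proportional to exp(-dG_i / kT)."""
          w = np.exp(-np.asarray(delta_g) / (KB_HARTREE * temperature))
          return w / w.sum()

      def total_ir_spectrum(spectra, delta_g, temperature):
          """Temperature-dependent spectrum as a population-weighted sum of isomer spectra."""
          return boltzmann_weights(delta_g, temperature) @ np.asarray(spectra)

      delta_g = [0.0, 0.002, 0.005]        # free energies relative to the global minimum [Hartree]
      spectra = np.random.rand(3, 500)     # placeholder isomer IR intensities on a common grid
      print(boltzmann_weights(delta_g, 298.15))
      mixed = total_ir_spectrum(spectra, delta_g, 298.15)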
  • In order to find the possible progenitors of Milky Way globular clusters, we perform orbit integrations to track the orbits of 151 Galactic globular clusters and the eleven classical Milky Way satellite galaxies backward in time for 11 Gyr in a Milky-Way-plus-satellites potential including the effect of dynamical friction on the satellites. To evaluate possible past associations, we devise a globular-cluster--satellite binding criterion based on the satellite's tidal radius and escape velocity (sketched below), and we test it on globular clusters and satellites associated with the Sagittarius dwarf and with the Large Magellanic Cloud. For these, we successfully recover the dynamical associations highlighted by previous studies and we derive their time of accretion by the Galaxy. Assuming that Milky Way globular clusters are and have been free of dark matter and thus consist of stars alone, we demonstrate that none of the globular clusters show any clear association with the eleven classical Milky Way satellites, even though a large fraction of them are believed to be accreted. This means that accreted globular clusters either came in as part of now-disrupted satellite galaxies or that globular clusters may have had dark matter halos in the past -- as suggested by the similar metallicity between globular clusters and dwarf galaxies.
    Globular cluster, Milky Way, Large Magellanic Cloud, Milky Way satellite, Satellite galaxy, Accretion, Tidal radius, Dynamical friction, Dark Energy Survey, Dwarf galaxy...
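    A minimal sketch of a binding criterion of the kind described above, using the Jacobi approximation for the tidal radius and a point-mass escape velocity; all masses and distances are illustrative:

      import numpy as np

      G = 4.301e-6  # gravitational constant [kpc (km/s)^2 / Msun]

      def tidal_radius(m_sat, r_gal, m_gal_enclosed):
          """Jacobi radius of a satellite at galactocentric distance r_gal [kpc]."""
          return r_gal * (m_sat / (3.0 * m_gal_enclosed)) ** (1.0 / 3.0)

      def is_bound(dr, dv, m_sat, r_t):
          """Bound if the cluster lies inside the satellite's tidal radius and moves
          slower than the satellite's (point-mass) escape velocity at that distance."""
          v_esc = np.sqrt(2.0 * G * m_sat / dr)  # [km/s]
          return dr < r_t and dv < v_esc

      r_t = tidal_radius(m_sat=1e9, r_gal=50.0, m_gal_enclosed=5e11)
      print(r_t, is_bound(dr=2.0, dv=15.0, m_sat=1e9, r_t=r_t))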
  • [arXiv:2106.06405] The identification of PeVatrons, hadronic particle accelerators reaching the knee of the cosmic ray spectrum (few $10^{15}$ eV), is crucial to understand the origin of cosmic rays in the Galaxy. We provide an update on the unidentified source HESS J1702-420, a promising PeVatron candidate. We present new observations of HESS J1702-420 made with the High Energy Stereoscopic System (H.E.S.S.), processed using improved analysis techniques. The analysis configuration was optimized to enhance the collection area at the highest energies. We applied a three-dimensional (3D) likelihood analysis to model the source region and adjust non-thermal radiative spectral models to the $\gamma$-ray data. We also analyzed archival data from the Fermi Large Area Telescope (LAT) to constrain the source spectrum at $\gamma$-ray energies >10 GeV. We report the detection of a new source component called HESS J1702-420A, which was separated from the bulk of the TeV emission at a $5.4\sigma$ confidence level. The power-law $\gamma$-ray spectrum of HESS J1702-420A extends with an index of $\Gamma=1.53\pm0.19_\text{stat}\pm0.20_\text{sys}$, and without curvature, up to the 64-113 TeV energy band, in which it was detected by H.E.S.S. at a $4.0\sigma$ confidence level. This provides evidence of source emission up to $100\,\text{TeV}$, which makes HESS J1702-420A a compelling candidate site for the presence of extremely high energy cosmic rays. Remarkably, in a hadronic scenario, the cut-off energy of the proton distribution powering HESS J1702-420A is found to be higher than 0.5 PeV at a 95% confidence level. HESS J1702-420A therefore becomes one of the most solid PeVatron candidates detected so far in H.E.S.S. data, although a leptonic origin of its emission could not be ruled out.
    HESS telescope, Cosmic ray, Region of interest, Pulsar, Earth, Supernova remnant, Pulsar wind nebula, Line of sight, Diffusion coefficient, Galactic plane...
  • Magnetic helicity is robustly conserved in systems with large magnetic Reynolds numbers, including most systems of astrophysical interest. This plays a major role in suppressing the kinematic large-scale dynamo and driving the large-scale dynamo through the magnetic helicity flux. Numerical simulations of astrophysical systems typically lack sufficient resolution to enforce global magnetic helicity conservation over several dynamical times. Errors in the internal distribution of magnetic helicity are equally serious and possibly larger. Here we propose an algorithm for enforcing strict local conservation of magnetic helicity in the Coulomb gauge in numerical simulations.
    Magnetic helicity, Eddy, Numerical simulation, Helicity, Coulomb gauge, Dissipation, Turbulence, Long-range magnetic fields, Kinematics, Magnetohydrodynamics...
  • Planck data provide precise constraints on cosmological parameters when assuming the base $\Lambda$CDM model, including a $0.17\%$ measurement of the age of the Universe, $t_0=13.797 \pm 0.023\,{\rm Gyr}$. However, the persistence of the "Hubble tension" calls the base $\Lambda$CDM model's completeness into question and has spurred interest in models such as Early Dark Energy (EDE) that modify the assumed expansion history of the Universe. We investigate the effect of EDE on the redshift-time relation $z \leftrightarrow t$ (the base-$\Lambda$CDM side of the conversion is sketched below) and find that it differs from the base $\Lambda$CDM model by at least ${\approx} 4\%$ at all $t$ and $z$. As long as EDE remains observationally viable, any inferred $t \leftarrow z$ or $z \leftarrow t$ quoted to a higher level of precision does not reflect the current status of our understanding of cosmology. This uncertainty has important astrophysical implications: the reionization epoch - $10>z>6$ - corresponds to disjoint lookback time periods in the base $\Lambda$CDM and EDE models, and the EDE value of $t_0=13.25 \pm 0.17~{\rm Gyr}$ is in tension with published ages of some stars, star clusters, and ultra-faint dwarf galaxies. However, most published stellar ages do not include an uncertainty in accuracy (due to, e.g., uncertain distances and stellar physics) that is estimated to be $\sim7-10\%$, potentially reconciling stellar ages with $t_{0,\rm EDE}$. We discuss how the big data era for stars is providing extremely precise ages ($<1\%$) and how improved distances and treatment of stellar physics such as convection could result in ages accurate to $4-5\%$, comparable to the current accuracy of $t \leftrightarrow z$. Such precise and accurate stellar ages can provide detailed insight into the high-redshift Universe independent of a cosmological model.
    Early dark energy, Cosmology, Stellar ages, Star, White dwarf, Globular cluster, Ultra-faint dwarf spheroidal galaxy, The age of the Universe, Reionization, Hubble constant tension...
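    A minimal sketch of the base-$\Lambda$CDM $z \leftrightarrow t$ conversion using astropy (EDE would require a custom expansion history, not included here):

      import astropy.units as u
      from astropy.cosmology import Planck18, z_at_value

      print(Planck18.age(0))                        # t0 for the Planck 2018 parameters
      print(Planck18.lookback_time([6, 10]))        # lookback times bracketing reionization
      print(z_at_value(Planck18.age, 1.0 * u.Gyr))  # z when the Universe was 1 Gyr old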
  • There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation we do not aim to necessarily propose a new method, but strive to identify a robust and effective recipe for making state-of-the-art large-scale models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance (the standard distillation loss is sketched below). In particular, we uncover that there are certain implicit design choices which may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which were not previously articulated in the literature. We back up our findings with a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets and, in particular, obtain a state-of-the-art ResNet-50 model for ImageNet, which achieves 82.8\% top-1 accuracy.
    Distillation, Architecture, Scheduling, Wide-scale modelling, Training Image, Overfitting, Optimization, Image Processing, Hyperparameter, Image manifolds...
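    A minimal sketch of the standard temperature-scaled knowledge-distillation loss that studies of this kind build on; the temperature value is an illustrative hyperparameter:

      import torch
      import torch.nn.functional as F

      def distillation_loss(student_logits, teacher_logits, t=4.0):
          """KL divergence between temperature-softened teacher and student
          distributions; the t**2 factor keeps gradient magnitudes comparable across t."""
          log_p_student = F.log_softmax(student_logits / t, dim=-1)
          p_teacher = F.softmax(teacher_logits / t, dim=-1)
          return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

      s, te = torch.randn(8, 1000), torch.randn(8, 1000)  # a batch of 8 over 1000 classes
      print(distillation_loss(s, te).item())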
  • Deep learning's successes are often attributed to its ability to automatically discover new representations of the data, rather than relying on handcrafted features like other learning methods. We show, however, that deep networks learned by the standard gradient descent algorithm are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function (the kernel). This greatly enhances the interpretability of deep network weights, by elucidating that they are effectively a superposition of the training examples. The network architecture incorporates knowledge of the target function into the kernel. This improved understanding should lead to better learning algorithms.
    Deep learning, Superposition, Architecture, Training set, Optimization, Multilayer perceptron, Positive definite kernel, Feature space, Machine learning, Graphical model...
  • This working document enables the analysis, as open data, of the information and status of cases associated with (SARS-CoV-2) COVID-19 at the municipal, state, and national levels, with a daily record of patients classified by age, sex, and comorbidities, and by COVID-19 status: a) positive, b) negative, c) suspected. It likewise presents information identifying outpatient and/or hospitalized patients and tracking their medical course: a) recovered, b) deceased, and c) active, during Phase 3 and Phase 4, in the five main indigenous-language-speaking population areas of the State of Veracruz, Mexico. The data analysis is carried out with a data mining algorithm that provides the fast and timely information required to estimate medical care scenarios for (SARS-CoV-2) COVID-19, as well as to assess its impact on the indigenous-language-speaking population of Mexico.
    COVID 19, Data mining algorithms, Open data, Language...
  • In this paper we show how to implement, in a simple way, some complex real-life constraints on the portfolio optimization problem, so that it becomes amenable to quantum optimization algorithms. Specifically, we first explain how to obtain the best investment portfolio with a given target risk. This is important in order to produce portfolios with different risk profiles, as typically offered by financial institutions. Second, we show how to implement individual investment bands, i.e., minimum and maximum possible investments for each asset. This is also important in order to impose diversification and avoid corner solutions. Quite remarkably, we show how to build the constrained cost function as a quadratic unconstrained binary optimization (QUBO) problem, the natural input of quantum annealers (the generic encoding pattern is sketched below). The validity of our implementation is proven by finding the efficient frontier, using D-Wave Hybrid and its Advantage quantum processor, on static portfolios taking assets from the S&P 500. We use three different subsets of this index: first, the S&P 100, which consists of 100 of the largest companies in the S&P 500; second, the 200 best-performing companies of the S&P 500; and third, the full S&P 500 itself. Our results show how practical daily constraints found in quantitative finance can be implemented in a simple way on current NISQ quantum processors, with real data and under realistic market conditions. In combination with clustering algorithms, our methods would allow one to replicate the behaviour of more complex indexes, such as the Nasdaq Composite, in turn being particularly useful to build and replicate Exchange Traded Funds (ETFs).
    Portfolio, Optimization, Volatility, Quantum processor, Market, Quantum annealing, Covariance matrix, NP-hard problem, Quantum computation, Bayesian information criterion...
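    A minimal sketch of the generic QUBO encoding pattern (a risk term, a return term, and a squared penalty enforcing a cardinality constraint); the paper's actual encoding with target risk and investment bands is richer, and the data below are synthetic:

      import numpy as np

      def portfolio_qubo(mu, sigma, n_select, risk_aversion=1.0, penalty=10.0):
          """Q for: min q x^T Sigma x - mu^T x + P (sum x - n_select)^2, x_i in {0,1}."""
          n = len(mu)
          q = risk_aversion * sigma.astype(float)
          q[np.diag_indices(n)] += -mu + penalty * (1.0 - 2.0 * n_select)  # linear terms on diagonal
          q += penalty * (1.0 - np.eye(n))                                 # pairwise penalty terms
          return q

      rng = np.random.default_rng(0)
      mu = rng.uniform(0.0, 0.1, 8)                       # synthetic expected returns
      a = rng.normal(size=(8, 8)); sigma = a @ a.T / 8.0  # synthetic covariance matrix
      Q = portfolio_qubo(mu, sigma, n_select=3)

      # Brute-force the minimum over all 2^8 selections as a sanity check
      cost = lambda b: (x := np.array([(b >> i) & 1 for i in range(8)])) @ Q @ x
      best = min(range(256), key=cost)
      print("selected assets:", [i for i in range(8) if (best >> i) & 1])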
  • Relativistic jets launched by rotating black holes are powerful emitters of non-thermal radiation. Extraction of the rotational energy via electromagnetic stresses produces magnetically-dominated jets, which may become turbulent. Studies of magnetically-dominated plasma turbulence from first principles show that most of the accelerated particles have small pitch angles, i.e. the particle velocity is nearly aligned with the local magnetic field. We examine synchrotron-self-Compton radiation from anisotropic particles in the fast cooling regime. The small pitch angles reduce the synchrotron cooling rate and promote the role of inverse Compton (IC) cooling, which can occur in two different regimes. In the Thomson regime, both synchrotron and IC components have soft spectra, $\nu F_\nu\propto\nu^{1/2}$. In the Klein-Nishina regime, synchrotron radiation has a hard spectrum, typically $\nu F_\nu\propto\nu$, over a broad range of frequencies. Our results have implications for the modelling of BL Lacs and Gamma-Ray Bursts (GRBs). BL Lacs produce soft synchrotron and IC spectra, as expected when Klein-Nishina effects are minor. The observed synchrotron and IC luminosities are typically comparable, which indicates a moderate anisotropy with pitch angles $\theta\gtrsim0.1$. Rare orphan gamma-ray flares may be produced when $\theta\ll0.1$. The hard spectra of GRBs may be consistent with synchrotron radiation when the emitting particles are IC cooling in the Klein-Nishina regime, as expected for pitch angles $\theta\sim0.1$. Blazar and GRB spectra can be explained by turbulent jets with a similar electron plasma magnetisation parameter, $\sigma_{\rm e}\sim10^4$, which for electron-proton plasmas corresponds to an overall magnetisation $\sigma=(m_{\rm e}/m_{\rm p})\sigma_{\rm e}\sim10$.
    Inverse Compton, Synchrotron, Cooling, Pitch angle, Gamma ray burst, Lorentz factor, Synchrotron radiation, Luminosity, BL Lacertae, Synchrotron Self-Compton radiation...
  • This paper is concerned with the analysis of a one-dimensional wave equation $z_{tt}-z_{xx}=0$ on $[0,1]$ with a Dirichlet condition at $x=0$ and a damping acting at $x=1$ which takes the form $(z_t(t,1),-z_x(t,1))\in\Sigma$ for every $t\geq 0$, where $\Sigma$ is a given subset of $\mathbb R^2$. The study is performed within an $L^p$ functional framework, $p\in [1, +\infty]$. We aim at determining conditions on $\Sigma$ ensuring existence and uniqueness of solutions of that wave equation as well as strong stability and uniform global asymptotic stability of its solutions. In the latter case, we also study the decay rates of the solutions and their optimality. We first establish a one-to-one correspondence between the solutions of that wave equation and the iterated sequences of a discrete-time dynamical system, in terms of which we investigate the above-mentioned issues. This enables us to provide a simple necessary and sufficient condition on $\Sigma$ ensuring existence and uniqueness of solutions of the wave equation, as well as an efficient strategy for determining optimal decay rates when $\Sigma$ verifies a generalized sector condition. As an application, we solve two conjectures stated in the literature, the first one seeking a specific optimal decay rate and the second one associated with a saturation type of damping. In the case where the boundary damping is subject to perturbations, we derive sharp results regarding asymptotic perturbation rejection and input-to-state issues.
    Graph, Wave equation, Decay rate, Bounded set, Diffeomorphism, Nonnegative, Dirichlet boundary condition, Lyapunov function, Exponential stability, Weak solution...
  • A large class of two dimensional quantum gravity theories of Jackiw-Teitelboim form have a description in terms of random matrix models. Such models, treated fully non-perturbatively, can give an explicit and tractable description of the underlying ``microstate'' degrees of freedom. They play a prominent role in regimes where the smooth geometrical picture of the physics is inadequate. This is shown using a natural tool for extracting the detailed microstate physics, a Fredholm determinant ${\rm det}(\mathbf{1}{-}\mathbf{ K})$. Its associated kernel $K(E,E^\prime)$ can be defined explicitly for a wide variety of JT gravity theories. To illustrate the methods, the statistics of the first several energy levels of a non-perturbative definition of JT gravity are constructed explicitly using numerical methods, and the full quenched free energy $F_Q(T)$ of the system is computed for the first time. These results are also of relevance to quantum properties of black holes in higher dimensions.
    Random matrix theory, Fredholm determinant, Quantum gravity, Theories of gravity, Wavefunction, Statistics, Entropy, Black hole, Perturbation theory, Partition function...
  • We present Atacama Large Millimetre Array (ALMA) 2 mm continuum observations of a complete and unbiased sample of 99 sub-millimetre galaxies (SMGs) selected at 870 $\mu$m in the Extended Chandra Deep Field South (ALESS). Our observations of each SMG reach average sensitivities of 53 $\mu$Jy/beam. We measure the flux densities for 70 sources, for which we obtain a typical 870 $\mu$m-to-2 mm flux ratio of 14 +/- 5. We do not find a redshift dependence of this flux ratio, which would be expected if the dust emission properties of our SMGs were the same at all redshifts. By combining our ALMA measurements with existing Herschel/SPIRE observations, we construct a (biased) subset of 27 galaxies for which the cool dust emission is sufficiently well sampled to obtain precise constraints on their dust properties using simple isothermal models (sketched below). Thanks to our new 2 mm observations, the dust emissivity index is well constrained and robust against different dust opacity assumptions. The median dust emissivity index of our SMGs is $\beta\simeq1.9\pm0.4$, consistent with the emissivity index of dust in the Milky Way and other local and high-redshift galaxies, as well as with classical dust grain model predictions. We also find a negative correlation between the dust temperature and $\beta$, similar to low-redshift observational and theoretical studies. Our results indicate that $\beta\simeq2$ in high-redshift dusty star-forming galaxies, implying little evolution in dust grain properties between our SMGs and local dusty galaxy samples, and suggesting that these high-mass and high-metallicity galaxies have dust reservoirs driven by grain growth in their ISM.
    Atacama Large Millimeter Array, Galaxy, Spectral energy distribution, Dust temperature, Opacity, Dust emission, Luminosity, Dust grain, Infrared limit, Optically thick medium...
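    A minimal sketch of the isothermal, optically thin modified-blackbody ("greybody") model typically used in such fits, $S_\nu \propto \nu^{\beta} B_\nu(T_{\rm dust})$; the photometry below is synthetic:

      import numpy as np
      from scipy.optimize import curve_fit

      H, KB, C = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

      def greybody(nu, amp, beta, t_dust):
          """Optically thin modified blackbody: amp * nu^beta * B_nu(T_dust)."""
          b_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t_dust))
          return amp * nu**beta * b_nu

      wavelengths = np.array([250e-6, 350e-6, 500e-6, 870e-6, 2e-3])  # SPIRE + ALMA bands [m]
      nu = C / wavelengths
      flux = greybody(nu, 1e22, 1.9, 35.0) * (1.0 + 0.03 * np.random.randn(nu.size))
      (amp, beta, t_dust), _ = curve_fit(greybody, nu, flux, p0=(1e22, 1.5, 30.0))
      print(f"beta = {beta:.2f}, T_dust = {t_dust:.1f} K")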
  • This paper presents a speech BERT model to extract embedded prosody information in speech segments for improving the prosody of synthesized speech in neural text-to-speech (TTS). As a pre-trained model, it can learn prosody attributes from a large amount of speech data, utilizing more data than the original training data used by the target TTS. The embedding is extracted from the previous segment of a fixed length in the proposed BERT. The extracted embedding is then used together with the mel-spectrogram to predict the following segment in the TTS decoder. Experimental results obtained with the Transformer TTS show that the proposed BERT can extract fine-grained, segment-level prosody, which is complementary to utterance-level prosody and improves the final prosody of the TTS speech. The objective distortions between the generated speech and the original recordings, measured on a single-speaker TTS, are reduced. Subjective listening tests also show that the proposed approach is favorably preferred over the TTS without the BERT prosody embedding module, for both in-domain and out-of-domain applications. For Microsoft professional single/multiple speakers and the LJ Speaker in the public database, the subjective preference is similarly confirmed with the new BERT prosody embedding. TTS demo audio samples are at https://judy44chen.github.io/TTSSpeechBERT/.
    Embedding, Training set, Inference, Ground truth, Open source, Convolution Neural Network, Attention, Naturalness, Reverberation, Mean squared error...
  • Custom voice, a specific text-to-speech (TTS) service in commercial speech platforms, aims to adapt a source TTS model to synthesize a personal voice for a target speaker using few speech data. Custom voice presents two unique challenges for TTS adaptation: 1) to support diverse customers, the adaptation model needs to handle diverse acoustic conditions that could be very different from the source speech data, and 2) to support a large number of customers, the adaptation parameters need to be small enough for each target speaker to reduce memory usage while maintaining high voice quality. In this work, we propose AdaSpeech, an adaptive TTS system for high-quality and efficient customization of new voices. We design several techniques in AdaSpeech to address the two challenges in custom voice: 1) To handle different acoustic conditions, we use two acoustic encoders to extract an utterance-level vector and a sequence of phoneme-level vectors from the target speech during training; in inference, we extract the utterance-level vector from a reference speech and use an acoustic predictor to predict the phoneme-level vectors. 2) To better trade off the adaptation parameters and voice quality, we introduce conditional layer normalization in the mel-spectrogram decoder of AdaSpeech (sketched below), and fine-tune this part in addition to the speaker embedding for adaptation. We pre-train the source TTS model on LibriTTS datasets and fine-tune it on VCTK and LJSpeech datasets (with different acoustic conditions from LibriTTS) with few adaptation data, e.g., 20 sentences, about 1 minute of speech. Experiment results show that AdaSpeech achieves much better adaptation quality than baseline methods, with only about 5K specific parameters for each speaker, which demonstrates its effectiveness for custom voice. Audio samples are available at https://speechresearch.github.io/adaspeech/.
    Embedding, Inference, Attention, Training set, Ground truth, Naturalness, Mean squared error, Ablation, Game theory, Variational autoencoders...
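    A minimal sketch of a conditional layer normalization module of the kind described above, where the scale and bias are predicted from the speaker embedding; the dimensions are illustrative:

      import torch
      import torch.nn as nn

      class ConditionalLayerNorm(nn.Module):
          """LayerNorm whose gain and bias are linear functions of a speaker
          embedding, so only these small projections (plus the speaker embedding)
          need fine-tuning for a new voice."""
          def __init__(self, hidden_dim, speaker_dim):
              super().__init__()
              self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
              self.to_scale = nn.Linear(speaker_dim, hidden_dim)
              self.to_bias = nn.Linear(speaker_dim, hidden_dim)

          def forward(self, x, speaker_emb):
              # x: (batch, time, hidden); speaker_emb: (batch, speaker_dim)
              scale = self.to_scale(speaker_emb).unsqueeze(1)
              bias = self.to_bias(speaker_emb).unsqueeze(1)
              return self.norm(x) * scale + bias

      cln = ConditionalLayerNorm(hidden_dim=256, speaker_dim=64)
      print(cln(torch.randn(2, 100, 256), torch.randn(2, 64)).shape)  # (2, 100, 256)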
  • We use the stochastic series expansion quantum Monte Carlo method, together with the eigenstate-to-Hamiltonian mapping approach, to map the localized ground states of the disordered two-dimensional Heisenberg model to excited states of a target Hamiltonian. The localized nature of the ground state is established by studying the spin stiffness, local entanglement entropy, and local magnetization. This construction allows us to define many-body localized states in an energy-resolved phase diagram, thereby providing concrete numerical evidence for the existence of a many-body localized phase in two dimensions.
    Hamiltonian, Many-body localization, Disorder, Quantum Monte Carlo, Excited state, Magnetization, Heisenberg model, Covariance matrix, Entanglement entropy, Monte Carlo method...
  • Motivated by the recent search for a Higgs-Portal Scalar decaying inside the MicroBooNE detector, we demonstrate that the same search can be used to constrain Heavy Neutral Leptons. These are gauge-singlet fermions that interact with the Standard Model by mixing with neutrinos only and could be related to the origin of neutrino masses. By recasting the results of the MicroBooNE Collaboration's analysis, we show that, for a Heavy Neutral Lepton that mixes predominantly with muon-flavored neutrinos, previously unexplored parameter space can be excluded for masses between 30 and 150 MeV. Additionally, we make our Monte Carlo tools publicly available.
    Heavy sterile neutrino, NuMI, Neutrino, Higgs portal, Standard Model, Signal efficiency, Light scalar, Kinematics, Neutrino mass, Kaon...
  • We present a modification of the DFS graph search algorithm, suited for finding long induced paths (the induced-path-preserving extension rule is sketched below). We use it to give simple proofs of the following results. We show that the induced size-Ramsey number of paths satisfies $\hat{R}_{\mathrm{ind}}(P_n)\leq 5\cdot 10^7n$, thus giving an explicit constant in the linear bound, improving the previous bound with a large constant from a regularity lemma argument by Haxell, Kohayakawa and {\L}uczak. We also provide a bound for the $k$-color version, showing that $\hat{R}_{\mathrm{ind}}^k(P_n)=O(k^3\log^4k)n$. Finally, we present a new short proof of the fact that the binomial random graph in the supercritical regime, $G(n,\frac{1+\varepsilon}{n})$, typically contains an induced path of length $\Theta(\varepsilon^2) n$.
    Graph, Random graph, Sparsity, Expansion property, Giant component, Boiling, Sparse graph, Lower and upper, Path in graph, Bernoulli distribution...
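    A minimal sketch of the induced-path-preserving extension rule: the current path may only be extended by a vertex whose sole neighbour on the path is the current endpoint. The exhaustive backtracking below is exponential in general and only illustrates the rule, not the paper's algorithm:

      def longest_induced_path(adj):
          best = []

          def extend(path, on_path):
              nonlocal best
              if len(path) > len(best):
                  best = path[:]
              tail = path[-1]
              for v in adj[tail]:
                  if v in on_path:
                      continue
                  # v keeps the path induced iff its only neighbour on the path is `tail`
                  if all(u == tail or u not in on_path for u in adj[v]):
                      path.append(v); on_path.add(v)
                      extend(path, on_path)
                      path.pop(); on_path.remove(v)

          for s in adj:
              extend([s], {s})
          return best

      # Example: in a 6-cycle, the longest induced path has 5 vertices
      cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
      print(longest_induced_path(cycle6))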
  • Consider the following experiment: a deck with $m$ copies of $n$ different card types is randomly shuffled, and a guesser attempts to guess the cards sequentially as they are drawn. Each time a guess is made, some amount of "feedback" is given. For example, one could tell the guesser the true identity of the card they just guessed (the complete feedback model) or they could be told nothing at all (the no feedback model). In this paper we explore a partial feedback model, where upon guessing a card, the guesser is only told whether or not their guess was correct (simulated below). We show in this setting that, uniformly in $n$, at most $m+O(m^{3/4}\log m)$ cards can be guessed correctly in expectation. This resolves a question of Diaconis and Graham from 1981, where even the $m=2$ case was open.
    Permutation, Binomial Distribution, Uniform distribution, Convex combination, Indicator function, Nonnegative, Counting, Hypergeometric distribution, Maximum likelihood, Harmonic number...
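    A minimal Monte Carlo sketch of the partial feedback model with a simple "repeat one type until it is exhausted" strategy, whose expected score is only slightly above $m$; the bound above says no strategy can do much better:

      import random

      def average_score(n, m, trials=2000):
          total = 0
          for _ in range(trials):
              deck = [t for t in range(n) for _ in range(m)]
              random.shuffle(deck)
              guess = hits = score = 0
              for card in deck:
                  if card == guess:                    # feedback: the guess was correct
                      score += 1
                      hits += 1
                      if hits == m and guess < n - 1:  # this type is exhausted; move on
                          guess, hits = guess + 1, 0
              total += score
          return total / trials

      print(average_score(n=10, m=4))  # slightly above m = 4 on average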
  • The free monoid $A^*$ on a finite totally ordered alphabet $A$ acts on the left on columns, by Schensted left insertion. This defines a finite monoid, denoted $Styl(A)$ and called the stylic monoid. It is canonically a quotient of the plactic monoid. Our main results are: the cardinality of $Styl(A)$ equals the number of partitions of a set with $|A|+1$ elements. We give a bijection with so-called $N$-tableaux, similar to Schensted's algorithm, explaining this fact. Presentation of $Styl(A)$: it is generated by $A$ subject to the plactic (Knuth) relations and the idempotent relations $a^2=a$, $a\in A$. The canonical involutive anti-automorphism on $A^*$, which reverses the order on $A$, induces an involution of $Styl(A)$, which, similarly to the corresponding involution of the plactic monoid, may be computed by an evacuation-like operation (Sch\"utzenberger involution on tableaux) on so-called standard immaculate tableaux (which are in bijection with partitions). The monoid $Styl(A)$ is $J$-trivial, and the $J$-order of $Styl(A)$ is graded: the co-rank is given by the number of elements in the $N$-tableau. The monoid $Styl(A)$ is the syntactic monoid of the function which associates to each word $w\in A^*$ the length of its longest strictly decreasing subword.
    Young tableau, Monoid, Plactic monoid, Bumping, Partially ordered set, Automorphism, Homomorphism, Rank, Inflation, Rewriting system...
  • Given a piece of text, the ability to generate a coherent extension of it implies some sophistication, including a knowledge of grammar and semantics. In this paper, we propose a mathematical framework for passing from probability distributions on extensions of given texts to an enriched category containing semantic information. Roughly speaking, we model probability distributions on texts as a category enriched over the unit interval. Objects of this category are expressions in language and hom objects are conditional probabilities that one expression is an extension of another. This category is syntactical: it describes what goes with what. We then pass to the enriched category of unit interval-valued copresheaves on this syntactical category to find semantic information.
    Enriched category, Morphism, Category theory, Isomorphism, Embedding, Monoid, Grammar, Empty Lattice Approximation, Metric space, Conjunction...
  • The polarized Galactic thermal dust emission constitutes a major probe for the study and characterization of the Galactic Magnetic Field (GMF). In this paper, we apply the maximum-likelihood analysis that we established in our companion paper (Pelgrims, Mac\'ias-P\'erez & Ruppin) to model the large-scale regular component of the GMF from the polarized diffuse emission of Galactic thermal dust as measured by $Planck$ at 353 GHz. As a first attempt, we consider three models for the dust density distribution across the whole Galaxy and four models for the GMF. All models are parametric and heuristic, leaving us with twelve reconstructions of the GMF geometrical structure. These reconstructions are obtained at $N_{\rm side} = 64$ resolution and provide the first constraints on the GMF obtained through Markov Chain Monte Carlo analysis of thermal dust polarization data. This work demonstrates that competitive constraints on the GMF can be obtained from the polarized thermal dust sky as compared to other observational probes.
    Galactic magnetic field, Planck mission, Dust emission, Maximum likelihood, Companion, Diffuse emission, Galaxy, Monte Carlo Markov chain, Polarization...
  • We develop a bootstrap approach to Effective Field Theories (EFTs) based on the concept of duality in optimisation theory. As a first application, we consider the fascinating set of EFTs for confining flux tubes. The outcome of our analysis are optimal bounds on the scattering amplitude of Goldstone excitations of the flux tube, which in turn translate into bounds on the Wilson coefficients of the EFT action. Finally, we comment on how our approach compares to EFT positivity bounds.
    Wilson coefficients, Duality, Scattering amplitude, Goldstone boson, Effective field theory, Theory, Action...
  • We prove that there exists an absolute constant $\delta>0$ such that any binary code $C\subset\{0,1\}^N$ tolerating $(1/2-\delta)N$ adversarial deletions must satisfy $|C|\le 2^{\text{poly}\log N}$ and thus have rate asymptotically approaching 0. This is the first constant-fraction improvement over the trivial bound that codes tolerating $N/2$ adversarial deletions must have rate going to 0 asymptotically. Equivalently, we show that there exist absolute constants $A$ and $\delta>0$ such that any set $C\subset\{0,1\}^N$ of $2^{\log^A N}$ binary strings must contain two strings $c$ and $c'$ whose longest common subsequence has length at least $(1/2+\delta)N$. As an immediate corollary, we show that $q$-ary codes tolerating a fraction $1-(1+2\delta)/q$ of adversarial deletions must also have rate approaching 0. Our techniques include string regularity arguments and a structural lemma that classifies binary strings by their oscillation patterns. Leveraging these tools, we find in any large code two strings with similar oscillation patterns, which is exploited to find a long common subsequence.
    Statistics, Erasure, Binary case, Attention, Nonnegative, Long strings, Counting, Szemerédi regularity lemma, Greedy algorithm, Theory...
  • We show that in any two-coloring of the positive integers there is a color for which the set of positive integers that can be represented as a sum of distinct elements with this color has upper logarithmic density at least $(2+\sqrt{3})/4$ and this is best possible. This answers a forty-year-old question of Erd\H{o}s.
    Convex combination, Arithmetic progression, Attention, Lower and upper, Euler's totient function, Coprime, Number theory, Brillouin zone, Contradiction, Real numbers...
  • Searches for new physics by experimental collaborations represent a significant investment in time and resources. Often these searches are sensitive to a broader class of models than they were originally designed to test. We aim to extend the impact of existing searches through a technique we call 'recasting'. After considering several examples, which illustrate the issues and subtleties involved, we present RECAST, a framework designed to facilitate the usage of this technique.
    RECAST framework, Application programming interface, Detector simulation, Signal efficiency, Final state, Event selection criteria, Higgs boson, Signal events, Standard Model, Exclusion limit...
  • A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and is indicative of the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for features learned by different models, namely, representations outside the block structure are often similar across architectures with varying widths and depths, but the block structure is unique to each model. We analyze the output predictions of different model architectures, finding that even when the overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.
    Architecture, Principal component, Neural network, Training set, Hidden layer, Deep Neural Networks, Logistic regression, Feature learning, Statistical estimator, Ramification...
  • We show how functional relations, which can be considered as a definition of a quantum integrable theory, entail an integral equation that can be extended, upon introducing dynamical variables, to a Marchenko-like equation. From the latter we then naturally derive a classical Lax pair problem. We exemplify our method by focusing on the massive version of the ODE/IM (Ordinary Differential Equations/Integrable Models) correspondence involving the sinh-Gordon model, which first emerged in the context of gauge theories and scattering amplitudes/Wilson loops in $AdS_3$ with many moduli/masses, but in a way which reveals its generality. In the end, we give some hints at its application to spin chains.
    Ordinary differential equations, Lax pair, Wilson loop, Scattering amplitude, Gauge theory, Dynamical variable, Theory, Spin, Mass...
  • Diagrammatically speaking, grammatical calculi such as pregroups provide wires between words in order to elucidate their interactions, and this enables one to verify the grammatical correctness of phrases and sentences. In this paper we also provide wirings within words. This enables us to identify grammatical constructs that we expect to be either equal or closely related. Hence, our work paves the way for a new theory of grammar that provides novel `grammatical truths'. We give a no-go theorem showing that our wirings for words make no sense for preordered monoids, the form that grammatical calculi usually take. Instead, they require diagrams -- or equivalently, (free) monoidal categories.
    Grammar, Pregroup, Monoid, Theory...
  • We propose a novel approach to dimensionality reduction combining techniques of metric geometry and distributed persistent homology, in the form of a gradient-descent based method called DIPOLE. DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term (sketched below) and a global, topological term. By fixing an initial embedding method (we use Isomap), DIPOLE can also be viewed as a full dimensionality-reduction pipeline. This framework is based on the strong theoretical and computational properties of distributed persistent homology and comes with the guarantee of almost sure convergence. We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets, both visually and in terms of precise quantitative metrics.
    Embedding, Point cloud, Persistent homology, Hyperparameter, Quasi-isometry, Nearest-neighbor site, Metric space, Topological term, Graph, Optimization...
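    A minimal sketch of the local, metric half of such a loss, fit by gradient descent; the global persistent-homology term and the Isomap initialization from the paper are omitted, and all data here are synthetic:

      import torch

      torch.manual_seed(0)
      x = torch.randn(200, 10)                        # synthetic high-dimensional points
      target = torch.cdist(x, x)                      # stand-in for geodesic distances
      emb = torch.randn(200, 2, requires_grad=True)   # embedding to be corrected
      opt = torch.optim.Adam([emb], lr=0.05)

      for _ in range(300):
          pairs = torch.randint(0, 200, (512, 2))     # random pairs sampled per step
          i, j = pairs[:, 0], pairs[:, 1]
          d = (emb[i] - emb[j]).norm(dim=1)
          loss = ((d - target[i, j]) ** 2).mean()     # local metric stress
          opt.zero_grad(); loss.backward(); opt.step()

      print(loss.item())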
  • We present the largest systematic search to date for luminous $z\gtrsim8$ galaxy candidates using ~1267 arcmin$^{2}$ of (pure-)parallel HST observations from the SuperBoRG data set, a compilation of 316 random sightlines with ACS and WFC3 observations, together covering a factor of ~1.4x more area than existing data sets. Using NIR color cuts and careful photo-$z$ analyses, we find 49 $z\sim8-12$ galaxy candidates over 44 unique sightlines, and derive global galaxy properties such as UV magnitudes and continuum slopes, sizes, and rest-frame optical properties (e.g., SFRs, stellar masses, $A_{\rm v}$). Taking advantage of the (pure-)parallel nature of our data set - making it one of the most representative thus far - and the derived SFRs, we evaluate the cosmic star formation rate density for the bright end of the luminosity function at $z\sim8-10$ and test the validity of luminosity function-derived results using a conversion factor. We find our method yields comparable results to those derived with luminosity functions. Furthermore, we present follow-up observations of 4 (Super)BoRG targets with Keck/MOSFIRE, finding no evidence of Ly$\alpha$ in >3 hrs of $Y$-band observations of any of them, consistent with a largely neutral medium at $z\sim8$. Our results offer a definitive HST legacy on the bright end of the luminosity function and provide a valuable benchmark as well as targets for follow-up with JWST.
    Galaxy, Luminosity function, Near-infrared, Photometry, Spectral energy distribution, Wide Field Camera 3, Signal to noise ratio, Star formation rate, Photometric redshift, Stellar mass...
  • 55 Cnc e is a transiting super-Earth (radius $1.88\rm\,R_\oplus$ and mass $8\rm\, M_\oplus$) orbiting a G8V host star on a 17-hour orbit. Spitzer observations of the planet's phase curve at 4.5 $\mu$m revealed a time-varying occultation depth, and MOST optical observations are consistent with a time-varying phase curve amplitude and phase offset of maximum light. Both broadband and high-resolution spectroscopic analyses are consistent with either a high mean molecular weight atmosphere or no atmosphere for planet e. A long-term photometric monitoring campaign on an independent optical telescope is needed to probe the variability in this system. We seek to measure the phase variations of 55 Cnc e with a broadband optical filter with the 30 cm effective aperture space telescope CHEOPS and explore how the precision photometry narrows down the range of possible scenarios. We observed 55 Cnc for 1.6 orbital phases in March of 2020. We designed a phase curve detrending toolkit for CHEOPS photometry which allows us to study the underlying flux variations of the 55 Cnc system. We detected a phase variation with a full amplitude of $72 \pm 7$ ppm but do not detect a significant secondary eclipse of the planet. The shape of the phase variation resembles that of a piecewise-Lambertian; however, the non-detection of the planetary secondary eclipse and the large amplitude of the variations exclude reflection from the planetary surface as a possible origin of the observed phase variations. They are also likely incompatible with magnetospheric interactions between the star and planet, but may imply that circumplanetary or circumstellar material modulates the flux of the system. Further precision photometry of 55 Cnc from CHEOPS will measure variations in the phase curve amplitude and shape over time this year.
    Point spread function, Photometry, Eclipses, Star, Planet, Telescopes, Occultation, Albedo, Super-earth, Infrared limit...
  • Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent spaces such as visual keypoints often outperform unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and a downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/
    Reinforcement learning, Attention, Robotics, Ablation, Inductive bias, Contrastive learning, Convolution Neural Network, Latent space, Architecture, Robotic Arm...
  • We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding what features are learned, why deep architectures perform exceptionally well in physical problems, and which fine aspects of an architecture affect the behavior of a learning task in which way. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.
    Architecture, Deep learning, Activation function, Training set, Neural network, Sparsity, Curse of dimensionality, Optimization, Regularization, Regression...
  • Magnetic reconnection, a plasma process converting magnetic energy to particle kinetic energy, is often invoked to explain magnetic energy releases powering high-energy flares in astrophysical sources including pulsar wind nebulae and black hole jets. Reconnection is usually seen as the (essentially 2D) nonlinear evolution of the tearing instability disrupting a thin current sheet. To test how this process operates in 3D, we conduct a comprehensive particle-in-cell simulation study comparing 2D and 3D evolution of long, thin current sheets in moderately-magnetized, collisionless, relativistically-hot electron-positron plasma, and find dramatic differences. We first systematically characterize this process in 2D, where classic, hierarchical plasmoid-chain reconnection determines energy release, and explore a wide range of initial configurations, guide magnetic field strengths, and system sizes. We then show that 3D simulations of similar configurations exhibit a diversity of behaviours, including some where energy release is determined by the nonlinear relativistic drift-kink instability. Thus, 3D current-sheet evolution is not always fundamentally classical reconnection with perturbing 3D effects, but, rather, a complex interplay of multiple linear and nonlinear instabilities whose relative importance depends sensitively on the ambient plasma, minor configuration details, and even stochastic events. It often yields slower but longer-lasting and ultimately greater magnetic energy release than in 2D. Intriguingly, nonthermal particle acceleration is astonishingly robust, depending on the upstream magnetization and guide field, but otherwise yielding similar particle energy spectra in 2D and 3D. Though the variety of underlying current-sheet behaviours is interesting, the similarities in overall energy release and particle spectra may be more remarkable.
    Magnetic energy, Plasmoid, Instability, Particle-in-cell, Enthalpy, Gyroradius, Magnetization, Magnetic reconnection, Positron, Final state...
  • The phenomenon of Faraday rotation of linearly polarized synchrotron emission in a magneto-ionized medium has been understood and studied for decades. But since the sense of the rotation itself is irrelevant in most contexts, some uncertainty and inconsistencies have arisen in the literature about this detail. Here, we start from basic plasma theory to describe the propagation of polarized emission from a background radio source through a magnetized, ionized medium in order to rederive the correct sense of Faraday rotation (the standard sign convention is sketched below). We present simple graphics to illustrate the decomposition of a linearly polarized wave into right and left circularly polarized modes, the temporal and spatial propagation of the phases of those modes, and the resulting physical rotation of the polarization orientation. We then re-examine the case of a medium that both Faraday-rotates and emits polarized radiation and show how a helical magnetic field can construct or destruct the Faraday rotation. This paper aims to resolve a source of confusion that has arisen between the plasma physics and radio astronomy communities and to help avoid common pitfalls when working with this unintuitive phenomenon.
    Faraday rotation, Synchrotron radiation, Line of sight, Orientation, Helical magnetic field, Radio astronomy, Magnetized plasma, Helicity, Synchrotron, Plasma physics...
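    A minimal sketch of the standard convention: the polarization angle rotates as $\chi(\lambda) = \chi_0 + \mathrm{RM}\,\lambda^2$, with $\mathrm{RM} \simeq 0.812 \int n_e B_\parallel\, \mathrm{d}l$ in rad m$^{-2}$ ($n_e$ in cm$^{-3}$, $B_\parallel$ in $\mu$G and positive toward the observer, $l$ in pc):

      import numpy as np

      def rotation_measure(n_e, b_par, dl):
          """RM = 0.812 * sum(n_e * B_par * dl) [rad/m^2] along the line of sight."""
          return 0.812 * np.sum(n_e * b_par * dl)

      def observed_angle(chi0, rm, wavelength):
          """chi = chi0 + RM * lambda^2, rotating the plane of polarization."""
          return chi0 + rm * wavelength**2

      # 1 kpc path: n_e = 0.03 cm^-3, B_par = +2 uG (toward the observer), 100 steps of 10 pc
      rm = rotation_measure(np.full(100, 0.03), np.full(100, 2.0), np.full(100, 10.0))
      print(rm, np.degrees(observed_angle(0.0, rm, 0.21)))  # rotation at 21 cm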
  • We show that any pasting diagram in any $(\infty,2)$-category has a homotopically unique composite. This is achieved by showing that the free 2-category generated by a pasting scheme is the homotopy colimit of its cells as an $(\infty,2)$-category. We prove this explicitly in the simplicial categories model and then explain how to deduce the model-independent statement from that calculation.
    Graph, Partially ordered set, Model structure, Morphism, Isomorphism, Directed graph, Simplicial set, Quasi-category, Simplicially enriched category, Fibration...
  • Motivated by some recent news, a journalist asks a group of physicists: "What's the meaning of the violation of Bell's inequality?" One physicist answers: "It means that non-locality is an established fact". Another says: "There is no non-locality; the message is that measurement outcomes are irreducibly random". A third one says: "It cannot be answered simply on purely physical grounds, the answer requires an act of metaphysical judgement". Puzzled by the answers, the journalist keeps asking questions about quantum theory: "What is teleported in quantum teleportation?" "How does a quantum computer really work?" Shockingly, for each of these questions, the journalist obtains a variety of answers which, in many cases, are mutually exclusive. At the end of the day, the journalist asks: "How do you plan to make progress if, after 90 years of quantum theory, you still don't know what it means? How can you possibly identify the physical principles of quantum theory or expand quantum theory into gravity if you don't agree on what quantum theory is about?" Here we argue that it is becoming urgent to solve this long-lasting problem. For that, we point out that the interpretations of quantum theory are, essentially, of two types, and that these two types are so radically different that there must be experiments that, when analyzed outside the framework of quantum theory, lead to different empirically testable predictions. Arguably, even if these experiments do not end the discussion, they will add new elements to the list of strange properties that some interpretations must have; therefore, they will indirectly support those interpretations that do not need to have all these strange properties.
    Quantum theory, Quantum teleportation, Bell's inequality, Quantum measurement, Measurement, Theory, Gravitation, Probability...
  • We show that low-density random quotients of cubulated hyperbolic groups are again cubulated and hyperbolic. Ingredients of the proof include cubical small-cancellation theory, the exponential growth of conjugacy classes, and the statement that hyperplane stabilizers grow exponentially more slowly than the ambient cubical group.
    Geodesic, Conjugacy class, Closed geodesic, Metric space, Hyperbolic group, Fundamental domain, Orientation, Isometry, Subgroup, Small cancellation theory...