Recently bookmarked papers

with concepts:
  • Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest-generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual networks and one Inception-v4 network, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge.
    Architecture, Image recognition, Conjunction, Instability, Object detection, Large scale structure, Deep Residual Networks, Inception Modules, Optimization, Image Processing...
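The activation scaling mentioned in this abstract (scaling a residual branch down before adding it back to the trunk) is simple enough to sketch. This is a minimal NumPy illustration of the idea only, not the paper's network code; the `branch` function is a hypothetical stand-in for an Inception branch.

```python
import numpy as np

def residual_block(x, branch, scale=0.1):
    """Add a residual branch to x, scaling the branch activations down
    before the addition; the paper reports that such scaling factors
    stabilize training of very wide residual Inception networks."""
    return x + scale * branch(x)

# Toy example with a linear "branch" (hypothetical stand-in):
x = np.ones(4)
y = residual_block(x, lambda v: 3.0 * v)   # each element: 1 + 0.1 * 3
```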
  • An analytical model for three-dimensional incompressible turbulence was recently introduced in the hydrodynamics community which, with only a few parameters, shares many properties of experimental and numerical turbulence, notably intermittency (non-Gaussianity), the energy cascade (skewness), and vorticity alignment properties. In view of modeling astrophysical environments, we introduce a way to extend to compressible fluids the three-dimensional turbulent velocity field model of Chevillard et al. (2010), as well as the three 3D turbulent magnetic field models of Durrive et al. (2020), following the same procedure based on the concept of multiplicative chaos. Our model provides a complementary tool to numerical simulations, as it enables us to generate very quickly fairly realistic velocity fields and magnetic fields, the statistics of which are controllable with intuitive parameters. Therefore our model will also provide a useful tool for observers in astrophysics. Finally, maybe even more than the model itself, it is the very procedure that matters the most: our model is modular, in the sense that it is constructed gradually, with intuitive and physically motivated steps, so that it is open to many further improvements.
    Vorticity, Turbulence, Statistics, Random Field, Current density, Chaos, Magnetohydrodynamic turbulence, White noise, Dissipation, Kinematics...
  • The delay time distribution (DTD) of Type-Ia supernovae (SNe Ia) is the rate of SNe Ia, as a function of time, that explode in a hypothetical stellar population of unit mass, formed in a brief burst at time $t=0$, and is important for understanding chemical evolution, SN Ia progenitors, and SN Ia physics. Past estimates of the DTD in galaxy clusters have been deduced from SN Ia rates measured in cluster samples observed at various redshifts, corresponding to different time intervals after a presumed initial brief burst of star formation. Recently, Friedmann and Maoz analysed a cluster sample at $z=1.13-1.75$ and confirmed indications from previous studies of lower-redshift clusters, that the DTD has a power-law form whose normalisation is at least several times higher than measured in field-galaxy environments, implying that SNe Ia are produced in larger numbers by the stellar populations in clusters. This conclusion, however, could have been affected by the implicit assumption that the stars were formed in a single brief starburst at high $z$. Here, we re-derive the DTD from the cluster SN Ia data, but now relax the single-burst assumption. Instead, we allow for a range of star-formation histories and dust extinctions for each cluster. Via MCMC modeling, we simultaneously fit, using stellar population synthesis models and DTD models, the integrated galaxy-light photometry in several bands and the SN Ia numbers discovered in each cluster. With these more-realistic assumptions, we find a best-fit DTD with power-law index $\alpha=-1.09_{-0.12}^{+0.15}$, and amplitude, at delay $t=1~\rm Gyr$, $R_1=0.41_{-0.10}^{+0.12}\times 10^{-12}~{\rm yr}^{-1}{\rm M}_\odot^{-1}$. The derived cluster-environment DTD remains higher (at $3.8\sigma$ significance) than the field-galaxy DTD, by a factor $\sim 2-3$. Cluster and field DTDs both have consistent slopes of $\alpha\approx -1.1$.
    Supernova Type Ia, Star formation histories, Stellar mass, Galaxy, Cluster of galaxies, Cluster sampling, Star formation, Luminosity, Photometry, Field galaxy...
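The best-fit power-law DTD quoted in this abstract is simple enough to evaluate directly. A small sketch using the abstract's numbers ($\alpha=-1.09$, $R_1=0.41\times 10^{-12}~{\rm yr}^{-1}{\rm M}_\odot^{-1}$):

```python
def dtd_rate(t_gyr, alpha=-1.09, r1=0.41e-12):
    """SN Ia rate per year per solar mass formed, at delay t (in Gyr),
    for a power-law DTD R(t) = R1 * (t / 1 Gyr)^alpha, using the
    best-fit cluster values quoted in the abstract."""
    return r1 * t_gyr ** alpha

# By construction the rate at a 1 Gyr delay equals R1, and it falls
# roughly as 1/t for the near -1 slope found in both environments.
```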
  • The analysis of current and future cosmological surveys of type Ia supernovae (SNe Ia) at high-redshift depends on the accurate photometric classification of the SN events detected. Generating realistic simulations of photometric SN surveys constitutes an essential step for training and testing photometric classification algorithms, and for correcting biases introduced by selection effects and contamination arising from core collapse SNe in the photometric SN Ia samples. We use published SN time-series spectrophotometric templates, rates, luminosity functions and empirical relationships between SNe and their host galaxies to construct a framework for simulating photometric SN surveys. We present this framework in the context of the Dark Energy Survey (DES) 5-year photometric SN sample, comparing our simulations of DES with the observed DES transient populations. We demonstrate excellent agreement in many distributions, including Hubble residuals, between our simulations and data. We estimate the core collapse fraction expected in the DES SN sample after selection requirements are applied and before photometric classification. After testing different modelling choices and astrophysical assumptions underlying our simulation, we find that the predicted contamination varies from 5.8 to 9.3 per cent, with an average of 7.0 per cent and r.m.s. of 1.1 per cent. Our simulations are the first to reproduce the observed photometric SN and host galaxy properties in high-redshift surveys without fine-tuning the input parameters. The simulation methods presented here will be a critical component of the cosmology analysis of the DES photometric SN Ia sample: correcting for biases arising from contamination, and evaluating the associated systematic uncertainty.
    Dark Energy Survey, Host galaxy, Galaxy, Core collapse, Luminosity function, Core-collapse supernova, Supernova Type Ia, Spectroscopic redshift, Spectral Adaptive Lightcurve Template 2, Extinction...
  • We perform one-dimensional radiation-hydrodynamic simulations of energetic supernova ejecta colliding with a massive circumstellar medium (CSM) aiming at explaining SN 2016aps, likely the brightest supernova observed to date. SN 2016aps was a superluminous Type-IIn SN, which released as much as $\gtrsim 5\times 10^{51}$ erg of thermal radiation. Our results suggest that the multi-band light curve of SN 2016aps is well explained by the collision of a $30\ M_\odot$ SN ejecta with the explosion energy of $10^{52}$ erg and a $\simeq 8\ M_\odot$ wind-like CSM with the outer radius of $10^{16}$ cm, i.e., a hypernova explosion embedded in a massive CSM. This finding indicates that very massive stars with initial masses larger than $40\ M_\odot$, which supposedly produce highly energetic SNe, occasionally eject their hydrogen-rich envelopes shortly before the core-collapse. We suggest that the pulsational pair-instability SNe may provide a natural explanation for the massive CSM and the energetic explosion. We also provide the relations between the peak luminosity, the radiated energy, and the rise time for interacting SNe with the kinetic energy of $10^{52}$ erg, which can be used for interpreting SN 2016aps-like objects in future surveys.
    Ejecta, Light curve, Luminosity, Opacity, Hypernova, Radiative transfer simulations, Instability, Core collapse, Bolometric luminosity, Supermassive stars...
  • We present a study of optical and near-infrared (NIR) spectra along with the light curves of SN 2013ai. These data range from discovery until 380 days after explosion. SN 2013ai is a fast-declining type II supernova (SN II) with an unusually long rise time; $18.9\pm2.7$ d in $V$ band and a bright $V$ band peak absolute magnitude of $-18.7\pm0.06$ mag. The spectra are dominated by hydrogen features in the optical and NIR. The spectral features of SN 2013ai are unique in their expansion velocities, which when compared to large samples of SNe II are more than 1,000 km/s faster at 50 days past explosion. In addition, the long rise time of the light curve more closely resembles SNe IIb rather than SNe II. If SN 2013ai is coeval with a nearby compact cluster we infer a progenitor ZAMS mass of $\sim$17 M$_\odot$. After performing light curve modeling we find that SN 2013ai could be the result of the explosion of a star with little hydrogen mass, a large amount of synthesized $^{56}$Ni, 0.3-0.4 M$_\odot$, and an explosion energy of $2.5-3.0\times10^{51}$ ergs. The density structure and expansion velocities of SN 2013ai are similar to those of the prototypical SN IIb, SN 1993J. However, SN 2013ai shows no strong helium features in the optical, likely due to the presence of a dense core that prevents the majority of $\gamma$-rays from escaping to excite helium. Our analysis suggests that SN 2013ai could be a link between SNe II and stripped envelope SNe.
    Light curve, Near-infrared, Photometry, Core-collapse supernova, Wide Field and Planetary Camera 2, NGC 2207, Star, Natrium, Photosphere, Point spread function...
  • Different properties of dark matter haloes, including growth rate, concentration, interaction history, and spin, correlate with environment in unique, scale-dependent ways. While these halo properties are not directly observable, galaxies will inherit their host haloes' correlations with environment. In this paper, we show how these characteristic environmental signatures allow using measurements of galaxy environment to constrain which dark matter halo properties are most tightly connected to observable galaxy properties. We show that different halo properties beyond mass imprint distinct scale-dependent signatures in both the galaxy two-point correlation function and the distribution of distances to galaxies' kth nearest neighbours, with features strong enough to be accessible even with low-resolution (e.g., grism) spectroscopy at higher redshifts. As an application, we compute observed two-point correlation functions for galaxies binned by half-mass radius at z=0 from the Sloan Digital Sky Survey, showing that classic galaxy size models (i.e., galaxy size being proportional to halo spin) as well as other recent proposals show significant tensions with observational data. We show that the agreement with observed clustering can be improved with a simple empirical model in which galaxy size correlates with halo growth.
    Galaxy, Two-point correlation function, Stellar mass, Sloan Digital Sky Survey, Rank, Disk galaxy, Virial mass, Halo concentrations, Dark matter halo, Scale factor...
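One of the two environment statistics this abstract relies on, the distribution of distances to each galaxy's kth nearest neighbour, is easy to illustrate. A brute-force NumPy sketch of that measurement (an illustration of the statistic only, not the paper's estimator, which would handle survey geometry and periodic boxes):

```python
import numpy as np

def knn_distances(points, k):
    """Distance from each point to its kth nearest neighbour, by brute
    force over all pairs. The distribution of these distances over a
    galaxy sample is the kNN environment statistic discussed above."""
    # Pairwise distance matrix (n, n); sorting each row puts the
    # zero self-distance in column 0, so column k is the kth neighbour.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]

# Three points on a line: the middle point has closer neighbours.
dists = knn_distances(np.array([[0.0], [1.0], [3.0]]), k=1)
```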
  • After several decades of extensive research the mechanism driving core-collapse supernovae (CCSNe) is still unclear. A common mechanism is a neutrino driven outflow, but others have been proposed. Among those, a long-standing idea is that jets play an important role in SN explosions. Gamma-ray bursts (GRBs) that accompany rare and powerful CCSNe, sometimes called "hypernovae", provide clear evidence for jet activity. The relativistic GRB jet punches a hole in the stellar envelope and produces the observed gamma-rays far outside the progenitor star. While SNe and jets coexist in long GRBs, the relation between the mechanisms driving the hypernova and the jet is unknown. Also unclear is the relation between the rare hypernovae and the more common CCSNe. Here we present observational evidence indicating that choked jets are active in CCSNe types that are not associated with GRBs. A choked jet deposits all its energy in a cocoon. The cocoon eventually breaks out from the star releasing energetic material at very high, yet sub-relativistic, velocities. This fast moving material has a unique signature that can be detected in early time SN spectra. We find clear evidence for this signature in several CCSNe, all involving progenitors that have lost all, or most, of their hydrogen envelope prior to the explosion. These include CCSNe that don't harbor GRBs or any other relativistic outflows. Our findings suggest a continuum of central engine activities in different types of CCSNe and call for a rethinking of the explosion mechanism of regular CCSNe.
    Supernova, Core-collapse supernova, Gamma ray burst, Star, Hypernova, Relativistic jet, Neutrino, Ejecta, Circumstellar envelope, High energy neutrinos...
  • Using a sample of 215 supernovae (SNe), we analyze their positions relative to the spiral arms of their host galaxies, distinguishing grand-design (GD) spirals from non-GD (NGD) galaxies. We find that: (1) in GD galaxies, an offset exists between the positions of Ia and core-collapse (CC) SNe relative to the peaks of arms, while in NGD galaxies the positions show no such shifts; (2) in GD galaxies, the positions of CC SNe relative to the peaks of arms are correlated with the radial distance from the galaxy nucleus. Inside (outside) the corotation radius, CC SNe are found closer to the inner (outer) edge. No such correlation is observed for SNe in NGD galaxies nor for SNe Ia in either galaxy class; (3) in GD galaxies, SNe Ibc occur closer to the leading edges of the arms than do SNe II, while in NGD galaxies they are more concentrated towards the peaks of arms. In both samples of hosts, the distributions of SNe Ia relative to the arms have broader wings. These observations suggest that shocks in spiral arms of GD galaxies trigger star formation in the leading edges of arms, affecting the distributions of CC SNe (known to have short-lived progenitors). The closer locations of SNe Ibc vs. SNe II relative to the leading edges of the arms support the belief that SNe Ibc have more massive progenitors. SNe Ia, having less massive and older progenitors, have more time to drift away from the leading edge of the spiral arms.
    Supernova, Spiral arm, Host galaxy, Core-collapse supernova, Supernova Type Ia, Star formation, Spiral galaxy, Statistical significance, Density Waves, Stellar populations...
  • This is a lightly edited version of the talk given on September 30, 2020 to inaugurate the international seminar series {\it All Things EFT}. It reviews some of the early history of effective field theories, and concludes with a discussion of the implications of effective field theory for future discovery.
    Standard Model, Pion, Effective field theory, Current algebra, Quantum field theory, Chiral symmetry, Field theory, Strong interactions, S-matrix, Weak interaction...
  • Our understanding of processes occurring in the heliosphere historically began with reduced dimensionality - one-dimensional (1D) and two-dimensional (2D) sketches and models, which aimed to illustrate views on large-scale structures in the solar wind. However, any reduced-dimensionality vision of the heliosphere limits the possible interpretations of in-situ observations. Accounting for non-planar structures, e.g. current sheets, magnetic islands, flux ropes as well as plasma bubbles, is decisive to shed light on a variety of phenomena, such as particle acceleration and energy dissipation. In part I of this review, we have described in detail the ubiquitous and multi-scale observations of these magnetic structures in the solar wind and their significance for the acceleration of charged particles. Here, in part II, we elucidate existing theoretical paradigms of the structure of the solar wind and the interplanetary magnetic field, with particular attention to the fine structure and stability of current sheets. Differences in 2D and 3D views of processes associated with current sheets, magnetic islands, and flux ropes are discussed. We finally review the results of numerical simulations and in-situ observations, pointing out the complex nature of magnetic reconnection and particle acceleration in a strongly turbulent environment.
    Turbulence, Solar wind, Magnetic reconnection, Fermi acceleration, Transport equation, Heliosphere, Plasmoid, Instability, Astronomical Unit, Betatron...
  • The Inverse First Ionization Potential (FIP) Effect, the depletion in coronal abundance of elements like Fe, Mg, and Si that are ionized in the solar chromosphere relative to those that are neutral, has been identified in several solar flares. We give a more detailed discussion of the mechanism of fractionation by the ponderomotive force associated with magnetohydrodynamic waves, paying special attention to the conditions in which Inverse FIP fractionation arises in order to better understand its relation to the usual FIP Effect, i.e. the enhancement of coronal abundance of Fe, Mg, Si, etc. The FIP Effect is generated by parallel propagating Alfv\'en waves, with either photospheric, or more likely coronal, origins. The Inverse FIP Effect arises as upward propagating fast mode waves with an origin in the photosphere or below, refract back downwards in the chromosphere where the Alfv\'en speed is increasing with altitude. We give a more physically motivated picture of the FIP fractionation, based on the wave refraction around inhomogeneities in the solar atmosphere, and inspired by previous discussions of analogous phenomena in the optical trapping of particles by laser beams. We apply these insights to modeling the fractionation and find good agreement with the observations of Katsuda et al. (2020; arXiv:2001.10643) and Dennis et al. (2015; arXiv:1503.01602).
    Alfvén wave, Chromosphere, Solar flare, Ionization, Photosphere, Corona, Solar chromosphere, Ionization fraction, Turbulence, Charge exchange...
  • Recently, we developed the Correlation-Aided Reconstruction (CORAR) method to reconstruct solar wind inhomogeneous structures, or transients, using dual-view white-light images (Li et al. 2020; Li et al. 2018). This method has proved useful for studying the morphological and dynamical properties of transients like blobs and coronal mass ejections (CMEs), but the accuracy of reconstruction may be affected by the separation angle between the two spacecraft (Lyu et al. 2020). Based on the dual-view CME events from the Heliospheric Imager CME Join Catalogue (HIJoinCAT) in the HELCATS (Heliospheric Cataloguing, Analysis and Techniques Service) project, we study the quality of the CME reconstruction by the CORAR method under different STEREO stereoscopic angles. We find that when the separation angle of the spacecraft is around 150°, most CME events can be well reconstructed. If the collinear effect is considered, the optimal separation angle lies between 120° and 150°. Compared with the CME directions given in the Heliospheric Imager Geometrical Catalogue (HIGeoCAT) from HELCATS, the CME parameters obtained by the CORAR method are reasonable. However, the CORAR-obtained directions deviate towards the meridian plane in longitude, and towards the equatorial plane in latitude. An empirical formula is proposed to correct these deviations. This study provides the basis for the spacecraft configuration of our recently proposed Solar Ring mission concept (Wang et al. 2020b).
    Coronal mass ejection, Solar Terrestrial Relations Observatory, Meridian, Solar wind, Catalogs, Lithium, Event...
  • In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
    Sparsity, Architecture, Instability, Natural language, Deep learning, Distillation, Attention, Computational linguistics, Neural network, Natural language inference...
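The simplified routing at the heart of this paper is top-1 ("switch") expert selection: each token is sent to a single expert, and the router probability scales the expert output so the decision stays differentiable. A minimal NumPy sketch of that routing step only, assuming a plain softmax router and omitting the paper's capacity factor and load-balancing loss:

```python
import numpy as np

def switch_route(x, w_router):
    """Top-1 (switch) routing: pick, for each token, the expert with the
    highest router probability, and return that probability as the gate
    used to scale the chosen expert's output."""
    logits = x @ w_router                              # (tokens, experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)         # softmax per token
    expert = probs.argmax(axis=-1)                     # one expert per token
    gate = probs[np.arange(len(x)), expert]            # its probability
    return expert, gate
```

Sparsity comes from dispatching each token to only its selected expert, so parameter count grows with the number of experts while per-token compute stays constant.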
  • Matter distribution around clusters is highly anisotropic from their being the nodes of the cosmic web. Clusters' shape and the number of filaments they are connected to, i.e., their connectivity, should reflect their level of anisotropic matter distribution and must be, in principle, related to their physical properties. We investigate the influence of the dynamical state and the formation history on both the morphology and local connectivity of about 2400 groups and clusters of galaxies from the large hydrodynamical simulation IllustrisTNG at z=0. We find that the mass of groups and clusters mainly influences the geometry of the matter distribution: massive halos are significantly more elliptical, and more connected to the cosmic web than low-mass ones. Beyond the mass-driven effect, ellipticity and connectivity are correlated and are imprints of the growth rate of groups and clusters. Both anisotropy measures appear to trace different dynamical states, such that unrelaxed groups and clusters are more elliptical and more connected than relaxed ones. This relation between matter anisotropies and dynamical state is the sign of different accretion histories. Relaxed groups and clusters are mostly formed long time ago, and slowly accreting matter at the present time. They are rather spherical and weakly connected to their environment, mostly because they had enough time to relax and, hence, lost the connection with their preferential directions of accretion and merging. In contrast, late-formed unrelaxed objects are highly anisotropic with large connectivities and ellipticities. These groups and clusters are in formation phase and must be strongly affected by the infalling of materials from filaments.
    Ellipticity, Anisotropy, Cosmic web, Mass accretion rate, Accretion, Dark matter subhalo, Accretion history, IllustrisTNG simulation, Virial mass, Relaxation...
  • Two-dimensional conformal field theories with Virasoro symmetry generically contain a Schwarzian sector. This sector is related to the near-horizon region of the near-extremal BTZ black hole in the holographic dual. In this work we generalize this picture to CFTs with higher spin conserved currents. It is shown that the partition function in the near-extremal limit agrees with that of BF higher spin gravity in AdS$_2$ which is described by a generalized Schwarzian theory. We also provide a spectral decomposition of Schwarzian partition functions via the $\mathcal{W}_N$ fusion kernel and consider supersymmetric generalizations.
    Higher spin, Partition function, Conformal field theory, Anti de Sitter space, BTZ black hole, Black hole, Supersymmetric, Sachdev-Ye-Kitaev model, Zero mode, Horizon...
  • We calculate the four-point function of $1/2$-BPS determinant operators in $\mathcal{N}=4$ SYM at next-to-leading order at weak coupling. We use two complementary methods recently developed for a class of determinant three-point functions: one is based on Feynman diagrams and extracts perturbative data at finite $N$, while the other expresses a generic correlator of determinants as a zero-dimensional integral over an auxiliary matrix field. We generalise the latter approach to calculate one-loop corrections and we solve the four-point function in a semi-classical approach at large $N$. The results allow us to comment on the order of the phase transition that the four-point function is expected to exhibit in an exact integrability-based description.
    Super Yang-Mills theory, Effective theory, Saddle point, Phase transitions, Partition function, Graviton, Thermodynamic Bethe Ansatz, Regularization, Propagator, D-brane...
  • In 1968, Atkinson proved the existence of functions that satisfy all S-matrix axioms in four spacetime dimensions. His proof is constructive and to our knowledge it is the only result of this type. Remarkably, the methods to construct such functions used in the proof were never implemented in practice. In the present paper, we test the applicability of those methods in the simpler setting of two-dimensional S-matrices. We solve the problem of reconstructing the scattering amplitude starting from a given particle production probability. We do this by implementing two numerical iterative schemes (fixed-point iteration and Newton's method), which, by iterating unitarity and dispersion relations, converge to solutions to the S-matrix axioms. We characterize the region in the amplitude-space in which our algorithms converge, and discover a fractal structure connected to the so-called CDD ambiguities which we call "CDD fractal". To our surprise, the question of convergence naturally connects to the recent study of the coupling maximization in the two-dimensional S-matrix bootstrap. The methods exposed here pave the way for applications to higher dimensions, and expose some of the potential challenges that will have to be overcome.
    Unitarity, Fixed point iteration, S-matrix, Bound state, Scattering amplitude, Fractal, CDD pole, Bernstein polynomials, Complex plane, Programming...
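One of the two iterative schemes named above, fixed-point iteration, is the standard contraction-mapping idea: iterate a map until its output stops changing. A generic scalar illustration (solving x = cos x), not the amplitude-space iteration of unitarity and dispersion relations itself:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x <- g(x) until successive iterates agree within tol;
    converges when g is a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# The classic example: the unique solution of x = cos(x).
root = fixed_point(math.cos, 1.0)
```

The paper's question of *which* starting amplitudes converge under such iteration is exactly what produces the fractal structure in amplitude space.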
  • This thesis consists of two parts. The first part is about how quantum theory can be recovered from first principles, while the second part is about the application of diagrammatic reasoning, specifically the ZX-calculus, to practical problems in quantum computing. The main results of the first part include a reconstruction of quantum theory from principles related to properties of sequential measurement and a reconstruction based on properties of pure maps and the mathematics of effectus theory. It also includes a detailed study of JBW-algebras, a type of infinite-dimensional Jordan algebra motivated by von Neumann algebras. In the second part we find a new model for measurement-based quantum computing, study how measurement patterns in the one-way model can be simplified and find a new algorithm for extracting a unitary circuit from such patterns. We use these results to develop a circuit optimisation strategy that leads to a new normal form for Clifford circuits and reductions in the T-count of Clifford+T circuits.
    Qubit, Graph, Quantum theory, Monoid, Isomorphism, Graph state, Vector space, Quantum computation, Quantum circuit, Rank...
  • Pricing financial derivatives, in particular European-style options at different time-maturities and strikes, is a relevant financial problem. The dynamics describing the price of vanilla options, when constant volatilities and interest rates are assumed, is governed by the Black-Scholes model, a linear parabolic partial differential equation with terminal value given by the pay-off of the option contract and no additional boundary conditions. Here, we present a digital quantum algorithm to solve the Black-Scholes equation on a quantum computer for a wide range of relevant financial parameters by mapping it to the Schr\"odinger equation. The non-Hermitian nature of the resulting Hamiltonian is handled by embedding the dynamics into an enlarged Hilbert space, which makes use of only one additional ancillary qubit. Moreover, we employ a second ancillary qubit to transform the initial condition into periodic boundary conditions, which substantially improves the stability and performance of the protocol. This algorithm shows a feasible approach for pricing financial derivatives on a digital quantum computer based on Hamiltonian simulation, a technique that differs from those based on Monte Carlo simulations of the stochastic counterpart of the Black-Scholes equation. Our algorithm remarkably provides an exponential speedup, since the terms in the Hamiltonian can be truncated to a polynomial number of interactions while keeping the error bounded. We report expected accuracy levels comparable to classical numerical algorithms using 10 qubits and 94 entangling gates on a fault-tolerant quantum computer, and an expected success probability of the post-selection procedure due to the embedding protocol above 60\%.
    Qubit, Hamiltonian, Black-Scholes, Embedding, Volatility, Quantum algorithms, Discretization, Periodic boundary conditions, Partial differential equation, Portfolio...
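For context, the classical closed-form Black-Scholes price for a European call under the constant-volatility, constant-rate assumptions stated in the abstract; this is the standard textbook formula any PDE or Hamiltonian-simulation solver would be benchmarked against, not the paper's algorithm:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call: spot s, strike k,
    maturity t (years), risk-free rate r, volatility sigma."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```

A numerical solver of the Black-Scholes PDE, quantum or classical, should reproduce these values for vanilla payoffs.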
  • Compilation of unitaries into a sequence of physical quantum gates is a critical prerequisite for execution of quantum algorithms. This work introduces STOQ, a stochastic search protocol for approximate unitary compilation into a sequence of gates from an arbitrary gate alphabet. We demonstrate STOQ by comparing its performance to existing product-formula compilation techniques for time-evolution unitaries on system sizes up to eight qubits. The compilations generated by STOQ are less accurate than those from product-formula techniques, but they are similar in runtime and traverse significantly different paths in state space. We also use STOQ to generate compilations of randomly-generated unitaries, and we observe its ability to generate approximately-equivalent compilations of unitaries corresponding to shallow random circuits. Finally, we discuss the applicability of STOQ to tasks such as characterization of near-term quantum devices.
    Qubit, Hamiltonian, Quantum devices, Sparsity, Quantum gates, Programming, Quantum algorithms, Unitary operator, Quantum simulators, Monte Carlo Markov chain...
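STOQ is a specific published protocol; the sketch below is only a toy stochastic search in the same spirit (propose random single-gate mutations of a gate sequence and keep those that do not increase the distance to the target unitary). The function names, acceptance rule, and distance measure are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n, rng):
    """Random unitary via QR of a complex Gaussian matrix, with the
    diagonal phases of R absorbed to fix the column phases."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def stochastic_compile(target, alphabet, depth, iters, rng):
    """Toy stochastic compilation: search over fixed-depth sequences of
    gates from `alphabet`, mutating one gate at a time and accepting
    mutations that do not worsen the Frobenius distance to `target`."""
    seq = list(rng.integers(len(alphabet), size=depth))

    def dist(s):
        u = np.eye(target.shape[0], dtype=complex)
        for i in s:
            u = alphabet[i] @ u
        return np.linalg.norm(u - target)

    best = dist(seq)
    for _ in range(iters):
        cand = seq.copy()
        cand[rng.integers(depth)] = rng.integers(len(alphabet))
        d = dist(cand)
        if d <= best:
            seq, best = cand, d
    return seq, best

# Tiny demo: compile the identity from the alphabet {I, X} on one qubit.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
seq, err = stochastic_compile(I2, [I2, X], depth=2, iters=300, rng=rng)
```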
  • In this paper we reformulate the problem of pricing options in a quantum setting. Our proposed algorithm involves preparing an initial state, representing the option price, and then evolving it using existing imaginary time simulation algorithms. This way of pricing options boils down to mapping an initial option price to a quantum state and then simulating the time dependence in Wick's imaginary time space. We numerically verify our algorithm for European options using a particular imaginary time evolution algorithm as proof of concept and show how it can be extended to path dependent options like Asian options. As the proposed method uses a hybrid variational algorithm, it is bound to be relevant for near-term quantum computers.
    Boiling, Algorithms, Simulations
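The Wick rotation this abstract relies on replaces real-time evolution $e^{-iHt}$ by normalized imaginary-time evolution $e^{-H\tau}$, which damps excited components and drives the state toward the ground state. A minimal exact-diagonalization sketch of that rotation for a small Hermitian H (an illustration of imaginary-time evolution itself, not the paper's hybrid variational algorithm):

```python
import numpy as np

def imaginary_time_step(psi, h, dtau):
    """One normalized imaginary-time step: psi <- exp(-h*dtau) psi,
    renormalized. Done exactly via diagonalization of a small
    Hermitian matrix h."""
    w, v = np.linalg.eigh(h)
    psi = v @ (np.exp(-w * dtau) * (v.conj().T @ psi))
    return psi / np.linalg.norm(psi)
```

Repeated steps exponentially suppress high-energy components, which is why imaginary-time algorithms converge to the lowest eigenstate compatible with the initial condition.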
  • The purpose of this paper is to evaluate the possibility of constructing a large-scale storage-ring-type ion-trap system capable of storing, cooling, and controlling a large number of ions as a platform for scalable quantum computing (QC) and quantum simulations (QS). In such a trap, the ions form a crystalline beam moving along a circular path with a constant velocity determined by the frequency and intensity of the cooling lasers. In this paper we consider a large leap forward in terms of the number of qubits, from fewer than 100 available in state-of-the-art linear ion-trap devices today to an order of $10^5$ crystallized ions in the storage-ring setup. This new trap design unifies two different concepts: the storage rings of charged particles and the linear ion traps used for QC and mass spectrometry. In this paper we use the language of particle accelerators to discuss the ion state and dynamics. We outline the differences between the above concepts, analyze the challenges of a large ring with a revolving beam of ions, and propose goals for the research and development required to enable future quantum computers with 1000 times more qubits than available today. The challenge of creating such a large-scale quantum system while maintaining the necessary coherence of the qubits and the high fidelity of quantum logic operations is significant. Performing analog quantum simulations may be an achievable initial goal for such a device. Quantum simulations of complex quantum systems will move forward both fundamental science and applied research. Nuclear and particle physics, many-body quantum systems, lattice gauge theories, and nuclear structure calculations are just a few examples in which a large-scale quantum simulation system would be a very powerful tool to move forward our understanding of nature.
    Storage ring · Research and Development · Qubit · Ion beam · Cooling · Quantum logic · Intensity · Charged particle · Quantum computation · Lattice gauge theory · ...
  • Recent work has shown that entanglement and the structure of spacetime are intimately related. One way to investigate this is to begin with an entanglement entropy in a conformal field theory (CFT) and use the AdS/CFT correspondence to calculate the bulk metric. We perform this calculation for ABJM, a particular 3-dimensional supersymmetric CFT (SCFT), in its ground state. In particular we are able to reconstruct the pure AdS4 metric from the holographic entanglement entropy of the boundary ABJM theory in its ground state. Moreover, we are able to predict the correct AdS radius purely from entanglement. We also address the general philosophy of relating entanglement and spacetime through the Holographic Principle, as well as some of the philosophy behind our calculations.
    Entanglement entropy · Ryu-Takayanagi · Conformal field theory · Entanglement · Field theory · ABJM theory · Anti de Sitter space · AdS/CFT correspondence · Supersymmetric CFT · Holographic principle · ...
  • Wind power has been widely promoted as a clean energy source. However, during the operation of wind turbines in mountainous areas, many instability phenomena appear in the turbine foundations. To study the damage characteristics of a mountain wind turbine foundation, the influence of topography on the turbine is considered. According to the complex mechanical structure of the wind turbine, a circular expanded foundation model is established at a 1:10 scale to a wind turbine of 2 MW installed capacity. The foundation model is simulated in ABAQUS, and the impact of soil pressure on the foundation under random wind load is analyzed to identify the fragile parts of the foundation. The results show that: (1) under the action of random wind load, the strains of the anchor rods and the ground ring of the bottom plate are small, and these parts will not be destroyed; (2) under the influence of wind load, the soil within 1.5 times the radius of the wind turbine foundation is easily disturbed, but the core soil below the foundation is unaffected. It is suggested that the soil within 1.5 times the radius of the wind turbine foundation be strengthened to ensure its safety and stability.
    Wind turbine · Instability · Pressure · Energy · Action · ...
  • The [OIII] 88 $\mu$m fine structure emission line has been detected into the Epoch of Reionization (EoR) from star-forming galaxies at redshifts $6 < z \lesssim 9$ with ALMA. These measurements provide valuable information regarding the properties of the interstellar medium (ISM) in the highest redshift galaxies discovered thus far. The [OIII] 88 $\mu$m line observations leave, however, a degeneracy between the gas density and metallicity in these systems. Here we quantify the prospects for breaking this degeneracy using future ALMA observations of the [OIII] 52 $\mu$m line. Among the current set of ten [OIII] 88 $\mu$m emitters at $6 < z \lesssim 9$, we forecast 52 $\mu$m detections (at 6-$\sigma$) in SXDF-NB1006-2, B14-6566, J0217-0208, and J1211-0118 within on-source observing times of 2-10 hours, provided their gas densities are larger than about $n_{\mathrm{H}} \gtrsim 10^2-10^3$ cm$^{-3}$. Other targets generally require much longer integration times for a 6-$\sigma$ detection. Either successful detections of the 52 $\mu$m line, or reliable upper limits, will lead to significantly tighter constraints on ISM parameters. The forecasted improvements are as large as $\sim 3$ dex in gas density and $\sim 1$ dex in metallicity for some regions of parameter space. We suggest SXDF-NB1006-2 as a promising first target for 52 $\mu$m line measurements.
    Galaxy · Metallicity · Atacama Large Millimeter Array · Interstellar medium · Luminosity · Epoch of reionization · Star formation rate · Fine structure · Signal to noise ratio · Milky Way · ...
  • The fourth orbit of Parker Solar Probe (PSP) reached heliocentric distances down to 27.9 Rs, allowing solar wind turbulence and acceleration mechanisms to be studied in situ closer to the Sun than previously possible. The turbulence properties were found to be significantly different in the inbound and outbound portions of PSP's fourth solar encounter, likely due to the proximity to the heliospheric current sheet (HCS) in the outbound period. Near the HCS, in the streamer belt wind, the turbulence was found to have lower amplitudes, higher magnetic compressibility, a steeper magnetic field spectrum (with spectral index close to -5/3 rather than -3/2), a lower Alfv\'enicity, and a "1/f" break at much lower frequencies. These are also features of slow wind at 1 au, suggesting the near-Sun streamer belt wind to be the prototypical slow solar wind. The transition in properties occurs at a predicted angular distance of ~4{\deg} from the HCS, suggesting ~8{\deg} as the full-width of the streamer belt wind at these distances. While the majority of the Alfv\'enic turbulence energy fluxes measured by PSP are consistent with those required for reflection-driven turbulence models of solar wind acceleration, the fluxes in the streamer belt are significantly lower than the model predictions, suggesting that additional mechanisms are necessary to explain the acceleration of the streamer belt solar wind.
    Turbulence · Solar wind · Sun · Compressibility · Heliospheric current sheet · Sheared · Turbulence modeling · Time Series · Angular distance · Plasma Beta · ...
  • Much of the information about the magnetic field in the Milky Way and other galaxies comes from measurements which are path integrals, such as Faraday rotation and the polarization of synchrotron radiation of cosmic ray electrons. The measurement made at the radio telescope results from the contributions of volume elements along a long line of sight. The inferred magnetic field is therefore some sort of average over a long line segment. A magnetic field measurement at a given spatial location is of much more physical significance. In this paper, we point out that HII regions fortuitously offer such a ``point'' measurement, albeit of one component of the magnetic field, and averaged over the sightline through the HII region. However, the line of sight (LOS) through an HII region is much smaller (e.g. 30 - 50 pc) than one through the entire Galactic disk, and thus constitutes a ``pseudo-local'' measurement. We use published HII region Faraday rotation measurements to provide a new constraint on the magnitude of magnetohydrodynamic (MHD) turbulence in the Galaxy, as well as to raise intriguing speculations about the modification of the Galactic field during the star formation process.
    Line of sight · Turbulence · Interstellar medium · Star formation · Galactic magnetic field · Mean field · Faraday rotation · Milky Way · Galactic plane · NGC 2244 · ...
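The contrast between a path-integrated and a "pseudo-local" measurement can be made concrete with the standard rotation-measure integral, RM = 0.812 ∫ n_e B_∥ dl (RM in rad m^-2 for n_e in cm^-3, B_∥ in μG, dl in pc). The densities, field strengths, and path lengths below are illustrative assumptions, not values from the paper:

```python
def rotation_measure(n_e, B_par, L):
    """RM in rad/m^2 for a uniform slab: n_e [cm^-3], B_parallel [uG], L [pc]."""
    return 0.812 * n_e * B_par * L

# Full Galactic-disk sightline: low electron density but a very long path.
rm_disk = rotation_measure(n_e=0.03, B_par=2.0, L=2000.0)
# Sightline through a single HII region: dense but short (~30-50 pc).
rm_hii = rotation_measure(n_e=10.0, B_par=5.0, L=40.0)
print(round(rm_disk, 2), round(rm_hii, 2))
```

With numbers like these, the HII region's own contribution dominates the sightline through it, so its RM effectively measures B_∥ at that location rather than an average over the whole disk.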
  • We make use of the Parker Solar Probe (PSP) data to explore the nature of solar wind turbulence, focusing on the Alfv\'enic character and power spectra of the fluctuations and their dependence on distance and context (i.e. large-scale solar wind properties), aiming to understand the role that different effects, such as source properties, solar wind expansion, and stream interaction, might play in determining the turbulent state. We carry out a statistical survey of the data from the first five orbits of PSP, with a focus on how the fluctuation properties at the large (MHD) scales vary with different solar wind streams and distance from the Sun. A more in-depth analysis of several selected periods is also presented. Our results show that as fluctuations are transported outward by the solar wind, the magnetic field spectrum steepens while the shape of the velocity spectrum remains unchanged. The steepening process is controlled by the "age" of the turbulence, determined by the wind speed together with the radial distance. Statistically, faster solar wind has higher "Alfv\'enicity", with a more dominant outward-propagating wave component and more balanced magnetic/kinetic energies. The outward wave dominance gradually weakens with radial distance, while the excess of magnetic energy is found to be stronger as we move closer toward the Sun. We show that the turbulence properties can vary significantly from stream to stream even if these streams are of similar speed, indicating very different origins of these streams. In particular, the slow wind that originates near the polar coronal holes has much lower Alfv\'enicity compared with the slow wind that originates from active regions/pseudostreamers. We show that structures such as heliospheric current sheets and velocity shears can play an important role in modifying the properties of the turbulence.
    Turbulence · Solar wind · Sun · Sheared · Magnetic energy · Astronomical Unit · Alfvén wave · Perihelion · Heliospheric current sheet · Coronal hole · ...
  • Indra is a suite of large-volume cosmological $N$-body simulations with the goal of providing excellent statistics of the large-scale features of the distribution of dark matter. Each of the 384 simulations is computed with the same cosmological parameters and different initial phases, with 1024$^3$ dark matter particles in a box of length 1 Gpc/h, 64 snapshots of particle data and halo catalogs, and 505 time steps of the Fourier modes of the density field, amounting to almost a petabyte of data. All of the Indra data are immediately available for analysis via the SciServer science platform, which provides interactive and batch computing modes, personal data storage, and other hosted data sets such as the Millennium simulations and many astronomical surveys. We present the Indra simulations, describe the data products and how to access them, and measure ensemble averages, variances, and covariances of the matter power spectrum, the matter correlation function, and the halo mass function to demonstrate the types of computations that Indra enables. We hope that Indra will be both a resource for large-scale structure research and a demonstration of how to make very large datasets public and computationally-accessible.
    Covariance · Two-point correlation function · Statistics · Mass function · Friends of friends algorithm · Dark matter subhalo · Dark matter particle · Random Field · Cosmological parameters · Scale factor · ...
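The ensemble statistics Indra is built for — means, variances, and covariances of the matter power spectrum across many independent realizations — can be sketched with a mock ensemble (synthetic random spectra standing in for the 384 simulations; the spectrum shape and 5% scatter are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_k = 384, 20                    # realizations, k-bins
k = np.logspace(-2, 0, n_k)              # h/Mpc, illustrative binning
p_true = 1e4 * k / (1 + (k / 0.1)**2)    # toy power-spectrum shape
# Mock measurements: fractional sample-variance scatter about p_true.
spectra = p_true * (1 + 0.05 * rng.standard_normal((n_real, n_k)))

p_mean = spectra.mean(axis=0)            # ensemble-average spectrum
p_var = spectra.var(axis=0, ddof=1)      # per-bin variance
cov = np.cov(spectra, rowvar=False)      # (n_k, n_k) bin-bin covariance
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(cov.shape, bool(np.allclose(cov, cov.T)))
```

With real Indra data the same `mean`/`cov` calls would run over the measured P(k) of each simulation, giving the bin-to-bin covariance needed for likelihood analyses.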
  • The Solo (Solitary Local) Dwarf Galaxy survey is a volume-limited, wide-field g- and i-band survey of all known nearby (<3 Mpc) and isolated (>300 kpc from the Milky Way or M31) dwarf galaxies. This set of 44 dwarfs is homogeneously analysed for quantitative comparisons to the satellite dwarf populations of the Milky Way and M31. In this paper, an analysis of the 12 closest Solo dwarf galaxies accessible from the northern hemisphere is presented, including derivation of their distances, spatial distributions, morphology, and extended structures, including their inner integrated-light properties and their outer resolved-star distributions. All 12 galaxies are found to be reasonably well described by two-dimensional Sersic functions, although UGC 4879 in particular shows tentative evidence of two distinct components. No prominent extended stellar substructures that could be signs of either faint satellites or recent mergers are identified in the outer regions of any of the systems examined.
    Galaxy · Milky Way · Star · Wolf-Lundmark-Melotte · Diffuse Intergalactic Gas · Stellar populations · Red giant · Ellipticity · Local group · Dwarf galaxy · ...
  • Dark matter-only simulations are able to produce the cosmic structure of a $\Lambda$CDM universe at a much lower computational cost than more physically motivated hydrodynamical simulations. However, it is not clear how well smaller substructure is reproduced by dark matter-only simulations. To investigate this, we directly compare the substructure of galaxy clusters and of surrounding galaxy groups in hydrodynamical and dark matter-only simulations. We utilise The Three Hundred project, a suite of 324 galaxy cluster simulations run both with hydrodynamics and as dark matter-only. We find that dark matter-only simulations underestimate the number density of galaxies in the centres of groups and clusters relative to hydrodynamical simulations, and that this effect is stronger in denser regions. We also look at the phase space of infalling galaxy groups, to show that dark matter-only simulations underpredict the number density of galaxies in the centres of these groups by about a factor of four. This implies that the structure and evolution of infalling groups may be different to that predicted by dark matter-only simulations. Finally, we discuss potential causes for this underestimation, considering both physical effects and numerical differences in the analysis.
    Cluster of galaxies · Galaxy · N-body simulation · Hydrodynamical simulations · Dark matter · Group of galaxies · Dark matter subhalo · Phase space · Outskirt of a galaxy cluster · Halo finding algorithms · ...
  • We show that the standard notion of entanglement is not defined for gravitationally anomalous two-dimensional theories because they do not admit a local tensor factorization of the Hilbert space into local Hilbert spaces. Qualitatively, the modular flow cannot act consistently and unitarily in a finite region, if there are different numbers of states with a given energy traveling in the two opposite directions. We make this precise by decomposing it into two observations: First, a two-dimensional CFT admits a consistent quantization on a space with boundary only if it is not anomalous. Second, a local tensor factorization always leads to a definition of consistent, unitary, energy-preserving boundary condition. As a corollary we establish a generalization of the Nielsen-Ninomiya theorem to all two-dimensional unitary local QFTs: No continuum quantum field theory in two dimensions can admit a lattice regulator unless its gravitational anomaly vanishes. We also show that the conclusion can be generalized to six dimensions by dimensional reduction on a four-manifold of nonvanishing signature. We advocate that these points be used to reinterpret the gravitational anomaly quantum-information-theoretically, as a fundamental obstruction to the localization of quantum information.
    Lattice (order) · Gravitational anomaly · Hamiltonian · Entropy · Tensor product · Quantum information · Entanglement · Degree of freedom · Central charge · Modular Hamiltonian · ...
  • By employing the ${\rm AdS}_3/{\rm CFT}_2$ correspondence in this note we observe an analogy between the structures found in connection with the Arkani-Hamed-Bai-He-Yan (ABHY) associahedron used for understanding scattering amplitudes, and the one used for understanding space-time emerging from patterns of entanglement. The analogy suggests the natural interpretation for the associahedron as a holographic entanglement polytope associated to the ${\rm CFT}_2$ vacuum. Our observations hint at the possibility that the factorization properties of scattering amplitudes are connected to the notion of separability of space-time as used in the theory of holographic quantum entanglement.
    Entanglement · Geodesic · Kinematics · Entanglement entropy · Diamond · Polytope · Conformal field theory · Entropy · Fundamental domain · Scattering amplitude · ...
  • Unitary decomposition is a widely used method to map quantum algorithms to an arbitrary set of quantum gates. Efficient implementation of this decomposition allows for translation of bigger unitary gates into elementary quantum operations, which is key to executing these algorithms on existing quantum computers. The decomposition can be used as an aggressive optimization method for the whole circuit, as well as to test part of an algorithm on a quantum accelerator. For selection and implementation of the decomposition algorithm, perfect qubits are assumed. We base our decomposition technique on Quantum Shannon Decomposition which generates O((3/4)*4^n) controlled-not gates for an n-qubit input gate. The resulting circuits are up to 10 times shorter than other methods in the field. When comparing our implementation to Qubiter, we show that our implementation generates circuits with half the number of CNOT gates and a third of the total circuit length. In addition to that, it is also up to 10 times as fast. Further optimizations are proposed to take advantage of potential underlying structure in the input or intermediate matrices, as well as to minimize the execution time of the decomposition.
    Qubit · Optimization · Programming · Quantum gates · Quantum circuit · Quantum algorithms · Python · Programming Language · Quantum computation · Superposition · ...
  • We discovered a so-called high-temperature blackbody (HBB) component, found in the 15 -- 40 keV range, in the broad-band X-ray energy spectra of black hole (BH) candidate sources. A detailed study of this spectral feature is presented using data from five Galactic BH binaries, Cyg X-1, GX 339-4, GRS 1915+105, SS 433 and V4641~Sgr, in the low/hard, intermediate, high/soft and very soft spectral states (LHS, IS, HSS and VSS, respectively) and the spectral transitions between them, using {\it RXTE}, INTEGRAL and BeppoSAX data. In order to fit the broad-band energy spectra of these sources we used an additive XSPEC model composed of a Comptonization component and a Gaussian line component. In particular, we reveal that the IS spectra show an HBB component whose color temperature kT_HBB is in the range 4.5 -- 5.9 keV. This HBB feature has been detected in some spectra of these five sources only in the IS (for photon index Gamma > 1.9) using different X-ray telescopes. We also demonstrate that the timescale of the HBB feature is orders of magnitude shorter than that of the iron line and its edge. This leads us to conclude that these spectral features are formed in geometrically different parts of the source that are not connected to each other. Laurent & Titarchuk (2018) demonstrated the presence of a gravitationally redshifted annihilation line in a BH using Monte-Carlo simulations; the observed HBB hump therefore leads us to suggest that this feature is a gravitationally redshifted annihilation line observed in these black holes.
    Black hole · GX 339-4 · Rossi X-ray Timing Explorer · BeppoSAX · Compact star · SS 433 · Line emission · X-ray binary · INTEGRAL satellite · Monte Carlo method · ...
  • One of the problems revealed recently in cosmology is the so-called Hubble tension (HT): the difference between the values of the present Hubble constant measured by observations of the universe at redshift $z \lesssim 1$ and by observations of the distant universe through CMB fluctuations originating at $z \sim 1100$. In this paper we suggest that this discrepancy may be explained by a deviation of the cosmological expansion from the standard Lambda-CDM model of a flat universe during the period after recombination at $z \lesssim 1100$, due to the action of an additional variable component of dark energy of different origin. We suppose that dark matter (DM) has a common origin with a variable component of dark energy (DEV). DE may presently have two components: one is the Einstein constant $\Lambda$, and another, smaller component, DEV ($\Lambda_V$), comes from the remnants of the scalar fields responsible for inflation. Due to their common origin and interconnections, the densities of DEV and DM are supposed to be connected and to remain almost constant during, at least, the time after recombination, when we may approximate $\rho_{DM}=\alpha \rho_{DEV}$. This part of the dark energy is not connected with the cosmological constant $\Lambda$, but is defined by the existence of scalar fields with a variable density. Taking into account the influence of DEV on the universe expansion, we find the value of $\alpha$ that could remove the HT problem. In order to maintain an almost constant DEV/DM energy density ratio during the time interval at $z<1100$, we suggest the existence of a wide-mass DM particle distribution.
    Dark matter · Scalar field · Dark energy · Recombination · Hubble constant tension · Dark matter particle · Hubble constant · Cosmological constant · Inflation · Expansion of the Universe · ...
  • The Last Journey is a large-volume, gravity-only, cosmological N-body simulation evolving more than 1.24 trillion particles in a periodic box with a side-length of 5.025Gpc. It was implemented using the HACC simulation and analysis framework on the BG/Q system, Mira. The cosmological parameters are chosen to be consistent with the results from the Planck satellite. A range of analysis tools have been run in situ to enable a diverse set of science projects, and at the same time, to keep the resulting data amount manageable. Analysis outputs have been generated starting at redshift z~10 to allow for construction of synthetic galaxy catalogs using a semi-analytic modeling approach in post-processing. As part of our in situ analysis pipeline we employ a new method for tracking halo sub-structures, introducing the concept of subhalo cores. The production of multi-wavelength synthetic sky maps is facilitated by generating particle lightcones in situ, also beginning at z~10. We provide an overview of the simulation set-up and the generated data products; a first set of analysis results is presented. A subset of the data is publicly available.
    Merger tree · Light cones · Dark matter subhalo · Mira · Spherical Overdensity · Virial mass · Mass function · Galaxy · Two-point correlation function · Cosmology · ...
  • We study self-interacting dark matter (SIDM) scenarios in which the $s$-wave self-scattering cross section almost saturates the unitarity bound. Such self-scattering cross sections are singly parameterized by the dark matter mass and are characterized by strong velocity dependence over a wide range of velocities. They may be indicated by observations of dark matter halos over a wide range of masses, from the Milky Way's dwarf spheroidal galaxies to galaxy clusters. We pin down the model parameters that saturate the unitarity bound in well-motivated SIDM models: the gauged $L_{\mu} - L_{\tau}$ model and a composite asymmetric dark matter model. We discuss the implications and predictions of such model parameters for cosmology, such as the $H_{0}$ tension and dark-matter direct-detection experiments, and for particle phenomenology, such as beam-dump experiments.
    Standard Model · Self-interacting dark matter · Hidden photon · Scattering cross section · Unitarity · Dark matter particle mass · Milky Way · Neutrino · Asymmetric dark matter · Higgs boson · ...
  • We present an updated version of the HMcode augmented halo model that can be used to make accurate predictions of the non-linear matter power spectrum over a wide range of cosmologies. Major improvements include modelling of BAO damping in the power spectrum and an updated treatment of massive neutrinos. We fit our model to simulated power spectra and show that we can match the results with an RMS error of 2.5 per cent across a range of cosmologies, scales $k < 10\,h\mathrm{Mpc}^{-1}$, and redshifts $z<2$. The error rarely exceeds 5 per cent and never exceeds 16 per cent. The worst-case errors occur at $z\simeq2$, or for cosmologies with unusual dark-energy equations of state. This represents a significant improvement over previous versions of HMcode, and over other popular fitting functions, particularly for massive-neutrino cosmologies with high neutrino mass. We also present a simple halo model that can be used to model the impact of baryonic feedback on the power spectrum. This six-parameter physical model includes gas expulsion by AGN feedback and encapsulates star formation. By comparing this model to data from hydrodynamical simulations we demonstrate that the power spectrum response to feedback is matched at the $<1$ per cent level for $z<1$ and $k<20\,h\mathrm{Mpc}^{-1}$. We also present a single-parameter variant of this model, parametrized in terms of feedback strength, which is only slightly less accurate. We make code available for our non-linear and baryon models at https://github.com/alexander-mead/HMcode and it is also available within CAMB and soon within CLASS.
    Cosmology · Halo model · Massive neutrino · Neutrino · Neutrino mass · Dark energy · Mass function · Matter power spectrum · Baryonic feedback · Virial mass · ...
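For context, the halo model that HMcode augments starts from the standard two-term decomposition of the matter power spectrum (standard notation, not HMcode's fitted form):

```latex
P(k) = P_{1\mathrm{h}}(k) + P_{2\mathrm{h}}(k), \qquad
P_{1\mathrm{h}}(k) = \int \mathrm{d}M \, n(M) \left(\frac{M}{\bar{\rho}}\right)^{2} u^{2}(k,M), \qquad
P_{2\mathrm{h}}(k) \simeq P_{\mathrm{lin}}(k) \left[\int \mathrm{d}M \, n(M) \, b(M) \, \frac{M}{\bar{\rho}} \, u(k,M)\right]^{2},
```

where $n(M)$ is the halo mass function, $b(M)$ the linear halo bias, $u(k,M)$ the normalized Fourier transform of the halo density profile, and $\bar{\rho}$ the mean matter density; HMcode's fitted parameters modify these ingredients to match simulations.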
  • Planned efforts to probe the largest observable distance scales in future cosmological surveys are motivated by a desire to detect relic correlations left over from inflation, and the possibility of constraining novel gravitational phenomena beyond General Relativity (GR). On such large scales, the usual Newtonian approaches to modelling summary statistics like the power spectrum and bispectrum are insufficient, and we must consider a fully relativistic and gauge-independent treatment of observables such as galaxy number counts in order to avoid subtle biases, e.g. in the determination of the $f_{\rm NL}$ parameter. In this work, we present an initial application of an analysis pipeline capable of accurately modelling and recovering relativistic spectra and correlation functions. As a proof of concept, we focus on the non-zero dipole of the redshift-space power spectrum that arises in the cross-correlation of different mass bins of dark matter halos, using strictly gauge-independent observable quantities evaluated on the past light cone of a fully relativistic N-body simulation in a redshift bin $1.7 \le z \le 2.9$. We pay particular attention to the correct estimation of power spectrum multipoles, comparing different methods of accounting for complications such as the survey geometry (window function) and evolution/bias effects on the past light cone, and discuss how our results compare with previous attempts at extracting novel GR signatures from relativistic simulations.
    Redshift bins · Light cones · Line of sight · Magnetic monopole · Statistical estimator · Real space · Selection function · Dark matter halo · General relativity · Past light cones · ...
  • The halo mass function (HMF) is a critical element in cosmological analyses of galaxy cluster catalogs. We quantify the impact of uncertainties in HMF parameters on cosmological constraints from cluster catalogs similar to those from Planck, those expected from the Euclid, Roman and Rubin surveys, and from a hypothetical larger future survey. We analyse simulated catalogs in each case, gradually loosening priors on HMF parameters to evaluate the degradation in cosmological constraints. While current uncertainties on HMF parameters do not substantially impact Planck-like surveys, we find that they can significantly degrade the cosmological constraints for a Euclid-like survey. Consequently, the current precision on the HMF will not be sufficient for Euclid (or Roman or Rubin) and possible larger surveys. Future experiments will have to properly account for uncertainties in HMF parameters, and it will be necessary to improve precision of HMF fits to avoid weakening constraints on cosmological parameters.
    Halo mass function · Cosmological parameters · Cosmology · Cosmological constraints · Mass function · Cluster number counts · Neutrino mass · Cluster of galaxies · Fisher information matrix · Euclid mission · ...
  • We study the relation of causal influence between input systems of a reversible evolution and its output systems, in the context of operational probabilistic theories. We analyse two different definitions that are borrowed from the literature on quantum theory -- where they are equivalent. One is the notion based on signalling, and the other one is the notion used to define the neighbourhood of a cell in a quantum cellular automaton. The latter definition, that we adopt in the general scenario, turns out to be strictly weaker than the former: it is possible for a system to have causal influence on another one without signalling to it. We stress that, according to our definition, it is impossible anyway to have causal influence in the absence of an interaction, e.g.~in a Bell-like scenario. We study various conditions for causal influence, and introduce the feature that we call {\em no interaction without disturbance}, under which we prove that signalling and causal influence coincide.
    Quantum theory · Dilation · Speed of light · Quantum information · Coarse graining · Quantum network · Quantum cellular automata · Entanglement · Bayesian network · Positive linear functional · ...
  • Sparse regression algorithms have been proposed as the appropriate framework to model the governing equations of a system from data, without needing prior knowledge of the underlying physics. In this work, we use sparse regression to build an accurate and explainable model of the stellar mass of central galaxies given properties of their host dark matter (DM) halo. Our data set comprises 9,521 central galaxies from the EAGLE hydrodynamic simulation. By matching the host halos to a DM-only simulation, we collect the halo mass and specific angular momentum at present time and for their main progenitors in 10 redshift bins from $z=0$ to $z=4$. The principal component of our governing equation is a third-order polynomial of the host halo mass, which models the stellar-mass halo-mass relation. The scatter about this relation is driven by the halo mass evolution and is captured by second and third-order correlations of the halo mass evolution with the present halo mass. An advantage of sparse regression approaches is that unnecessary terms are removed. Although we include information on halo specific angular momentum, these parameters are discarded by our methodology. This suggests that halo angular momentum has little connection to galaxy formation efficiency. Our model has a root mean square error (RMSE) of $0.167 \log_{10}(M^*/M_\odot)$, and accurately reproduces both the stellar mass function and central galaxy correlation function of EAGLE. The methodology appears to be an encouraging approach for populating the halos of DM-only simulations with galaxies, and we discuss the next steps that are required.
    Galaxy · Stellar mass · EAGLE simulation project · Virial mass · Sparsity · Regression · Dark matter halo · Mean squared error · Dark matter · N-body simulation · ...
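The fit-then-prune idea behind such sparse regression can be sketched with sequentially thresholded least squares (the SINDy-style variant) on synthetic data; the polynomial library and coefficients below are illustrative, not the paper's fitted model for EAGLE:

```python
import numpy as np

def stlsq(Theta, y, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, refit on the surviving terms, repeat."""
    coef = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        active = ~small
        if active.any():
            coef[active] = np.linalg.lstsq(Theta[:, active], y, rcond=None)[0]
    return coef

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 3.0, 200)               # stand-in for a halo property
# Ground truth uses only the constant, linear, and cubic terms.
y = 2.0 + 0.5 * x + 0.1 * x**3 + 0.01 * rng.standard_normal(x.size)
Theta = np.vander(x, 5, increasing=True)     # library: 1, x, x^2, x^3, x^4
coef = stlsq(Theta, y)
print(np.round(coef, 3))                     # x^2 and x^4 terms pruned to 0
```

The pruning step is what discards uninformative inputs — the analogue of the paper's finding that halo angular momentum terms drop out of the fitted equation.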
  • Context. HI filaments are closely related to dusty magnetized structures that are observable in the far infrared (FIR). Recently it was proposed that the coherence of oriented HI structures in velocity traces the line-of-sight magnetic field tangling. Aims. We study the velocity-dependent coherence between FIR emission at 857 GHz and HI on angular scales of 18'. Methods. We use HI4PI HI data and Planck FIR data and apply the Hessian operator to extract filaments. For coherence we demand that local orientation angles {\theta} in the FIR at 857 GHz along the filaments be correlated with the HI. Results. We find some correlation for HI column densities at |v_LSR| < 50 km/s, but a tight agreement between FIR and HI orientation angles {\theta} exists only in narrow velocity intervals of 1 km/s. Accordingly we assign velocities to FIR filaments. Along the line of sight these HI structures show a high degree of local alignment in {\theta}, also in velocity space. Interpreting these aligned structures in analogy to the polarization of dust emission defines an HI polarization. We observe polarization fractions up to 80%, with averages of 30%. Orientation angles {\theta} along the filaments, projected perpendicular to the line of sight, fluctuate systematically and allow us to determine a characteristic distribution of filament curvatures. Conclusions. Local HI and FIR filaments identified by the Hessian analysis are coherent structures with well-defined radial velocities. HI structures are also organized along the line of sight with a high degree of coherence. The observed bending of these structures in the plane of the sky is consistent with models for magnetic field curvatures induced by a Galactic small-scale turbulent dynamo.
    Curvature · Health informatics · Line of sight · Small-scale dynamo · Orientation · Stokes parameters · Hierarchical Equal Area isoLatitude Pixelisation · Turbulence · Turbulent dynamo · Velocity dispersion · ...
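The Hessian step can be sketched on a synthetic map: build a straight "filament" (a Gaussian ridge), form the Hessian from finite differences, and read the local orientation angle θ from its eigenvectors. This is only the generic recipe; the paper's pipeline additionally involves smoothing, thresholds, and velocity channels:

```python
import numpy as np

def hessian_orientation(img):
    """Local orientation (radians) of ridge-like structure from the Hessian.
    Returns the angle of the eigenvector of the larger Hessian eigenvalue,
    which runs along the filament spine for a negative-curvature ridge."""
    gy, gx = np.gradient(img)          # np.gradient: axis 0 (y) first
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Angle that diagonalizes the symmetric 2x2 Hessian at each pixel.
    return 0.5 * np.arctan2(2.0 * hxy, hxx - hyy)

# Synthetic horizontal filament: a Gaussian ridge along the x-axis.
y, x = np.mgrid[-20:21, -20:21]
img = np.exp(-(y**2) / (2.0 * 3.0**2))
theta = hessian_orientation(img)
print(round(float(theta[20, 20]), 3))  # ~0: spine oriented along x
```

On the central ridge pixel θ comes out ≈ 0 (spine along x); transposing the map gives θ ≈ π/2, as expected for a vertical filament.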
  • The primordial magnetic fields (PMFs) produced in the early universe are expected to be the origin of the large-scale cosmic magnetic fields. The PMFs are considered to leave a footprint on the cosmic microwave background (CMB) anisotropies through both the electromagnetic force and gravitational interaction. In this paper, we investigate how the PMFs affect the CMB anisotropies on scales smaller than the mean free path of the CMB photons. We solve the baryon Euler equation with the Lorentz force due to the PMFs, and we show that the vector-type perturbations from the PMFs induce CMB anisotropies below the Silk scale, at $\ell>3000$. Based on our calculations, we put a constraint on the PMFs from the combined CMB temperature anisotropies obtained by Planck and the South Pole Telescope (SPT). We find that the highly resolved temperature anisotropies of the SPT 2017 bandpowers at $\ell \lesssim 8000$ favor a PMF model with a small scale-dependence. As a result, a joint analysis of Planck and SPT constrains the PMF spectral index to $n_B<-1.14$ at 95% confidence level (C.L.), more stringent than the Planck-only constraint $n_B<-0.28$. The PMF strength normalized on the co-moving 1 Mpc scale is also tightly constrained, $B_{1\mathrm{Mpc}}<1.5$ nG with Planck and SPT at 95% C.L., compared with $B_{1\mathrm{Mpc}}<3.2$ nG with the Planck data alone. We also discuss the effects on cosmological parameter estimates when including the SPT data and the CMB anisotropies induced by the PMFs.
    Cosmological magnetic field · CMB temperature anisotropy · South Pole Telescope · Planck mission · Small-scale CMB · Recombination · Mean free path · Cosmological parameters · Vector mode fluctuation · Metric perturbation · ...
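The key dynamical ingredient, the baryon Euler equation sourced by the Lorentz force, can be written schematically for the vector (vorticity) mode. The form below is a standard textbook sketch and an assumption, not necessarily the paper's exact convention; in particular the powers of the scale factor depend on the comoving normalization chosen for the field:

```latex
% Vector-mode baryon Euler equation with a magnetic (Lorentz) source.
% Overdots denote derivatives with respect to conformal time (schematic).
\dot{v}_b + \frac{\dot{a}}{a}\, v_b
  = \frac{4\bar{\rho}_\gamma}{3\bar{\rho}_b}\, a n_e \sigma_T \, (v_\gamma - v_b)
  + \frac{L_B(k)}{a\,\bar{\rho}_b}
```

Here $v_b$ and $v_\gamma$ are the baryon and photon vector velocities, $a n_e \sigma_T$ is the Thomson scattering rate coupling the two fluids, and $L_B(k)$ is the vector projection of the Lorentz force built from the PMF statistics, which carry the spectral index $n_B$ and the amplitude $B_{1\mathrm{Mpc}}$ that the Planck+SPT analysis constrains.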
  • Deep Convolutional Neural Networks (DCNNs) are hard and time-consuming to train, and normalization is one effective remedy. Among previous normalization methods, Batch Normalization (BN) performs well at medium and large batch sizes and generalizes well to multiple vision tasks, but its performance degrades significantly at small batch sizes. In this paper, we find that BN also saturates at extremely large batch sizes (e.g., 128 images per worker, i.e., per GPU) and propose that the degradation/saturation of BN at small/extremely large batch sizes is caused by noisy/confused statistic calculation. Hence, without adding new trainable parameters, using multi-layer or multi-iteration information, or introducing extra computation, Batch Group Normalization (BGN) is proposed to fix the noisy/confused statistic calculation of BN at small/extremely large batch sizes by additionally drawing on the channel, height, and width dimensions. The grouping technique of Group Normalization (GN) is adopted, with a hyper-parameter G controlling the number of feature instances used per statistic, so that the statistics are neither noisy nor confused across batch sizes. We empirically demonstrate that BGN consistently outperforms BN, Instance Normalization (IN), Layer Normalization (LN), GN, and Positional Normalization (PN) across a wide spectrum of vision tasks, including image classification, Neural Architecture Search (NAS), adversarial learning, Few-Shot Learning (FSL), and Unsupervised Domain Adaptation (UDA), indicating good performance, robustness to batch size, and wide generalizability. For example, when training ResNet-50 on ImageNet with a batch size of 2, BN achieves a Top-1 accuracy of 66.512% while BGN achieves 76.096%, a notable improvement.
    Statistics · Deep convolutional neural networks · Architecture · Stochastic gradient descent · Recurrent neural network · Neural network · Sparsity · Spectral normalization · Generative Adversarial Net · Long short term memory · ...
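The statistic-grouping idea in the abstract can be sketched in a few lines: flatten the (C, H, W) features, split them into G groups, and compute the mean and variance jointly over the batch dimension and each group. This is a minimal NumPy sketch based only on the abstract's description; the paper's exact grouping, affine parameters, and inference-time behavior may differ.

```python
import numpy as np

def batch_group_norm(x, G=32, gamma=None, beta=None, eps=1e-5):
    """Sketch of Batch Group Normalization (BGN).

    x : array of shape (N, C, H, W).
    G : hyper-parameter controlling how many feature instances share
        one statistic; C*H*W must be divisible by the effective group
        count (an assumption of this sketch).
    """
    n, c, h, w = x.shape
    g = min(G, c * h * w)
    assert (c * h * w) % g == 0, "feature size must be divisible by G"

    # Split flattened (C*H*W) features into g groups; statistics are
    # shared over the batch AND within each group.
    xg = x.reshape(n, g, -1)
    mean = xg.mean(axis=(0, 2), keepdims=True)
    var = xg.var(axis=(0, 2), keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)

    out = xg.reshape(n, c, h, w)
    if gamma is not None:                      # optional affine transform
        out = out * gamma.reshape(1, -1, 1, 1)
    if beta is not None:
        out = out + beta.reshape(1, -1, 1, 1)
    return out
```

By design, a larger G means fewer feature instances per statistic (BN-like behavior), while a smaller G pools more features into each statistic, which is what compensates for tiny batches.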
  • We estimate the net information exchange between adjacent quantum subsystems living holographically on the boundary of $AdS$ spacetime. Information exchange is a real-time phenomenon and may saturate only after a long time interval; in practice, systems are prepared over small time intervals and the exchange is measured over a finite interval only. We find that the information flow between entangled subsystems is reduced if the systems are in an excited state, whereas the ground state allows maximum information flow at any given time. In particular, for $CFT_2$ we show exactly that a rise in the entropy is detrimental to the information exchange by a quantum dot, and vice versa. We further observe that the circuit (CV) complexity is also reduced in the presence of excitations at small times.
    Information flow · Excited state · Anti de Sitter space · Quantum information · Entropy · Entanglement · Black hole · Entanglement entropy · Codimension · Disorder · ...