Recently bookmarked papers

with concepts:
  • The rare decay $K_L \to \mu^+ \mu^-$ has been measured precisely, while the rare decay $K_S \to \mu^+ \mu^-$ is expected to be observed with an upgrade of the LHCb experiment. Although both processes are almost CP-conserving decays, we point out that an interference contribution between $K_L$ and $K_S$ in the kaon beam emerges from genuine direct CP violation. We find that this interference can change the Standard Model prediction for $K_S \to \mu^+ \mu^-$ by $\mathcal{O}(60\%)$. We also stress that the unknown sign of $\mathcal{A}(K_L \to \gamma \gamma)$ can be determined by measuring the interference, which would substantially reduce the theoretical uncertainty on $\mathcal{B}(K_L \to \mu^+ \mu^-)$. We also investigate the interference in a new-physics model in which the $\epsilon'_K / \epsilon_K$ tension is explained by an additional $Z$-penguin contribution.
    Interference, LHCb, Direct CP-violation, Kaon, Branching ratio, Rare decay, Standard Model, Pion, Kaon decay, Charged pions, ...
  • We investigate models of a heavy neutral gauge boson $Z'$ that could explain the anomalies in $B$ meson decays reported by the LHCb experiment. In these models the $Z'$ boson couples mostly to third-generation fermions. We show that bottom quarks arising from gluon splitting can fuse into a $Z'$, an essential production mechanism at the LHC that allows these models to be probed. The study is performed within a generic framework for explaining the $B$ anomalies that can be accommodated in well-motivated models. The flavor-violating $bs$ coupling of the $Z'$ in such models implies a lower bound on the production cross-section, yielding a cross-section range for the LHC to probe in such scenarios. Results are presented for $Z' \to \mu \mu$ decays with at least one bottom-tagged jet in the final state. Parts of the model parameter space are already constrained by existing dimuon-resonance searches by the ATLAS and CMS collaborations; however, requiring one or two additional bottom-tagged jets in the final state would allow a larger region of the parameter space of the models to be probed at the ongoing LHC program.
    Muon, Large Hadron Collider, Bottom quark, Meson decays, B meson, Production cross-section, LHCb, ATLAS Experiment at CERN, Integrated luminosity, Vector boson fusion, ...
  • We discuss a novel approach to systematically determine the long-distance contribution to $B\to K^*\ell\ell$ decays in the kinematic region where the dilepton invariant mass is below the open charm threshold. This approach provides the most consistent and reliable determination to date and can be used to compute Standard Model predictions for all observables of interest, including the kinematic region where the dilepton invariant mass lies between the $J/\psi$ and the $\psi(2S)$ resonances. We illustrate the power of our results by performing a New Physics fit to the Wilson coefficient $C_9$. This approach is systematically improvable from theoretical and experimental sides, and applies to other decay modes of the type $B\to V\ell\ell$, $B\to P\ell\ell$ and $B\to V\gamma$.
    Standard Model, Form factor, Kinematics, Invariant mass, Light-cone sum rules, Wilson coefficients, LHCb, Light cones, Unitarity, Branching ratio, ...
  • LHCb recently discovered the first doubly-charmed baryon, $\Xi_{cc}^{++} = ccu$, at $3621.40 \pm 0.78$ MeV, very close to our theoretical prediction. We use the same methods to predict a doubly-bottom tetraquark $Tq(bb\bar u\bar d)$ with $J^P{=}1^+$ at $10389\pm 12$ MeV, 215 MeV below the $BB^*$ threshold. $Tq(bb\bar u\bar d)$ is therefore stable under strong and electromagnetic interactions and can only decay weakly, the first exotic hadron with such a property. On the other hand, the mass of $Tq(cc\bar u\bar d)$ with $J^P{=}1^+$ is predicted to be $3882\pm12$ MeV, 7 MeV above the $D^0 D^{*+}$ threshold, and $Tq(bc\bar u\bar d)$ with $J^P{=}0^+$ is predicted at $7134\pm13$ MeV, 11 MeV below the $\bar B^0 D^0$ threshold. Our precision is not sufficient to determine whether $cc\bar u\bar d$ and $bc\bar u\bar d$ are actually above or below their thresholds; they could manifest themselves as narrow resonances just at threshold.
    Tetraquark, Heavy quark, Charmed baryons, LHCb, Diquark, Exotic hadron, Quark mass, Effective mass, Decay rate, Meson decays, ...
  • The dissipation of small-scale perturbations in the early universe produces a distortion in the blackbody spectrum of cosmic microwave background photons. In this work, we propose to use these distortions as a probe of the microphysics of dark matter on scales $1\,\textrm{Mpc}^{-1}\lesssim k \lesssim 10^{4}\,\textrm{Mpc}^{-1}$. We consider in particular models in which the dark matter is kinetically coupled to either neutrinos or photons until shortly before recombination, and compute the photon heating rate and the resultant $\mu$-distortion in both cases. We show that the $\mu$-parameter is generally enhanced relative to $\Lambda$CDM for interactions with neutrinos, and may be either enhanced or suppressed in the case of interactions with photons. The deviations from the $\Lambda$CDM signal are potentially within the sensitivity reach of a PRISM-like experiment if $\sigma_{\textrm{DM}-\gamma} \gtrsim 1.1\times10^{-30} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ and $\sigma_{\textrm{DM}-\nu} \gtrsim 4.8\times 10^{-32} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ for time-independent cross sections, and $\sigma^{0}_{\textrm{DM}-\gamma} \gtrsim 1.8 \times 10^{-40} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ and $\sigma^{0}_{\textrm{DM}-\nu} \gtrsim 2.5 \times 10^{-47} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ for cross sections scaling as temperature squared, coinciding with the parameter regions in which late kinetic decoupling may serve as a solution to the small-scale crisis. Furthermore, these $\mu$-distortion signals differ from those of warm dark matter (no deviation from $\Lambda$CDM) and a suppressed primordial power spectrum (strongly suppressed or a negative $\mu$-parameter), demonstrating that CMB spectral distortion can potentially be used to distinguish between solutions to the small-scale crisis.
    Dark matter, Neutrino, Cold dark matter, PRISM mission, Horizon, Elastic scattering, Late kinetic decoupling, Anisotropic stress, Transfer function, Warm dark matter, ...
  • A few dark matter substructures have recently been detected in strong gravitational lenses through their perturbations of highly magnified images. We derive a characteristic scale for lensing perturbations and show that it is significantly larger than the perturber's Einstein radius. We show that the perturber's projected mass enclosed within this radius, scaled by the log-slope of the host galaxy's density profile, can be robustly inferred even if the inferred density profile and tidal radius of the perturber are biased. We demonstrate the validity of our analytic derivation using several gravitational lens simulations in which the tidal radii and the inner log-slopes of the density profile of the perturbing subhalo are allowed to vary. By modeling these simulated data we find that our mass estimator, which we call the effective subhalo lensing mass, is accurate to within about 10\% or better in each case, whereas the inferred total subhalo mass can be biased by nearly an order of magnitude. We therefore recommend that the effective subhalo lensing mass be reported in future lensing reconstructions, as this will allow a more accurate comparison with the results of dark matter simulations.
    Dark matter subhalo, Host galaxy, Tidal radius, Weak lensing mass estimate, Einstein radius, Gravitational lensing, Statistical estimator, Bayesian evidence, Surface brightness, Navarro-Frenk-White profile, ...
  • The physical ingredients needed to describe the epoch of cosmological recombination are amazingly simple and well understood. This allows us to take a very large variety of processes into account and still find potentially measurable consequences. In this contribution we highlight some of the detailed physics recently studied in connection with cosmological hydrogen and helium recombination. The impact of these considerations is two-fold: (i) the associated release of photons during this epoch leads to interesting and unique deviations of the Cosmic Microwave Background (CMB) energy spectrum from a perfect blackbody, which, in particular at decimeter wavelengths, may become observable in the near future. Despite the fact that the abundance of helium is rather small, it also contributes a sizeable number of photons to the full recombination spectrum, which, because of differences in the dynamics of helium recombination and the non-trivial superposition of all components, leads to additional distinct spectral features. Observing the spectral distortions from the epochs of hydrogen and helium recombination would in principle provide an additional way to determine some of the key parameters of the Universe (e.g. the specific entropy, the CMB monopole temperature and the pre-stellar abundance of helium) that does not suffer from the limitations set by cosmic variance. It would also permit us to confront our detailed understanding of the recombination process with direct observational evidence. (ii) With the advent of high-precision CMB data, e.g. as will be available using the Planck Surveyor or CMBpol, a very accurate theoretical understanding of the ionization history of the Universe becomes necessary for the interpretation of the CMB temperature and polarization anisotropies. (abridged)
    Recombination, Cosmic microwave background, Cosmology, Helium recombination, Ionization, Abundance, Magnetic monopole, Entropy, CMB temperature, Cosmological parameters, ...
  • We re-analyse high-redshift and high-resolution Lyman-$\alpha$ forest spectra from Viel et al. [1], seeking to constrain the properties of warm dark matter particles. Compared to previous work we consider a wider range of thermal histories of the intergalactic medium and find that both warm and cold dark matter models can explain the cut-off observed in the flux power spectra of high-resolution observations equally well. This implies, however, very different thermal histories and underlying re-ionisation models. We discuss how to remove this degeneracy.
    Intergalactic medium, Thermalisation, IGM temperature, Flux power spectrum, Reionization, Cold dark matter, Dark matter, Warm dark matter, Redshift bins, Cosmology, ...
  • It is shown that during the passage of a short burst of a non-linear plane gravitational wave, the kinetic energy of free particles may either decrease or increase. Whether it decreases or increases depends crucially on the initial conditions (position and velocity) of the free particle. A plane gravitational wave may therefore extract energy from a physical system.
    Gravitational wave, Gravitational energy, Line element, Geodesic, Particle mass, Gravitational fields, Dissipation, Tetrad, Laser Interferometer Space Antenna, General relativity, ...
  • We investigate the impact of resonant gravitational waves on the quadrupole acoustic modes of Sun-like stars located near stellar black hole binary systems (such as GW150914 and GW151226). We find that the stimulation of the low-overtone modes by gravitational radiation can lead to sizeable photometric amplitude variations, much larger than the predicted amplitudes driven by turbulent convection, which in turn are consistent with the photometric amplitudes observed in most Sun-like stars. Using accurate stellar evolution models with up-to-date stellar physics, we predict photometric amplitude variations of $1$ -- $10^3$ ppm for a solar-mass star located at a distance between 1 au and 10 au from the black hole binary and belonging to the same multi-star system. Observing such a phenomenon will be within the reach of the Plato mission, because the telescope will observe several portions of the Milky Way, many of which are regions of high stellar density with a substantial mixed population of Sun-like stars and black hole binaries.
    Star, Black hole, Sun, Gravitational wave, Gravitational radiation, Star systems, Stellar mass black holes, Quadrupole, Milky Way, Of stars, ...
  • The recently discovered burst of gravitational waves GW150914 provides a new opportunity to test the current view of close-binary-star evolution. Modern population synthesis codes help to study this evolution from two main-sequence stars up to the formation of the two final degenerate remnants: dwarfs, neutron stars or black holes [Massevich 1988]. To study the evolution of the GW150914 progenitor we use the "Scenario Machine" code presented by [Lipunov 1996]. The scenario modelling conducted in this study describes the evolution of systems whose final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be $100-140\, M_{\odot}$ and the initial separation of the components can be $50-350\, R_{\odot}$. Our calculations support the plausibility of modern evolutionary scenarios for binary stars and of the population synthesis modelling based on them.
    Black hole, Wolf-Rayet star, Star, Neutron star, Close binary stars, LIGO GW150914 event, Binary star, Stellar wind, Compact star, IC 10, ...
  • The time delay between gravitational wave signals arriving at widely separated detectors can be used to place upper and lower bounds on the speed of gravitational wave propagation. Using a Bayesian approach that combines the first three gravitational wave detections reported by the LIGO collaboration we constrain the gravitational waves propagation speed c_gw to the 90% credible interval 0.55 c < c_gw < 1.42 c, where c is the speed of light in vacuum. These bounds will improve as more detections are made and as more detectors join the worldwide network. Of order twenty detections by the two LIGO detectors will constrain the speed of gravity to within 20% of the speed of light, while just five detections by the LIGO-Virgo-Kagra network will constrain the speed of gravity to within 1% of the speed of light.
    Gravitational wave, Time delay, Laser Interferometer Gravitational-Wave Observatory, Speed of light, Wave propagation, Credible interval, General relativity, LIGO GW151226 event, LIGO GW150914 event, Bayesian approach, ...
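The geometric core of the method can be illustrated with a back-of-the-envelope calculation. This is a sketch only: the paper's actual constraint comes from a Bayesian analysis that marginalizes over sky location, and the baseline and delay values below are assumed round numbers, not the paper's inputs.

```python
# Naive single-event bound on the gravitational-wave speed c_gw.
# For detectors separated by baseline d, the arrival-time delay is
# dt = (d / v) * cos(theta) for source angle theta, so |dt| <= d / v,
# and hence v <= d / |dt|: the observed delay caps the speed from above.
c = 299_792_458.0   # speed of light, m/s
d = 3.002e6         # approx. Hanford-Livingston baseline, m (assumed)
dt = 7.0e-3         # approx. GW150914 inter-site delay, s (assumed)

v_max = d / abs(dt)  # upper bound on the propagation speed
print(f"c_gw <= {v_max / c:.2f} c")  # ~1.43 c for these numbers
```

The lower bound, and the tightening with more detections, require combining events with priors on the source direction, which is where the Bayesian machinery of the paper comes in.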
  • We develop a unified description, via the Boltzmann equation, of damping of gravitational waves by matter, incorporating collisions. We identify two physically distinct damping mechanisms -- collisional and Landau damping. We first consider damping in flat spacetime, and then generalize the results to allow for cosmological expansion. In the first regime, maximal collisional damping of a gravitational wave, independent of the details of the collisions in the matter is, as we show, significant only when its wavelength is comparable to the size of the horizon. Thus damping by intergalactic or interstellar matter for all but primordial gravitational radiation can be neglected. Although collisions in matter lead to a shear viscosity, they also act to erase anisotropic stresses, thus suppressing the damping of gravitational waves. Damping of primordial gravitational waves remains possible. We generalize Weinberg's calculation of gravitational wave damping, now including collisions and particles of finite mass, and interpret the collisionless limit in terms of Landau damping. While Landau damping of gravitational waves cannot occur in flat spacetime, the expansion of the universe allows such damping by spreading the frequency of a gravitational wave of given wavevector.
    Gravitational wave, Landau damping, Expansion of the Universe, Collisional damping, Binary star, Boltzmann transport equation, Gravitational radiation, Dark matter, Expanding universe, Neutrino, ...
  • The merger of a neutron star (NS) binary may result in the formation of a long-lived, or indefinitely stable, millisecond magnetar remnant surrounded by a low-mass ejecta shell. A portion of the magnetar's prodigious rotational energy is deposited behind the ejecta in a pulsar wind nebula, powering luminous optical/X-ray emission for hours to days following the merger. Ions in the pulsar wind may also be accelerated to ultra-high energies, providing a coincident source of high energy cosmic rays and neutrinos. At early times, the cosmic rays experience strong synchrotron losses; however, after a day or so, pion production through photomeson interaction with thermal photons in the nebula comes to dominate. The pion products initially suffer from synchrotron cooling themselves, but after a few days have sufficient time to decay into energetic leptons and high-energy neutrinos. After roughly a week, the density of background photons decreases sufficiently for cosmic rays to escape the source without secondary production. These competing effects result in a neutrino light curve that peaks on a few day timescale near an energy of $\sim10^{18}$ eV. This signal may be detectable for individual mergers out to $\sim$ 10 Mpc by the IceCube Observatory (or up to 100 Mpc by next-generation neutrino telescopes), providing clear evidence for a long-lived NS remnant, the presence of which may otherwise be challenging to identify from the gravitational waves alone. Under the optimistic assumption that a sizable fraction of NS mergers produce long-lived magnetars, the cumulative cosmological neutrino background is estimated to be $\sim 10^{-8}\,\rm GeV\,cm^{-2}\,s^{-1}\,sr^{-1}$ for a NS merger rate of $10^{-7}\,\rm Mpc^{-3}\,yr^{-1}$ (depending also on the magnetar's dipole field strength), overlapping with IceCube's current sensitivity and within the reach of next-generation neutrino telescopes.
    Magnetar, Neutron star, Ejecta, Nebulae, Neutrino, Cosmic ray, Pion, IceCube Neutrino Observatory, High energy neutrinos, Gravitational wave, ...
  • It has recently been reported by Cresswell et al. [1] that correlations in the noise surrounding the observed gravitational wave signals GW150914, GW151226, and GW170104 were found by the two LIGO detectors in Hanford and Livingston with the same time delay as the signals themselves. This raised some issues about the statistical reliability of the signals, which led to much discussion; the current view appears to support the contention that there is something unexplained that may be of genuine astrophysical interest [2]. In this note, it is pointed out that a resolution of this puzzle may be found in a proposal very recently put forward by the author [3], see also [4]: what seems to be spuriously generated noise may in fact be gravitational events caused by the decay of dark-matter particles (erebons) of mass around $10^{-5}$ g, the existence of such events being a clear implication of the cosmological scheme of conformal cyclic cosmology, or CCC [5], [6]. A brief outline of the salient points of CCC is provided here, especially with regard to its prediction of erebons and their impulsive gravitational signals.
    Conformal Cyclic Cosmology, Laser Interferometer Gravitational-Wave Observatory, Gravitational wave, LIGO GW151226 event, Time delay, Dark matter decay, Dark matter particle, Event, Resolution, Mass, ...
  • Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with quantile regression. In this method, we build a model of the quantiles of the distance measure as a function of the input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior; other regions are then immediately rejected. The procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using the forward modelling developed in previous work. The qABC method converges to nearly the same posterior as basic ABC while using only 20\% as many simulations, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
    Approximate Bayesian computation, Quantile regression, Galaxy, Cosmological redshift, Likelihood function, Luminosity function, Monte Carlo Markov chain, Numerical range, Cosmology, Monte Carlo method, ...
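As a baseline for what qABC accelerates, plain rejection ABC can be sketched in a few lines. This is an illustration on a toy Gaussian problem, not the authors' pipeline; the summary statistic (sample mean), the uniform prior, and the 2% acceptance quantile are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(3.0, 1.0, size=200)  # toy "data", true mean = 3

def simulate(theta, n=200):
    # Forward model: a simulation of the data for candidate parameter theta.
    return rng.normal(theta, 1.0, size=n)

def distance(sim, obs):
    # Distance metric between summary statistics (here: the sample mean).
    return abs(sim.mean() - obs.mean())

# Rejection ABC: draw from the prior, simulate, keep only the closest draws.
prior_draws = rng.uniform(0.0, 6.0, size=5000)
dists = np.array([distance(simulate(t), observed) for t in prior_draws])
eps = np.quantile(dists, 0.02)             # accept the closest 2% of draws
posterior = prior_draws[dists <= eps]
print(f"posterior mean ~ {posterior.mean():.2f}")
```

The cost is the 5000 forward simulations. The qABC idea is to fit a quantile-regression model of `dists` as a function of `theta` on a small subset, and reject unpromising prior regions before ever simulating them.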
  • Active Galactic Nuclei (AGN) are energetic astrophysical sources powered by accretion onto supermassive black holes in galaxies, and present unique observational signatures that cover the full electromagnetic spectrum over more than twenty orders of magnitude in frequency. The rich phenomenology of AGN has resulted in a large number of different "flavours" in the literature that now comprise a complex and confusing AGN "zoo". It is increasingly clear that these classifications are only partially related to intrinsic differences between AGN, and primarily reflect variations in a relatively small number of astrophysical parameters as well as the method by which each class of AGN is selected. Taken together, observations in different electromagnetic bands as well as variations over time provide complementary windows on the physics of different sub-structures in the AGN. In this review, we present an overview of AGN multi-wavelength properties with the aim of painting their "big picture" through observations in each electromagnetic band from radio to gamma-rays as well as AGN variability. We address what we can learn from each observational method, the impact of selection effects, the physics behind the emission at each wavelength, and the potential for future studies. To conclude, we use these observations to piece together the basic architecture of AGN, discuss our current understanding of unification models, and highlight some open questions that present opportunities for future observational and theoretical progress.
    Active Galactic Nuclei, Luminosity, Quasar, Host galaxy, Blazar, Black hole, Galaxy, Torus, Accretion, Accretion disk, ...
  • We introduce FORM 4.2, a new minor release of the symbolic manipulation toolkit. We demonstrate several new features, such as a new pattern matching option, new output optimization, and automatic expansion of rational functions.
    Optimization, Rational function, Graph, Automorphism, Programming Language, Functional programming, Computer algebra system, Arithmetic, Partition function, Monte Carlo method, ...
  • The word2vec software of Tomas Mikolov and colleagues (https://code.google.com/p/word2vec/) has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations. This note is an attempt to explain equation (4) (negative sampling) in "Distributed Representations of Words and Phrases and their Compositionality" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.
    Word vectors, Word embedding, Skip-gram model, Neural network, Optimization, Embedding, Non-convexity, Logistic regression, Objective, Probability, ...
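The negative-sampling objective that the note explains, for a single (word, context) pair, can be written out directly. This is a sketch with random vectors standing in for learned embeddings; the dimension and the count of 5 negatives are arbitrary choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_objective(v_c, u_pos, u_negs):
    # log sigma(u_pos . v_c) + sum_k log sigma(-u_k . v_c):
    # raise the true context word's score, lower the sampled negatives'.
    score = np.log(sigmoid(u_pos @ v_c))
    score += sum(np.log(sigmoid(-u_k @ v_c)) for u_k in u_negs)
    return score

rng = np.random.default_rng(1)
dim = 50
v_c = rng.normal(scale=0.1, size=dim)          # "input" vector of the word
u_pos = rng.normal(scale=0.1, size=dim)        # "output" vector of the true context
u_negs = rng.normal(scale=0.1, size=(5, dim))  # 5 sampled negative words
print(neg_sampling_objective(v_c, u_pos, u_negs))
```

Training maximizes the sum of this quantity over the corpus by stochastic gradient ascent on the `v` and `u` vectors, with negatives drawn from a smoothed unigram distribution.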
  • We investigate the role of thermal velocities in N-body simulations of structure formation in warm dark matter models. Starting from the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced (peculiar) velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations with and without thermal velocities. This prescription for adding thermal velocities results in deviations in the velocity field in the initial conditions away from the linear theory predictions, which affects the evolution of structure at later times. We show that this is entirely due to numerical noise. For a warm dark matter candidate with mass $3.3$ keV, the matter and velocity power spectra measured from simulations with thermal velocities starting at $z=199$ deviate from the linear prediction at $k \gtrsim10$ $h/$Mpc, with an enhancement of the matter power spectrum $\sim \mathcal{O}(10)$ and of the velocity power spectrum $\sim \mathcal{O}(10^2)$ at wavenumbers $k \sim 64$ $h/$Mpc with respect to the case without thermal velocities. At late times, these effects tend to be less pronounced. Indeed, at $z=0$ the deviations do not exceed $6\%$ (in the velocity spectrum) and $1\%$ (in the matter spectrum) for scales $10 <k< 64$ $h/$Mpc. Increasing the resolution of the N-body simulations shifts these deviations to higher wavenumbers. The noise introduces more spurious structures in WDM simulations with thermal velocities and modifies the radial density profiles of dark matter haloes. We find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is $\sim$3 times larger than in simulations without thermal velocities.
    Thermal velocities, Warm dark matter, Cold dark matter, Initial conditions for cosmological simulations, Matter power spectrum, Thermal WDM, N-body simulation, Halo mass function, Spurious halo, Dark matter, ...
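The prescription the paper starts from, drawing thermal velocities from a Fermi-Dirac distribution and adding them to the peculiar velocities, can be sketched as follows. This is an illustration only: the conversion of the dimensionless momenta to physical velocities depends on the WDM particle mass and decoupling temperature, which are not modelled here.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_fermi_dirac(n, x_max=20.0):
    """Rejection-sample dimensionless momenta x = p/T with density
    proportional to x^2 / (exp(x) + 1) (relativistic Fermi-Dirac)."""
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(0.0, x_max, size=n)
        g = x**2 / (np.exp(x) + 1.0)
        accept = rng.uniform(0.0, 0.5, size=n) < g  # 0.5 bounds g(x) everywhere
        out = np.concatenate([out, x[accept]])
    return out[:n]

# These magnitudes, assigned in random directions and scaled by T/m, are what
# gets added to the gravitationally-induced velocities in the initial conditions.
x = sample_fermi_dirac(100_000)
print(f"mean dimensionless momentum ~ {x.mean():.2f}")  # analytic value ~3.15
```

The random directions are exactly where the numerical noise discussed in the abstract enters: the sampled velocities do not respect the linear-theory velocity field on a mode-by-mode basis.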
  • We develop a formalism to analytically describe the clustering of matter in the mildly non-linear regime in the presence of massive neutrinos. Neutrinos, whose free-streaming wavenumber ($k_{\rm fs}$) typically lies below the non-linear scale ($k_{\rm NL}$), are described by a Boltzmann equation coupled to the effective fluid-like equations that describe dark matter. We solve the equations expanding in the neutrino density fraction $(f_\nu)$ and in $k/ k_{\rm NL}$, and add suitable counterterms to renormalize the theory. This allows us to describe the contribution of short distances to long-distance observables. Equivalently, we construct an effective Boltzmann equation where we add additional terms whose coefficients renormalize the contribution from short-distance physics. We argue that neutrinos with $k_{\rm fs}\gtrsim k_{\rm NL}$ require an additional counterterm similar to the speed of sound ($c_s$) for dark matter. We compute the one-loop total-matter power spectrum, and find that it is roughly equal to $16f_\nu$ times the dark matter one for $k$'s larger than the typical $k_{\rm fs}$, and about half of that for smaller $k$'s. The leading contribution results from the back-reaction of the neutrinos on the dynamics of the dark matter. The counterterms contribute in a hierarchical way: the leading ones can either be computed in terms of $c_s$, or can be accounted for by shifting $c_s$ by an amount proportional to $f_\nu$.
    Neutrino, Dark matter, Massive neutrino, Large scale structure, Effective field theory, Boltzmann transport equation, Speed of sound, Free streaming of particles, Matter power spectrum, Theory, ...
  • We present constraints on the masses of extremely light bosons, dubbed fuzzy dark matter, from Lyman-$\alpha$ forest data. Extremely light bosons with a de Broglie wavelength of $\sim 1$ kpc have been suggested as dark matter candidates that may resolve some of the current small-scale problems of the cold dark matter model. For the first time we use hydrodynamical simulations to model the Lyman-$\alpha$ flux power spectrum in these models and compare with the observed flux power spectrum from two different data sets: the XQ-100 and HIRES/MIKE quasar spectra samples. After marginalization over nuisance and physical parameters, and with conservative assumptions for the thermal history of the IGM that allow for temperature jumps of up to $5000\rm\,K$, XQ-100 provides a lower limit of 7.1$\times 10^{-22}$ eV, HIRES/MIKE returns a stronger limit of 14.3$\times 10^{-22}$ eV, and the combination of both data sets results in a limit of 20$\times 10^{-22}$ eV (2$\sigma$ C.L.). The limit from the combined data sets increases to 37.5$\times 10^{-22}$ eV (2$\sigma$ C.L.) when a smoother thermal history is assumed, in which the temperature of the IGM evolves as a power law in redshift. Light boson masses in the range $1-10 \times10^{-22}$ eV are ruled out at high significance by our analysis, casting strong doubt on the idea that FDM helps solve the "small-scale crisis" of the cold dark matter models.
    Flux power spectrum, Intergalactic medium, HIRES spectrometer, MIKE spectrograph, XQ-100, Hydrodynamical simulations, Warm dark matter, Light boson, Cold dark matter, Fuzzy dark matter, ...
  • We investigate self-shielding of intergalactic hydrogen against ionizing radiation in radiative transfer simulations of cosmic reionization carefully calibrated with Lyman alpha forest data. While self-shielded regions manifest as Lyman-limit systems in the post-reionization Universe, here we focus on their evolution during reionization (redshifts z=6-10). At these redshifts, the spatial distribution of hydrogen-ionizing radiation is highly inhomogeneous, and some regions of the Universe are still neutral. After masking the neutral regions and ionizing sources in the simulation, we find that the hydrogen photoionization rate depends on the local hydrogen density in a manner very similar to that in the post-reionization Universe. The characteristic physical hydrogen density above which self-shielding becomes important at these redshifts is about $\mathrm{n_H \sim 3 \times 10^{-3} cm^{-3}}$, or $\sim$ 20 times the mean hydrogen density, reflecting the fact that during reionization photoionization rates are typically low enough that the filaments in the cosmic web are often self-shielded. The value of the typical self-shielding density decreases by a factor of 3 between redshifts z=3 and 10, and follows the evolution of the average photoionization rate in ionized regions in a simple fashion. We provide a simple parameterization of the photoionization rate as a function of density in self-shielded regions during the epoch of reionization.
    Photoionization rate, Reionization, Recombination, Radiative transfer, Epoch of reionization, Ionizing radiation, Intergalactic medium, Virial mass, Radiative transfer simulations, Neutral hydrogen gas, ...
  • The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016] and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.
    Convolutional neural network, Architecture, Small-scale dynamo, COCO simulation, Classification, Object detection, Inference, Regression, Activation function, Optimization, ...
  • Supersymmetric states in M-theory are mapped after compactification to perturbatively non-supersymmetric states in type IIA string theory, with the supersymmetric parts being encoded in the non-perturbative sector of the string theory. An observer unable to recognise certain topological features of string theory will not detect supersymmetry. Such relativity of symmetry can also be derived in the context of Theorem 3 of ref. [11]. The tool of choice in this context is the universal coefficient theorem, which links cohomology theories with coefficients that respectively reveal or hide certain topological features. As a consequence of these observations, it is shown that the same theorem is capable of linking perturbative with non-perturbative string-theoretical domains. A discussion of inflow anomaly cancellation is also included in the context of universal coefficient theorems.
    CohomologyD-braneString theoryUniversal coefficient theoremAnomaly cancellationM-theoryRamond-Ramond chargeExact sequenceType IIA string theorySupersymmetry...
  • This is a pedagogical introduction to scattering amplitudes in gauge theories. It proceeds from the Dirac equation and Weyl fermions to the two pivot points of current developments: the recursion relations of Britto, Cachazo, Feng and Witten, and the unitarity cut method pioneered by Bern, Dixon, Dunbar and Kosower. In ten lectures, it covers the basic elements of on-shell methods.
    PropagatorHelicityFeynman diagramsSuper Yang-Mills theoryLorentz transformationGauginoSupersymmetryDegree of freedomQuantum electrodynamicsTadpole...
  • An infinite number of physically nontrivial symmetries are found for abelian gauge theories with massless charged particles. They are generated by large $U(1)$ gauge transformations that asymptotically approach an arbitrary function $\varepsilon(z,\bar{z})$ on the conformal sphere at future null infinity ($\mathscr I^+$) but are independent of the retarded time. The value of $\varepsilon$ at past null infinity ($\mathscr I^-$) is determined from that on $\mathscr I^+$ by the condition that it take the same value at either end of any light ray crossing Minkowski space. The $\varepsilon\neq$ constant symmetries are spontaneously broken in the usual vacuum. The associated Goldstone modes are zero-momentum photons and comprise a $U(1)$ boson living on the conformal sphere. The Ward identity associated with this asymptotic symmetry is shown to be the abelian soft photon theorem.
    Soft photonsHelicityLarge gauge transformationQuantum electrodynamicsPhase spaceGauge symmetryGoldstone bosonZero modeCharged particleBosonization...
  • This is a redacted transcript of a course given by the author at Harvard in spring semester 2016. It contains a pedagogical overview of recent developments connecting the subjects of soft theorems, the memory effect and asymptotic symmetries in four-dimensional QED, nonabelian gauge theory and gravity with applications to black holes. The lectures may be viewed online at https://goo.gl/3DJdOr. Please send typos or corrections to strominger@physics.harvard.edu.
    Black holeSoft photonsHorizonGravitonGauge theorySymplectic formInfrared limitGauge transformationGauge symmetryLarge gauge transformation...
  • Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer's activations can be reconstructed exactly from the next layer's. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.
    BackpropagationClassificationNetworks
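    The reversible block at the heart of RevNets is compact enough to sketch. With additive coupling, $y_1 = x_1 + F(x_2)$ and $y_2 = x_2 + G(y_1)$, the inverse is exact for any residual functions $F$ and $G$, which is why activations need not be stored. A minimal NumPy illustration (the particular F, G, and shapes here are placeholders, not the paper's architecture):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_f = rng.standard_normal((4, 4))
    W_g = rng.standard_normal((4, 4))

    def F(x):
        # placeholder residual function; in a RevNet this is a conv branch
        return np.tanh(x @ W_f)

    def G(x):
        return np.tanh(x @ W_g)

    def forward(x1, x2):
        # additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        return y1, y2

    def inverse(y1, y2):
        # reconstruct the inputs exactly from the outputs, so the
        # activations need not be kept in memory for backpropagation
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
        return x1, x2

    x1, x2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
    y1, y2 = forward(x1, x2)
    r1, r2 = inverse(y1, y2)
    assert np.allclose(r1, x1) and np.allclose(r2, x2)  # exact up to rounding
    ```

    In the actual architecture the channels of each layer are split in half to form $x_1, x_2$, and $F, G$ are residual convolutional branches; the algebra above is the whole reversibility argument.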
  • There appears to exist no detailed calculation of the multiphoton trident process e + n omega_0 -> e' + e+e-, which can occur during the interaction of an electron beam with an intense laser beam. We present a calculation in the Weizsacker-Williams approximation that is in good agreement with QED calculations for the weak-field case.
    Weizsacker-Williams approximationTrident productionLasersPhoton-jetQuantum electrodynamicsLorentz factorSpeed of lightRelativistic electronNumerical simulationCovariance...
  • Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift $z=0$, we find that after eight iterations the reconstructed density is more than $95\%$ correlated with the initial density at $k\le 0.35\; h\mathrm{Mpc}^{-1}$. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at $k\le 0.2\; h\mathrm{Mpc}^{-1}$, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at $z=0$ and by a factor of 2.5 at $z=0.6$, improving standard BAO reconstruction by $70\%$ at $z=0$ and $30\%$ at $z=0.6$, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
    Baryon acoustic oscillationsTransfer functionSignal to noise ratioPerturbation theoryZeldovich approximationLarge scale structureReal spaceShell crossingRedshift-space distortionEffective field theory...
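    The iterative scheme described above (move objects back along estimated potential gradients with a progressively reduced smoothing scale, then read the linear density off the cumulative displacement) can be sketched in one dimension. This is only an illustration under the Zeldovich relation $\delta = -\partial\Psi/\partial q$; the grid size, smoothing schedule, and interpolation here are our choices, and the paper works in 3D with far more care:

    ```python
    import numpy as np

    ng, L = 256, 1.0
    cell = L / ng
    k = 2 * np.pi * np.fft.fftfreq(ng, d=cell)  # angular wavenumbers

    q = np.arange(ng) * cell                      # Lagrangian grid positions
    psi_true = 0.05 * np.sin(2 * np.pi * q / L)   # input displacement field
    x = (q + psi_true) % L                        # "observed" particle positions

    def delta_ngp(pos):
        """Nearest-grid-point density contrast of a set of particles."""
        idx = np.rint(pos / cell).astype(int) % ng
        counts = np.bincount(idx, minlength=ng).astype(float)
        return counts / counts.mean() - 1.0

    disp_total = np.zeros(ng)
    for R in np.linspace(0.1, 0.02, 10):          # progressively reduced smoothing
        dk = np.fft.fft(delta_ngp(x)) * np.exp(-0.5 * (k * R) ** 2)
        # Zeldovich estimate: d(psi)/dx = -delta  =>  psi_k = (i/k) delta_k
        psi_k = np.zeros_like(dk)
        psi_k[1:] = 1j * dk[1:] / k[1:]
        psi = np.real(np.fft.ifft(psi_k))
        shift = psi[np.rint(x / cell).astype(int) % ng]
        x = (x - shift) % L                       # move objects back
        disp_total += shift                       # accumulate displacement

    # After convergence the catalog is nearly uniform, disp_total approximates
    # the input displacement, and -d(disp_total)/dq estimates the linear density.
    ```

    In this toy setting the cumulative displacement correlates tightly with the input field; the paper's point is that the same iteration, done properly in 3D with a second-order correction, undoes nonlinear evolution up to one-loop order.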
  • The production of a mu+mu- pair from the scattering of a muon-neutrino off the Coulomb field of a nucleus, known as neutrino trident production, is a sub-weak process that has been observed in only a couple of experiments. As such, we show that it constitutes an exquisitely sensitive probe in the search for new neutral currents among leptons, putting the strongest constraints on well-motivated and well-hidden extensions of the Standard Model gauge group, including the one coupled to the difference of the lepton number between the muon and tau flavor, L_mu-L_tau. The new gauge boson, Z', increases the rate of neutrino trident production by inducing additional $(\bar\mu \gamma_\alpha \mu)(\bar\nu \gamma^\alpha \nu)$ interactions, which interfere constructively with the Standard Model contribution. Existing experimental results put significant restrictions on the parameter space of any model coupled to muon number L_mu, and disfavor a putative resolution to the muon g-2 discrepancy via the loop of Z' for any mass m_Z' > 400 MeV. The reach to the models' parameter space can be widened with future searches of the trident production at high-intensity neutrino facilities such as the LBNE.
    NeutrinoMuonStandard ModelNeutrino trident productionNeutrino beamInterferenceMuon neutrinoDUNE experimentEquivalent photon approximationPhase space...
  • With their high beam energy and intensity, existing and near-future proton beam dumps provide an excellent opportunity to search for new very weakly coupled particles in the MeV to GeV mass range. One particularly interesting example is a so-called axion-like particle (ALP), i.e. a pseudoscalar coupled to two photons. The challenge in proton beam dumps is to reliably calculate the production of the new particles from the interactions of two composite objects, the proton and the target atoms. In this work we argue that Primakoff production of ALPs proceeds in a momentum range where production rates and angular distributions can be determined to sufficient precision using simple electromagnetic form factors. Reanalysing past proton beam dump experiments for this production channel, we derive novel constraints on the parameter space for ALPs. We show that the NA62 experiment at CERN could probe unexplored parameter space by running in 'dump mode' for a few days and discuss opportunities for future experiments such as SHiP.
    Axion-like particleNA62 experimentBeam dumpSHiP experimentProton beam-dump experimentProton beamTransverse momentumForm factorLaboratory frameProduction cross-section...
  • We have calculated cross sections for the production of lepton pairs by a neutrino incident on a nucleus using both the equivalent photon approximation, and deep inelastic formalism. We find that production of mixed flavour lepton pairs can have production cross sections as high as 35 times those of the traditional muon pair-production process. Rates are estimated for the SHiP and DUNE intensity frontier experiments. We find that multiple trident production modes, some of which have never been observed, represent observable signals over the lifetime of the detectors. Our estimates indicate that the SHiP collaboration should be able to observe on the order of 300 trident events given $2\cdot 10^{20}$ POT, and that the DUNE collaboration can expect approximately 250 trident events in their near detector given $3\cdot 10^{22}$ POT. We also discuss possible applications of the neutrino trident data to be collected at SHiP and DUNE for SM and BSM physics.
    NeutrinoDUNE experimentSHiP experimentFlavourCharged currentEquivalent photon approximationMuonPhase spacePartonMomentum transfer...
  • Projecting measurements of the interactions of the known Standard Model (SM) states into an effective field theory framework (EFT) is an important goal of the LHC physics program. The interpretation of measurements of the properties of the Higgs-like boson in an EFT allows one to consistently study the properties of this state, while the SM is allowed to eventually break down at higher energies. In this review, basic concepts relevant to the construction of such EFTs are reviewed pedagogically. Electroweak precision data is discussed as a historical example of some importance to illustrate critical consistency issues in interpreting experimental data in EFTs. A future precision Higgs phenomenology program can benefit from the projection of raw experimental results into consistent field theories such as the SM, the SM supplemented with higher dimensional operators (the SMEFT) or an Electroweak chiral Lagrangian with a dominantly $J^P = 0^+$ scalar (the HEFT). We discuss the developing SMEFT and HEFT approaches, which are consistent versions of such EFTs, systematically improvable with higher-order corrections, and comment on the pseudo-observable approach. We review the challenges that have been overcome in developing EFT methods for LHC studies, and the challenges that remain.
    Standard ModelEffective field theoryEffective field theory of the Standard ModelHiggs bosonWilson coefficientsLarge Hadron ColliderField theoryPower countingBeyond the Standard ModelLarge Electron-Positron Collider...
  • We analyze relations between BPS degeneracies related to Labastida-Marino-Ooguri-Vafa (LMOV) invariants, and algebraic curves associated to knots. We introduce a new class of such curves that we call extremal A-polynomials, discuss their special properties, and determine exact and asymptotic formulas for the corresponding (extremal) BPS degeneracies. These formulas lead to nontrivial integrality statements in number theory, as well as to an improved integrality conjecture stronger than the known M-theory integrality predictions. Furthermore we determine the BPS degeneracies encoded in augmentation polynomials and show their consistency with known colored HOMFLY polynomials. Finally we consider refined BPS degeneracies for knots, determine them from the knowledge of super-A-polynomials, and verify their integrality. We illustrate our results with twist knots, torus knots, and various other knots with up to 10 crossings.
    HOMFLY polynomialTwist knotBPS stateTorus knotAsymptotic expansionClassical limitNumber theoryTorusUnknotKnot polynomial...
  • We introduce and explore the relation between knot invariants and quiver representation theory, which follows from the identification of quiver quantum mechanics in D-brane systems representing knots. We identify various structural properties of quivers associated to knots, and identify such quivers explicitly in many examples, including some infinite families of knots, all knots up to 6 crossings, and some knots with thick homology. Moreover, based on these properties, we derive previously unknown expressions for colored HOMFLY-PT polynomials and superpolynomials for various knots. For all knots, for which we identify the corresponding quivers, the LMOV conjecture for all symmetric representations (i.e. integrality of relevant BPS numbers) is automatically proved.
    QuiverPerturbation theoryTorus knotKnot invariantUnknotBPS statePermutationTrefoil knotExpectation ValueTwist knot...
  • We discuss relations between quantum BPS invariants defined in terms of a product decomposition of certain series, and difference equations (quantum A-polynomials) that annihilate such series. We construct combinatorial models whose structure is encoded in the form of such difference equations, and whose generating functions (Hilbert-Poincar\'e series) are solutions to those equations and reproduce generating series that encode BPS invariants. Furthermore, BPS invariants in question are expressed in terms of Lyndon words in an appropriate language, thereby relating counting of BPS states to the branch of mathematics referred to as combinatorics on words. We illustrate these results in the framework of colored extremal knot polynomials: among others we determine dual quantum extremal A-polynomials for various knots, present associated combinatorial models, find corresponding BPS invariants (extremal Labastida-Mari\~no-Ooguri-Vafa invariants) and discuss their integrality.
    HOMFLY polynomialCountingTwist knotCatalan numberUnknotQuiverClassical limitTorus knotFree algebraBPS state...
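    Since the abstract expresses BPS invariants in terms of Lyndon words, it may help to recall what those are: the aperiodic words that are strictly smallest among their own rotations. A small generator using Duval's algorithm (purely illustrative; the paper's alphabet and the weighting of the words are not modeled here):

    ```python
    def lyndon_words(k, n):
        """Yield all Lyndon words over the alphabet {0, ..., k-1} of length
        at most n, in lexicographic order (Duval's algorithm)."""
        w = [0]
        while w:
            yield tuple(w)
            # periodically extend the current word up to length n ...
            m = len(w)
            while len(w) < n:
                w.append(w[len(w) - m])
            # ... then strip trailing maximal letters and increment the last one
            while w and w[-1] == k - 1:
                w.pop()
            if w:
                w[-1] += 1

    words = list(lyndon_words(2, 3))
    ```

    For the binary alphabet, `lyndon_words(2, 3)` yields `(0,)`, `(0,0,1)`, `(0,1)`, `(0,1,1)`, `(1,)` — exactly the five binary Lyndon words of length at most three.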
  • In this paper, we investigate the properties of the full colored HOMFLYPT invariants in the full skein of the annulus $\mathcal{C}$. We show that the full colored HOMFLYPT invariant has a nice structure when $q\rightarrow 1$. The composite invariant is a combination of the full colored HOMFLYPT invariants. In order to study the framed LMOV type conjecture for composite invariants, we introduce the framed reformulated composite invariant $\check{\mathcal{R}}_{p}(\mathcal{L})$. By using the HOMFLY skein theory, we prove that $\check{\mathcal{R}}_{p}(\mathcal{L})$ lies in the ring $2\mathbb{Z}[(q-q^{-1})^2,t^{\pm 1}]$. Furthermore, we propose a conjecture of congruent skein relation for $\check{\mathcal{R}}_{p}(\mathcal{L})$ and prove it for certain special cases.
    Skein relationOrientationTorusQuantum invariantQuantum groupIrreducible representationWritheUnknotFundamental representationDuality...
  • Based on the proof of Labastida-Mari{\~n}o-Ooguri-Vafa conjecture \cite{lmov}, we derive an infinite product formula for Chern-Simons partition functions, the generating function of quantum $\mathfrak{sl}_N$ invariants. Some symmetry properties of the infinite product will also be discussed.
    Chern-Simons termPartition functionKnot invariantHOMFLY polynomialDualityChern-Simons theoryQuantum groupOrientationUnknotGauge theory...
  • We argue how to identify supersymmetric quiver quantum mechanics description of BPS states, which arise in string theory in brane systems representing knots. This leads to a surprising relation between knots and quivers: to a given knot we associate a quiver, so that various types of knot invariants are expressed in terms of characteristics of a moduli space of representations of the corresponding quiver. This statement can be regarded as a novel type of categorification of knot invariants, and among its various consequences we find that Labastida-Mari\~no-Ooguri-Vafa (LMOV) invariants of a knot can be expressed in terms of motivic Donaldson-Thomas invariants of the corresponding quiver; this proves integrality of LMOV invariants, conjectured originally based on string theory and M-theory arguments.
    QuiverPerturbation theoryKnot invariantBPS stateString theoryQuantum mechanicsCategorificationTrefoil knotM-theoryExpectation Value...
  • We study the Chern-Simons partition function of orthogonal quantum group invariants, and propose a new orthogonal Labastida-Mari\~{n}o-Ooguri-Vafa conjecture as well as degree conjecture for free energy associated to the orthogonal Chern-Simons partition function. We prove the degree conjecture and some interesting cases of the orthogonal LMOV conjecture. In particular, we provide a formula for the colored Kauffman polynomials of torus knots and links, and apply this formula to verify certain cases of the conjecture at roots of unity other than $1$. We also derive formulas of Lickorish-Millett type for Kauffman polynomials and relate all these to the orthogonal LMOV conjecture.
    Quantum groupChern-Simons termKauffman polynomialPartition functionTorus knotTorusUnknotHOMFLY polynomialDualityIrreducible representation...
  • We study the open string integrality invariants (LMOV invariants) for toric Calabi-Yau 3-folds with Aganagic-Vafa brane (AV-brane). In this paper, we focus on the case of the resolved conifold with one out AV-brane in any integer framing $\tau$, which is the large $N$ duality of the Chern-Simons theory for a framed unknot with integer framing $\tau$ in $S^3$. We compute the explicit formulas for the LMOV invariants in genus $g=0$ with any number of holes, and prove their integrality. For the higher genus LMOV invariants with one hole, they are reformulated into a generating function $g_{m}(q,a)$, and we prove that $g_{m}(q,a)\in (q^{1/2}-q^{-1/2})^{-2}\mathbb{Z}[(q^{1/2}-q^{-1/2})^2,a^{\pm 1/2}]$ for any integer $m\geq 1$. As a by-product, we compute the reduced open string partition function of $\mathbb{C}^3$ with one AV-brane in framing $\tau$. We find that, for $\tau\leq -1$, this open string partition function is equivalent to the Hilbert-Poincar\'e series of the Cohomological Hall algebra of the $|\tau|$-loop quiver. It gives an open string GW/DT correspondence.
    Partition functionOpen string theory3-foldUnknotDualityChern-Simons termGromov-Witten invariantTopological stringsFramed knotQuiver...
  • Recent advances in knot polynomial calculus allowed us to obtain a huge variety of LMOV integers counting degeneracy of the BPS spectrum of topological theories on the resolved conifold and appearing in the genus expansion of the plethystic logarithm of the Ooguri-Vafa partition functions. Already the very first look at this data reveals that the LMOV numbers are randomly distributed in genus (!) and are very well parameterized by just three parameters depending on the representation, an integer and the knot. We present an accurate formulation and evidence in support of this new puzzling observation about the old puzzling quantities. It probably implies that the BPS states counted by the LMOV numbers can actually be composites made of still more elementary objects.
    Gaussian distributionCountingTorus knotKnot polynomialBinomial DistributionPartition functionHOMFLY polynomialTwist knotBPS statePerturbation theory...
  • We present a unified treatment of the quantum mechanics of B-factory and neutrino oscillation experiments. While our approach obtains the usual phenomenological predictions for these experiments, it does so without having to invoke perplexing Einstein-Podolsky-Rosen correlations or non-intuitive kinematical assumptions.
    B mesonNeutrino oscillationsNeutrino oscillation experimentsB-factoryQuantum mechanics...
  • We study the return probability and its imaginary ($\tau$) time continuation after a quench from a domain wall initial state in the XXZ spin chain, focusing mainly on the region with anisotropy $|\Delta|< 1$. We establish exact Fredholm determinant formulas for those, by exploiting a connection to the six vertex model with domain wall boundary conditions. In imaginary time, we find the expected scaling for a partition function of a statistical mechanical model of area proportional to $\tau^2$, which reflects the fact that the model exhibits the limit shape phenomenon. In real time, we observe that in the region $|\Delta|<1$ the decay for large times $t$ is nowhere continuous as a function of anisotropy: it is either Gaussian at roots of unity or exponential otherwise. As an aside, we also determine that the front moves as $x_{\rm f}(t)=t\sqrt{1-\Delta^2}$, by analytic continuation of known arctic curves in the six vertex model. Exactly at $|\Delta|=1$, we find the return probability decays as $e^{-\zeta(3/2) \sqrt{t/\pi}}t^{1/2}O(1)$. It is argued that this result provides an upper bound on spin transport. In particular, it suggests that transport should be diffusive at the isotropic point for this quench.
    QuenchingDomain wallVertex modelPartition functionXXZ spin chainFredholm determinantQuadratureHamiltonianFluid dynamicsAnalytic continuation...
  • We use a suite of high-resolution cosmological dwarf galaxy simulations to test the accuracy of commonly-used mass estimators from Walker et al.(2009) and Wolf et al.(2010), both of which depend on the observed line-of-sight velocity dispersion and the 2D half-light radius of the galaxy, $Re$. The simulations are part of the Feedback in Realistic Environments (FIRE) project and include twelve systems with stellar masses spanning $10^{5} - 10^{7} M_{\odot}$ that have structural and kinematic properties similar to those of observed dispersion-supported dwarfs. Both estimators are found to be quite accurate: $M_{Wolf}/M_{true} = 0.98^{+0.19}_{-0.12}$ and $M_{Walker}/M_{true} =1.07^{+0.21}_{-0.15}$, with errors reflecting the 68% range over all simulations. The excellent performance of these estimators is remarkable given that they each assume spherical symmetry, a supposition that is broken in our simulated galaxies. Though our dwarfs have negligible rotation support, their 3D stellar distributions are flattened, with short-to-long axis ratios $ c/a \simeq 0.4-0.7$. The accuracy of the estimators shows no trend with asphericity. Our simulated galaxies have sphericalized stellar profiles in 3D that follow a nearly universal form, one that transitions from a core at small radius to a steep fall-off $\propto r^{-4.2}$ at large $r$; in projection they are well fit by S\'ersic profiles. We find that the most important empirical quantity affecting mass estimator accuracy is $Re$. Determining $Re$ by an analytic fit to the surface density profile produces a better estimated mass than if the half-light radius is determined via direct summation.
    Statistical estimatorGalaxyDwarf galaxyFIRE simulationsHalf-light radiusMilky WayStarStellar distributionEllipticityVelocity dispersion...
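    The two estimators tested above are simple enough to state in code. A hedged sketch, using the commonly quoted coefficients (Walker et al. 2009: $M(r_{1/2}) \simeq 2.5\,\sigma_{\rm los}^2\, r_{1/2}/G$ with $r_{1/2}$ the projected half-light radius; Wolf et al. 2010: $M_{1/2} \simeq 4\,\sigma_{\rm los}^2\, R_e/G$, using $r_{1/2} \simeq \tfrac{4}{3} R_e$); consult the original papers before relying on these numbers:

    ```python
    G = 4.302e-6  # gravitational constant in kpc (km/s)^2 / M_sun

    def mass_walker(sigma_los, r_half):
        """Walker et al. (2009) estimator: M(r_half) ~ 2.5 sigma_los^2 r_half / G,
        with r_half the 2D projected half-light radius in kpc, sigma in km/s."""
        return 2.5 * sigma_los**2 * r_half / G

    def mass_wolf(sigma_los, r_e):
        """Wolf et al. (2010) estimator: M_1/2 ~ 4 sigma_los^2 R_e / G."""
        return 4.0 * sigma_los**2 * r_e / G

    # e.g. a dispersion-supported dwarf with sigma_los = 10 km/s, R_e = 0.3 kpc
    m_example = mass_wolf(10.0, 0.3)  # a few times 10^7 M_sun
    ```

    Both estimators deliberately evaluate the mass near the half-light radius, where the inferred mass is least sensitive to the unknown velocity anisotropy — which is why they remain accurate even for the flattened simulated dwarfs described above.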
  • Chiral anomalies give rise to dissipationless transport phenomena such as the chiral magnetic and vortical effects. In these notes I review the theory from a quantum field theoretic, hydrodynamic and holographic perspective. A physical interpretation of the otherwise somewhat obscure concepts of consistent and covariant anomalies will be given. Vanishing of the CME in strict equilibrium will be connected to the boundary conditions in momentum space imposed by the regularization. The role of the gravitational anomaly will be explained. That it contributes to transport in an unexpectedly low order in the derivative expansion can be understood most easily via holography. Anomalous transport is also supposed to play a key role in understanding the electronics of advanced materials, the Dirac- and Weyl (semi)metals. Anomaly related phenomena such as negative magnetoresistivity, anomalous Hall effect, thermal anomalous Hall effect and Fermi arcs can be understood via anomalous transport. Finally I briefly review a holographic model of a Weyl semimetal which allows one to infer a new phenomenon related to the gravitational anomaly: the presence of odd viscosity.
    Quantum anomalyGauge fieldGravitational anomalyWeyl semimetalAnomalous transportChiral magnetic effectHolographic principleAnti de Sitter spaceChiral fermionHorizon...