
- The latest measurements of the CMB electron-scattering optical depth reported by Planck significantly reduce the allowed space of HI reionization models, pointing towards a later-ending and/or less extended phase transition than previously believed. Reionization impulsively heats the intergalactic medium (IGM) to $\sim10^4$ K, and owing to long cooling and dynamical times in the diffuse gas, comparable to the Hubble time, memory of reionization heating is retained. Therefore, a late-ending reionization has significant implications for the structure of the $z\sim5-6$ Lyman-$\alpha$ (Ly$\alpha$) forest. Using state-of-the-art hydrodynamical simulations that allow us to vary the timing of reionization and its associated heat injection, we argue that extant thermal signatures from reionization can be detected via the Ly$\alpha$ forest power spectrum at $5< z<6$. This arises because the small-scale cutoff in the power depends not only on the IGM's temperature at these epochs, but is also particularly sensitive to the pressure smoothing scale set by the IGM's full thermal history. Comparing our different reionization models with existing measurements of the Ly$\alpha$ forest flux power spectrum at $z=5.0-5.4$, we find that models satisfying Planck's $\tau_e$ constraint favor a moderate amount of heat injection, consistent with galaxies driving reionization but disfavoring quasar-driven scenarios. We explore the impact of different reionization histories and heating models on the shape of the power spectrum, find that they can produce similar effects, and argue that this degeneracy can be broken with high enough quality data. 
  We study the feasibility of measuring the flux power spectrum at $z\simeq 6$ using mock quasar spectra and conclude that a sample of $\sim10$ high-resolution spectra with attainable S/N ratio will allow us to discriminate between different reionization scenarios.
  Keywords: Reionization, Flux power spectrum, Intergalactic medium, Quasar, Pressure smoothing, Planck mission, Mean transmitted flux, Reionization models, History of the reionization, Hydrodynamical simulations, ...
- We review the physics of purely leptonic decays of $\pi^\pm$, $K^\pm$, $D^{\pm}$, $D_s^\pm$, and $B^\pm$ pseudoscalar mesons. The measured decay rates are related to the product of the relevant weak-interaction-based CKM matrix element of the constituent quarks and a strong interaction parameter related to the overlap of the quark and antiquark wave-functions in the meson, called the decay constant $f_P$. The leptonic decay constants for $\pi^\pm$, $K^\pm$, $D^{\pm}$, $D_s^\pm$, and $B^\pm$ mesons can be obtained with controlled theoretical uncertainties and high precision from {\it ab initio} lattice-QCD simulations. The combination of experimental leptonic decay-rate measurements and theoretical decay-constant calculations enables the determination of several elements of the CKM matrix within the standard model. These determinations are competitive with those obtained from semileptonic decays, and also complementary because they are sensitive to different quark flavor-changing currents. They can also be used to test the unitarity of the first and second rows of the CKM matrix. Conversely, taking the CKM elements predicted by unitarity, one can infer "experimental" values for $f_P$ that can be compared with theory. These provide tests of lattice-QCD methods, provided new-physics contributions to leptonic decays are negligible at the current level of precision. This review is the basis of the article in the Particle Data Group's 2016 edition, updating the versions in Refs. [1-3].
  Keywords: Lattice QCD, Cabibbo-Kobayashi-Maskawa matrix, Pseudoscalar meson, Meson decay constant, Unitarity, Standard Model, Decay rate, Branching ratio, Gauge field, Isospin, ...
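  As a concrete check of the rate formula underlying this review, the sketch below evaluates the tree-level standard-model width $\Gamma(P^+\to\ell^+\nu_\ell)=\frac{G_F^2}{8\pi}f_P^2\,|V_{qq'}|^2\,m_\ell^2\,m_P\,(1-m_\ell^2/m_P^2)^2$ for $\pi^+\to\mu^+\nu_\mu$. The numerical inputs are approximate PDG values, assumed here for illustration; radiative corrections are neglected.

  ```python
  import math

  # Approximate inputs (assumed illustrative values, PDG normalization f_pi ~ 130 MeV)
  G_F  = 1.1663787e-5    # Fermi constant, GeV^-2
  f_pi = 0.1302          # pi+ decay constant, GeV
  V_ud = 0.9742          # CKM element
  m_pi = 0.13957         # pi+ mass, GeV
  m_mu = 0.10566         # muon mass, GeV
  hbar = 6.582119569e-25 # GeV * s

  def leptonic_width(f_P, V, m_P, m_l):
      """Tree-level width for P+ -> l+ nu; note the m_l^2 helicity suppression."""
      helicity = m_l**2 * (1.0 - m_l**2 / m_P**2)**2
      return G_F**2 / (8.0 * math.pi) * f_P**2 * V**2 * m_P * helicity

  gamma = leptonic_width(f_pi, V_ud, m_pi, m_mu)
  tau_pi = hbar / gamma  # pi+ -> mu nu dominates, so this approximates the lifetime
  print(f"Gamma = {gamma:.3e} GeV, lifetime ~ {tau_pi:.2e} s")
  ```

  The result lands within a few percent of the observed $\pi^+$ lifetime of about $2.6\times10^{-8}$ s, which is the kind of consistency between $f_P$, the CKM element, and the measured rate that the review exploits.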
- In this paper, we study the $B \to K^*$ transition form factors (TFFs) within the QCD light-cone sum rules (LCSR) approach. Two correlators, i.e. the usual one and the right-handed one, are adopted in the LCSR calculation. The resultant LCSRs for the $B \to K^*$ TFFs are arranged according to the twist structure of the $K^*$-meson light-cone distribution amplitudes (LCDAs), whose twist-2, twist-3 and twist-4 terms behave quite differently for the different correlators. We observe that the twist-4 LCDAs, though generally small, can have sizable contributions to the TFFs $A_{1/2}$, $V$ and $T_1$, so the twist-4 terms should be kept for a sound prediction. We also observe that even though different choices of the correlator lead to different LCSRs with different twist contributions, the large correlation coefficients for most of the TFFs indicate that the LCSRs for different correlators are close to each other, not only in their values at the large recoil point $q^2=0$ but also in their ascending trends over the whole $q^2$-region. Such a high degree of correlation is confirmed by their application to the branching fraction of the semi-leptonic decay $B \to K^* \mu^+ \mu^-$. Thus, a proper choice of correlator may inversely provide a chance for probing uncertain LCDAs, i.e. the contributions from those LCDAs can be amplified to a certain degree via a proper choice of correlator, thus amplifying the sensitivity of the TFFs, and hence their related observables, to those LCDAs.
  Keywords: Light-cone sum rules, Transition form factor, Branching ratio, Form factor, Power counting, Light cones, Borel parameter, Meson decay constant, Operator product expansion, Transverse momentum, ...
- Recent progress in cosmology has relied on combining different cosmological probes. In earlier work, we implemented an integrated approach to cosmology where the probes are combined into a common framework at the map level. This has the advantage of taking full account of the correlations between the different probes, to provide a stringent test of systematics and of the validity of the cosmological model. We extend this analysis to include not only CMB temperature, galaxy clustering, and weak lensing from SDSS but also CMB lensing, weak lensing from the DES SV survey, Type Ia SNe and $H_{0}$ measurements. This yields 12 auto- and cross-power spectra as well as background probes. Furthermore, we extend the treatment of systematic uncertainties. For $\Lambda$CDM, we find results that are consistent with our earlier work. Given our enlarged data set and systematics treatment, this confirms the robustness of our analysis and results. Furthermore, we find that our best-fit cosmological model gives a good fit to the data we consider with no signs of tensions within our analysis. We also find our constraints to be consistent with those found by WMAP9, SPT and ACT and the KiDS weak lensing survey. Comparing with the Planck Collaboration results, we see a broad agreement, but there are indications of a tension from the marginalized constraints in most pairs of cosmological parameters. Since our analysis includes CMB temperature Planck data at $10 < \ell < 610$, the tension appears to arise between the Planck high-$\ell$ data and the other measurements. Furthermore, we find the constraints on the probe calibration parameters to be in agreement with expectations, showing that the data sets are mutually consistent. In particular, this yields a confirmation of the amplitude calibration of the weak lensing measurements from SDSS, DES SV and Planck CMB lensing from our integrated analysis. 
  [abridged]
  Keywords: Weak lensing, CMB lensing, Dark Energy Survey, Sloan Digital Sky Survey, Planck mission, Intrinsic alignment, CMB temperature anisotropy, Galaxy, Cross-correlation, CMB temperature, ...
- One of the major challenges of particle physics has been to gain an in-depth understanding of the role of quark flavor, and measurements and theoretical interpretations have advanced tremendously: apart from masses and quantum numbers of flavor particles, there now exist detailed measurements of the characteristics of their interactions, allowing stringent tests of Standard Model predictions. Among the most interesting phenomena of flavor physics is the violation of CP symmetry, which has been subtle and difficult to explore. Until the early 1990s, observations of CP violation were confined to neutral $K$ mesons, but since then a large number of CP-violating processes have been studied in detail in neutral $B$ mesons. In parallel, measurements of the couplings of the heavy quarks and the dynamics of their decays in large samples of $K, D$, and $B$ mesons have been greatly improved in accuracy, and the results are being used as probes in the search for deviations from the Standard Model. In the near future, there will be a transition from the current to a new generation of experiments, so a review of the status of quark flavor physics is timely. This report summarizes the results of the current generation of experiments that is about to be completed and confronts these results with the theoretical understanding of the field.
  Keywords: Hadronization, Standard Model, Kaon, Unitarity, Cabibbo-Kobayashi-Maskawa matrix, Form factor, Muon, Hamiltonian, Flavour physics, Minimal Flavor Violation, ...
- We present the equations of relativistic hydrodynamics coupled to dynamical electromagnetic fields, including the effects of polarization, electric fields, and the derivative expansion. We enumerate the transport coefficients at leading order in derivatives, including electrical conductivities, viscosities, and thermodynamic coefficients. We find the constraints on transport coefficients due to the positivity of entropy production, and derive the corresponding Kubo formulas. For the neutral state in a magnetic field, small fluctuations include Alfvén waves, magnetosonic waves, and the dissipative modes. For the state with a non-zero dynamical charge density in a magnetic field, plasma oscillations gap out all propagating modes, except for Alfvén-like waves with a quadratic dispersion relation. We relate the transport coefficients in the "conventional" magnetohydrodynamics (formulated using Maxwell's equations in matter) to those in the "dual" version of magnetohydrodynamics (formulated using the conserved magnetic flux).
  Keywords: Transport coefficient, Fluid dynamics, Constitutive relation, Magnetohydrodynamics, Energy-momentum tensor, Two-point correlation function, Kubo formula, Entropy current, Generating functional, Hydrodynamic description, ...
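  The wave content described above can be illustrated with the textbook ideal-MHD phase speeds (without the dissipative corrections the paper derives): the Alfvén mode at $v_A|\cos\theta|$ and the fast/slow magnetosonic modes from $v^2=\tfrac12\big[(c_s^2+v_A^2)\pm\sqrt{(c_s^2+v_A^2)^2-4c_s^2v_A^2\cos^2\theta}\big]$. The field, density, and sound speed below are arbitrary illustrative values.

  ```python
  import math

  # Illustrative uniform background (SI units, assumed numbers)
  mu0 = 4.0e-7 * math.pi
  B, rho, c_s = 1.0e-9, 1.0e-21, 1.0e5   # tesla, kg/m^3, m/s
  v_A = B / math.sqrt(mu0 * rho)         # Alfven speed

  def mhd_speeds(theta):
      """Return (slow, Alfven, fast) phase speeds at angle theta between k and B."""
      a = v_A**2 + c_s**2
      c2 = (c_s * v_A * math.cos(theta))**2
      disc = math.sqrt(a * a - 4.0 * c2)
      v_fast = math.sqrt(0.5 * (a + disc))
      v_slow = math.sqrt(0.5 * (a - disc))
      return v_slow, v_A * abs(math.cos(theta)), v_fast

  for theta in (0.1, 0.7, 1.3):
      v_s, v_a, v_f = mhd_speeds(theta)
      # standard ordering: slow <= Alfven <= fast at every angle
      assert v_s <= v_a <= v_f
  ```

  The ordering check makes the mode structure explicit; the paper's results modify these modes through conductivities and viscosities (dissipation, and the quadratic Alfvén-like dispersion in the charged state).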
- As one of the most massive Milky Way satellites, the Sagittarius dwarf galaxy has played an important role in shaping the Galactic disk and stellar halo morphologies. The disruption of Sagittarius over several close-in passages has populated the halo of our Galaxy with large-scale tidal streams and offers a unique diagnostic tool for measuring its gravitational potential. Here we test different progenitor mass models for the Milky Way and Sagittarius by modeling the full infall of the satellite. We constrain the mass of the Galaxy based on the observed orbital parameters and multiple tidal streams of Sagittarius. Our semi-analytic modeling of the orbital dynamics agrees with full $N$-body simulations, and favors low values for the Milky Way mass, $\lesssim 10^{12}M_\odot$. This conclusion eases the tension between $\Lambda$CDM and the observed parameters of the Milky Way satellites.
  Keywords: Milky Way, Dynamical friction, Sagittarius Dwarf Elliptical Galaxy, Virial radius, Mass of the Milky Way, Virial mass, Phase space, Tidal stream, Periapsis, Milky Way satellite, ...
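  A minimal sketch of the kind of semi-analytic orbit modeling involved: a satellite in a singular isothermal halo (circular speed $v_c$) sinking through Chandrasekhar dynamical friction. All parameters here are illustrative toy values, not the paper's fitted Milky Way/Sagittarius models.

  ```python
  import math

  # Toy satellite + halo (assumed illustrative values)
  G     = 4.30e-6   # kpc (km/s)^2 / Msun
  v_c   = 220.0     # km/s, halo circular speed
  M_sat = 1.0e10    # Msun, satellite mass
  lnL   = 3.0       # Coulomb logarithm

  def accel(x, y, vx, vy):
      r, v = math.hypot(x, y), math.hypot(vx, vy)
      rho = v_c**2 / (4.0 * math.pi * G * r**2)   # singular isothermal sphere
      X = v / v_c                                 # = v / (sqrt(2) sigma)
      coeff = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
      a_df = 4.0 * math.pi * G**2 * M_sat * rho * lnL * coeff / v**3
      # halo gravity (-v_c^2 r_hat) plus friction antiparallel to the velocity
      return (-v_c**2 * x / r**2 - a_df * vx,
              -v_c**2 * y / r**2 - a_df * vy)

  # Semi-implicit Euler; time unit kpc/(km/s) ~ 0.978 Gyr
  x, y, vx, vy = 60.0, 0.0, 0.0, v_c   # initially circular orbit at 60 kpc
  r0, dt = 60.0, 0.01
  for _ in range(500):                 # ~4.9 Gyr
      ax, ay = accel(x, y, vx, vy)
      vx += ax * dt; vy += ay * dt
      x += vx * dt; y += vy * dt
  r_final = math.hypot(x, y)
  print(f"orbit decayed from {r0:.0f} kpc to {r_final:.1f} kpc")
  ```

  In a real version of such modeling, the friction efficiency depends on the satellite and halo masses, which is what makes repeated pericentric passages of Sagittarius a lever arm on the Milky Way's mass.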
- We present a list of formulae useful for Weyl-Heisenberg integral quantizations, with arbitrary weight, of functions or distributions on the plane. Most of these formulae are known, others are original. The list encompasses particular cases like Weyl-Wigner quantization (constant weight) and coherent states (CS) or Berezin quantization (Gaussian weight). The formulae are given with implicit assumptions on their validity on appropriate space(s) of functions (or distributions). One of the aims of the document is to accompany a work in progress on Weyl-Heisenberg integral quantization of dynamics for the motion of a point particle on the line.
  Keywords: Quantization, Weight function, Covariance, Coherent state, Moyal product, Harmonic oscillator, Physical dimensions, Multiplication operator, Unitary operator, Poisson bracket, ...
- Using 2.93~fb$^{-1}$ of data taken at 3.773 GeV with the BESIII detector operated at the BEPCII collider, we study the semileptonic decays $D^+ \to \bar K^0e^+\nu_e$ and $D^+ \to \pi^0 e^+\nu_e$. We measure the absolute decay branching fractions $\mathcal B(D^+ \to \bar K^0e^+\nu_e)=(8.60\pm0.06\pm 0.15)\times10^{-2}$ and $\mathcal B(D^+ \to \pi^0e^+\nu_e)=(3.63\pm0.08\pm0.05)\times10^{-3}$, where the first uncertainties are statistical and the second systematic. We also measure the differential decay rates and study the form factors of these two decays. With the values of $|V_{cs}|$ and $|V_{cd}|$ from Particle Data Group fits assuming CKM unitarity, we obtain the values of the form factors at $q^2=0$, $f^K_+(0) = 0.725\pm0.004\pm 0.012$ and $f^{\pi}_+(0) = 0.622\pm0.012\pm 0.003$. Taking input from recent lattice QCD calculations of these form factors, we determine values of the CKM matrix elements $|V_{cs}|=0.944 \pm 0.005 \pm 0.015 \pm 0.024$ and $|V_{cd}|=0.210 \pm 0.004 \pm 0.001 \pm 0.009$, where the third uncertainties are theoretical.
  Keywords: Form factor, Monte Carlo method, Decay rate, Systematic error, Branching ratio, Semileptonic decay, Electron neutrino, Cabibbo-Kobayashi-Maskawa matrix, Positron, Covariance matrix, ...
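  The two quoted extractions are complementary because the semileptonic rate really fixes the product $f_+(0)\,|V_{cs}|$: quoting either factor requires outside input for the other. A rough numerical cross-check, using an assumed unitarity value of $|V_{cs}|$ and an illustrative lattice value of $f_+^K(0)\approx0.747$ (both hypothetical inputs for this sketch, not taken from the paper):

  ```python
  # Experimentally determined combination: f_+(0) * |V_cs|
  f_plus_measured = 0.725     # BESIII f_+^K(0), extracted assuming |V_cs| below
  V_cs_unitarity  = 0.9735    # CKM-unitarity value (assumed input)
  product = f_plus_measured * V_cs_unitarity

  # Swap in a lattice-QCD form factor to go the other way (illustrative value)
  f_plus_lattice = 0.747
  V_cs_from_lattice = product / f_plus_lattice
  print(round(V_cs_from_lattice, 3))   # close to the quoted 0.944
  ```

  With these inputs the inverted extraction reproduces a value near the quoted $|V_{cs}|=0.944$, showing the two numbers in the abstract are two readings of the same measured combination.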
- It is well recognized that searching for new physics at lower-energy colliders is complementary to searches at high-energy machines such as the LHC, and the large BESIII database may offer a unique opportunity in this respect. In this paper we calculate the branching ratios of the semi-leptonic processes $D^+_s \to K^+ e^-e^+$, $D^+_s \to K^+ e^-\mu^+$ and the leptonic processes $D^0 \to e^-e^+$, $D^0 \to e^-\mu^+$ within the $U(1)'$ model, the two-Higgs-doublet model (2HDM) and the unparticle scenario separately. It is found that both the $U(1)'$ model and the 2HDM may influence the semi-leptonic decay rates, but only the $U(1)'$ model offers substantial contributions to the pure leptonic decays; the resultant branching ratio of $D^0 \to e^-\mu^+$ can be as large as $10^{-8}\sim10^{-7}$, which might be observed at a future super $\tau$-charm factory.
  Keywords: Standard Model, Unmatter, Branching ratio, Two Higgs Doublet Model, Rare decay, Lepton flavour violation, Flavour Changing Neutral Currents, Collider, Higgs boson, Charmed meson, ...
- The strong and radiative decay properties of the low-lying $\Omega_c$ states are studied in a constituent quark model. We find that the $\Omega_c$ states newly observed by the LHCb Collaboration fit well into the predicted decay patterns. Thus, their spin-parity can possibly be assigned as follows: (i) The $\Omega_c(3000)$ and $\Omega_c(3090)$ can be assigned to be two $J^P=1/2^-$ states, $|^2P_{\lambda}\frac{1}{2}^-\rangle$ and $|^4P_{\lambda}\frac{1}{2}^-\rangle$, respectively. (ii) The $\Omega_c(3050)$ most likely corresponds to the $J^P=3/2^-$ state, i.e. either $|^2P_{\lambda}\frac{3}{2}^-\rangle$ or $|^4P_{\lambda}\frac{3}{2}^-\rangle$. (iii) The $\Omega_c(3066)$ can be assigned as the $|^4P_{\lambda}\frac{5}{2}^-\rangle$ state with $J^P=5/2^-$. (iv) The $\Omega_c(3119)$ might correspond to one of the two $2S$ states of the first radial excitations, i.e. $|2^2S_{\lambda\lambda}\frac{1}{2}^+\rangle$ or $|2^4S_{\lambda\lambda}\frac{3}{2}^+\rangle$.
  Keywords: Radiative decay, Decay width, Pseudoscalar meson, Constituent quark, Decay channels, Excited state, LHCb, Wavefunction, Charmed baryons, Decay mode, ...
- Recently, experimental results from the LHCb Collaboration suggested the existence of five new excited states of $\Omega_c^0$: $\Omega_c(3000)^0$, $\Omega_c(3050)^0$, $\Omega_c(3066)^0$, $\Omega_c(3090)^0$ and $\Omega_c(3119)^0$; the quantum numbers of these new particles have not yet been determined. To understand the nature of these states, a dynamical calculation of 5-quark systems with quantum numbers $IJ^P=0(\frac{1}{2})^-$, $0(\frac{3}{2})^-$ and $0(\frac{5}{2})^-$ is performed in the framework of the chiral quark model with the help of the Gaussian expansion method. The results show that the $\Xi\bar{D}$, $\Xi_c\bar{K}$ and $\Xi_c^*\bar{K}$ states are possible candidates for these new particles. The calculated distances between quark pairs shed light on the internal structure of these pentaquark states.
  Keywords: Pentaquark, Wavefunction, LHCb, Excited state, Color-flavor locking, Spin structure, Orbital angular momentum of light, Few-body systems, Hamiltonian, Degree of freedom, ...
- The existence of doubly heavy baryons has not yet been well established experimentally. Searching for them is one of the important goals at the Large Hadron Collider (LHC), where heavy quarks are copiously produced. In this Letter we study the weak decays of the doubly charmed baryons $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$, using the light-front quark model to calculate the transition form factors and considering, for the first time, the rescattering mechanism for the long-distance contributions in order to predict the corresponding branching fractions. Given the predicted larger lifetime of $\Xi_{cc}^{++}$ compared to that of $\Xi_{cc}^{+}$, we find that the processes $\Xi_{cc}^{++}\to \Lambda_c^+K^-\pi^+\pi^+$, $\Xi_c^+\pi^+$ and $p D^0\pi^+$ are the most promising for observation at LHCb.
  Keywords: Branching ratio, Charmed baryons, Large Hadron Collider, Form factor, Diquark, Light front, Weak decay, Charm quark, LHCb, Heavy quark, ...
- Motivated by recent results by the ATLAS and CMS collaborations on the angular distribution of the $B \to K^* \mu^+\mu^-$ decay, we perform a state-of-the-art analysis of rare $B$ meson decays based on the $b \to s \mu \mu$ transition. Using standard estimates of hadronic uncertainties, we confirm the presence of a sizable discrepancy between data and SM predictions. We do not find evidence for a $q^2$ or helicity dependence of the discrepancy. The data can be consistently described by new physics in the form of a four-fermion contact interaction $(\bar s \gamma_\alpha P_L b)(\bar \mu \gamma^\alpha \mu)$. Assuming that the new physics affects decays with muons but not with electrons, we make predictions for a variety of theoretically clean observables sensitive to violation of lepton flavour universality.
  Keywords: LHCb, Wilson coefficients, Lepton flavour universality, Branching ratio, Helicity, ATLAS Experiment at CERN, Muon, Form factor, Flavour, Meson decays, ...
- Neutrino masses can be generated by fermion triplets with TeV-scale mass, which would manifest at the LHC as production of two leptons together with two heavy SM vectors or Higgs bosons, giving rise to final states such as 2 leptons + 4 jets (which can violate lepton number and/or lepton flavor) or 1 lepton + 4 jets + missing transverse energy. We devise cuts to suppress the SM backgrounds to these signatures. Furthermore, for most of the mass range suggested by neutrino data, triplet decays are detectably displaced from the production point, allowing one to infer the neutrino mass parameters. We compare with the LHC signals of type-I and type-II see-saw.
  Keywords: Large Hadron Collider, Seesaw mechanism, Neutrino mass, Higgs boson, Neutrino, Missing transverse energy, Displaced vertices, Lepton number violation, TeV scale, Type III seesaw, ...
#### Seesaw at LHC

We study the implementation of the type III seesaw in the ordinary nonsupersymmetric SU(5) grand unified theory. This allows for an alternative definition of the minimal SU(5) model, with the inclusion of the adjoint fermionic multiplet. The main prediction of the theory is a light fermionic SU(2) triplet with mass at the electroweak scale. Due to their gauge couplings, these triplets can be produced in pairs via the Drell-Yan process, and due to the Majorana nature of the neutral component their decays leave a clear signature of same-sign dileptons and four jets. This allows for their possible discovery at the LHC and provides an example of directly measurable seesaw parameters.
  Keywords: Large Hadron Collider, Neutrino, Type III seesaw, Standard Model, QCD jet, Proton decay, Yukawa coupling, Leptoquark, Sterile neutrino, Seesaw mechanism, ...
- We investigate the LHC discovery potential for electroweak-scale heavy neutrino singlets (seesaw I), scalar triplets (seesaw II) and fermion triplets (seesaw III). For seesaw I we consider a heavy Majorana neutrino coupling to the electron or muon. For seesaw II we concentrate on the likely scenario where the new scalars decay to two leptons. For seesaw III we restrict ourselves to heavy Majorana fermion triplets decaying to light leptons plus gauge or Higgs bosons, which are dominant except for unnaturally small mixings. The possible signals are classified in terms of the charged lepton multiplicity, studying nine different final states ranging from one to six charged leptons. Using a fast detector simulation of signals and backgrounds, it is found that the trilepton channel $\ell^\pm \ell^\pm \ell^\mp$ is by far the best one for scalar triplet discovery, and for fermion triplets it is as good as the like-sign dilepton channel $\ell^\pm \ell^\pm$. For heavy neutrinos with a mass of $\mathcal{O}(100)$ GeV, this trilepton channel is also better than the usually studied like-sign dilepton mode. 
  In addition to evaluating the discovery potential, we place special emphasis on the discrimination among seesaw models if a positive signal is observed. This could be accomplished not only by searching for signals in different final states, but also by reconstructing the mass and determining the charge of the new resonances, which is possible in several cases. For high luminosities, further evidence is provided by the analysis of the production angular distributions in the cleanest channels with three or four leptons.
  Keywords: Charged lepton, QCD jet, Sterile neutrino, Standard Model, Invariant mass, Muon, Neutrino, Luminosity, Normal hierarchy, Large Hadron Collider, ...
- We derive and compare the fractions of cool-core clusters in the Planck Early Sunyaev-Zel'dovich sample of 164 clusters with $z \leq 0.35$ and in a flux-limited X-ray sample of 129 clusters with $z \leq 0.30$, using Chandra observations. We use four metrics to identify cool-core clusters: 1) the concentration parameter, i.e. the ratio of the integrated emissivity profile within 0.15 $r_{500}$ to that within $r_{500}$; 2) the ratio of the integrated emissivity profile within 40 kpc to that within 400 kpc; 3) the cuspiness of the gas density profile, i.e. the negative of the logarithmic derivative of the gas density with respect to the radius, measured at 0.04 $r_{500}$; and 4) the central gas density, measured at 0.01 $r_{500}$. We find that the sample of X-ray selected clusters, as characterized by each of these metrics, contains a significantly larger fraction of cool-core clusters compared to the sample of SZ selected clusters (41$\pm$6% vs. 28$\pm$4% using the concentration in the 0.15--1.0 $r_{500}$ range, 60$\pm$7% vs. 36$\pm$5% using the concentration in the 40--400 kpc range, 64$\pm$7% vs. 38$\pm$5% using the cuspiness, and 53$\pm$7% vs. 39$\pm$5% using the central gas density). Qualitatively, cool-core clusters are more X-ray luminous at fixed mass. Hence, our X-ray flux-limited sample, compared to the approximately mass-limited SZ sample, is over-represented with cool-core clusters. We describe a simple quantitative model that uses the excess luminosity of cool-core clusters compared to non-cool-core clusters at fixed mass to successfully predict the observed fraction of cool-core clusters in X-ray selected samples.
  Keywords: Cool core galaxy cluster, Non cool-core galaxy cluster, Temperature profile, Chandra X-ray Observatory, Cluster sampling, Luminosity, Cluster of galaxies, SZ selected cluster, Flux limited sample, Cosmology, ...
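  The first concentration metric can be sketched numerically: X-ray emissivity scales roughly as $n_e^2$, so the metric compares the emission measure inside $0.15\,r_{500}$ with that inside $r_{500}$. The single-beta density profile below is an assumed toy model, not the paper's fitted profiles.

  ```python
  import math

  # Toy single-beta profile n(r) = n0 / (1 + (r/rc)^2)^(3*beta/2); rc, beta assumed
  def concentration(rc, beta=0.67, r500=1.0, steps=4000):
      def emission_measure(R):
          # EM(<R) = int_0^R n(r)^2 * 4 pi r^2 dr, simple Riemann sum
          total, dr = 0.0, R / steps
          for i in range(1, steps + 1):
              r = i * dr
              n = (1.0 + (r / rc) ** 2) ** (-1.5 * beta)
              total += n * n * 4.0 * math.pi * r * r * dr
          return total
      return emission_measure(0.15 * r500) / emission_measure(r500)

  c_cool    = concentration(rc=0.02)   # peaked, cool-core-like core radius
  c_noncool = concentration(rc=0.30)   # flat, non-cool-core-like core radius
  print(c_cool, c_noncool)
  assert c_cool > c_noncool            # peaked cores give higher concentration
  ```

  A small core radius concentrates the emission measure in the center and drives the metric toward 1, which is why this ratio cleanly separates cool-core from non-cool-core profiles.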
- Similarly to the cosmic star formation history, the black hole accretion rate density of the Universe peaked at $1<z<3$. This cosmic epoch is hence best suited for investigating the effects of radiative feedback from AGN. Observational efforts are underway to quantify the impact of AGN feedback, if any, on their host galaxies. Here we present a study of the molecular gas content of AGN hosts at $z\sim1.5$ using CO[2-1] line emission observed with ALMA for a sample of 10 AGNs. We compare this with a sample of galaxies without an AGN matched in redshift, stellar mass, and star formation rate. We detect CO in 3 AGNs with $\mathrm{L_{CO} \sim 6.3-25.1\times 10^{9} L_{\odot}}$, which translates to a molecular hydrogen gas mass of $\mathrm{2.5-10\times 10^{10} M_{\odot}}$ assuming a conventional conversion factor of $\mathrm{\alpha_{CO}}\sim3.6$. Our results indicate a >99% probability of lower depletion time scales and lower molecular gas fractions in AGN hosts with respect to the non-AGN comparison sample. We discuss the implications of these observations for the impact that AGN feedback may have on the star formation efficiency of $z>1$ galaxies.
  Keywords: Active Galactic Nuclei, Star formation rate, Host galaxy, Atacama Large Millimeter Array, AGN feedback, Luminosity, Star formation efficiency, Star-forming galaxy, Star formation, Stellar mass, ...
- We discuss the effect of ram pressure on the cold clouds in the centers of cool-core galaxy clusters, and in particular, how it reduces cloud velocity and sometimes causes an offset between the cold gas and young stars. The velocities of the molecular gas in both observations and our simulations fall in the range of $100-400$ km/s, much lower than expected if they fall from a few tens of kpc ballistically. If the intra-cluster medium (ICM) is at rest, the ram pressure of the ICM only slightly reduces the velocity of the clouds. When we assume that the clouds are actually "fluffier" because they are co-moving with a warm-hot layer, the velocity becomes smaller. If we also consider the AGN wind in the cluster center by adding a wind profile measured from the simulation, the clouds are further slowed down at small radii, and the resulting velocities are in general agreement with the observations and simulations. Because ram pressure only affects gas but not stars, it can cause a separation between a filament and young stars that formed in the filament as they move through the ICM together. This separation has been observed in Perseus and also exists in our simulations. We show that the star-filament offset combined with line-of-sight velocity measurements can help determine the true motion of the cold gas, and thus distinguish between inflows and outflows.
  Keywords: Galaxy filament, Intra-cluster medium, Ram pressure, Star, Young stellar object, Cool core galaxy cluster, Active Galactic Nuclei, Line of sight velocity, Star formation, Supermassive black hole, ...
- We present radiation-hydrodynamic simulations of radiatively-driven gas shells launched by bright active galactic nuclei (AGN) in isolated dark matter haloes. Our goals are (1) to investigate the ability of AGN radiation pressure on dust to launch galactic outflows and (2) to constrain the efficiency of infrared (IR) multi-scattering in boosting outflow acceleration. Our simulations are performed with the radiation-hydrodynamic code RAMSES-RT and include both single- and multi-scattered radiation pressure from an AGN, radiative cooling and self-gravity. Since outflowing shells always eventually become transparent to the incident radiation field, outflows that sweep up all intervening gas are likely to remain gravitationally bound to their halo even at high AGN luminosities. The expansion of outflowing shells is well described by simple analytic models as long as the shells are mildly optically thick to IR radiation. In this case, an enhancement in the acceleration of shells through IR multi-scattering occurs as predicted, i.e. a force $\dot{P} = \tau_{\rm IR} L/c$ is exerted on the gas. For high optical depths $\tau_{\rm IR} > 50$, however, momentum transfer between outflowing optically thick gas and IR radiation is rapidly suppressed, even if the radiation is efficiently confined. At high $\tau_{\rm IR}$, the characteristic flow time becomes shorter than the required trapping time of IR radiation, such that the momentum flux $\dot{P} \ll \tau_{\rm IR} L/c$. We argue that while unlikely to unbind massive galactic gaseous haloes, AGN radiation pressure on dust could play an important role in regulating star formation and black hole accretion in the nuclei of massive compact galaxies at high redshift.
  Keywords: Active Galactic Nuclei, Luminosity, Optically thick medium, Fluid dynamics, Navarro-Frenk-White profile, Star formation, Momentum transfer, Opacity, Speed of light, Supermassive black hole, ...
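  The analytic momentum-driven limit referenced above, $\mathrm{d}P/\mathrm{d}t=\tau_{\rm IR}L/c$, can be sketched with a crude shell integrator: a shell sweeping up gas in an isothermal halo, pushed by the boosted radiation force against gravity. The luminosity, velocity dispersion, and the choice that the swept-up gas traces the full halo mass profile are toy assumptions, not the paper's setup.

  ```python
  import math

  G, c_light = 6.674e-8, 2.998e10   # cgs
  kpc   = 3.086e21                  # cm
  L     = 3.0e46                    # erg/s, AGN luminosity (assumed)
  sigma = 150.0e5                   # cm/s, halo velocity dispersion (assumed)

  def shell_velocity(tau_IR, r_end=1.0 * kpc, r0=0.1 * kpc, steps=20000):
      """March the shell outward in radius; return speed at r_end (0 if it stalls)."""
      r, v = r0, 1.0e5              # small seed velocity
      dr = (r_end - r0) / steps
      dMdr = 2.0 * sigma**2 / G     # isothermal mass profile slope
      for _ in range(steps):
          M = 2.0 * sigma**2 * r / G             # shell mass = mass inside r (toy)
          force = tau_IR * L / c_light - G * M * M / r**2
          dt = dr / v
          v += (force - v * v * dMdr) / M * dt   # d(Mv)/dt = F with growing M
          if v <= 0.0:
              return 0.0                         # stalled: shell stays bound
          r += dr
      return v

  v_single = shell_velocity(tau_IR=1.0)    # single scattering: stalls here
  v_multi  = shell_velocity(tau_IR=10.0)   # IR multi-scattering boost
  print(v_single, v_multi)
  ```

  With these numbers the $\tau_{\rm IR}=1$ shell stalls while the $\tau_{\rm IR}=10$ shell escapes to 1 kpc, illustrating the boost; the paper's point is that this idealized $\tau_{\rm IR}L/c$ force overestimates the push once $\tau_{\rm IR}$ is large and the flow outruns photon trapping.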
- We demonstrate how to systematically test a well-motivated mechanism for neutrino mass generation (type-II seesaw) at the LHC, in which a Higgs triplet is introduced. In the optimistic scenarios with a small Higgs triplet vacuum expectation value $v_\Delta < 10^{-4}$ GeV, one can look for clean signals of lepton number violation in the decays of doubly charged and singly charged Higgs bosons to distinguish the Normal Hierarchy (NH), the Inverted Hierarchy (IH) and the Quasi-Degenerate (QD) spectrum for the light neutrino masses. The observation of either $H^+ \to \tau^+ \bar\nu$ or $H^+ \to e^+ \bar\nu$ will be particularly robust for the spectrum test since these modes are independent of the unknown Majorana phases. The $H^{++}$ decays depend moderately on the Majorana phase $\Phi_2$ in the NH, but sensitively on $\Phi_1$ in the IH. In a less favorable scenario $v_\Delta > 2\times10^{-4}$ GeV, when the leptonic channels are suppressed, one needs to observe the decays $H^+ \to W^+ H_1$ and $H^+ \to t \bar b$ to confirm the triplet-doublet mixing, which in turn implies the existence of the same gauge-invariant interaction between the lepton doublet and the Higgs triplet responsible for the neutrino mass generation. In the most optimistic situation, $v_\Delta \approx 10^{-4}$ GeV, both channels of the lepton pairs and gauge boson pairs may be available simultaneously. The determination of their relative branching fractions would give a measurement of the value of $v_\Delta$.
  Keywords: Higgs boson, Neutrino mass, Branching ratio, Large Hadron Collider, Charged Higgs boson, Type II seesaw, Inverted hierarchy, Vacuum expectation value, Standard Model, Normal hierarchy, ...
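  The hierarchy discrimination works because in the type-II seesaw $\Gamma(H^{++}\to\ell^+\ell'^+)\propto|(m_\nu)_{\ell\ell'}|^2$, so the flavor pattern of doubly charged Higgs decays maps out the neutrino mass matrix. A sketch with approximate oscillation parameters and all CP phases set to zero (assumed illustrative inputs; identical-particle factors in the widths are ignored for simplicity):

  ```python
  import math

  # Approximate mixing angles (assumed), real PMNS matrix with all phases zero
  s12, s23, s13 = 0.55, math.sin(math.pi / 4.0), 0.148
  c12, c23, c13 = (math.sqrt(1.0 - s * s) for s in (s12, s23, s13))
  U = [[c12 * c13, s12 * c13, s13],
       [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
       [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13]]
  dm2_sol, dm2_atm = 7.5e-5, 2.5e-3   # eV^2, approximate

  def br_ee(hierarchy):
      """Fraction of the H++ -> leptons width in the ee channel."""
      if hierarchy == "NH":
          m = [0.0, math.sqrt(dm2_sol), math.sqrt(dm2_atm)]
      else:                            # inverted hierarchy, m3 = 0
          m = [math.sqrt(dm2_atm), math.sqrt(dm2_atm + dm2_sol), 0.0]
      # mass matrix M = U diag(m) U^T, real here since phases are zero
      M = [[sum(U[a][i] * m[i] * U[b][i] for i in range(3)) for b in range(3)]
           for a in range(3)]
      w = {(a, b): M[a][b] ** 2 for a in range(3) for b in range(a, 3)}
      return w[(0, 0)] / sum(w.values())

  print("BR(H++ -> ee): NH", round(br_ee("NH"), 3), " IH", round(br_ee("IH"), 3))
  ```

  With these inputs the $ee$ fraction is strongly enhanced in the inverted hierarchy (where $|(m_\nu)_{ee}|$ is large) and tiny in the normal hierarchy, which is the qualitative handle the abstract describes; switching on the Majorana phases shifts these fractions, which is why the $H^{++}$ channels also probe $\Phi_1$ and $\Phi_2$.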
- We point out that in generic TeV-scale seesaw models for neutrino masses with local $B-L$ symmetry breaking, there is a phenomenologically allowed range of parameters where the Higgs field responsible for $B-L$ symmetry breaking leaves a physical real scalar field with mass around the GeV scale. This particle (denoted here by $H_3$) is weakly mixed with the Standard Model Higgs field ($h$), with mixing $\theta_1\lesssim m_{H_3}/m_h$ barring fine-tuned cancellation. In the specific case when the $B-L$ symmetry is embedded into the TeV-scale left-right seesaw scenario, we show that the bounds on the $h-H_3$ mixing $\theta_1$ become further strengthened due to low energy flavor constraints, thus forcing the light $H_3$ to be long lived, with displaced vertex signals at the LHC. The properties of left-right TeV-scale seesaw models are such that they make the $H_3$ decay to two photons the dominant mode. This is in contrast with a generic light scalar that mixes with the SM Higgs boson, which could also have leptonic and hadronic decay modes with comparable or larger strength. We discuss the production of this new scalar field at the LHC and show that it leads to testable displaced vertex signals of collimated photon jets, which is a new distinguishing feature of this model. We also study a simpler version of the model where the $SU(2)_R$ breaking scale is much higher than $U(1)_{B-L}$, which is at the TeV scale, in which case the production and decay of $H_3$ proceed differently, but its long lifetime feature is still preserved for a large range of parameters. Thus, the search for such long-lived light scalar particles provides a new way to probe TeV-scale seesaw models for neutrino masses at colliders.
  Keywords: Standard Model, Large Hadron Collider, Mixing angle, Higgs boson, Light scalar, Collider, TeV scale, Seesaw mechanism, Displaced vertices, Yukawa coupling, ...
- Thomson optical depth tau measurements from Planck provide new insights into the reionization of the universe. In pursuit of model-independent constraints on the properties of the ionising sources, we determine the empirical evolution of the cosmic ionizing emissivity. We use a simple two-parameter model to map out the evolution in the emissivity at z>~6 from the new Planck optical depth tau measurements, from the constraints provided by quasar absorption spectra and from the prevalence of Ly-alpha emission in z~7-8 galaxies. We find the redshift evolution in the emissivity dot{N}_{ion}(z) required by the observations to be d(log Nion)/dz=-0.15(-0.11)(+0.08), largely independent of the assumed clumping factor C_{HII} and entirely independent of the nature of the ionising sources. The trend in dot{N}_{ion}(z) is well-matched by the evolution of the galaxy UV-luminosity density (dlog_{10} rho_UV/dz=-0.11+/-0.04) to a magnitude limit >~-13 mag, suggesting that galaxies are the sources that drive the reionization of the universe. The role of galaxies is further strengthened by the conversion from the UV luminosity density rho_UV to dot{N}_{ion}(z) being possible for physically-plausible values of the escape fraction f_{esc}, the Lyman-continuum photon production efficiency xi_{ion}, and faint-end cut-off $M_{lim}$ to the luminosity function. Quasars/AGN appear to match neither the redshift evolution nor normalization of the ionizing emissivity. Based on the inferred evolution in the ionizing emissivity, we estimate that the z~10 UV-luminosity density is 8(-4)(+15)x lower than at z~6, consistent with the observations. 
The present approach of contrasting the inferred evolution of the ionizing emissivity with that of the galaxy UV luminosity density adds to the growing observational evidence that faint, star-forming galaxies drive the reionization of the universe.
- We introduce a new methodology to robustly determine the mass profile, as well as the overall distribution, of Local Group satellite galaxies. Specifically, we employ a statistical multilevel modelling technique, Bayesian hierarchical modelling, to simultaneously constrain the properties of individual Milky Way satellite galaxies and the characteristics of the Milky Way satellite population. We show that this methodology reduces the uncertainty in individual dwarf galaxy mass measurements by up to a factor of a few for the faintest galaxies. We find that the distribution of Milky Way satellites inferred by this analysis, with the exception of the apparent lack of high-mass haloes, is consistent with the Lambda cold dark matter (Lambda-CDM) paradigm. In particular, we find that both the measured relationship between the maximum circular velocity and the radius at this velocity, and the inferred relationship between the mass within 300 pc and luminosity, match the values predicted by Lambda-CDM simulations for haloes with maximum circular velocities below 20 km/sec. Perhaps more striking is that this analysis seems to suggest a more cusped "average" halo shape shared by these galaxies. While this study reconciles many of the observed properties of the Milky Way satellite distribution with those of Lambda-CDM simulations, we find that there is still a deficit of satellites with maximum circular velocities of 20-40 km/sec.
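The key mechanism behind the reduced uncertainties above is hierarchical shrinkage, which can be sketched in a few lines. The toy below uses a conjugate normal-normal model with made-up numbers (population mean, scatter, and error bars are all illustrative, not the paper's): each noisy individual measurement is pulled toward the population mean, and the posterior uncertainty is always smaller than the raw measurement error, most dramatically for the noisiest (faintest) objects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 20 "satellite masses" in arbitrary log units, drawn
# from a population with mean 8.0 and intrinsic scatter tau = 0.5.
mu_pop, tau = 8.0, 0.5
truth = rng.normal(mu_pop, tau, size=20)

# Noisy individual measurements; the faintest objects get the largest errors.
sigma = rng.uniform(0.2, 1.0, size=20)
obs = rng.normal(truth, sigma)

# Conjugate normal-normal posterior for each object: the estimate is a
# precision-weighted blend of the measurement and the population mean.
w = tau**2 / (tau**2 + sigma**2)
post_mean = w * obs + (1 - w) * mu_pop
post_sd = np.sqrt(w) * sigma   # posterior std; always below the raw sigma
```

The improvement factor sigma/post_sd = sqrt(1 + sigma^2/tau^2) grows with the measurement error, which is why the faintest galaxies benefit the most; a full analysis would also infer mu_pop and tau from the data rather than fixing them.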
- We discuss the determination of the linear power spectrum of dark matter fluctuations from measurements of the transmitted flux Ly-alpha Forest power spectrum. We show that at most scales probed by current measurements, the flux power spectrum is significantly affected by non-linear corrections to the linear dark matter power spectrum due to gravitational clustering. Inferring the linear dark matter power spectrum shape is therefore difficult due to non-linear corrections driving the non-linear power spectrum to a k^{-1.4} shape nearly independent of initial conditions. We argue that some methods used in previous estimates underestimate the uncertainties in the shape of the linear dark matter power spectrum.
- Starting from operator equations of motion and making arguments based on a separation of time scales, a set of equations is derived which govern the non-equilibrium time evolution of a GeV-scale sterile neutrino density matrix and active lepton number densities at temperatures T > 130 GeV. The density matrix possesses generation and helicity indices; we demonstrate how helicity permits a classification of various sources for leptogenesis. The coefficients parametrizing the equations are determined to leading order in Standard Model couplings, accounting for the LPM resummation of 1+n <-> 2+n scatterings and for all 2 <-> 2 scatterings. The regime in which sphaleron processes gradually decouple so that baryon plus lepton number becomes a separate non-equilibrium variable is also considered.
- A striking signal of dark matter beyond the standard model is the existence of cores in the centres of galaxy clusters. Recent simulations predict that a Brightest Cluster Galaxy (BCG) inside a cored galaxy cluster will exhibit residual wobbling due to previous major mergers, long after the relaxation of the overall cluster. This phenomenon is absent in standard cold dark matter, where a cuspy density profile keeps a BCG tightly bound at the centre. We test this hypothesis using cosmological simulations and deep observations of 10 galaxy clusters acting as strong gravitational lenses. Modelling the BCG wobble as a simple harmonic oscillator, we measure the wobble amplitude, A_w, in the BAHAMAS suite of cosmological hydrodynamical simulations, finding an upper limit for the CDM paradigm of $A_w < 2$ kpc at the 95% confidence limit. We carry out the same test on the data, finding a non-zero amplitude of $A_w = 11.82^{+7.3}_{-3.0}$~kpc, with the observations disfavouring $A_w = 0$ at the $3\sigma$ confidence level. This detection of BCG wobbling is evidence for a dark matter core at the heart of galaxy clusters. It also shows that strong lensing models of clusters cannot assume that the BCG is exactly coincident with the large scale halo. While our small sample of galaxy clusters already indicates a non-zero $A_w$, with larger surveys, e.g. Euclid, we will be able not only to confirm the effect but also to use it to determine whether the wobbling finds its origin in new fundamental physics or in astrophysical processes.
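Why a population of single-epoch snapshots constrains a wobble amplitude can be seen from the harmonic-oscillator model itself: each cluster is caught at a random phase, so the observed offset is x = A cos(phi), and since <x^2> = A^2/2 the amplitude follows from the rms of the sample. A minimal sketch with an illustrative amplitude (the 10 kpc value and sample size below are assumptions, not the paper's measurement):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each cluster is observed at a random phase of its oscillation, so the
# measured 1-D BCG offset is x = A * cos(phi) with phi uniform in [0, 2pi).
A_true = 10.0                                  # hypothetical amplitude in kpc
phi = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
x = A_true * np.cos(phi)

# For a random-phase oscillator <x^2> = A^2 / 2, so the amplitude is
# recoverable from the rms offset of the ensemble.
A_est = np.sqrt(2.0 * np.mean(x**2))
```

A real analysis would fold in measurement errors and a prior on A_w per cluster, but the rms-to-amplitude conversion is the core of the estimator.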
- We discuss canonical transformations in Quantum Field Theory in the framework of the functional-integral approach. In contrast with ordinary Quantum Mechanics, canonical transformations in Quantum Field Theory are mathematically more subtle due to the existence of unitarily inequivalent representations of canonical commutation relations. When one works with functional integrals, it is not immediately clear how this algebraic feature manifests itself in the formalism. Here we attack this issue by considering the canonical transformations in the context of coherent-state functional integrals. Specifically, in the case of linear canonical transformations, we derive the general functional-integral representations for both transition amplitude and partition function phrased in terms of new canonical variables. By means of this, we show how in the infinite-volume limit the canonical transformations induce a transition from one representation of canonical commutation relations to another one and under what conditions the representations are unitarily inequivalent. We also consider the partition function and derive the energy gap between statistical systems described in two different representations which, among other things, allows us to establish a connection with continuous phase transitions. We illustrate the inner workings of the outlined mechanism by discussing two prototypical systems: the van Hove model and the Bogoliubov model of weakly interacting Bose gas.
- We give a survey of our joint ongoing work with Ali Chamseddine, Slava Mukhanov and Walter van Suijlekom. We show how a problem purely motivated by "how geometry emerges from the quantum formalism" gives rise to a slightly noncommutative structure and a spectral model of gravity coupled with matter which fits with experimental knowledge. This text will appear as a contribution to the volume: "Foundations of Mathematics and Physics one century after Hilbert". Editor: Joseph Kouneiher. Collection Mathematical Physics, Springer 2017.
- To see color, the human visual system combines the responses of three types of cone cells in the retina - a process that discards a significant amount of spectral information. We present an approach that can enhance human color vision by breaking the inherent redundancy in binocular vision, providing different spectral content to each eye. Using a psychophysical color model and thin-film optimization, we designed a wearable passive multispectral device that uses two distinct transmission filters, one for each eye, to enhance the user's ability to perceive spectral information. We fabricated and tested a design that "splits" the response of the short-wavelength cone of individuals with typical trichromatic vision, effectively simulating the presence of four distinct cone types between the two eyes ("tetrachromacy"). Users of this device were able to differentiate metamers (distinct spectra that resolve to the same perceived color in typical observers) without apparent adverse effects to vision. The increase in the number of effective cones from the typical three reduces the number of possible metamers that can be encountered, enhancing the ability to discriminate objects based on their emission, reflection, or transmission spectra. This technique represents a significant enhancement of the spectral perception of typical humans, and may have applications ranging from camouflage detection and anti-counterfeiting to art and data visualization.
- Authorship and citation practices evolve with time and differ by academic discipline. As such, indicators of research productivity based on citation records are naturally subject to historical and disciplinary effects. We observe these effects on a corpus of astronomer career data constructed from a database of refereed publications. We employ a simple mechanism to measure research output using author and reference counts available in bibliographic databases to develop a citation-based indicator of research productivity. The total research impact (tori) quantifies, for an individual, the total amount of scholarly work that others have devoted to his/her work, measured in the volume of research papers. A derived measure, the research impact quotient (riq), is an age independent measure of an individual's research ability. We demonstrate that these measures are substantially less vulnerable to temporal debasement and cross-disciplinary bias than the most popular current measures. The proposed measures of research impact, tori and riq, have been implemented in the Smithsonian/NASA Astrophysics Data System.
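The idea of measuring impact "in the volume of research papers" can be made concrete with a toy implementation. This is a hedged sketch, not the exact ADS formula (the precise normalization may differ): each citing paper contributes a fraction of itself, 1 divided by its reference count times its author count, so a citation that is one of many references in a heavily-authored paper counts for proportionally less.

```python
# Toy versions of tori and riq; numbers and normalization are illustrative.

def tori(citing_papers):
    """citing_papers: iterable of (n_references, n_authors) per citing paper.

    Each citing paper contributes 1/(refs * authors): the share of that
    paper's scholarly volume devoted to the cited individual's work.
    """
    return sum(1.0 / (n_ref * n_auth) for n_ref, n_auth in citing_papers)

def riq(tori_value, career_years):
    """Age-normalized quotient: sqrt(tori) per year of career."""
    return tori_value ** 0.5 / career_years

# Three hypothetical citing papers: (reference count, author count).
citations = [(20, 2), (50, 5), (10, 1)]
t = tori(citations)   # 1/40 + 1/250 + 1/10 = 0.129
```

Because each citation is diluted by reference-list length, fields whose papers carry ever-longer bibliographies do not inflate the score over time, which is the temporal-debasement resistance the abstract claims.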
- Lyman-alpha forest data probing the post-reionization Universe shows surprisingly large opacity fluctuations over rather large ($\ge$50 comoving Mpc/h) spatial scales. We model these fluctuations using a hybrid approach utilizing the large volume Millennium simulation to predict the spatial distribution of QSOs, combined with smaller scale full hydrodynamical simulations performed with RAMSES and post-processed with the radiative transfer code ATON. We produce realistic mock absorption spectra that account for the contribution of galaxies and QSOs to the ionising UV background. These improved models confirm our earlier findings that a significant ($\ge$50%) contribution of ionising photons from QSOs can explain the large reported opacity fluctuations on large scales. The inferred QSO luminosity function is thereby consistent with recent estimates of the space density of QSOs at this redshift. Our simulations still somewhat struggle, however, to reproduce the very long (110 comoving Mpc/h) high opacity absorption trough observed in ULAS J0148+0600, perhaps suggesting an even later end of reionization than assumed in our previously favoured model. Medium-deep/medium-area QSO surveys as well as targeted searches for the predicted strong transverse QSO proximity effect would illuminate the origin of the observed large scale opacity fluctuations. They would allow us to substantiate whether UV fluctuations due to QSOs are indeed primarily responsible, or whether significant contributions from other recently proposed mechanisms, such as large scale fluctuations in temperature and mean free path (even in the absence of rare bright sources), are required.
- Area laws were first discovered by Bekenstein and Hawking, who found that the entropy of a black hole grows proportional to its surface area, and not its volume. Entropy area laws have since become a fundamental part of modern physics, from the holographic principle in quantum gravity to ground state wavefunctions of quantum matter, where entanglement entropy is generically found to obey area law scaling. As no experiments are currently capable of directly probing the entanglement area law in naturally occurring many-body systems, evidence of its existence is based on studies of simplified theories. Using new exact microscopic numerical simulations of superfluid $^4$He, we demonstrate for the first time an area law scaling of entanglement entropy in a real quantum liquid in three dimensions. We validate the fundamental principles underlying its physical origin, and present an "entanglement equation of state" showing how it depends on the density of the superfluid.
- A relationship between the Riemann zeta function and a density on integer sets is explored. Several properties of the examined density are derived.
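The classic instance of the zeta function controlling a density on integer sets is the squarefree integers, whose natural density is 1/zeta(2) = 6/pi^2; whether this is the specific density the paper examines is not stated in the abstract, so the check below is purely illustrative.

```python
import math

def squarefree_density(N):
    """Empirical natural density of squarefree integers in [1, N]."""
    sieve = [True] * (N + 1)
    for d in range(2, int(N**0.5) + 1):
        # Strike out every multiple of a perfect square d^2.
        for m in range(d * d, N + 1, d * d):
            sieve[m] = False
    return sum(sieve[1:N + 1]) / N

# Classical result: the density equals 1/zeta(2) = 6/pi^2 ~ 0.6079.
d = squarefree_density(100_000)
```

The empirical count converges to 6/pi^2 with an error of order 1/sqrt(N), a concrete illustration of how a zeta value shows up as a density statement about integers.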
- This paper exploits the connection between the quantum many-particle density of states and the partitioning of an integer in number theory. For $N$ bosons in a one dimensional harmonic oscillator potential, it is well known that the asymptotic (N -> infinity) density of states is identical to the Hardy-Ramanujan formula for p(n), the number of partitions of a number n into a sum of integers. We show that the same statistical mechanics technique for the density of states of bosons in a power-law spectrum yields the partitioning formula for p^s(n), the latter being the number of partitions of n into a sum of s-th powers of a set of integers. By making an appropriate modification of the statistical technique, we are also able to obtain d^s(n) for distinct partitions. We find that the distinct square partitions d^2(n) show pronounced oscillations as a function of n about the smooth curve derived by us. The origin of these oscillations from the quantum point of view is discussed. After deriving the Erdos-Lehner formula for restricted partitions for the $s=1$ case by our method, we generalize it to obtain a new formula for distinct restricted partitions.
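The Hardy-Ramanujan asymptotic invoked above can be checked directly against exact partition numbers. A short sketch: exact p(n) via the standard dynamic program, compared with the leading-order formula p(n) ~ exp(pi*sqrt(2n/3)) / (4*sqrt(3)*n), which is already accurate to a few percent at n = 100.

```python
import math

def partitions(n):
    """Exact p(0..n) via the coin-counting dynamic program."""
    p = [1] + [0] * n
    for k in range(1, n + 1):          # allow parts of size k
        for m in range(k, n + 1):
            p[m] += p[m - k]
    return p

def hardy_ramanujan(n):
    """Leading-order asymptotic: p(n) ~ exp(pi*sqrt(2n/3)) / (4*sqrt(3)*n)."""
    return math.exp(math.pi * math.sqrt(2.0 * n / 3.0)) / (4.0 * math.sqrt(3.0) * n)

p = partitions(100)
ratio = hardy_ramanujan(100) / p[100]   # close to 1 already at n = 100
```

In the density-of-states language, p(n) counts the microstates of n quanta distributed among 1-D harmonic-oscillator levels, which is exactly why the bosonic calculation reproduces the number-theoretic formula.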
- This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom CNN layer through which we can backpropagate. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.
- We set limits on the presence of the synchrotron cosmic web through the cross-correlation of the 2.3 GHz S-PASS survey with a model of the local cosmic web derived from constrained magnetohydrodynamic (MHD) simulations. The MHD simulation assumes cosmologically seeded magnetic fields amplified during large-scale structure formation, and a population of relativistic electrons/positrons from proton-proton collisions within the intergalactic medium. We set a model-dependent 3$\sigma$ upper limit on the synchrotron surface brightness of 0.16 mJy arcmin$^{-2}$ at 2.3 GHz in filaments. Extrapolating from magnetic field maps created from the simulation, we infer an upper limit (density-weighted) magnetic field of 0.03 (0.13) $\mu$G in filaments at the current epoch, and a limit on the primordial magnetic field (PMF) of B$_{PMF}$~1.0 nG.
- Axion-like particles (ALPs) and photons can quantum mechanically interconvert when propagating through magnetic fields, and ALP-photon conversion may induce oscillatory features in the spectra of astrophysical sources. We use deep (370 ks), short frame time Chandra observations of the bright nucleus at the centre of the radio galaxy M87 in the Virgo cluster to search for signatures of light ALPs. The absence of substantial irregularities in the X-ray power-law spectrum leads to a new upper limit on the photon-ALP coupling, $g_{a\gamma}$: using a conservative model of the cluster magnetic field consistent with Faraday rotation measurements from M87 and M84, we find $g_{a \gamma} < 1.5\times10^{-12}$ GeV$^{-1}$ at 95% confidence level for ALP masses $m_a \leq 10^{-13}$ eV. This constraint is a factor of $\gtrsim 3$ stronger than the bound inferred from the absence of a gamma-ray burst from SN1987A, and it rules out a substantial fraction of the parameter space accessible to future experiments such as ALPS-II and IAXO.
- The possibility that the dark matter comprises primordial black holes (PBHs) is considered, with particular emphasis on the currently allowed mass windows at $10^{16}$ - $10^{17}\,$g, $10^{20}$ - $10^{24}\,$g and $1$ - $10^{3}\,M_{\odot}$. The Planck mass relics of smaller evaporating PBHs are also considered. All relevant constraints (lensing, dynamical, large-scale structure and accretion) are reviewed and various effects necessary for a precise calculation of the PBH abundance (non-Gaussianity, non-sphericity, critical collapse and merging) are accounted for. It is difficult to put all the dark matter in PBHs if their mass function is monochromatic, but this is still possible if the mass function is extended, as expected in many scenarios. A novel procedure for confronting observational constraints with an extended PBH mass spectrum is therefore introduced. This applies for arbitrary constraints and a wide range of PBH formation models, and allows us to identify which model-independent conclusions can be drawn from constraints over all mass ranges. We focus particularly on PBHs generated by inflation, pointing out which effects in the formation process influence the mapping from the inflationary power spectrum to the PBH mass function. We then apply our scheme to two specific inflationary models in which PBHs provide the dark matter. The possibility that the dark matter is in intermediate-mass PBHs of $1$ - $10^{3}\,M_{\odot}$ is of special interest in view of the recent detection of black-hole mergers by LIGO. The possibility of Planck relics is also intriguing but virtually untestable.
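A procedure of this kind for extended mass functions is usually phrased as an integral criterion: given a monochromatic constraint f_max(M) (the maximum dark-matter fraction allowed if all PBHs had mass M) and a normalized mass function psi(M), the largest allowed total PBH fraction is f_PBH = 1 / integral dM psi(M)/f_max(M). The sketch below uses a toy constraint window and a lognormal psi, both invented for illustration; whether this matches the paper's exact formulation is an assumption.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral (avoids NumPy-version-dependent helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

M = np.logspace(16, 28, 4000)                    # mass grid in grams

def lognormal_psi(Mc, s):
    """Lognormal mass function, numerically normalized so integral psi dM = 1."""
    raw = np.exp(-np.log(M / Mc) ** 2 / (2 * s**2)) / M
    return raw / trapz(raw, M)

# Toy monochromatic constraint: unconstrained inside a mass window,
# limited to a 1% fraction outside it.
f_max = np.where((M > 1e20) & (M < 1e24), 1.0, 0.01)

# Maximum total PBH fraction: f_PBH = 1 / integral psi/f_max dM.
f_in = 1.0 / trapz(lognormal_psi(1e22, 0.5) / f_max, M)   # psi inside window
f_out = 1.0 / trapz(lognormal_psi(1e26, 0.5) / f_max, M)  # psi outside window
```

A mass function concentrated in the open window can make up all of the dark matter (f_in near 1), while the same shape centered where constraints bite is capped near 1% (f_out near 0.01) — the sense in which an extended spectrum is confronted with constraints over all mass ranges at once.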
- Very strong magnetic fields can arise in non-central heavy-ion collisions at ultrarelativistic energies, and may not decay quickly in a conducting plasma. We carry out relativistic magnetohydrodynamics (RMHD) simulations to study the effects of this magnetic field on the evolution of the plasma and on the resulting flow fluctuations in the ideal RMHD limit. Our results show that the magnetic field leads to an enhancement of elliptic flow, though in general its effects on elliptic flow are very complex. Interestingly, we find that the magnetic field in localized regions can temporarily increase in time as evolving plasma energy density fluctuations lead to a reorganization of magnetic flux. This can have important effects on the chiral magnetic effect. The magnetic field has non-trivial effects on the power spectrum of flow fluctuations. For the very strong magnetic field case, one sees a pattern of even-odd differences in the power spectrum of flow coefficients arising from reflection symmetry about the magnetic field direction, if initial state fluctuations are not dominant. We discuss the situation of nontrivial magnetic field configurations arising from collisions of deformed nuclei and show that they can lead to anomalous elliptic flow. Special (crossed body-body) configurations of deformed-nuclei collisions can lead to the presence of a quadrupolar magnetic field, which can have very important effects on the rapidity dependence of transverse expansion (similar to {\it beam focusing} from quadrupole fields in accelerators).
- The quadratic Mandelbrot set has been referred to as the most complex and beautiful object in mathematics and the Riemann Zeta function takes the prize for the most complicated and enigmatic function. Here we elucidate the spectrum of Mandelbrot and Julia sets of Zeta, to unearth the geography of its chaotic and fractal diversities, combining these two extremes into one intrepid journey into the deepest abyss of complex function space.
- We describe a simple scheme that allows an agent to explore its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on (nearly) reversible environments, or environments that can be reset, and Alice will "propose" the task by running a set of actions and then Bob must partially undo, or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When deployed on an RL task within the environment, this unsupervised training reduces the number of episodes needed to learn.Alice and BobHyperparameterReinforcement learningNeural networkOptimizationFunction approximationReversible dynamicsRegularizationEntropyHidden layer...
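The "appropriate reward structure" in asymmetric self-play of this kind is typically time-based; a hedged sketch (the scale gamma and exact functional form below are assumptions for illustration, where t_a is the time Alice spends setting the task and t_b the time Bob takes to complete it):

```python
# Illustrative self-play rewards; gamma is a hypothetical time-penalty scale.
GAMMA = 0.01

def bob_reward(t_b):
    # Bob is penalized per step, so he learns to finish tasks as fast as possible.
    return -GAMMA * t_b

def alice_reward(t_a, t_b):
    # Alice gains only when Bob takes longer than she did, pushing her toward
    # tasks at the frontier of Bob's ability, while her own time cost keeps
    # the proposed tasks achievable. This interplay is the automatic curriculum.
    return GAMMA * max(0.0, t_b - t_a)
```

If Bob completes the task faster than Alice set it, Alice earns nothing, so she cannot win by proposing trivially easy tasks; if she proposes something impossible, her own time cost grows without reward, so she cannot win with impossible tasks either.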