
- Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.
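The attention-based extraction step this abstract describes can be illustrated with a minimal sketch (plain NumPy, not the paper's actual architecture): sentences encoded as continuous feature vectors are scored against a document-level query vector, and the highest-weighted sentences form the extract. All names, shapes, and the toy data below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def extract_sentences(sent_embs, doc_query, k=2):
    """Score sentences by dot-product attention against a document
    representation and return the indices of the top-k sentences.
    sent_embs: (n_sents, d) continuous sentence features.
    doc_query: (d,) document-level query vector (e.g. the final
    hidden state of a document encoder)."""
    scores = softmax(sent_embs @ doc_query)   # attention weights over sentences
    order = np.argsort(scores)[::-1][:k]      # highest-scoring first
    return sorted(order.tolist()), scores

rng = np.random.default_rng(0)
embs = rng.normal(size=(5, 8))   # toy embeddings for 5 sentences
query = embs[0] + embs[3]        # toy query correlated with sentences 0 and 3
picked, w = extract_sentences(embs, query, k=2)
```

In the paper's setting the query would come from a learned document encoder and the embeddings from a sentence encoder; here both are random stand-ins.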
- The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
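Two ingredients this abstract relies on can be sketched in a few lines (NumPy; all dimensions and names are illustrative assumptions, not the paper's model): sampling a latent sentence code via the reparameterization trick, and walking a linear path between two codes, which after deterministic decoding yields the interpolated sentences described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, the reparameterization trick that
    makes VAE training differentiable with respect to mu and log_var."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def interpolate(z_a, z_b, steps=5):
    """Linear path through latent space between two sentence codes;
    decoding each point deterministically would give the interpolated
    sentences the abstract describes."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_a + t * z_b for t in ts])

z1 = reparameterize(np.zeros(16), np.zeros(16))   # toy code for sentence A
z2 = reparameterize(np.ones(16), np.zeros(16))    # toy code for sentence B
path = interpolate(z1, z2, steps=5)
```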
- This is a pedagogical introduction to NRQED, a low energy approximation of QED which can be made to reproduce QED to an arbitrary precision. It is especially useful when applied to nonrelativistic bound states. We start by explaining why QED is so difficult to apply to nonrelativistic bound states and why NRQED makes it much simpler, and then proceed to do an explicit calculation. We also briefly discuss what this can teach us about "new" physics beyond QED. This is based on a talk given at the XIV MRST meeting in Toronto.
- We investigate the length of the period of validity of a classical description for the cosmic axion field. To this end, we first show that we can understand the oscillating axion solution as expectation value over an underlying coherent quantum state. Once we include self-interaction of the axion, the quantum state evolves so that the expectation value over it starts to deviate from the classical solution. The time-scale of this process defines the quantum break-time. For the hypothetical dark matter axion field in our Universe, we show that quantum break-time exceeds the age of the Universe by many orders of magnitude. This conclusion is independent of specific properties of the axion model. Thus, experimental searches based on the classical approximation of the oscillating cosmic axion field are fully justified. Additionally, we point out that the distinction of classical nonlinearities and true quantum effects is crucial for calculating the quantum break-time in any system. Our analysis can also be applied to other types of dark matter that are described as classical fluids in the mean field approximation.
- We present a blind search for doublet intergalactic metal absorption with a method we dub `agnostic stacking'. Using forward-modelling we combine it with direct detections in the literature to measure the overall metal population. Here we apply this novel approach to the search for NeVIII in 26 high-quality COS spectra of QSOs at z>0.7. We probe an unprecedented low limit of log N>12.3 at 0.47<z<1.34 with a total pathlength $\Delta z = 7.36$. The method selects absorption without requiring knowledge of its source, be it observing noise, artifacts, or any line transition. Stacking this mixed population with NeVIII absorption dilutes doublet features in composite spectra in a deterministic manner. We stack potential NeVIII absorption in two regimes: absorption too weak to be statistically significant in direct line studies (12.3 < log N < 13.7), and strong absorbers (log N > 13.7). We do not detect NeVIII in either regime, and place upper limits on the population using agnostic stacking alone. Combining our measurements with direct line detections, the NeVIII population is reproduced with a single power law column density distribution of slope $\beta = -1.86$ and normalisation log $f_{13.7} = -13.99$, leading to an incidence rate of strong NeVIII absorbers of dn/dz = 1.38. Comparing our results with a group of 3 systems in PG1148+549, these have a 0.024% probability of arising by chance. We infer a cosmic mass density for NeVIII in the column density range 12.3 < log N < 15.0 of $\Omega(\mathrm{NeVIII}) = 2.2\times10^{-8}$. We translate this inferred density into an estimate of the baryon density of the NeVIII-bearing gas, and arrive at $\Omega_b \approx 1.8\times10^{-3}$, which constitutes only 4% of the total baryonic mass. The measured NeVIII column density distribution function and cosmic density here are inconsistent with predictions of the EAGLE simulations at $>2.0\sigma$ significance. (abridged)
- We tested the performance of photometric redshifts for galaxies in the Hubble Ultra Deep field down to 30th magnitude. We compared photometric redshift estimates from three spectral fitting codes from the literature (EAZY, BPZ and BEAGLE) to high quality redshifts for 1227 galaxies from the MUSE integral field spectrograph. All these codes can return photometric redshifts with bias |Dzn|=|z-z_phot|/(1+z)<0.05 down to F775W=30 and spectroscopic incompleteness is unlikely to strongly modify this statement. We have, however, identified clear systematic biases in the determination of photometric redshifts: in the 0.4<z<1.5 range, photometric redshifts are systematically biased low by as much as Dzn=-0.04 in the median, and at z>3 they are systematically biased high by up to Dzn = 0.05, an offset that can in part be explained by adjusting the amount of intergalactic absorption applied. In agreement with previous studies we find little difference in the performance of the different codes, but in contrast to those we find that adding extensive ground-based and IRAC photometry actually can worsen photo-z performance for faint galaxies. We find an outlier fraction, defined through |Dzn|>0.15, of 8% for BPZ and 10% for EAZY and BEAGLE, and show explicitly that this is a strong function of magnitude. While this outlier fraction is high relative to numbers presented in the literature for brighter galaxies, they are very comparable to literature results when the depth of the data is taken into account. 
Finally, we demonstrate that while a redshift might be of high confidence, the association of a spectrum to the photometric object can be very uncertain and lead to a contamination of a few percent in spectroscopic training samples that do not show up as catastrophic outliers, a problem that must be tackled in order to have sufficiently accurate photometric redshifts for future cosmological surveys.
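The bias and outlier statistics quoted in this abstract follow directly from the stated definitions, |Dzn| = |z - z_phot|/(1+z) and the outlier cut |Dzn| > 0.15. A minimal sketch (toy redshift values, not the paper's catalogue; the sign convention is an assumption):

```python
import numpy as np

def photoz_metrics(z_spec, z_phot, outlier_cut=0.15):
    """Normalised error Dzn = (z - z_phot)/(1 + z), its median
    (the bias), and the outlier fraction defined through |Dzn| > 0.15."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_phot = np.asarray(z_phot, dtype=float)
    dzn = (z_spec - z_phot) / (1.0 + z_spec)
    return np.median(dzn), np.mean(np.abs(dzn) > outlier_cut)

# toy sample: one catastrophic outlier among five galaxies
bias, f_out = photoz_metrics([0.5, 1.0, 2.0, 3.0, 4.0],
                             [0.52, 0.98, 2.05, 3.1, 1.0])
# f_out = 0.2: only the last galaxy exceeds the |Dzn| > 0.15 cut
```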
- The detection of GW170817 in both gravitational waves and electromagnetic waves heralds the age of gravitational-wave multi-messenger astronomy. On 17 August 2017 the Advanced LIGO and Virgo detectors observed GW170817, a strong signal from the merger of a binary neutron-star system. Less than 2 seconds after the merger, a gamma-ray burst (GRB 170817A) was detected within a region of the sky consistent with the LIGO-Virgo-derived location of the gravitational-wave source. This sky region was subsequently observed by optical astronomy facilities, resulting in the identification of an optical transient signal within $\sim 10$ arcsec of the galaxy NGC 4993. These multi-messenger observations allow us to use GW170817 as a standard siren, the gravitational-wave analog of an astronomical standard candle, to measure the Hubble constant. This quantity, which represents the local expansion rate of the Universe, sets the overall scale of the Universe and is of fundamental importance to cosmology. Our measurement combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using electromagnetic data. This approach does not require any form of cosmic "distance ladder;" the gravitational wave analysis can be used to estimate the luminosity distance out to cosmological scales directly, without the use of intermediate astronomical distance measurements. We determine the Hubble constant to be $70.0^{+12.0}_{-8.0} \, \mathrm{km} \, \mathrm{s}^{-1} \, \mathrm{Mpc}^{-1}$ (maximum a posteriori and 68% credible interval). This is consistent with existing measurements, while being completely independent of them. 
Additional standard-siren measurements from future gravitational-wave sources will provide precision constraints on this important cosmological parameter.
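At leading order the standard-siren measurement described above reduces to a single division: the recession velocity from electromagnetic redshift data over the luminosity distance inferred from the gravitational-wave signal alone. A sketch with illustrative round numbers (not the published posterior inputs):

```python
def hubble_constant(v_rec_km_s, d_mpc):
    """Leading-order standard-siren estimate H0 = v / d, in km/s/Mpc:
    recession velocity from electromagnetic data, distance from the
    gravitational-wave amplitude (no distance ladder needed)."""
    return v_rec_km_s / d_mpc

# illustrative values only: a Hubble-flow velocity of ~3000 km/s and
# a GW luminosity distance of ~43 Mpc give H0 near 70 km/s/Mpc
h0 = hubble_constant(3000.0, 43.0)
```

The published analysis marginalises over distance and peculiar-velocity uncertainties, which is where the asymmetric error bars come from; the division above is only the central idea.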
- In the era of vast spectroscopic surveys focusing on Galactic stellar populations, astronomers want to exploit the large quantity and good quality of data to derive their atmospheric parameters without losing precision from automatic procedures. In this work, we developed a new spectral package, FASMA, to estimate the stellar atmospheric parameters (namely effective temperature, surface gravity, and metallicity) in a fast and robust way. This method is suitable for spectra of FGK-type stars in medium and high resolution. The spectroscopic analysis is based on the spectral synthesis technique using the radiative transfer code MOOG. The line list is comprised of mainly iron lines in the optical spectrum. The atomic data are calibrated against the Sun and Arcturus. We use two comparison samples to test our method: i) a sample of 451 FGK-type dwarfs from the high resolution HARPS spectrograph, and ii) the Gaia-ESO benchmark stars using both high and medium resolution spectra. We explore biases in our method from the analysis of synthetic spectra covering the parameter space of our interest. We show that our spectral package is able to provide reliable results for a wide range of stellar parameters, different rotational velocities, different instrumental resolutions, and for different spectral regions of the VLT-GIRAFFE spectrographs, used among others for the Gaia-ESO survey. FASMA estimates stellar parameters in less than 15 min for high resolution and 3 min for medium resolution spectra. The complete package is publicly available to the community.
- We apply the analytic conformal bootstrap method to study weakly coupled conformal gauge theories in four dimensions. We employ twist conformal blocks to find the most general form of the one-loop four-point correlation function of identical scalar operators, without any reference to Feynman calculations. The method relies only on symmetries of the model. In particular, it does not require introducing any regularisation and it is free from the redundancies usually associated with the Feynman approach. By supplementing the general solution with known data for a small number of operators, we recover explicit forms of one-loop correlation functions of four Konishi operators as well as of four half-BPS operators $\mathcal{O}_{20'}$ in $\mathcal{N}=4$ super Yang-Mills.
- We give a detailed account of the methods introduced in [1] to calculate holographic four-point correlators in IIB supergravity on $AdS_5 \times S^5$. Our approach relies entirely on general consistency conditions and maximal supersymmetry. We discuss two related methods, one in position space and the other in Mellin space. The position space method is based on the observation that the holographic four-point correlators of one-half BPS single-trace operators can be written as finite sums of contact Witten diagrams. We demonstrate in several examples that imposing the superconformal Ward identity is sufficient to fix the parameters of this ansatz uniquely, avoiding the need for a detailed knowledge of the supergravity effective action. The Mellin space approach is an "on-shell method" inspired by the close analogy between holographic correlators and flat space scattering amplitudes. We conjecture a compact formula for the four-point correlators of one-half BPS single-trace operators of arbitrary weights. Our general formula has the expected analytic structure, obeys the superconformal Ward identity, satisfies the appropriate asymptotic conditions and reproduces all the previously calculated cases. We believe that these conditions determine it uniquely.
- Chord diagrams and combinatorics of word algebras are used to model products of Dirac matrices, their traces, and contractions. A simple formula for the result of arbitrary contractions is derived, simplifying and extending an old contraction algorithm due to Kahane. This formula is then used to express the parametric integrand of a QED Feynman integral in a simplified form, with the entire internal tensor structure eliminated. Possible next steps for further simplification are discussed.
- The rules of canonical quantization normally offer good results, but sometimes they fail, e.g., leading to quantum triviality ($=$ free) for certain examples that are classically nontrivial ($\ne$ free). A new procedure, called Enhanced Quantization, relates classical models with their quantum partners differently and leads to satisfactory results for all systems. This paper features enhanced quantization procedures and provides highlights of two examples, a rotationally symmetric model and an ultralocal scalar model, for which canonical quantization fails while enhanced quantization succeeds.
- We briefly report on some particular aspects of new physics searches concerning, on the one side, the CKM paradigm and, on the other, the global fit to rare $b\to s \ell\ell$ observables including lepton flavour universality ones. We put special emphasis on the state of the art of hadronic uncertainties of $b\to s\mu\mu$ observables and LFUV in the SM and in the presence of New Physics. Finally, we discuss the latest experimental and theoretical developments concerning long-distance charm contributions.
- LHCb observes the $B^+_c \to D^0 K^+$ decay with $R_{D^0 K} = f_c/f_u \times {\cal B}(B^+_c \to D^0 K^+)=(9.3^{+2.8}_{-2.5} \pm 0.6)\times 10^{-7}$. The corresponding branching ratio (BR) of the decay can be estimated as ${\cal B}(B^+_c \to D^0 K^+) \approx (10.01 \pm 3.40)\times 10^{-5}$; however, the theoretical estimates vary from $\sim 10^{-7}$ to $\sim 5\times 10^{-5}$. We phenomenologically investigate the $B^+_c \to (D^0 K^+, D^0 \pi^+)$ decays through the analysis of $B\to KK$, $B^+_u\to D^+ K^0$, and $B_d \to D^-_s K^+$. With the form factor of $f^{B_c D}_0\approx 0.2$, it is found that the tree-annihilation contribution dominates the $B^+_c \to D^0 K^+$ decay, and when ${\cal B}(B^+_u \to D^+ K^0) \approx (1-3.1) \times 10^{-7}$ is required, we obtain ${\cal B}(B^+_c \to D^0 K^+)\approx (4.4- 9) \times 10^{-5}$, and the magnitude of CP asymmetry is lower than approximately $10\%$. Although the $B^+_c \to D^0 \pi^+$ decay is dominated by the tree-transition effect, the tree-annihilation also makes an important contribution, where its effect could be around $70\%$ of the tree-transition. It is found that when ${\cal B}(B^+_c \to D^0 K^+)\approx (4.4- 9) \times 10^{-5}$ is taken, the BR and CP asymmetry for $B^+_c \to D^0 \pi^+$ with the common values of parameters can be ${\cal B}(B^+_c \to D^0 \pi^+)\approx (4.9-8)\times 10^{-6}$ and of the order of one, respectively. Moreover, we conclude ${\cal B}(B^+_c \to D^+ K^0)\approx {\cal B}(B^+_c \to D^0 K^+)$, and the BRs for $B^+_c \to K^+ \bar K^0$ and $B^+_c \to J/\Psi \pi^+$ are $(6.99 \pm 1.34) \times 10^{-7}$ and $(7.7 \pm 1.1)\times 10^{-4}$, respectively.
- Recent experimental results provided by the CMS, LHCb, Belle and BaBar collaborations are showing a tension with the SM predictions in $R_{K^{(*)}}$, which might call for an explanation from new physics. In this work, we examine this tension in the type-III two-Higgs doublet model. We focus on the contributions of the charged Higgs boson to the observable(s) $R_{K^{(*)}}$ and other rare processes $\Delta M_q$ ($q=s,d$), $B \to X_s \gamma$, $B_s \to \mu^+ \mu^{-}$ and $B_q \to X_s \mu^+ \mu^{-}$, which are governed by the same effective Hamiltonian. It is found that regions of large $\tan\beta$ and light charged Higgs mass $m_{H^\pm}$ can explain the measured value of $R_{K^{(*)}}$ and accommodate other B physics data as well. In contrast, the type-II two-Higgs doublet model cannot.
- We review in detail modern numerical methods used in the determination and solution of Bethe-Salpeter and Dyson-Schwinger equations. The algorithms and techniques described are applicable to both the rainbow-ladder truncation and its non-trivial extensions. We discuss pedagogically the steps involved in constructing conventional mesons and baryons as systems of two and three quarks, respectively. As a further application we describe the self-consistent calculation of form factors and highlight the challenges that remain therein.
- Tree-level $b$ decays play a critical role in characterising the quark flavour sector, and exposing possible effects of physics beyond the Standard Model. These proceedings cover recent results from the LHCb experiment on semileptonic $b$ baryon decays, $\mathcal{R}(D^{\ast-})$ using three-prong hadronic $\tau$ decays, $CP$ observables in $B^- \to D^{(\ast)}h^-$ decays, and an updated combination on the CKM angle $\gamma$.
- The semitauonic decay $\bar{B} \rightarrow D^* \tau^- \bar{\nu}_\tau$ is sensitive to new physics beyond the Standard Model (SM) that has an enhanced coupling to the $\tau$ lepton. In the ratio of branching fractions $R(D^*) = \mathcal{B}(\bar{B} \rightarrow D^* \tau^- \bar{\nu}_\tau) / \mathcal{B}(\bar{B} \rightarrow D^* \ell^- \bar{\nu}_\ell)$, where $\ell^- = e^-$ or $\mu^-$, a 3.3$\sigma$ anomaly was observed. In order to investigate the anomaly further, Belle performed a new $R(D^*)$ measurement using one-prong hadronic $\tau$ decays, which was statistically independent of the previous two measurements. This measurement included the first measurement of the $\tau$ polarization $P_\tau(D^*)$ using the kinematics of the two-body decays. The obtained results, $R(D^*) = 0.270 \pm 0.035 ({\rm stat}) ^{+0.028}_{-0.025} ({\rm syst})$ and $P_\tau(D^*) = -0.38 \pm 0.51 ({\rm stat}) ^{+0.21}_{-0.16} ({\rm syst})$, were consistent both with the SM and the world-average $R(D^*)$. Including this result, the $R(D^*)$ anomaly became 3.4$\sigma$ away from the SM prediction.
- In this paper we calculate the branching ratios of four-body decays of the $B$ meson in which lepton number changes by two units. Using the new experimental limits on lepton-number-violating processes, we update the upper limits on the mixing parameters between a heavy Majorana neutrino and the charged leptons. Afterwards, we calculate the branching ratio of $B^0(P)\rightarrow D^{*-}(P_1)\ell^+_1(P_2)\ell^+_2(P_3)M_2^-(P_4)$ using the updated parameters. It is found that the most promising decay channel is $B^0(P)\rightarrow D^{*-}(P_1)e^+_1(P_2)e^+_2(P_3)\rho^-(P_4)$ or $B^0(P)\rightarrow D^{*-}(P_1)e(\mu)^+_1(P_2)\mu(e)^+_2(P_3)\rho^-(P_4)$, whose branching ratio can reach about $10^{-4}$ for a heavy Majorana neutrino mass around $2~\mathrm{GeV}$.
- In this paper, we systematically calculate the strong decays of the newly observed $D_J(3000)$ and $D_{sJ}(3040)$ under two 2P$(1^+)$ quantum number assignments. Our results show that both resonances can be explained as the 2P$(1^{+'})$ state with a broad width via $^3P_1$ and $^1P_1$ mixing in the $D$ and $D_s$ families. For $D_J(3000)$, the total width is 229.6 MeV in our calculation, close to the upper limit of the experimental data, and the dominant decay channels are $D_2^*\pi$, $D^*\pi$ and $D(2600)\pi$. For $D_{sJ}(3040)$, the total width is 157.4 MeV in our calculation, close to the lower limit of the experimental data, and the dominant channels are $D^*K$ and $D^*K^*$, which are consistent with the observed channels in experiment. Given the very limited experimental information and large error bars, we urge the measurement of the dominant channels found in our calculation.
- We have investigated $\Omega_c$ states that are dynamically generated from the meson-baryon interaction. We use the local hidden gauge approach to obtain the interaction from the exchange of vector mesons. We show that the dominant terms come from the exchange of light vectors, where the heavy quarks are spectators. As a consequence, heavy quark symmetry is preserved for the dominant terms in the $(1/m_Q)$ counting, and the interaction in this case can also be obtained from the $\textrm{SU(3)}$ chiral Lagrangians. We show that for a standard value of the cutoff regulating the loop, we obtain two states with $J^{P}={1/2}^{-}$ and two more with $J^{P}={3/2}^{-}$, three of them in remarkable agreement with three experimental states in mass and width. We also make predictions at higher energies for states of vector-baryon nature.
- We perform a systematic search for new physics couplings with different four-Fermi Lorentz structures which can explain the excess in $R_D$ and $R_{D^*}$. We find many more allowed solutions than previous analyses because the measured values of these parameters have shifted recently. We then include the newly measured $R_{J/\psi}$ in our fitting procedure. We find that the deviation in $R_{J/\psi}$ is fully consistent with the deviations in $R_D$ and $R_{D^*}$. That is, there is no tension between the measurements of $R_D/R_{D^*}$ and $R_{J/\psi}$. Some of the allowed solutions are ruled out by the leptonic branching ratio of the $B_c$ meson. We calculate the $\tau$ polarization fraction and the $D^*$ polarization for each of the allowed new physics solutions. We find that these quantities can distinguish only tensor new physics but none of the others. A different technique is needed to pin down the type of new physics responsible for the deviations in $R_D$, $R_{D^*}$ and $R_{J/\psi}$.
- We present our recent results for long-distance QCD effects in the FCNC radiative leptonic decays $B\to\gamma\ell^+\ell^-$, $\ell=\{e,\mu\}$. One encounters here two distinct types of long-distance effects: those encoded in the $B\to\gamma$ transition form factors induced by the $b\to q$ quark currents, and those related to the charm-loop effects. We calculate the $B\to\gamma$ form factors in a broad range of the momentum transfers making use of the relativistic dispersion approach based on the constituent quark picture, which has proven to provide reliable predictions for many weak-transition form factors. Concerning the description of the charm-loop contributions, we point out two observations: First, the precise description of the charmonium resonances, in particular the relative phases between $\psi$ and $\psi'$, has an impact on the differential distributions and on the forward-backward asymmetry, $A_{\rm FB}$, in a broad range of $q^2\ge 5$ GeV$^2$. Second, the shape of $A_{\rm FB}$ in $B\to\gamma\ell^+\ell^-$ and in $B\to V\ell^+\ell^-$ ($V$ the vector meson) {\it in the $q^2$-region between $\psi$ and $\psi'$} provides an unambiguous probe of the relative phases between $\psi$ and $\psi'$. Fixing the latter will lead to a strong reduction of the theoretical uncertainties in $A_{\rm FB}$ at $q^2=5-9$ GeV$^2$ where it has the sensitivity to physics beyond the SM.
- We report on new results on $B \to K^* \gamma$ and recent studies of $B \to K^* \ell^+ \ell^-$ and $B \to h \nu \bar{\nu}$ at Belle at the KEKB accelerator. All the analyses used the full data sample of 711~fb$^{-1}$ taken at the $\Upsilon(4S)$ resonance.
- I report new, world-leading LHCb results on heavy meson lifetimes. We use a novel approach that suppresses the shortcomings typically associated with reconstruction of semileptonic decays, allowing for precise measurements of lifetimes and other properties in collider experiments. We achieve a 15% and a $2\times$ improvement over current best determinations of the flavor-specific $B^0_s$ lifetime and $D^-_s$ lifetime, respectively.
- For half a century, the Roper resonance has defied understanding. Discovered in 1963, it appears to be an exact copy of the proton except that its mass is 50% greater. The mass is the first problem: it is difficult to explain with any theoretical tool that can validly be used to study quantum chromodynamics [QCD]. In the last decade, a new challenge has appeared, viz. precise information on the proton-to-Roper electroproduction transition form factors, reaching $Q^2\approx 4.5\,$GeV$^2$. This scale probes the domain within which hard valence-quark degrees-of-freedom could be expected to determine form factor behavior. Hence, with this new data the Roper resonance becomes a problem for strong-QCD [sQCD]. An explanation of how and where the Roper resonance fits into the emerging spectrum of hadrons cannot rest on a description of its mass alone. Instead, it must combine this with a detailed understanding of the Roper's structure and how that is revealed in the transition form factors. Furthermore, it must unify all this with a similarly complete picture of the proton. This is a prodigious task, but a ten-year international effort, drawing together experimentalists and theorists, has presented a solution to the puzzle. Namely, the Roper is at heart the proton's first radial excitation, consisting of a dressed-quark core augmented by a meson cloud that reduces the core mass by approximately 20% and materially alters its electroproduction form factors on $Q^2<2m_N^2$, where $m_N$ is the proton's mass. 
We describe the experimental motivations and developments which enabled electroproduction data to be procured within a domain that is unambiguously the purview of sQCD, thereby providing a real challenge and opportunity for modern theory; and survey the developments in reaction models and QCD theory that have enabled this conclusion to be drawn about the nature of the Roper resonance.
- Andrei Bely's novel "Petersburg," first published in 1913, was declared by Vladimir Nabokov one of the four greatest masterpieces of 20th-century prose. The Banach-Tarski Paradox, published in 1924, is one of the most striking and well-known results in 20th-century mathematics. In this paper we explore a potential connection between these two landmark works, based on various interactions with the Moscow Mathematical School and passages in the novel itself.
- Binary neutron-star mergers (BNSMs) are among the most readily detectable gravitational-wave (GW) sources with LIGO. They are also thought to produce short $\gamma$-ray bursts (SGRBs), and kilonovae that are powered by r-process nuclei. Detecting these phenomena simultaneously would provide an unprecedented view of the physics during and after the merger of two compact objects. Such a Rosetta Stone event was detected by LIGO/Virgo on 17 August 2017 at a distance of $\sim40$~Mpc. We monitored the position of the BNSM with ALMA at 338.5 GHz and GMRT at 1.4 GHz, from 1.4 to 44 days after the merger. Our observations rule out any afterglow more luminous than $3\times 10^{26}~{\rm erg~s}^{-1}$ in these bands, probing $>$2--4 dex fainter than previous SGRB limits. We match these limits, in conjunction with public data announcing the appearance of X-ray and radio emission in the weeks after the GW event, to templates of off-axis afterglows. Our broadband modeling suggests that GW170817 was accompanied by a SGRB and that the GRB jet, powered by $E_{\rm AG,\,iso}\sim10^{50}$~erg, had a half-opening angle of $\sim20^\circ$, and was misaligned by $\sim41^\circ$ from our line of sight. The data are also consistent with a more powerful, collimated jet: $E_{\rm AG,\,iso}\sim10^{51}$~erg, $\theta_{1/2,\,\rm jet}\sim5^\circ$, $\theta_{\rm obs}\sim17^\circ$. This is the most conclusive detection of an off-axis GRB afterglow and the first associated with a BNSM-GW event to date. We use the viewing angle estimates to infer the initial bulk Lorentz factor and true energy release of the burst.
- We use deep Chandra X-ray imaging to measure the distribution of specific black hole accretion rates ($L_X$ relative to the stellar mass of the galaxy) and thus trace AGN activity within star-forming and quiescent galaxies, as a function of stellar mass (from $10^{8.5}-10^{11.5} M_\odot$) and redshift (to $z \sim 4$). We adopt near-infrared selected samples of galaxies from the CANDELS and UltraVISTA surveys, extract X-ray data for every galaxy, and use a flexible Bayesian method to combine these data and to measure the probability distribution function of specific black hole accretion rates, $\lambda_{sBHAR}$. We identify a broad distribution of $\lambda_{sBHAR}$ in both star-forming and quiescent galaxies---likely reflecting the stochastic nature of AGN fuelling---with a roughly power-law shape that rises toward lower $\lambda_{sBHAR}$, a steep cutoff at $\lambda_{sBHAR} \gtrsim 0.1-1$ (in Eddington equivalent units), and a turnover or flattening at $\lambda_{sBHAR} \lesssim 10^{-3}-10^{-2}$. We find that the probability of a star-forming galaxy hosting a moderate $\lambda_{sBHAR}$ AGN depends on stellar mass and evolves with redshift, shifting toward higher $\lambda_{sBHAR}$ at higher redshifts. This evolution is truncated at a point corresponding to the Eddington limit, indicating black holes may self-regulate their growth at high redshifts when copious gas is available. The probability of a quiescent galaxy hosting an AGN is generally lower than that of a star-forming galaxy, shows signs of suppression at the highest stellar masses, and evolves strongly with redshift. The AGN duty cycle in high-redshift ($z\gtrsim2$) quiescent galaxies thus reaches $\sim$20 per cent, comparable to the duty cycle in star-forming galaxies of equivalent stellar mass and redshift.
- We present an analysis of 55 central galaxies in clusters and groups with molecular gas masses and star formation rates lying between $10^{8}-10^{11}\ M_{\odot}$ and $0.5-270$ $M_{\odot}\ yr^{-1}$, respectively. We have used Chandra observations to derive profiles of total mass and various thermodynamic variables. Molecular gas is detected only when the central cooling time or entropy index of the hot atmosphere falls below $\sim$1 Gyr or $\sim$35 keV cm$^2$, respectively, at a (resolved) radius of 10 kpc. This indicates that the molecular gas condensed from hot atmospheres surrounding the central galaxies. The depletion timescale of molecular gas due to star formation approaches 1 Gyr in most systems. Yet ALMA images of roughly a half dozen systems drawn from this sample suggest the molecular gas formed recently. We explore the origins of thermally unstable cooling by evaluating whether molecular gas becomes prevalent when the minimum of the cooling to free-fall time ratio ($t_{\rm cool}/t_{\rm ff}$) falls below $\sim10$. We find: 1) molecular gas-rich systems instead lie between $10 < min(t_{\rm cool}/t_{\rm ff}) < 25$, where $t_{\rm cool}/t_{\rm ff}=25$ corresponds approximately to cooling time and entropy thresholds $t_{\rm cool} \lesssim 1$ Gyr and 35 keV~cm$^2$, respectively, 2) $min(t_{\rm cool}/t_{\rm ff}$) is uncorrelated with molecular gas mass and jet power, and 3) the narrow range $10 < min(t_{\rm cool}/t_{\rm ff}) < 25$ can be explained by an observational selection effect. These results and the absence of isentropic cores in cluster atmospheres are in tension with "precipitation" models, particularly those that assume thermal instability ensues from linear density perturbations in hot atmospheres. 
Some, and possibly all, of the molecular gas may instead have condensed from atmospheric gas lifted outward either by buoyantly rising X-ray bubbles or merger-induced gas motions.
- We study the population of supermassive black holes (SMBHs) and their effects on massive central galaxies in the IllustrisTNG cosmological hydrodynamical simulations of galaxy formation. The employed model for SMBH growth and feedback assumes a two-mode scenario in which the feedback from active galactic nuclei occurs through a kinetic, comparatively efficient mode at low accretion rates relative to the Eddington limit, and in the form of a thermal, less efficient mode at high accretion rates. We show that the quenching of massive central galaxies coincides with kinetic-mode feedback, consistent with the notion that active supermassive black holes cause the low specific star formation rates observed in massive galaxies. However, major galaxy mergers are not responsible for initiating most of the quenching events in our model. Up to black hole masses of about $10^{8.5}\,{\rm M}_\odot$, the dominant growth channel for SMBHs is in the thermal mode. Higher mass black holes stay mainly in the kinetic mode and gas accretion is self-regulated via their feedback, which causes their Eddington ratios to drop, with SMBH mergers becoming the main channel for residual mass growth. As a consequence, the quasar luminosity function is dominated by rapidly accreting, moderately massive black holes in the thermal mode. We show that the associated growth history of SMBHs produces a low-redshift quasar luminosity function and a redshift zero black hole mass-stellar bulge mass relation in good agreement with observations, whereas the simulation tends to over-predict the high-redshift quasar luminosity function.
- It was suggested that the large-scale magnetic field can be dragged inward efficiently by the corona above the disc, i.e., the so-called "coronal mechanism" (Beckwith, Hawley, \& Krolik 2009), which provides a way to solve the difficulty of field advection in a geometrically thin accretion disc. In this case, the magnetic pressure should be lower than the gas pressure in the corona. We estimate the maximal power of the jets accelerated by the magnetic field advected by the corona. The Blandford-Payne (BP) jet power is found always to be higher than the Blandford-Znajek (BZ) jet power, except for a rapidly spinning black hole with a>0.8. The maximal jet power is always low, less than 0.05 Eddington luminosity, even for an extreme Kerr black hole, which is insufficient for the observed strong jets in some blazars with jet power $\sim 0.1-1$ Eddington luminosity (or even higher). This implies that these powerful jets cannot be accelerated by the coronal field. We suggest that the magnetic field dragged inward by the accretion disc with magnetically driven outflows may accelerate the jets (at least the most powerful jets, if not all) in the blazars.
- When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3-d Monte-Carlo radiative transfer codes, such as HYPERION. With the FluxCompensator, realistic synthetic observations can be generated by modelling the effects of convolution with arbitrary point-spread functions (PSFs), transmission curves, finite pixel resolution, noise and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g. FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.
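The post-processing steps described above (PSF convolution, finite pixel resolution, instrumental noise) can be sketched generically. The function below is an illustrative stand-in, not the FluxCompensator API:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_observation(model, psf_sigma_pix, pixel_bin, noise_rms, seed=0):
    """Turn an ideal model image into a mock observation:
    PSF convolution -> pixel rebinning -> instrumental noise."""
    # 1. blur with a Gaussian point-spread function (sigma in pixels)
    img = gaussian_filter(model, sigma=psf_sigma_pix)
    # 2. rebin to the detector's coarser pixel grid (flux-conserving sum)
    ny, nx = img.shape
    img = img[:ny - ny % pixel_bin, :nx - nx % pixel_bin]
    img = img.reshape(img.shape[0] // pixel_bin, pixel_bin,
                      img.shape[1] // pixel_bin, pixel_bin).sum(axis=(1, 3))
    # 3. add Gaussian instrumental noise
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, noise_rms, size=img.shape)
```

A real pipeline would also fold in a transmission curve and a reddening law, as the paper describes.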
- We formulate rigorous criteria for selecting diagrams to be taken into account in the analysis of potential polyquark (tetra or penta) poles in QCD. The central point of these criteria is the requirement that the Feynman diagrams for the relevant Green functions contain four-quark (or, respectively, five-quark in the pentaquark case) intermediate states and the corresponding cuts. It is shown that some diagrams which "visually" seem to contain the four-quark cuts turn out to be free of these singularities and therefore should not be taken into account when calculating the tetraquark properties. We then consider large-$N_c$ QCD, which in many cases provides a qualitatively correct picture of hadron properties, and discuss in detail the tetraquark states. For the "direct" and the "recombination" four-point Green functions, which may potentially contain the tetraquark poles, we formulate large-$N_c$ consistency conditions which strongly restrict the behaviour of the tetraquark-to-ordinary-meson transition amplitudes. In the end, these conditions allow us to obtain constraints on the width of the potential tetraquark states at large $N_c$. We report that both flavor-exotic and cryptoexotic (i.e., flavor-nonexotic) tetraquarks, if the corresponding poles exist, have a width of order $O(1/N_c^2)$, i.e. they should be parametrically narrower compared to the ordinary $\bar qq$ mesons with a width of order $O(1/N_c)$. Moreover, for flavor-exotic states, the large-$N_c$ consistency conditions require two narrow flavor-exotic states, each of these states coupling dominantly to one specific meson-meson channel.
- One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of a mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.
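A simplified, hypothetical sketch of the image-construction step: per-electrode band powers (theta/alpha/beta) are placed on a coarse 2-D grid. The paper projects electrode positions azimuthally and interpolates; here a nearest-electrode fill stands in for that step:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of a 1-D signal in [f_lo, f_hi) Hz, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

def eeg_to_image(signals, electrode_xy, fs, grid=16):
    """Map per-electrode band powers onto a (grid, grid, 3) 'multi-spectral
    image'. electrode_xy holds 2-D positions scaled to [0, 1]^2; each grid
    cell takes the powers of its nearest electrode (illustrative only)."""
    bands = [(4.0, 8.0), (8.0, 13.0), (13.0, 30.0)]   # theta, alpha, beta (Hz)
    powers = np.array([[band_power(s, fs, lo, hi) for lo, hi in bands]
                       for s in signals])              # (n_electrodes, 3)
    ys, xs = np.mgrid[0:grid, 0:grid]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1) / (grid - 1)
    dist = np.linalg.norm(pts[:, None, :] - electrode_xy[None, :, :], axis=2)
    return powers[dist.argmin(axis=1)].reshape(grid, grid, 3)
```

The resulting image sequence (one image per time window) is what a recurrent-convolutional network would then consume.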
- Observations of galaxies and galaxy clusters in the local universe can account for only 10% of the baryon content inferred from measurements of the cosmic microwave background and from nuclear reactions in the early Universe. Locating the remaining 90% of baryons has been one of the major challenges in modern cosmology. Cosmological simulations predict that the 'missing baryons' are spread throughout filamentary structures in the cosmic web, forming a low density gas with temperatures of $10^5-10^7$ K. Previous attempts to observe this diffuse warm-hot filamentary gas via X-ray emission or absorption in quasar spectra have been inconclusive. Here we report a $5.1 \sigma$ detection of warm-hot baryons in stacked filaments through the thermal Sunyaev-Zel'dovich (SZ) effect, which arises from the distortion in the cosmic microwave background spectrum due to ionised gas. The estimated gas density in these 15 Megaparsec-long filaments is approximately 6 times the mean universal baryon density, and overall this can account for $\sim$ 30% of the total baryon content of the Universe. This result establishes the presence of ionised gas in large-scale filaments, and suggests that the missing baryons problem may be resolved via observations of the cosmic web.
- In this paper we prove the universal nature of the Unruh effect in a general class of weakly non-local field theories. At the same time, we resolve the tension between two conflicting claims published in the literature. Our universality statement rests on two independent computations, in both the canonical and the path-integral formulation of the quantum theory.
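For context, the textbook statement of the effect (not specific to the weakly non-local theories studied here) is that a detector with uniform proper acceleration $a$ responds thermally at the Unruh temperature:

```latex
T_U = \frac{\hbar a}{2\pi c k_B}
```

The universality claim is that this thermal response survives in the weakly non-local class of theories considered.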
- Inspired by ancient astronomy, we propose a holographic description of perturbative scattering amplitudes, as integrals over a `celestial sphere'. Since Lorentz invariance, local interactions, and particle propagation all take place in a four-dimensional space-time, it is not trivial to accommodate them on a lower-dimensional `celestial sphere'. The details of this task will be discussed step by step, resulting in the Cachazo-He-Yuan (CHY) and similar scattering amplitudes, thereby providing them with a holographic non-string interpretation.
- Motivated by a proposal of Daykin (Problem 71-19*, SIAM Review 13 (1971) 569), we study the wave that propagates along an infinite chain of dominoes and find the limiting speed of the wave in an extreme case.
- The authors prepared this booklet in order to make several useful topics from the theory of special functions, in particular the spherical harmonics and Legendre polynomials for any dimension, available to undergraduates studying physics or mathematics. With this audience in mind, nearly all details of the calculations and proofs are written out, and extensive background material is covered before beginning the main subject matter. The reader is assumed to have knowledge of multivariable calculus and linear algebra as well as some level of comfort with reading proofs.
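A taste of the booklet's subject matter: the Legendre polynomials obey Bonnet's three-term recurrence, $(k+1)\,P_{k+1}(x) = (2k+1)\,x\,P_k(x) - k\,P_{k-1}(x)$, which the sketch below (illustrative, not taken from the booklet) turns into an evaluator:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via Bonnet's recurrence:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
    starting from P_0(x) = 1 and P_1(x) = x."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# P_2(x) = (3x^2 - 1)/2, so P_2(0.5) = -0.125
print(legendre(2, 0.5))  # → -0.125
```

The same recurrence underlies stable numerical evaluation of the spherical harmonics' polar part.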
- We introduce Partiti, the puzzle that will run in Mathematics Magazine in 2018, and use the opportunity to recall some basic properties of integer partitions.
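As background for the puzzle, the partition numbers $p(n)$ can be computed with a short dynamic programme; the sketch below is illustrative and not part of the Magazine column:

```python
def partitions(n):
    """Number of integer partitions of n, via a coin-change style
    dynamic programme over allowed parts 1..n."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for k in range(part, n + 1):
            p[k] += p[k - part]
    return p[n]

# p(5) = 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
print(partitions(5))  # → 7
```

Euler's pentagonal number theorem gives an asymptotically faster recurrence, but the table above is the simplest correct starting point.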
- Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a self-contained explication of the state of the art together with a historical perspective and references to primary research.
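The recurrence at the heart of these models fits in a few lines. The minimal sketch below (a vanilla RNN forward pass with illustrative dimensions) shows the cycle through which the hidden state carries context; LSTM and bidirectional architectures build on this skeleton:

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h).
    xs is a sequence of input vectors; returns all hidden states."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # state feeds back each step
        states.append(h)
    return np.array(states)

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(8, 3))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(8, 8))   # hidden -> hidden (the cycle)
b_h = np.zeros(8)
hs = rnn_forward(rng.normal(size=(5, 3)), W_xh, W_hh, b_h)  # 5 time steps
```

Training this recurrence with backpropagation through time is where the classic difficulties (vanishing and exploding gradients) arise, motivating the LSTM.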
- This tutorial leads the reader through the details of calculating the properties of gravitational waves from orbiting binaries, such as two orbiting black holes. Using analogies with electromagnetic radiation, the tutorial presents a calculation that produces the same dependence on the masses of the orbiting objects, the orbital frequency, and the mass separation as does the linear version of General Relativity (GR). However, the calculation yields polarization, angular distributions, and overall power results that differ from those of GR. Nevertheless, the calculation produces waveforms that are very similar to the pre-binary-merger portions of the signals observed by the Laser Interferometer Gravitational-Wave Observatory and Virgo (LIGO-Virgo) collaboration. The tutorial should be easily understandable by students who have taken a standard upper-level undergraduate course in electromagnetism.
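For reference, the leading-order GR result such a tutorial calculation is compared against is the quadrupole luminosity of a circular binary; the quick numerical check below uses the standard formula with rounded constants:

```python
# Physical constants (SI, rounded)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def gw_power_circular(m1, m2, r):
    """Quadrupole-order GW luminosity of a circular binary (GR result):
    P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5)."""
    return 32.0 / 5.0 * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

# Two 30 M_sun black holes 1000 km apart radiate strongly in GWs;
# the steep r^-5 dependence drives the runaway inspiral.
power = gw_power_circular(30 * M_SUN, 30 * M_SUN, 1.0e6)
```

The alternative calculation the tutorial develops reproduces the mass, frequency, and separation dependence of this formula while differing in the overall coefficient and angular structure.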
- An overview of results from the last seven years concerning Sarnak's conjecture on Möbius disjointness is presented, focusing on ergodic theory aspects of the conjecture.
- Theoretical computer science discusses foundational issues about computations. It asks and answers questions such as "What is a computation?", "What is computable?", "What is efficiently computable?", "What is information?", "What is random?", "What is an algorithm?", etc. We will present many of the major themes and theorems with the basic language of category theory. Surprisingly, many interesting theorems and concepts of theoretical computer science are easy consequences of functoriality and composition when you look at the right categories and functors connecting them.
- We analyse scale invariant quadratic quantum gravity incorporating non-minimal coupling to a multiplet of scalar fields in a gauge theory, with particular emphasis on the consequences for its interpretation resulting from a transformation from the Jordan frame to the Einstein frame. The result is the natural emergence of a de Sitter space solution which, depending on the gauge theory and region of parameter space chosen, can be free of ghosts and tachyons, and completely asymptotically free. In the case of an SO(10) model, we present a detailed account of the spontaneous symmetry breaking, and we calculate the leading (two-loop) contribution to the dilaton mass.
- We document major new features and improvements of FlexibleSUSY, a Mathematica and C++ package that generates fast and precise spectrum generators. These extensions significantly increase the generality and capabilities of the FlexibleSUSY package, while maintaining its elegant structure and easy-to-use interfaces. The FlexibleBSM extension makes it possible to also create spectrum generators for non-supersymmetric extensions of the Standard Model. The FlexibleCPV extension adds the option of complex parameters to the spectrum generators, allowing the study of many interesting models with new sources of $CP$ violation. FlexibleMW computes the decay of the muon for the generated model and thereby allows FlexibleSUSY to predict the mass of the $W$ boson from the input parameters by using the more precise electroweak input of $\{ G_F, M_Z, \alpha_{\text{em}} \}$ instead of $\{ M_W, M_Z, \alpha_{\text{em}} \}$. The FlexibleAMU extension provides a calculator of the anomalous magnetic moment of the muon in any model FlexibleSUSY can generate a spectrum for. FlexibleSAS introduces a new solver for the boundary value problem which makes use of semi-analytic expressions for dimensionful parameters to find solutions in models where the classic two-scale solver will not work, such as the constrained E$_6$SSM. FlexibleEFTHiggs is a hybrid calculation of the Higgs mass which combines the virtues of both effective field theory calculations and fixed-order calculations. All of these extensions are included in FlexibleSUSY 2.0, which is released simultaneously with this manual.
- The cooling flow problem is one of the central problems in galaxy clusters, and active galactic nucleus (AGN) feedback is considered to play a key role in offsetting cooling. However, how AGN jets heat and suppress cooling flows remains highly debated. Using an idealized simulation of a cool-core cluster, we study the development of a central cooling catastrophe and how a subsequent powerful AGN jet event averts cooling flows, with a focus on the complex gas-dynamical processes involved. We find that the jet drives a bow shock, which reverses cooling inflows and overheats inner cool core regions. The shocked gas moves outward in a rarefaction wave, which rarefies the dense core and adiabatically transports a significant fraction of heated energy to outer regions. As the rarefaction wave propagates away, inflows resume in the cluster core, but a trailing outflow is uplifted by the AGN bubble, preventing gas accumulation and catastrophic cooling in central regions. Inflows and trailing outflows constitute meridional circulations in the cluster core. At later times, trailing outflows fall back to the cluster centre, triggering a central cooling catastrophe and potentially a new generation of AGN feedback. We thus envisage a picture of cool cluster cores going through cycles of cooling-induced contraction and AGN-induced expansion. This picture naturally predicts an anti-correlation between the gas fraction (or X-ray luminosity) of cool cores and the central gas entropy, which may be tested by X-ray observations.
- We present a method to measure the small-scale matter power spectrum using high-resolution measurements of the gravitational lensing of the Cosmic Microwave Background (CMB). To determine whether small-scale structure today is suppressed on scales below 10 kiloparsecs (corresponding to M < 10^9 M_sun), one needs to probe CMB-lensing modes out to L ~ 35,000, requiring a CMB experiment with about 20 arcsecond resolution or better. We show that a CMB survey covering 4,000 square degrees of sky, with an instrumental sensitivity of 0.5 uK-arcmin at 18 arcsecond resolution, could distinguish between cold dark matter and an alternative, such as 1 keV warm dark matter or 10^(-22) eV fuzzy dark matter with about 4-sigma significance. A survey of the same resolution with 0.1 uK-arcmin noise could distinguish between cold dark matter and these alternatives at better than 20-sigma significance; such high-significance measurements may also allow one to distinguish between a suppression of power due to either baryonic effects or the particle nature of dark matter, since each impacts the shape of the lensing power spectrum differently. CMB temperature maps yield higher signal-to-noise than polarization maps in this small-scale regime; thus, systematic effects, such as from extragalactic astrophysical foregrounds, need to be carefully considered. However, these systematic concerns can likely be mitigated with known techniques. Next-generation CMB lensing may thus provide a robust and powerful method of measuring the small-scale matter power spectrum.
- We discuss constraints on cosmic reionisation and their implications for a cosmic SFR density $\rho_\mathrm{SFR}$ model; we study the influence of key parameters such as the clumping factor of ionised hydrogen in the intergalactic medium (IGM) $C_{H_{II}}$ and the fraction of ionising photons escaping star-forming galaxies to reionise the IGM $f_\mathrm{esc}$. Our analysis uses SFR history data coming from luminosity functions, assuming that star-forming galaxies were sufficient to lead the reionisation process at high redshift. We add two other sets of constraints: measurements of the IGM ionised fraction and the most recent result from the Planck satellite for the integrated Thomson optical depth of the Cosmic Microwave Background (CMB) $\tau_\mathrm{Planck}$. We also consider various possibilities for the evolution of these two parameters with redshift, and confront them with the observational data cited above. We conclude that, if the model of a constant clumping factor is chosen, the fiducial value of $3$ often used in papers is consistent with observations; even if a redshift-dependent model is considered, the resulting optical depth is strongly correlated with the $C_{H_{II}}$ mean value at $z>7$, an additional argument in favour of the use of a constant clumping factor. Besides, the escape fraction is related to too many astrophysical parameters to allow us to use a complete and fully satisfactory model. A constant value with redshift seems again to be the most likely expression: considering it as a fit parameter, we get from the maximum likelihood (ML) model $f_\mathrm{esc}=0.24\pm0.08$; with a redshift-dependent model, we find an almost constant evolution, slightly increasing with $z$, around $f_\mathrm{esc}=0.23$.
Lastly, our analysis shows that a reionisation beginning as early as $z\geq14$ and persisting until $z\sim6$ is a likely storyline.