Recently bookmarked papers

with concepts:
  • The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
    Transduction, Architecture, Recurrent neural network, Machine translation, Convolutional neural network, Embedding, Path length, Hidden layer, Hidden state, Inference, ...
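    A minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer, $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}(QK^T/\sqrt{d_k})\,V$; the shapes and random inputs below are illustrative placeholders, not the paper's setup.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (..., n_q, n_k)
    return softmax(scores) @ V                      # (..., n_q, d_v)

# Illustrative shapes: 4 query positions, 6 key/value positions, d_k = d_v = 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```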
  • Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'.
    Reinforcement learning, Optimization, Asteroids, Short Term Memory, Q-learning, Function approximation, Navigability of network, Deep Neural Networks, Stochastic gradient descent, Generative model, ...
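    To make the two-level structure concrete, here is a toy tabular sketch of the h-DQN idea: a meta-controller picks goals, the controller is rewarded intrinsically for reaching them, and only the meta-controller sees the sparse extrinsic reward. The chain environment, step budget, and hyperparameters are invented for illustration; the paper uses deep function approximators, not tables.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                                     # toy chain of states 0..5 (invented env)
q_meta = np.zeros((N, N))                 # meta-controller: Q(state, goal)
q_ctrl = np.zeros((N, N, 2))              # controller: Q(goal, state, action)
eps, alpha, gamma = 0.2, 0.1, 0.9

def step(s, a):                           # actions: 0 = left, 1 = right
    s2 = min(max(s + (1 if a else -1), 0), N - 1)
    return s2, float(s2 == N - 1)         # sparse extrinsic reward at the far end

for episode in range(300):
    s, budget = 0, 100                    # cap episode length
    while s != N - 1 and budget > 0:
        budget -= 1
        g = rng.integers(N) if rng.random() < eps else int(q_meta[s].argmax())
        s0, R = s, 0.0                    # meta-controller's accumulated reward
        while s not in (g, N - 1) and budget > 0:
            budget -= 1
            a = rng.integers(2) if rng.random() < eps else int(q_ctrl[g, s].argmax())
            s2, r_ext = step(s, a)
            r_int = float(s2 == g)        # intrinsic reward: goal reached?
            q_ctrl[g, s, a] += alpha * (r_int + gamma * q_ctrl[g, s2].max()
                                        - q_ctrl[g, s, a])
            R, s = R + r_ext, s2
        q_meta[s0, g] += alpha * (R + gamma * q_meta[s].max() - q_meta[s0, g])

print("greedy goal chosen from the start state:", int(q_meta[0].argmax()))
```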
  • We investigate the possibility of constraining the mass of the thermal warm dark matter (WDM) particle with the expected data of the Wide Field Infrared Survey Telescope (WFIRST) survey, under the assumption that all the dark matter is warm. For this purpose we consider the lensing effect of large scale structure, in the warm dark matter scenario, on the apparent magnitude of SNe Ia. We use HALOFIT for the non-linear matter power spectrum and for the variance of the PDF. We perform a Fisher matrix analysis and obtain the lower bound $m_{\rm WDM} > 0.167\,{\rm keV}$.
    Warm dark matter, WDM particles, Supernova, Lambda-CDM model, Matter power spectrum, Cold dark matter, Thermal WDM, Visual magnitude, Supernova Type Ia, Weak lensing, ...
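    The Fisher-matrix step is standard and easy to sketch. Everything physical below (the magnitude model, the scatter, the fiducial mass) is a made-up stand-in for the paper's HALOFIT-based calculation; only the forecasting machinery $F_{ij} = \sum_k (\partial\mu_k/\partial\theta_i)(\partial\mu_k/\partial\theta_j)/\sigma_k^2$ is generic.

```python
import numpy as np

def mu(z, m_wdm):                       # toy magnitude model, NOT the paper's
    return 5 * np.log10((1 + z) * z) + 0.05 * z / m_wdm

z = np.linspace(0.1, 1.5, 200)          # mock SN Ia redshifts
sigma = 0.15                            # per-SN magnitude scatter (assumed)
m0, h = 0.2, 1e-4                       # fiducial WDM mass [keV], derivative step

dmu = (mu(z, m0 + h) - mu(z, m0 - h)) / (2 * h)   # numerical derivative dmu/dm
F = np.sum(dmu**2 / sigma**2)           # one-parameter Fisher "matrix"
print(f"forecast 1-sigma on m_wdm: {1 / np.sqrt(F):.3f} keV")
```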
  • FASER, the ForwArd Search ExpeRiment, is a proposed experiment dedicated to searching for light, extremely weakly-interacting particles at the LHC. Such particles may be produced in the LHC's high-energy collisions and travel long distances through concrete and rock without interacting. They may then decay to visible particles in FASER, which is placed 480 m downstream of the ATLAS interaction point. In this work we briefly describe the FASER detector layout and the status of potential backgrounds. We then present the sensitivity reach for FASER for a large number of long-lived particle models, updating previous results to a uniform set of detector assumptions, and analyzing new models. In particular, we consider all of the renormalizable portal interactions, leading to dark photons, dark Higgs bosons, and heavy neutral leptons (HNLs); light B-L and $L_i - L_j$ gauge bosons; axion-like particles (ALPs) that are coupled dominantly to photons, fermions, and gluons through non-renormalizable operators; and pseudoscalars with Yukawa-like couplings. We find that FASER and its follow-up, FASER 2, have a full physics program, with discovery sensitivity in all of these models and potentially far-reaching implications for particle physics and cosmology.
    FASER experiment, Axion-like particle, Long Lived Particle, Large Hadron Collider, Interaction point, Standard Model, Hidden photon, Sterile neutrino, Decay width, Dark Higgs boson, ...
  • The first dark matter halos form by direct collapse from peaks in the matter density field, and evidence from numerical simulations and other analyses suggests that the dense inner regions of these objects largely persist today. These halos would be the densest dark matter structures in the Universe, and their abundance can probe processes that leave imprints on the primordial density field, such as inflation or an early matter-dominated era. They can also probe dark matter through its free-streaming scale. The first halos are qualitatively different from halos that form by hierarchical clustering, as evidenced by their $\rho\propto r^{-3/2}$ inner density profiles. In this work, we present and tune models that predict the density profiles of these halos from properties of the density peaks from which they collapsed. These models predict the coefficient $A$ of the $\rho=Ar^{-3/2}$ small-radius asymptote of the density profile along with the maximum circular velocity $v_\mathrm{max}$ and associated radius $r_\mathrm{max}$. These models are universal: they can be applied to any cosmology, and we confirm this by validating the models using six $N$-body simulations carried out in wildly disparate cosmological scenarios. With their connection to the primordial density field established, the first dark matter halos will serve as probes of the early Universe and the nature of dark matter.
    Halo population, Halo density profile, Cosmology, Dark matter, Apocenter, Statistics, Dark matter halo, Mass profile, Infall model, Free streaming of particles, ...
  • We investigate the implications of the dark axion portal interaction, the axion-photon-dark photon vertex, for the future experiments SHiP and FASER. We also study the phenomenology of the combined vector portal (kinetic mixing of the photon and dark photon) and dark axion portal. The muon $g-2$ discrepancy is unfortunately not solved even with the two portals, but the low-energy beam dump experiments with monophoton detection capability can open new opportunities in light dark sector searches using the combined portals.
    Axion, FASER experiment, SHiP experiment, Hidden photon, Beam dump, Decay volume, Kinetic mixing, Standard Model, Interaction point, Muon, ...
  • We investigate a scenario where the formation of Globular Clusters (GCs) is triggered by high-speed collisions between infalling atomic-cooling subhalos during the assembly of the main galaxy host, a special dynamical mode of star formation that operates at high gas pressures and is intimately tied to LCDM hierarchical galaxy assembly. The proposed mechanism would give origin to "naked" globulars, as colliding dark matter subhalos and their stars will simply pass through one another while the warm gas within them clashes at highly supersonic speed and decouples from the collisionless component, in a process reminiscent of the Bullet galaxy cluster. We find that the resulting shock-compressed layer cools on a timescale that is typically shorter than the crossing time, first by atomic line emission and then via fine-structure metal-line emission, and is subject to gravitational instability and fragmentation. Through a combination of kinetic theory approximation and high-resolution N-body simulations, we show that this model may produce: (a) a GC number-halo mass relation that is linear down to dwarf galaxy scales and agrees with the trend observed over five orders of magnitude in galaxy mass; (b) a population of old globulars with a median age of 12 Gyr and an age spread similar to that observed; (c) a spatial distribution that is biased relative to the overall mass profile of the host. This is because, in an inelastic collision, the splash remnant will lose orbital energy and fall deeper into the Galactic potential rather than sharing the orbits of the progenitor subhalos; and (d) a bimodal metallicity distribution with a spread similar to that observed in massive galaxies. Additional hydrodynamic simulations of subhalo-subhalo high-speed impacts should be performed to further validate a collision-driven scenario for the formation of GCs.
    Globular cluster, Dark matter subhalo, Cooling, Milky Way, Atomic line cooling, Star formation, Metallicity, Virial mass, Galaxy, Dwarf galaxy, ...
  • We present a new compilation of inferences of the linear 3D matter power spectrum at redshift $z\,{=}\,0$ from a variety of probes spanning several orders of magnitude in physical scale and in cosmic history. We develop a new lower-noise method for performing this inference from the latest Ly$\alpha$ forest 1D power spectrum data. We also include cosmic microwave background (CMB) temperature and polarization power spectra and lensing reconstruction data, the cosmic shear two-point correlation function, and the clustering of luminous red galaxies. We provide a Dockerized Jupyter notebook housing the fairly complex dependencies for producing the plot of these data, with the hope that groups in the future can help add to it. Overall, we find qualitative agreement between the independent measurements considered here and the standard $\Lambda$CDM cosmological model fit to the Planck data.
    Matter power spectrum, Cosmic microwave background, Redshift bins, Cosmic shear, Sloan Digital Sky Survey, Luminous Red Galaxy, Lambda-CDM model, Flux power spectrum, Two-point correlation function, eBOSS survey, ...
  • We present an analysis of seven strongly gravitationally lensed quasars and the corresponding constraints on the properties of dark matter. Our results are derived by modelling the lensed image positions and flux-ratios using a combination of smooth macro models and a population of low-mass haloes within the mass range $10^6$ to $10^9$ M$_\odot$. Our lens models explicitly include higher-order complexity in the form of stellar discs and luminous satellites, and low-mass haloes located along the observed lines of sight for the first time. Assuming a Cold Dark Matter (CDM) cosmology, we infer an average total mass fraction in substructure of $f_{\rm sub} = 0.011^{+0.007}_{-0.005}$ (68 per cent confidence limits), which is in agreement with the predictions from CDM hydrodynamical simulations to within 1$\sigma$. This result is significantly different when compared to previous studies that did not include line-of-sight haloes. Under the assumption of a thermal relic warm dark matter (WDM) model, we derive a lower limit on the particle relic mass of $m_{\rm wdm} > 3.8$ keV (95 per cent confidence limits), which is consistent with a value of $m_{\rm wdm} > 3.5$ keV from the recent analysis of the Ly$\alpha$ forest.
    Warm dark matter, Cold dark matter, Quasar, Dark matter, Line-of-sight substructure, Free streaming of particles, Gravitationally lensed quasars, Anomalous flux ratio, Cosmology, Abundance, ...
  • The presence of primordial magnetic fields increases the minimum halo mass in which star formation is possible at high redshifts. Estimates of the dynamical mass of ultra-faint dwarf galaxies (UFDs) within their half-light radius constrain their virialized halo mass before their infall into the Milky Way. The inferred halo mass and formation redshift of the UFDs place upper bounds on the primordial comoving magnetic field, $B_0$. We derive an upper limit of $0.50\pm 0.086$ ($0.31\pm 0.04$) nG on $B_0$, assuming the average formation redshift of the UFD host halos is $z_{\rm form}=$ 10 (20), respectively.
    Ultra-faint dwarf spheroidal galaxy, Cosmological magnetic field, Virial mass, Half-light radius, Star formation, Milky Way, Redshift, Mass, Magnetic field, ...
  • This paper addresses the challenge of 6DoF pose estimation from a single RGB image under severe occlusion or truncation. Many recent works have shown that a two-stage approach, which first detects keypoints and then solves a Perspective-n-Point (PnP) problem for pose estimation, achieves remarkable performance. However, most of these methods only localize a set of sparse keypoints by regressing their image coordinates or heatmaps, which are sensitive to occlusion and truncation. Instead, we introduce a Pixel-wise Voting Network (PVNet) to regress pixel-wise unit vectors pointing to the keypoints and use these vectors to vote for keypoint locations using RANSAC. This creates a flexible representation for localizing occluded or truncated keypoints. Another important feature of this representation is that it provides uncertainties of keypoint locations that can be further leveraged by the PnP solver. Experiments show that the proposed approach outperforms the state of the art on the LINEMOD, Occlusion LINEMOD and YCB-Video datasets by a large margin, while being efficient for real-time pose estimation. We further create a Truncation LINEMOD dataset to validate the robustness of our approach against truncation. The code will be available at https://zju-3dv.github.io/pvnet/.
    Flux power spectrum, Convolutional neural network, Ground truth, Regression, Architecture, Orientation, Covariance, Intermediate representation, Ablation, Autonomous driving, ...
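    A simplified sketch of the voting scheme: each pixel casts a unit vector toward the keypoint, RANSAC intersects pairs of voting rays, and the hypothesis with the most inliers wins. The synthetic data and thresholds are invented; PVNet additionally weights hypotheses and estimates a covariance for the PnP stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_vote(pixels, dirs, iters=100, thresh=0.99):
    """RANSAC over pairwise ray intersections, in the spirit of PVNet's
    pixel-wise vector voting (simplified)."""
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        A = np.column_stack([dirs[i], -dirs[j]])
        if abs(np.linalg.det(A)) < 1e-6:          # near-parallel rays: skip
            continue
        t = np.linalg.solve(A, pixels[j] - pixels[i])
        hyp = pixels[i] + t[0] * dirs[i]          # hypothesized keypoint
        to_hyp = hyp - pixels
        to_hyp /= np.linalg.norm(to_hyp, axis=1, keepdims=True)
        inliers = np.sum(np.sum(to_hyp * dirs, axis=1) > thresh)
        if inliers > best_inliers:
            best, best_inliers = hyp, inliers
    return best

# Synthetic check: pixels vote for a keypoint at (40, 30) with noisy unit vectors.
kp = np.array([40.0, 30.0])
pixels = rng.uniform(0, 100, size=(200, 2))
dirs = kp - pixels + rng.normal(scale=0.5, size=(200, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(ransac_vote(pixels, dirs))   # ~ [40, 30]
```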
  • Existing pose estimation approaches can be categorized into single-stage and multi-stage methods. While a multi-stage architecture is seemingly more suitable for the task, the performance of current multi-stage methods is not as competitive as single-stage ones. This work studies this issue. We argue that the current unsatisfactory performance comes from insufficient design choices in current methods. We propose several improvements on the architecture design, feature flow, and loss function. The resulting multi-stage network outperforms all previous works and obtains the best performance on the COCO keypoint challenge 2018. The source code will be released.
    Architecture, COCO simulation, Convolutional neural network, Classification, Backbone network, Statistical estimator, Information flow, Ground truth, Optimization, Deep Neural Networks, ...
  • Recent advancements in energy-efficient hardware technology are driving the exponential growth we are experiencing in the Internet of Things (IoT) space, with more pervasive computations being performed near data generation sources. A range of intelligent devices and applications performing local detection is emerging (activity recognition, fitness monitoring, etc.), bringing with them obvious advantages such as reduced detection latency for improved interaction with devices and safeguarding of user data, which never leaves the device. Video processing holds utility for many emerging applications and for data labelling in the IoT space. However, performing this video processing with deep neural networks at the edge of the Internet is not trivial. In this paper we show that pedestrian location estimation using deep neural networks is achievable on fixed cameras with limited compute resources. Our approach uses pose estimation from key body point detection to extend the pedestrian skeleton when the whole body is not in the image (occluded by obstacles or partially outside the frame), which achieves better location estimation performance (inference time and memory footprint) compared to fitting a bounding box over the pedestrian and scaling. We collect a sizable dataset comprising over 2100 frames in videos from one and two surveillance cameras pointing at the scene from different angles, and annotate each frame with the exact position of the person in the image, in 42 different scenarios of activity and occlusion. We compare our pose-estimation-based location detection with a popular detection algorithm, YOLOv2, for overlapping bounding-box generation; our solution achieves faster inference time (15x speedup) at half the memory footprint, within the resource capabilities of embedded devices, demonstrating that CamLoc is an efficient solution for location estimation in videos on smart cameras.
    Internet of Things, Inference, Convolutional neural network, Deep Neural Networks, Image Processing, Positioning system, Architecture, Video analysis, Ground truth, Optimization, ...
  • The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use $5$-$10$x fewer measurements than Lasso for the same accuracy.
    Generative model, Compressed sensing, Observational error, Neural network, Random matrix, Optimization, Underdetermined system, Architecture, MNIST dataset, Autoencoder, ...
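    The recovery step reduces to minimizing $\|A G(z) - y\|^2$ over the latent $z$. A self-contained sketch with a random one-hidden-layer ReLU "generator" standing in for the trained VAE/GAN decoders the paper uses; dimensions, step size, and iteration count are illustrative, and since the objective is nonconvex, random restarts help in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
k, h, n, m = 5, 32, 100, 25                 # latent, hidden, signal, measurement dims

W1 = rng.normal(size=(h, k)) / np.sqrt(k)   # fixed random ReLU "generator"
W2 = rng.normal(size=(n, h)) / np.sqrt(h)   # G(z) = W2 relu(W1 z)
G = lambda z: W2 @ np.maximum(W1 @ z, 0)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
z_true = rng.normal(size=k)
y = A @ G(z_true)                           # noiseless measurements

z = rng.normal(size=k)                      # random init (z = 0 is a stationary point)
for _ in range(2000):                       # gradient descent on ||A G(z) - y||^2
    pre = W1 @ z
    r = A @ (W2 @ np.maximum(pre, 0)) - y
    J = (W2 * (pre > 0)) @ W1               # Jacobian of G at z
    z -= 0.1 * (J.T @ (A.T @ r))            # factor 2 absorbed into the step size

err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
print(f"relative recovery error: {err:.3f}")   # typically small; restarts help
```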
  • The faintness of satellite systems in galaxy groups has contributed to the widely discussed "missing satellite" and "too big to fail" issues. Using techniques based on Tremaine & Richstone (1977), we show that there is no problem with the luminosity function computed from modern codes per se, but that the gap between the first and second brightest systems is too big given the luminosity function; that the same large gap is found in modern, large scale baryonic $\Lambda$CDM simulations such as EAGLE and IllustrisTNG, and is even greater in dark matter only simulations; and, finally, that this is most likely due to gravitationally induced merging caused by classical dynamical friction. Quantitatively, the gap is larger in the computed simulations than in the randomized ones by $1.79 \pm 1.04$, $1.51 \pm 0.93$, $3.43 \pm 1.44$ and $3.33 \pm 1.35$ magnitudes in the EAGLE, IllustrisTNG, and dark matter only simulations of EAGLE and IllustrisTNG, respectively. Furthermore, the anomalous gaps in the simulated systems are even larger than in the real data by over half a magnitude, and are still larger in the dark matter only simulations. Briefly stated, $\Lambda$CDM does not have a problem with an absence of "too big to fail" galaxies. Statistically significant large gaps between first and second brightest galaxies are to be expected.
    Galaxy, Luminosity function, Too big to fail problem, N-body simulation, EAGLE simulation project, IllustrisTNG simulation, Cold dark matter, Statistics, Milky Way, Group of galaxies, ...
  • Humans have a remarkable ability to predict the effect of physical interactions on the dynamics of objects. Endowing machines with this ability would allow important applications in areas like robotics and autonomous vehicles. In this work, we focus on predicting the dynamics of 3D rigid objects, in particular an object's final resting position and total rotation when subjected to an impulsive force. Different from previous work, our approach is capable of generalizing to unseen object shapes - an important requirement for real-world applications. To achieve this, we represent object shape as a 3D point cloud that is used as input to a neural network, making our approach agnostic to appearance variation. The design of our network is informed by an understanding of physical laws. We train our model with data from a physics engine that simulates the dynamics of a large number of shapes. Experiments show that we can accurately predict the resting position and total rotation for unseen object geometries.
    Point cloud, Multilayer perceptron, Robotics, Neural network, Ground truth, Architecture, Graph, Ablation, Complex dynamics, Collider, ...
  • The physical properties of Fe2CoAl (FCA) Heusler alloy have been determined by means of first-principles calculations. We focus on the influence of atomic ordering, with respect to the Wyckoff sites A (0, 0, 0), B (0.25, 0.25, 0.25), C (0.5, 0.5, 0.5) and D (0.75, 0.75, 0.75), on the structural, magnetic and electronic properties in both the conventional L21 (Cu2MnAl prototype) and XA (Hg2CuTi prototype) inverse Heusler structures. Various non-magnetic and magnetic configurations are considered. Out of these, the ferromagnetic XA-I structure is found to be energetically most stable. The total magnetic moments per cell were not in agreement with the Slater-Pauling rule in any phase. Half-metallicity is not observed in any configuration. However, all the structures exhibit high magnetic moment, Curie temperature and spin polarization. The calculated values of total magnetic moment and Curie temperature (Tc) are in close agreement with the available experimental data. The possibility of making the alloy a half metal is also discussed.
    Curie temperature, Spin polarization, Metallicity, Magnetic moment, Metals, Experimental data, ...
  • Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks
    Architecture, COCO simulation, Regularization, Overfitting, Classification, Object detection, Deep Neural Networks, Neural network, Gradient flow, Optimization, ...
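    The architectural change is small enough to sketch in a few lines of PyTorch: a pre-activation residual block whose channel counts are scaled by a widening factor k (the dropout between the convolutions, which the paper also uses, is omitted here).

```python
import torch
import torch.nn as nn

class WideBasic(nn.Module):
    """Pre-activation residual block; widening is just a multiplier on the
    channel counts (a sketch after the WRN paper, dropout omitted)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, padding=1, bias=False)
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)

# A WRN-16-8 style first group: width multiplier k = 8 turns 16 base channels
# into 128.
k = 8
block = WideBasic(16, 16 * k)
x = torch.randn(2, 16, 32, 32)           # CIFAR-sized input
print(block(x).shape)                    # torch.Size([2, 128, 32, 32])
```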
  • We present a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving. Existing methods for recovering depth for dynamic, non-rigid objects from monocular video impose strong assumptions on the objects' motion and may only recover sparse depth. In this paper, we take a data-driven approach and learn human depth priors from a new source of data: thousands of Internet videos of people imitating mannequins, i.e., freezing in diverse, natural poses, while a hand-held camera tours the scene. Because people are stationary, training data can be generated using multi-view stereo reconstruction. At inference time, our method uses motion parallax cues from the static areas of the scenes to guide the depth prediction. We demonstrate our method on real-world sequences of complex human actions captured by a moving hand-held camera, show improvement over state-of-the-art monocular depth prediction methods, and show various 3D effects produced using our predicted depth.
    Parallax, Ground truth, Inference, Freeze-in, YouTube, Scale invariance, Point cloud, Confidence region, Structure-from-Motion, Attention, ...
  • A fundamental property of the Standard Model is that the Higgs potential becomes unstable at large values of the Higgs field. For the current central values of the Higgs and top masses, the instability scale is about $10^{11}$ GeV and therefore not accessible by colliders. We show that a possible signature of the Standard Model Higgs instability is the production of gravitational waves sourced by Higgs fluctuations generated during inflation. We fully characterise the two-point correlator of such gravitational waves by computing its amplitude, the frequency at peak, the spectral index, as well as their three-point correlators for various polarisations. We show that, depending on the Higgs and top masses, either LISA or the Einstein Telescope and Advanced LIGO could detect such a stochastic background of gravitational waves. In this sense, collider and gravitational wave physics can provide fundamental and complementary information. Furthermore, the consistency relation among the three- and the two-point correlators could provide an efficient tool to ascribe the detected gravitational waves to the Standard Model itself. Since the mechanism described in this paper might also be responsible for the generation of dark matter in the form of primordial black holes, this latter hypothesis may find its confirmation through the detection of gravitational waves.
    Gravitational wave, Higgs boson, Inflation, Standard Model, Instability, Laser Interferometer Space Antenna, Higgs field, Bispectrum, Top quark mass, Primordial black hole, ...
  • The Ly$\alpha$ forest at high redshifts is a powerful probe of reionization. Modeling and observing this imprint comes with significant technical challenges: inhomogeneous reionization must be taken into account while simultaneously being able to resolve the web-like small-scale structure prior to reionization. In this work we quantify the impact of inhomogeneous reionization on the Ly$\alpha$ forest at lower redshifts ($2 < z < 4$), where upcoming surveys such as DESI will enable precision measurements of the flux power spectrum. We use both small box simulations capable of handling the small-scale structure of the Ly$\alpha$ forest and semi-numerical large box simulations capable of representing the effects of inhomogeneous reionization. We find that inhomogeneous reionization could produce a measurable effect on the Ly$\alpha$ forest power spectrum. The deviation in the 3D power spectrum at $z_{\rm obs} = 4$ and $k = 0.14 \ \rm{Mpc}^{-1}$ ranges from $19 - 36\%$, with a larger effect for later reionization. The corrections decrease to $2.0 - 4.1\%$ by $z_{\rm obs} = 2$. The impact on the 1D power spectrum is smaller, and ranges from $3.3 - 6.5\%$ at $z_{\rm obs}=4$ to $0.35 - 0.75\%$ at $z_{\rm obs}=2$, values which are comparable to the statistical uncertainties in current and upcoming surveys. Furthermore, we study how this systematic can be constrained with the help of the quadrupole of the 21 cm power spectrum.
    Reionization, Flux power spectrum, Intergalactic medium, Quadrupole, Lyman-alpha forest, Hydrogen 21 cm line, 21-cm power spectrum, Dark Energy Spectroscopic Instrument, Small scale structure, Reionization models, ...
  • Four-dimensional (4D) flat Minkowski space admits a foliation by hyperbolic slices. Euclidean AdS3 slices fill the past and future lightcones of the origin, while dS3 slices fill the region outside the lightcone. The resulting link between 4D asymptotically flat quantum gravity and AdS3/CFT2 is explored in this paper. The 4D superrotations in the extended BMS4 group are found to act as the familiar conformal transformations on the 3D hyperbolic slices, mapping each slice to itself. The associated 4D superrotation charge is constructed in the covariant phase space formalism. The soft part gives the 2D stress tensor, which acts on the celestial sphere at the boundary of the hyperbolic slices, and is shown to be an uplift to 4D of the familiar 3D holographic AdS3 stress tensor. Finally, we find that 4D quantum gravity contains an unexpected second, conformally soft, dimension (2, 0) mode that is symplectically paired with the celestial stress tensor.
    Celestial Sphere, Quantum gravity, Anti de Sitter space, Graviton, Conformal symmetry, Flat-space holography, Wavefunction, Metric perturbation, Foliation, Einstein field equations, ...
  • In the absence of frequent binary collisions to isotropize the plasma, the fulfillment of the magnetohydrodynamic (MHD) Rankine-Hugoniot jump conditions by collisionless shocks is not trivial. In particular, the presence of an external magnetic field can allow for stable anisotropies, implying some departures from the isotropic MHD jumps. The functional dependence of such anisotropies in terms of the field is yet to be determined. By hypothesizing a kinetic history of the plasma through the shock front, we recently devised a theory of the downstream anisotropy, hence of the density jump, in terms of the field strength for a parallel shock [J. Plasma Phys. (2018), vol. 84, 905840604]. Here, we extend the analysis to the case of a perpendicular shock. We still find that the field reduces the density jump, but the effect is less pronounced than in the parallel case.
    Magnetohydrodynamics, Anisotropy, Instability, Particle-in-cell, Perpendicular Shock, Parallel Shock, Mean free path, Mach number, Vlasov equation, Entropy, ...
  • Dark matter direct detection experiments have poor sensitivity to a galactic population of dark matter with mass below the GeV scale. However, such dark matter can be produced copiously in supernovae. Since this thermally-produced population is much hotter than the galactic dark matter, it can be observed with direct detection experiments. In this paper, we focus on a dark sector with fermion dark matter and a heavy dark photon as a specific example. We first extend existing supernova cooling constraints on this model to the regime of strong coupling where the dark matter becomes diffusively trapped in the supernova. Then, using the fact that even outside these cooling constraints the diffuse galactic flux of these dark sector particles can still be large, we show that this flux is detectable in direct detection experiments such as current and next-generation liquid xenon detectors. As a result, due to supernova production, light dark matter has the potential to be discovered over many orders of magnitude of mass and coupling.
    Dark matter, Supernova, Dark fermions, Protoneutron star, Cooling, Liquid xenon, Earth, Monte Carlo method, Neutrino, Free streaming of particles, ...
  • Von Neumann's original proof of the ergodic theorem is revisited. A convergence rate is established under the assumption that one can control the density of the spectrum of the underlying self-adjoint operator when restricted to suitable subspaces. Explicit rates are obtained when the bound is polynomial or logarithmic, with applications to the linear Schrödinger and wave equations. In particular, decay estimates for time-averages of solutions are shown.
    Wave equation, Uniform convergence, Self-adjoint operator, Density of states, One-parameter group, Unitary transformation, Continuously embedded, Operator norm, Compact operator, Bounded operator, ...
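    A quick numerical illustration of the mean ergodic theorem in its discrete-time form (the paper treats one-parameter groups, and its point is the rate, which depends on the spectral density near zero): Cesàro averages of $U^n v$ for $U = e^{iH}$ converge to the projection of $v$ onto $\ker H$. The random operator below is an invented example with a built-in two-dimensional kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Random self-adjoint H with a two-dimensional kernel, so the limit projection
# is nontrivial: H = Q diag(0, 0, lam_3..lam_d) Q^dagger.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
lam = np.concatenate([[0.0, 0.0], rng.uniform(0.5, 3.0, d - 2)])
U = (Q * np.exp(1j * lam)) @ Q.conj().T   # U = exp(iH), unitary
P = Q[:, :2] @ Q[:, :2].conj().T          # orthogonal projection onto ker(H)

v = rng.normal(size=d) + 1j * rng.normal(size=d)
avg, w, N = np.zeros(d, complex), v.copy(), 20000
for _ in range(N):                        # Cesaro average (1/N) sum_{n<N} U^n v
    avg += w
    w = U @ w
avg /= N
print(np.linalg.norm(avg - P @ v))        # decays like 1/N (spectral gap here)
```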
  • The Sommerfield model with a massive vector field coupled to a massless fermion in 1+1 dimensions is an exactly solvable analog of a Banks-Zaks model. The `physics' of the model comprises a massive boson and an unparticle sector that survives at low energy as a conformal field theory (Thirring model). I discuss the `Schwinger point' of the Sommerfield model in which the vector boson mass goes to zero. The limit is singular but gauge invariant quantities should be well-defined. I give a number of examples, both (trivially) with local operators and with nonlocal products connected by Wilson lines (the primary technical accomplishment in this note is the explicit and very pedestrian calculation of correlators involving straight Wilson lines). I hope that this may give some insight into the nature of bosonization in the Schwinger model and its connection with unparticle physics which in this simple case may be thought of as `incomplete bosonization.'
    Wilson loop, Unmatter, Schwinger model, Thirring model, Bosonization, Schwinger limit, Banks Zaks, Propagator, Anomalous dimension, Unparticle physics, ...
  • Modern cosmological analyses constrain physical parameters using Markov Chain Monte Carlo (MCMC) or similar sampling techniques. Oftentimes, these techniques are computationally expensive to run and require up to thousands of CPU hours to complete. Here we present a method for reconstructing the log-probability distributions of completed experiments from an existing MCMC chain (or any set of posterior samples). The reconstruction is performed using Gaussian process regression for interpolating the log-probability. This allows for easy resampling, importance sampling, marginalization, testing different samplers, investigating chain convergence, and other operations. As an example use-case, we reconstruct the posterior distribution of the most recent Planck 2018 analysis. We then resample the posterior, and generate a new MCMC chain with forty times as many points in only thirty minutes. Our likelihood reconstruction tool can be found online at https://github.com/tmcclintock/AReconstructionTool.
    Monte Carlo Markov chain, Gaussian process, Covariance, Gaussian Process Regression, Hyperparameter, Planck mission, Bayesian posterior probability, Regression, Nuisance parameter, Software, ...
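    The core of the method fits in a few lines: treat the chain's (sample, log-probability) pairs as training data for GP regression, then evaluate the interpolated log-posterior anywhere. A one-dimensional sketch with a fixed RBF kernel and an invented target; the actual tool handles full chains and tunes its hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
logp = lambda t: -0.5 * (t / 1.3) ** 2            # toy log-posterior (stand-in)

theta = rng.normal(scale=1.3, size=40)            # "chain" samples
y = logp(theta)                                   # log-probabilities at the samples

def rbf(a, b, ell=0.7):                           # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(theta, theta) + 1e-8 * np.eye(len(theta)) # jitter for numerical stability
alpha = np.linalg.solve(K, y)

grid = np.linspace(-2, 2, 9)                      # evaluate anywhere in the bulk
pred = rbf(grid, theta) @ alpha                   # GP mean: k(x*, X) K^{-1} y
print(np.max(np.abs(pred - logp(grid))))          # small interpolation error
```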
  • We give an introduction to the C++ library GiNaC, which extends the C++ language by new objects and methods for the representation and manipulation of arbitrary symbolic expressions.
    Polylogarithm, Feynman rules, Concurrence, Computer programming, Programming Language, Perturbation theory, Square-free, Clifford algebra, Loop integral, Feynman diagrams, ...
  • Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.
    Meta learning, Embedding, Convolutional neural network, Personalization, Ground truth, Architecture, Inference, Ablation, Generative model, Optimization, ...
  • The concordance (LambdaCDM) model reproduces the main current cosmological observations assuming the validity of general relativity at all scales and epochs, the presence of cold dark matter, and of a cosmological constant, equivalent to a dark energy with constant density in space and time. However, the LambdaCDM model is poorly tested in the redshift interval between the farthest observed Type Ia supernovae and that of the Cosmic Microwave Background (CMB). We present new measurements of the expansion rate of the Universe in the range 0.5<z<5.5 based on a Hubble diagram of quasars. The quasar distances are estimated from their X-ray and ultraviolet emission, following a method developed by our group. The distance modulus-redshift relation of quasars at z<1.4 is in agreement with that of supernovae and with the concordance model. Yet, a deviation from the LambdaCDM model emerges at higher redshift, with a statistical significance of ~4 sigma. If an evolution of the dark energy equation of state is allowed, the data suggest a dark energy density increasing with time.
    Quasar, Lambda-CDM model, Hubble diagram, Cosmological constraints, Cosmic microwave background, Dark energy, Equation of state of dark energy, Cosmological observation, Statistical significance, Distance modulus, ...
  • The Extragalactic Background Light (EBL) stands for the mean surface brightness of the sky as we would see it from a representative vantage point in the intergalactic space outside of our Milky Way Galaxy. Averaged over the whole $4\pi$ solid angle it represents the collective light from all luminous matter radiated throughout cosmic history. Part of the EBL is resolved into galaxies that, with the increasing detecting power of giant telescopes and sensitive detectors, are seen to deeper and deeper limiting magnitudes. This resolved part is now known to contribute a substantial or even the major part of the EBL. There still remains, however, the challenge of finding out to what extent galaxies too faint or too diffuse to be discerned individually, individual stars or emission by gas outside the galaxies, or, more speculatively, some hitherto unknown light sources such as decaying elementary particles account for the remaining EBL. We review the recent progress that has been made in the measurement of the EBL. The current photometric results suggest that there is, beyond the resolved galaxies, an EBL component that cannot be explained by diffuse galaxy halos or intergalactic stars.
    Extragalactic background light, Galaxy, Star, Milky Way, Galactic halo, Telescopes, Limiting magnitude, Surface brightness, Measurement, Particles, ...
  • It has been proposed that in a part of the parameter space of the Standard Model completed by three generations of keV...GeV right-handed neutrinos, neutrino masses, dark matter, and baryon asymmetry can be accounted for simultaneously. Here we numerically solve the evolution equations describing the cosmology of this scenario in a 1+2 flavour situation at temperatures $T \le 5$ GeV, taking as initial conditions maximal lepton asymmetries produced dynamically at higher temperatures, and accounting for late entropy and lepton asymmetry production as the heavy flavours fall out of equilibrium and decay. For 7 keV dark matter mass and other parameters tuned favourably, $\sim 10\%$ of the observed abundance can be generated. Possibilities for increasing the abundance are enumerated.
    Flavour, Lepton asymmetry, Dark matter, Sterile neutrino, Entropy, Leptogenesis, Abundance, Baryon asymmetry of the Universe, Sterile neutrino DM, Evolution equation, ...
  • Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an ill-posed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this "compromise principle", we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the classification and regression branches to benefit from each other. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves top or state-of-the-art results on the NYU Depth V2, KITTI, and Make3D benchmarks.
    Classification, Regression, Convolutional neural network, Architecture, Ground truth, Optimization, Discretization, Feature vector, Field of view, Mean squared error, ...
  • Depth estimation provides essential information to perform autonomous driving and driver assistance. Especially, Monocular Depth Estimation is interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for continuous calibration strategies as required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which usually are difficult to annotate (e.g. crowded urban images). Moreover, so far it is common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging the depth and semantic information coming from heterogeneous datasets. In order to illustrate the benefits of our approach, we combine KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation.
    Convolutional neural network, Ground truth, Semantic segmentation, Classification, Regression, LIDAR, Autonomous driving, Calibration, Architecture, Statistical estimator, ...
  • In this paper, we discuss the temperature distribution and evolution of a microflare, simultaneously observed by Hinode XRT, EIS, and SDO AIA. We find using EIS lines that during peak emission the distribution is nearly isothermal and peaked around 4.5 MK. This temperature is in good agreement with that obtained from the XRT filter ratio, validating the use of XRT to study these small events, invisible to full-Sun X-ray monitors such as GOES. The increase in the estimated Fe XVIII emission in the AIA 94 Å band can mostly be explained with the small temperature increase from the background temperatures. The presence of Fe XVIII emission does not guarantee that temperatures of 7 MK are reached, as is often assumed. We also revisit with new atomic data the temperatures measured by a SoHO SUMER observation of an active region which produced microflares, also finding low temperatures (3 - 4 MK) from an Fe XVIII / Ca XIV ratio.
    Calibration, Abundance, Hinode, Solar Dynamics Observatory, Cooling, Sun, Intensity, Yohkoh, Evaporation, Spectrometers, ...
  • 1903.04497  ,  ,  et al.,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  show less
    Particles beyond the Standard Model (SM) can generically have lifetimes that are long compared to SM particles at the weak scale. When produced at experiments such as the Large Hadron Collider (LHC) at CERN, these long-lived particles (LLPs) can decay far from the interaction vertex of the primary proton-proton collision. Such LLP signatures are distinct from those of promptly decaying particles that are targeted by the majority of searches for new physics at the LHC, often requiring customized techniques to identify, for example, significantly displaced decay vertices, tracks with atypical properties, and short track segments. Given their non-standard nature, a comprehensive overview of LLP signatures at the LHC is beneficial to ensure that possible avenues of the discovery of new physics are not overlooked. Here we report on the joint work of a community of theorists and experimentalists with the ATLAS, CMS, and LHCb experiments --- as well as those working on dedicated experiments such as MoEDAL, milliQan, MATHUSLA, CODEX-b, and FASER --- to survey the current state of LLP searches at the LHC, and to chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the High-Luminosity LHC. The work is organized around the current and future potential capabilities of LHC experiments to generally discover new LLPs, and takes a signature-based approach to surveying classes of models that give rise to LLPs rather than emphasizing any particular theory motivation. We develop a set of simplified models; assess the coverage of current searches; document known, often unexpected backgrounds; explore the capabilities of proposed detector upgrades; provide recommendations for the presentation of search results; and look towards the newest frontiers, namely high-multiplicity "dark showers", highlighting opportunities for expanding the LHC reach for these signals.
    Long Lived Particle, Large Hadron Collider, BSM physics, Standard Model, FASER experiment, MATHUSLA experiment, High-luminosity LHC, Proton-proton collisions, Primary proton, CERN, ...
  • The magnetic field shapes the structure of the solar corona but we still know little about the interrelationships between the coronal magnetic field configurations and the resulting quasi-stationary structures observed in coronagraphic images (such as streamers, plumes, and coronal holes). One way to obtain information on the large-scale structure of the coronal magnetic field is to extrapolate it from photospheric data and compare the results with coronagraphic images. Our aim is to verify whether this comparison can be a fast method to check systematically the reliability of the many methods available to reconstruct the coronal magnetic field. Coronal fields are usually extrapolated from photospheric measurements typically in a region close to the central meridian on the solar disk and then compared with coronagraphic images at the limbs, acquired at least 7 days before or after to account for solar rotation, implicitly assuming that no significant changes occurred in the corona during that period. In this work, we combine images from three coronagraphs (SOHO/LASCO-C2 and the two STEREO/SECCHI-COR1) observing the Sun from different viewing angles to build Carrington maps covering the entire corona to reduce the effect of temporal evolution to ~ 5 days. We then compare the position of the observed streamers in these Carrington maps with that of the neutral lines obtained from four different magnetic field extrapolations, to evaluate the performances of the latter in the solar corona. Our results show that the location of coronal streamers can provide important indications to discriminate between different magnetic field extrapolations.
    Carrington rotation, Corona, Solar and Heliospheric Observatory, Sun, Intensity, Solar corona, Large Angle and Spectrometric Coronagraph, STEREO-B, Plasma sheet, Solar rotation, ...
  • We study the electrical conductivity of hot Abelian plasma containing scalar charge carriers in the leading logarithmic order in coupling constant $\alpha$ using the Boltzmann kinetic equation. The leading contribution to the collision integral is due to the Møller and Bhabha scattering of scalar particles with a singular cross section in the region of small momentum transfer. Regularizing this singularity by taking into account the hard thermal loop corrections to the propagators of intermediate particles, we derive the second order differential equation which determines the kinetic function. We solve this equation numerically and also use a variational approach in order to find a simple analytical formula for the conductivity. It has the standard parametric dependence on the coupling constant $\sigma\approx 2.38\, T/(\alpha \log\alpha^{-1})$ with the prefactor taking a somewhat lower value compared to the fermionic case. Finally, we consider the general case of hot Abelian plasma with an arbitrary number of scalar and fermionic particle species and derive the simple analytical formula for its conductivity.
    Collision integral, Coupling constant, Momentum transfer, Boltzmann transport equation, Kinetic equation, Compton scattering, Scalar electrodynamics, Leading log approximation, Transport coefficient, Degree of freedom, ...
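    For reference, the quoted leading-log result as a one-line function (meaningful only for $\alpha \ll 1$, where the logarithm is large):

```python
import numpy as np

def sigma_scalar_qed(T, alpha):
    """sigma ~= 2.38 T / (alpha log(1/alpha)), the abstract's leading-log result."""
    return 2.38 * T / (alpha * np.log(1.0 / alpha))

print(sigma_scalar_qed(T=1.0, alpha=1e-2))        # conductivity in units of T
```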
  • The advent of new deep+wide photometric lensing surveys will open up the possibility of direct measurements of the dark matter halos of dwarf galaxies. The HSC wide survey will be the first with the statistical capability of measuring the lensing signal with high signal-to-noise at $\log(M_*)=8$. At this same mass scale, LSST will have the most overall constraining power, with a predicted signal-to-noise for the galaxy-galaxy lensing signal around dwarfs of $S/N=200$. WFIRST and LSST will have the greatest potential to push below the $\log(M_*)=7$ mass scale thanks to the depth of their imaging data. Studies of the dark matter halos of dwarf galaxies at $z=0.1$ with gravitational lensing are soon within reach. However, further work will be required to develop optimized strategies for extracting dwarf samples from these surveys, determining redshifts, and accurately measuring lensing on small radial scales. Dwarf lensing will be a new and powerful tool to constrain the halo masses and inner density slopes of dwarf galaxies and to distinguish between baryonic feedback and modified dark matter scenarios.
    Dwarf galaxy, Hyper Suprime-Cam, COSMOS survey, Completeness, Large Synoptic Survey Telescope, Galaxy, Lensing signal, Dark matter halo, Virial mass, Lensing survey, ...
  • We apply linked cluster expansion techniques to study the polarized high-field phase of a spin-half antiferromagnet on the Kagome lattice with Heisenberg and Dzyaloshinskii-Moriya interactions (DMI). We find that the Dirac points of the single-magnon spectrum without DMI are robust against arbitrary DMI when the magnetic field lies in the Kagome plane. Unlike the typical case where DMI gaps the spectrum, here we find that varying the DMI merely shifts the location of the Dirac points. In contrast, a magnetic field with a component out of the Kagome plane gaps the spectrum, leading to topological magnon bands. We map out a topological phase diagram as the couplings are varied by computing the band Chern numbers. A pair of phase transitions are observed and we find an enhanced thermal Hall conductivity near the phase boundary.
    Dzyaloshinskii-Moriya interaction, Magnon, Dirac point, Hamiltonian, Chern number, Antiferromagnet, Spin-orbit interaction, Kagome Antiferromagnet, Phase diagram, Kagome lattice, ...
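    Phase diagrams like this are mapped by computing band Chern numbers on a discretized Brillouin zone. Below is a sketch of the standard Fukui-Hatsugai lattice method, applied to the two-band Qi-Wu-Zhang model as a stand-in for the paper's Kagome magnon Hamiltonian; the algorithm is the same, only the Bloch Hamiltonian differs.

```python
import numpy as np

def chern(m, N=40):
    """Chern number of the lower band of H(k) = sin(kx) sx + sin(ky) sy
    + (m + cos kx + cos ky) sz, via the Fukui-Hatsugai lattice method."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], complex)
    ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), complex)          # lower-band eigenvector on a BZ grid
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = (np.sin(kx) * sx + np.sin(ky) * sy
                 + (m + np.cos(kx) + np.cos(ky)) * sz)
            _, v = np.linalg.eigh(h)
            u[i, j] = v[:, 0]
    # Link variables and lattice field strength; the total is always an integer.
    Ux = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, 0))
    Uy = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, 1))
    F = np.angle(Ux * np.roll(Uy, -1, 0) / (np.roll(Ux, -1, 1) * Uy))
    return int(round(F.sum() / (2 * np.pi)))

for m in (-3, -1, 1, 3):
    print(m, chern(m))   # nonzero (magnitude 1) only in the topological window
                         # |m| < 2; the overall sign is a matter of convention
```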
  • The cold dark matter paradigm has been extremely successful in explaining the large-scale structure of the Universe. However, it continues to face issues when confronted by observations on sub-Galactic scales. A major caveat, now being addressed, has been the incomplete treatment of baryon physics. We first summarize the small-scale issues surrounding cold dark matter and discuss the solutions explored by modern state-of-the-art numerical simulations including treatment of baryonic physics. We identify the too-big-to-fail problem in field galaxies as among the best targets to study modifications to dark matter, and discuss the particular connection with sterile neutrino warm dark matter. We also discuss how the recently detected anomalous 3.55 keV X-ray lines, when interpreted as sterile neutrino dark matter decay, provide a very good description of small-scale observations of the Local Group.
    Neutrino, Dark matter, Warm dark matter, Dark matter decay, Cold dark matter, Neutrino dark matter, Decaying dark matter, Galaxy, Sterile neutrino, Sterile neutrino DM, ...
  • We study fermion number non-conservation (or chirality breaking) in Abelian gauge theories at finite temperature. We consider the presence of a chemical potential $\mu$ for the fermionic charge, and monitor its evolution with real-time classical lattice simulations. This method accounts for short-scale fluctuations not included in the usual effective magneto-hydrodynamics (MHD) treatment. We observe a self-similar decay of the chemical potential, accompanied by an inverse cascade process in the gauge field that leads to a production of long-range helical magnetic fields. We also study the chiral charge dynamics in the presence of an external magnetic field $B$, and extract its decay rate $\Gamma_5 \equiv -{d\log \mu\over dt}$. We provide in this way a new determination of the gauge coupling and magnetic field dependence of the chiral rate, which exhibits a best fit scaling as $\Gamma_5 \propto e^{11/2}B^2$. We confirm numerically the fluctuation-dissipation relation between $\Gamma_5$ and $\Gamma_{\rm diff}$, the Chern-Simons diffusion rate, which was obtained in a previous study. Remarkably, even though we are outside the MHD range of validity, the dynamics observed are in qualitative agreement with MHD predictions. The magnitude of the chiral/diffusion rate is however a factor $\sim 10$ times larger than expected in MHD, signaling that we are in reality exploring a different regime accounting for short scale fluctuations. This discrepancy calls for a revision of the implications of fermion number and chirality non-conservation in finite temperature Abelian gauge theories, though no definite conclusion can be made at this point until hard-thermal-loops (HTL) are included in the lattice simulations.
    Magnetohydrodynamics, Gauge field, Inverse cascade, Abelian gauge theory, Lattice calculations, Chiral charge, Critical value, Chern-Simons number, Gauge theory at finite temperatures, Chirality, ...
  • In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. Moreover, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress ($>50$\%) the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduce stochastic depth into LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.
    Architecture, Ordinary differential equations, Partial differential equation, Deep Neural Networks, Discretization, Neural network, Brownian motion, Classification, Optimization, Image Processing, ...
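    The central observation is compact: a residual update $x_{n+1} = x_n + h f(x_n)$ is exactly the forward-Euler discretization of $dx/dt = f(x)$, and the LM-architecture swaps in a linear two-step update. A toy numeric, with a linear $f$ standing in for the trained residual branch:

```python
import numpy as np

f = lambda x: -x                      # toy "residual branch": dx/dt = -x
x, h, n = 1.0, 0.01, 100              # 100 "residual blocks", step size 0.01

for _ in range(n):                    # a ResNet forward pass...
    x = x + h * f(x)                  # ...is exactly forward Euler

# The LM-architecture replaces this one-step update with the two-step
#   x_{n+1} = (1 - k_n) x_n + k_n x_{n-1} + f(x_n),
# the analogue of a linear multi-step scheme with a trainable scalar k_n.
print(x, np.exp(-h * n))              # Euler iterate vs. exact flow exp(-t)
```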
  • As of today, ten circumbinary planets orbiting solar type main sequence stars have been discovered. Nearly all of them orbit around the central binary very closely to the region of instability where it is difficult to form them in situ. It is assumed that they formed further out and migrated to their observed position. We extend previous studies to a more realistic thermal disc structure and determine which parameters influence the final parking location of a planet around a binary star. We perform two-dimensional numerical simulations of viscous accretion discs around a central binary that include viscous heating and radiative cooling from the disc surfaces. We vary the binary eccentricity as well as disc viscosity and mass. Concerning the disc evolution we find that it can take well over 100,000 binary orbits until an equilibrium state is reached. As seen previously, we find that the central cavity opened by the binary becomes eccentric and precesses slowly in a prograde sense. Embedded planets migrate to the inner edge of the disc. In cases of lower disc viscosity they migrate further in, maintaining a circular orbit, while for high viscosity they are parked further out on an eccentric orbit. The final location of an embedded planet is linked to its ability to open a gap in the disc. Gap opening planets separate the inner from the outer disc, preventing eccentricity excitation in the latter and making it more circular. This allows embedded planets to migrate closer to the binary, in agreement with the observations. The necessary condition for gap opening and the final planet position depend on the planet mass and disc viscosity.
    Planet, Viscosity, Eccentricity, Binary star, Cooling, Star, Radiative cooling, Dissipation, Opacity, Circular orbit, ...
  • We present new measurements of the time delays of WFI2033-4723. The data sets used in this work include 14 years of data taken at the 1.2m Leonhard Euler Swiss telescope, 13 years of data from the SMARTS 1.3m telescope at Las Campanas Observatory and a single year of high-cadence and high-precision monitoring at the MPIA 2.2m telescope. The time delays measured from these different data sets, all taken in the R-band, are in good agreement with each other and with previous measurements from the literature. Combining all the time-delay estimates from our data sets results in $\Delta t_{\rm AB} = 36.2^{+0.7}_{-0.8}$ days (2.1% precision), $\Delta t_{\rm AC} = -23.3^{+1.2}_{-1.4}$ days (5.6%) and $\Delta t_{\rm BC} = -59.4^{+1.3}_{-1.3}$ days (2.2%). In addition, the close image pair A1-A2 of the lensed quasars can be resolved in the MPIA 2.2m data. We measure a time delay consistent with zero in this pair of images. We also explore the prior distributions of the microlensing time delay potentially affecting the cosmological time-delay measurements of WFI2033-4723. There is, however, no strong indication in our measurements that a microlensing time delay is either present or absent. This work is part of an H0LiCOW series focusing on measuring the Hubble constant from WFI2033-4723.
    Time delay, Light curve, Statistical estimator, Quasar, Generative model, Telescopes, Observatories, Gravitational lens galaxy, Hubble constant, Accretion disk, ...
  • We study the formation and evolution of a sample of Lyman Break Galaxies in the Epoch of Reionization by using high-resolution ($\sim 10 \,{\rm pc}$), cosmological zoom-in simulations from the SERRA suite. In SERRA, we follow the interstellar medium (ISM) thermo-chemical non-equilibrium evolution, and perform on-the-fly radiative transfer of the interstellar radiation field (ISRF). The simulation outputs are post-processed to compute the emission of far infrared lines ([CII], [NII], and [OIII]). At $z=8$, the most massive galaxy, `Freesia', has an age $t_\star \simeq 409\,{\rm Myr}$, stellar mass $M_{\star} \simeq 4.2\times 10^9 {\rm M}_{\odot}$, and a star formation rate ${\rm SFR} \simeq 11.5\,{\rm M}_{\odot}{\rm yr}^{-1}$, due to a recent burst. Freesia has two stellar components (A and B) separated by $\simeq 2.5\, {\rm kpc}$; 11 other galaxies are found within $56.9 \pm 21.6 \, {\rm kpc}$. The mean ISRF in the Habing band is $G = 7.9\, G_0$ and is spatially uniform; in contrast, the ionisation parameter is $U = 2^{+20}_{-2} \times 10^{-3}$, and has a patchy distribution peaked at the location of star-forming sites. The resulting ionising escape fraction from Freesia is $f_{\rm esc}\simeq 2\%$. While [CII] emission is extended (radius 1.54 kpc), [OIII] is concentrated in Freesia-A (0.85 kpc), where the ratio $\Sigma_{\rm [OIII]}/\Sigma_{\rm [CII]} \simeq 10$. Like many high-$z$ galaxies, Freesia lies below the local [CII]-SFR relation. We show that this is the general consequence of a starburst phase (pushing the galaxy above the Kennicutt-Schmidt relation) which disrupts/photodissociates the emitting molecular clouds around star-forming sites. Metallicity has a sub-dominant impact on the amplitude of [CII]-SFR deviations.
    Galaxy, Star formation rate, Milky Way, Interstellar medium, Metallicity, Star formation, Luminosity, Star, Abundance, Radiative transfer, ...