Recently bookmarked papers

with concepts:
  • Recent advances in nanotechnology have provided new materials which have the potential to surpass copper and aluminum alloys in electrical conductivity, weight and ampacity [2-6]. Among these, carbon nanotubes (CNTs) stand out due to their remarkable thermal and electrical conductivity, ampacity ($10^3$ times that of copper), low density, abundance of precursor materials and supreme mechanical properties. However, making these materials into continuous fibers or macrostructures has remained the main obstacle. A promising approach to tackle this issue is employing CNTs as nanofillers in copper matrices. Subramaniam et al. [5] combined several simple fabrication steps to produce high-density CNT (45 vol%)-Cu composite films with a specific conductivity 26% greater than that of copper. These results show that by improving the interfacial interactions between the CNTs and the copper matrix, strong and highly conductive nanocomposites can be made. In this research, a continuous CNT-Cu nanocomposite is produced using the electrospinning method. To improve the interfacial interaction, the surface of the CNTs was first coated with a thin layer of copper using electroless deposition (the process is explained in detail in [7]).
    Carbon nanotubes, Copper, Electrospinning, Nanocomposite, Nanotechnology, Abundance, Precursor, Mechanical properties, Electrical conductivity, Materials...
  • By revealing an underlying relation between the Dzyaloshinskii-Moriya interaction (DMI) and the scalar spin chirality, we develop the theory of magnon thermal Hall effects in antiferromagnetic systems. The dynamic fluctuation of the scalar chirality is shown to directly respond to the nontrivial topology of magnon bands. In materials such as the jarosite compound KFe$_3$(OH)$_6$(SO$_4$)$_2$ and vesignieite BaCu$_3$V$_2$O$_8$(OH)$_2$ in the presence of in-plane DMI, the time-reversal symmetry can be broken by the fluctuations of scalar chirality even in the case of a coplanar $\vec{q}=0$ magnetic configuration. The spin-wave Hamiltonian is influenced by a fictitious magnetic flux determined by the in-plane DMI. Topological magnon bands and corresponding nonzero Chern numbers are presented without the need of a canted non-coplanar magnetic ordering. The canting angle dependence of thermal Hall conductivity is discussed in detail as well. These results offer a clear principle of chirality-driven topological effects in antiferromagnetically coupled systems.
    Chirality, Magnon, Hall effect, Time-reversal symmetry, Antiferromagnet, Dzyaloshinskii-Moriya interaction, Antiferromagnetic, Chern number, Non-coplanar magnetic order, Hamiltonian...
  • The following analysis summarizes the implementation, in the Matlab/Simulink environment, of a prototype digital circuit model for measuring the electric energy consumption of a customer's installation on an electrification network; the model can be applied as part of any consumption smart meter.
    Energy, Measurement, Frequency, Networks...
  • Convolutional Neural Networks are good at image classification. However, they are found to be vulnerable to image quality degradation. Even a small amount of distortion such as noise or blur can severely hamper the performance of these CNN architectures. Most of the work in the literature strives to mitigate this problem simply by fine-tuning a pre-trained CNN on mutually exclusive sets or a union set of distorted training data. This iterative fine-tuning process with all known types of distortion is exhaustive and the network struggles to handle unseen distortions. In this work, we propose distortion-robust DCT-Net, a Discrete Cosine Transform based module integrated into a deep network which is built on top of VGG16. Unlike other works in the literature, DCT-Net is "blind" to the distortion type and level in an image both during training and testing. As a part of the training process, the proposed DCT module discards input information which mostly represents the contribution of high frequencies. The DCT-Net is trained "blindly" only once and applied in generic situations without further retraining. We also extend the idea of traditional dropout and present a training-adaptive version of the same. We evaluate our proposed method against Gaussian blur, motion blur, salt and pepper noise, Gaussian noise and speckle noise added to CIFAR-10/100 and ImageNet test sets. Experimental results demonstrate that once trained, DCT-Net not only generalizes well to a variety of unseen image distortions but also outperforms other methods in the literature.
    Convolutional neural network, Classification, Architecture, Gaussian noise, Overfitting, Image Processing, Neural network, Object detection, Backpropagation, Attention...
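    A minimal sketch (not from the paper) of the general idea behind the DCT module described above: transform an image with a 2D DCT and discard the high-frequency coefficients. The full-image transform and the keep_fraction cutoff are illustrative assumptions; the actual DCT-Net module operates inside a VGG16-based network.

```python
# Sketch of "discard high-frequency content in the DCT domain".
# Hypothetical cutoff; the paper's DCT module may differ in detail.
import numpy as np
from scipy.fft import dctn, idctn

def suppress_high_frequencies(image: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
    """Zero out DCT coefficients whose (row + col) index exceeds a cutoff."""
    coeffs = dctn(image, norm="ortho")          # 2D DCT-II of the image
    h, w = coeffs.shape
    rows, cols = np.ogrid[:h, :w]
    cutoff = keep_fraction * (h + w)            # crude low-frequency budget
    coeffs[rows + cols > cutoff] = 0.0          # drop high-frequency coefficients
    return idctn(coeffs, norm="ortho")          # back to pixel space

# Example: a noisy 32x32 "image" gets low-pass filtered in the DCT domain.
noisy = np.random.rand(32, 32)
filtered = suppress_high_frequencies(noisy, keep_fraction=0.25)
```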
  • The overarching problem in artificial intelligence (AI) is that we do not understand the intelligence process well enough to enable the development of adequate computational models. Much work has been done in AI over the years at lower levels, but a big part of what has been missing involves the high level, abstract, general nature of intelligence. We address this gap by developing a model for general intelligence. To accomplish this, we focus on three basic aspects of intelligence. First, we must realize the general order and nature of intelligence at a high level. Second, we must come to know what these realizations mean with respect to the overall intelligence process. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence. The underlying order involves patterns of signals that become organized, stored and activated in space and time. These patterns can be described using a simple, general hierarchy, with physical signals at the lowest level, information in the middle, and abstract signal representations at the top. This high level perspective provides a big picture that literally helps us see the intelligence process, thereby enabling fundamental realizations, a better understanding and clear descriptions of the intelligence process. The resulting model can be used to support all kinds of information processing across multiple levels of abstraction. As computer technology improves, and as cooperation increases between humans and computers, people will become more efficient and more productive in performing their information processing tasks.
    Computational modelling, Artificial intelligence, Picture
  • Tracking users' activities on the World Wide Web (WWW) allows researchers to analyze each user's internet behavior as time passes and the amount of time spent on a particular domain. This analysis can be used in research design, as researchers may access their participants' behaviors while they browse the web. Web search behavior has been a subject of interest because of its real-world applications in marketing, digital advertisement, and identifying potential threats online. In this paper, we present an image-processing based method to extract the domains visited by a participant over multiple browsers during a lab session. This method could provide another way to collect users' activities during an online session, given that the session recorder collected the data. The method can also be used to collect the textual content of web pages that an individual visits for later analysis.
    Optical Character Recognition, Image Processing, Python, World-Wide Web, Market, Regular expression, Classification, Software, Operating system, Architecture...
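    A rough sketch of one way such an OCR-plus-regex extraction could look, assuming pytesseract and Pillow are available; the paper's actual pipeline and domain pattern are not specified here, so the regex and file name below are illustrative only.

```python
# Extract domain-like strings from a screen-recording frame via OCR (illustrative).
import re
from PIL import Image
import pytesseract

DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

def domains_in_frame(path: str) -> set[str]:
    """OCR a browser screenshot and return the domain-like strings it contains."""
    text = pytesseract.image_to_string(Image.open(path))
    return {m.group(0).lower() for m in DOMAIN_RE.finditer(text)}

# Example usage on a hypothetical frame exported from a session recording:
# print(domains_in_frame("session_frame_0001.png"))
```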
  • Every conformal field theory (CFT) above two dimensions contains an infinite set of Regge trajectories of local operators which, at large spin, asymptote to "double-twist" composites with vanishing anomalous dimension. In two dimensions, due to the existence of local conformal symmetry, this and other central results of the conformal bootstrap do not apply. We incorporate exact stress tensor dynamics into the CFT$_2$ analytic bootstrap, and extract several implications for AdS$_3$ quantum gravity. Our main tool is the Virasoro fusion kernel, which we newly analyze and interpret in the bootstrap context. The contribution to double-twist data from the Virasoro vacuum module defines a "Virasoro Mean Field Theory" (VMFT); its spectrum includes a finite number of discrete Regge trajectories, whose dimensions obey a simple formula exact in the central charge $c$ and external operator dimensions. We then show that VMFT provides a baseline for large spin universality in two dimensions: in every unitary compact CFT$_2$ with $c > 1$ and a twist gap above the vacuum, the double-twist data approaches that of VMFT at large spin $\ell$. Corrections to the large spin spectrum from individual non-vacuum primaries are exponentially small in $\sqrt{\ell}$ for fixed $c$. We analyze our results in various large $c$ limits. Further applications include a derivation of the late-time behavior of Virasoro blocks at generic $c$; a refined understanding and new derivation of heavy-light blocks; and the determination of the cross-channel limit of generic Virasoro blocks. We translate our results into statements about quantum gravity in AdS$_3$.
    Operator product expansion, Conformal field theory, Regge trajectory, Virasoro blocks, Central charge, Two-point correlation function, Anomalous dimension, Scaling dimension, Quantum gravity, Saddle point...
  • We report our proposal for the establishment of a biocontainment and astrobiology laboratory in a strategic area of Pieve a Nievole (PT), 28 m above sea level, to address the lack of biological and astrobiological research centers and all the social, economic and cultural consequences that this project implies. The structure will be built under the Horizon 2020 work programme 2018-2020 - European Research Infrastructures (including e-Infrastructures), and will enable the development of major research projects.
    Perturbation theory, Xenobiology, Research facility, Horizon, Classification, Extinction, Event, Field, Networks...
  • We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.
    Sentiment analysis, Text Classification, Word vectors, Architecture, Classification, Hidden state, Computational linguistics, Topic model, Deep learning, Hyperparameter...
  • Low-Luminosity Active Galactic Nuclei (LLAGN) are characterized by low radiative efficiency, much less than one percent of their Eddington limit. Nevertheless, their main energy release may be mechanical, in contrast to powerful AGN classes like Seyferts and quasars. This work reports on the jet-driven mechanical energy and the corresponding mass outflow deposited by the jet in the central 170~parsecs of the nearby LLAGN ESO428-G14. The jet kinetic output is traced through the coronal line [Si vi] $\lambda$19641 \AA. It is shown that its radial extension, up to hundreds of parsecs, requires a combination of photoionization by the central source and shock excitation as its origin. From the energetics of the ionized gas it is found that the mass outflow rate of the coronal gas is in the range 3$-$8 M$_\odot$ yr$^{-1}$, comparable to those estimated from H I gas at kiloparsec scales in powerful radio galaxies.
    Active Galactic Nuclei, Parsec, Luminosity, European Southern Observatory, Photoionization, Radio jets, Coronal lines, Coronal gas, Ionization, Near-infrared...
  • We introduce an effective action for non-dissipative magnetohydrodynamics. A crucial guiding principle is the generalized global symmetry of electrodynamics, which naturally leads to introducing a "dual photon" as the degree of freedom responsible for the electromagnetic component of the fluid. The formalism includes additional degrees of freedom and symmetries which characterize the hydrodynamic regime. By suitably enhancing one of the symmetries, the theory becomes force-free electrodynamics. The symmetries furthermore allow us to systematize local and non-local conserved helicities. We also discuss higher-derivative corrections.
    Magnetohydrodynamics, Force-Free Electrodynamics, Helicity, Degree of freedom, Two-form, Global symmetry, Constitutive relation, Fluid dynamics, Axion electrodynamics, Effective field theory...
  • The dark axion portal is a recently introduced portal between the standard model and the dark sector. It connects both the dark photon and the axion (or axion-like particle) to the photon simultaneously through an anomaly triangle. While the vector portal and the axion portal have been popular venues to search for the dark photon and axion, respectively, the new portal provides new detection channels if they coexist. The dark axion portal is not a result of the simple combination of the two portals, and its value is not determined by the other portal values; it should be tested independently. In this paper, we discuss implications of the new portal for the leptonic g-2, B-factories, fixed target neutrino experiments, and beam dumps. We provide the model-independent constraints on the axion-photon-dark photon coupling and discuss the sensitivities of the recently activated Belle-II experiment, which will play an important role in testing the new portal.
    Axion, Hidden photon, Standard Model, BELLE II, Muon, Beam dump, Dark sector, B-factory, Axion-like particle, Muon anomalous magnetic moment...
  • We report the discovery of a Milky-Way satellite in the constellation of Antlia. The Antlia 2 dwarf galaxy is located behind the Galactic disc at a latitude of $b\sim 11^{\circ}$ and spans 1.26 degrees, which corresponds to $\sim2.9$ kpc at its distance of 130 kpc. While similar in extent to the Large Magellanic Cloud, Antlia~2 is orders of magnitude fainter with $M_V=-8.5$ mag, making it by far the lowest surface brightness system known (at $32.3$ mag/arcsec$^2$), $\sim100$ times more diffuse than the so-called ultra diffuse galaxies. The satellite was identified using a combination of astrometry, photometry and variability data from Gaia Data Release 2, and its nature confirmed with deep archival DECam imaging, which revealed a conspicuous BHB signal in agreement with the distance obtained from Gaia RR Lyrae. We have also obtained follow-up spectroscopy using AAOmega on the AAT to measure the dwarf's systemic velocity, $290.9\pm0.5$km/s, its velocity dispersion, $5.7\pm1.1$ km/s, and mean metallicity, [Fe/H]$=-1.4$. From these properties we conclude that Antlia~2 inhabits one of the least dense Dark Matter (DM) halos probed to date. Dynamical modelling and tidal-disruption simulations suggest that a combination of a cored DM profile and strong tidal stripping may explain the observed properties of this satellite. The origin of this core may be consistent with aggressive feedback, or may even require alternatives to cold dark matter (such as ultra-light bosons).
    Star, Proper motion, Velocity dispersion, RR Lyrae star, Dark matter, Half-light radius, Milky Way, Navarro-Frenk-White profile, Hertzsprung-Russell diagram, Radial velocity...
  • We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting 'preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common 'fully-thermal' (energy-dump) or 'fully-kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution >100 solar masses, they diverge by orders-of-magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution. However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.
    Supernova, Cooling, FIRE simulations, Ejecta, Star, Galaxy, Galaxy Formation, Mass of the Milky Way, Fluid dynamics, Milky Way...
  • In recent years, observations of the Sunyaev-Zeldovich (SZ) effect have had significant cosmological implications and have begun to serve as a powerful and independent probe of the warm and hot gas that pervades the Universe. As a few pioneering studies have already shown, SZ observations both complement X-ray observations -- the traditional tool for studying the intra-cluster medium -- and bring unique capabilities for probing astrophysical processes at high redshifts and out to the low-density regions in the outskirts of galaxy clusters. Advances in SZ observations have largely been driven by developments in centimetre-, millimetre-, and submillimetre-wave instrumentation on ground-based facilities, with notable exceptions including results from the Planck satellite. Here we review the utility of the thermal, kinematic, relativistic, non-thermal, and polarised SZ effects for studies of galaxy clusters and other large scale structures, incorporating the many advances over the past two decades that have impacted SZ theory, simulations, and observations. We also discuss observational results, techniques, and challenges, and aim to give an overview and perspective on emerging opportunities, with the goal of highlighting some of the exciting new directions in this field.
    Intra-cluster medium, Sunyaev-Zel'dovich effect, Telescopes, Multidimensional Array, Pressure profile, Active Galactic Nuclei, Thermal Sunyaev-Zel'dovich effect, Cluster of galaxies, Anisotropy, Line of sight...
  • Measurements of the Hubble parameter from the distance ladder are in tension with indirect measurements based on the cosmic microwave background (CMB) data and the inverse distance ladder measurements at 3-4 $\sigma$ level. We consider phenomenological modification to the timing and width of the recombination process and show that they can significantly affect this tension. This possibility is appealing, because such modification affects both the distance to the last scattering surface and the calibration of the baryon acoustic oscillations (BAO) ruler. Moreover, because only a very small fraction of the most energetic photons keep the early universe in the plasma state, it is possible that such modification could occur without affecting the energy density budget of the universe or being incompatible with the very tight limits on the departure from the black-body spectrum of CMB. In particular, we find that under this simplified model, with a conservative subset of Planck data alone, $H_0=73.44_{-6.77}^{+5.50}~{\rm km\ s}^{-1}\ {\rm Mpc}^{-1}$ and in combination with BAO data $H_0=68.86_{-1.35}^{+1.31}~{\rm km\ s}^{-1}\ {\rm Mpc}^{-1}$, decreasing the tension to $\sim 2\sigma$ level. However, when combined with Planck lensing reconstruction and high-$\ell$ polarization data, the tension climbs back to $\sim 2.7\sigma$, despite the uncertainty on non-ladder $H_0$ measurement more than doubling.
    Recombination, Baryon acoustic oscillations, Cosmic microwave background, Cosmic distance ladder, Planck mission, Hubble parameter, Calibration, The early Universe, Big bang nucleosynthesis, Sound horizon...
  • Recent hardware developments have made unprecedented amounts of data parallelism available for accelerating neural network training. Among the simplest ways to harness next-generation accelerators is to increase the batch size in standard mini-batch neural network training algorithms. In this work, we aim to experimentally characterize the effects of increasing the batch size on training time, as measured in the number of steps necessary to reach a goal out-of-sample error. Eventually, increasing the batch size will no longer reduce the number of training steps required, but the exact relationship between the batch size and how many training steps are necessary is of critical importance to practitioners, researchers, and hardware designers alike. We study how this relationship varies with the training algorithm, model, and dataset and find extremely large variation between workloads. Along the way, we reconcile disagreements in the literature on whether batch size affects model quality. Finally, we discuss the implications of our results for efforts to train neural networks much faster in the future.
    Neural network, Scheduling, Training set, Optimization, Regularization, Hidden layer, Convolutional neural network, Classification, Architecture, Statistics...
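    A toy illustration (under stated assumptions, not the paper's protocol) of the measurement described above: for each batch size, count the number of mini-batch steps needed to reach a goal out-of-sample error. A small synthetic logistic-regression problem stands in for the paper's much larger workloads, and the fixed learning rate is an arbitrary choice.

```python
# Count SGD steps to reach a goal validation error, as a function of batch size.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)
X = rng.normal(size=(20000, d))
y = (X @ w_true + 0.5 * rng.normal(size=20000) > 0).astype(float)
X_tr, y_tr, X_val, y_val = X[:16000], y[:16000], X[16000:], y[16000:]

def val_error(w):
    return np.mean(((X_val @ w) > 0) != y_val)

def steps_to_goal(batch_size, goal=0.1, lr=0.5, max_steps=20000):
    w = np.zeros(d)
    for step in range(1, max_steps + 1):
        idx = rng.integers(0, len(X_tr), size=batch_size)
        p = 1.0 / (1.0 + np.exp(-X_tr[idx] @ w))
        w -= lr * X_tr[idx].T @ (p - y_tr[idx]) / batch_size   # mini-batch gradient step
        if step % 50 == 0 and val_error(w) <= goal:
            return step
    return max_steps

for b in (8, 64, 512, 4096):
    print(b, steps_to_goal(b))   # larger batches need fewer steps, up to a point
```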
  • We review the context, the motivations and the expected performances of a comprehensive and ambitious fixed-target program using the multi-TeV proton and ion LHC beams. We also provide a detailed account of the different possible technical implementations ranging from an internal wire target to a full dedicated beam line extracted with a bent crystal. The possibilities offered by the use of the ALICE and LHCb detectors in the fixed-target mode are also reviewed.
    Large Hadron Collider, Parton distribution function, LHCb experiment, Drell-Yan process, Luminosity, ALICE experiment, Rapidity, Charmed meson, Collider, QCD jet...
  • We search Dark Energy Survey (DES) Year 3 imaging data for galaxy-galaxy strong gravitational lenses using convolutional neural networks. We generate 250,000 simulated lenses at redshifts > 0.8 from which we create a data set for training the neural networks with realistic seeing, sky and shot noise. Using the simulations as a guide, we build a catalogue of 1.1 million DES sources with (1.8 < g - i < 5), (0.6 < g - r < 3), r_mag > 19, g_mag > 20 and i_mag > 18.2. We train two ensembles of neural networks on training sets consisting of simulated lenses, simulated non-lenses, and real sources. We use the neural networks to score images of each of the sources in our catalogue with a value from 0 to 1, and select those with scores greater than a chosen threshold for visual inspection, resulting in a candidate set of 7,301 galaxies. During visual inspection we rate 84 as "probably" or "definitely" lenses. Four of these are previously known lenses or lens candidates. We inspect a further 9,428 candidates with a different score threshold, and identify four new candidates. We present 84 new strong lens candidates, selected after a few hours of visual inspection by astronomers. Based on simulations we estimate our sample to contain most discoverable lenses in this imaging and at this redshift range.
    Dark Energy Survey, Convolutional neural network, Training set, Strong gravitational lensing, Galaxy, Neural network, Completeness, Artificial neural network, Photometric redshift, Machine learning...
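    A minimal sketch of the colour/magnitude pre-selection quoted in the entry above, applied to a hypothetical source table; the column names, the random stand-in catalogue, and the pandas representation are assumptions, not the DES pipeline.

```python
# Apply the quoted colour and magnitude cuts to a mock catalogue.
import numpy as np
import pandas as pd

cat = pd.DataFrame({                       # stand-in for the DES source table
    "mag_g": np.random.uniform(18, 26, 1000),
    "mag_r": np.random.uniform(18, 26, 1000),
    "mag_i": np.random.uniform(18, 26, 1000),
})

g, r, i = cat["mag_g"], cat["mag_r"], cat["mag_i"]
sel = (
    ((g - i) > 1.8) & ((g - i) < 5.0)      # 1.8 < g - i < 5
    & ((g - r) > 0.6) & ((g - r) < 3.0)    # 0.6 < g - r < 3
    & (r > 19) & (g > 20) & (i > 18.2)     # magnitude limits from the abstract
)
candidates = cat[sel]                       # sources passed on to the networks
```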
  • We present a model for the interaction of the GD-1 stellar stream with a massive perturber that naturally explains many of the observed stream features, including a gap and an off-stream spur of stars. The model involves an impulse by a fast encounter, after which the stream grows a loop of stars at different orbital energies. At specific viewing angles, this loop appears offset from the stream track. The configuration-space observations are sensitive to the mass, age, impact parameter, and total velocity of the encounter, and future velocity observations will constrain the full velocity vector of the perturber. A quantitative comparison of the spur and gap features prefers models where the perturber is in the mass range of $10^6\,\rm M_\odot$ to $10^8\,\rm M_\odot$. Orbit integrations back in time show that the stream encounter could not have been caused by any known globular cluster or dwarf galaxy, and mass, size and impact-parameter arguments show that it could not have been caused by a molecular cloud in the Milky Way disk. The most plausible explanation for the gap-and-spur structure is an encounter with a dark matter substructure, like those predicted to populate galactic halos in $\Lambda$CDM cosmology. However, the expected densities of $\Lambda$CDM subhalos in this mass range and in this part of the Milky Way are $2-3\,\sigma$ lower than the inferred high density of the GD-1 perturber. This observation opens up the possibility that detailed observations of streams could measure the mass spectrum of dark-matter substructures and even identify individual substructures and their orbits in the Galactic halo.
    GD-1 stellar stream, Star, Dark matter subhalo, Globular cluster, Milky Way, Stellar stream, Of stars, Kinematics, Cold dark matter, Proper motion...
  • We use a sample of 17 strong gravitational lens systems from the BELLS GALLERY survey to quantify the amount of low-mass dark matter haloes within the lensing galaxies and along their lines of sight, and to constrain the properties of dark matter. Based on a detection criterion of 10$\sigma$, we report no significant detection in any of the lenses. Using the sensitivity function at the 10-$\sigma$ level, we have calculated the predicted number of detectable cold dark matter (CDM) line-of-sight haloes to be $\mu_{l} = 1.17\pm1.08$, in agreement with our null detection. Assuming a detection sensitivity that improved to the level implied by a 5-$\sigma$ threshold, the expected number of detectable line-of-sight haloes rises to $\mu_l = 9.0\pm3.0$. Whilst the current data find zero detections at this sensitivity level (which has a probability of $P^{\rm 5\sigma}_{\rm CDM}(n_{\rm det}=0)=0.0001$ and would be in strong tension with the CDM framework), we find that such a low detection threshold leads to many spurious detections and non-detections and therefore the current lack of detections is unreliable and requires data with improved sensitivity. Combining this sample with a subsample of 11 SLACS lenses, we constrain the half-mode mass to be $\log$(M$_{\rm hm}) < 12.26$ at the 2-$\sigma$ level. The latter is consistent with resonantly produced sterile neutrino masses m$_{\rm s} < 0.8$ keV at any value of the lepton asymmetry at the 2-$\sigma$ level.
    Line of sight, Cold dark matter, Dark matter subhalo, Surface brightness, Dark matter, Navarro-Frenk-White profile, Gravitational lens galaxy, Virial mass, Regularization, BELLS survey...
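    A quick consistency check of the numbers quoted above, assuming Poisson statistics for the number of detectable line-of-sight haloes: $P(n_{\rm det}=0\,|\,\mu_l)=e^{-\mu_l}$, so $e^{-9.0}\approx1.2\times10^{-4}$, matching the quoted $5\sigma$ probability of 0.0001, while $e^{-1.17}\approx0.31$ shows the null result at the 10$\sigma$ threshold is unremarkable.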
  • We review and revise the phenomenology of the GeV-scale heavy neutral leptons (HNLs). We extend the previous analyses by including more channels of HNL production and decay and provide a more refined treatment, including QCD corrections for HNLs with masses of $\mathcal{O}(1)$ GeV. We summarize the relevance of individual production and decay channels for different masses, resolving a few discrepancies in the literature. Our final results are directly suitable for sensitivity studies of particle physics experiments (ranging from proton beam-dump to the LHC) aiming at searches for heavy neutral leptons.
    Sterile neutrino, Active neutrino, Sterile neutrino production, Decay channels, Mass eigen state, Particle physics experiments, QCD corrections, SHiP experiment, Mixing angle, DUNE experiment...
  • In this set of four lectures I will discuss some aspects of the Standard Model (SM) as a quantum field theory and related phenomenological observations which have played a crucial role in establishing the $SU(2)_{L} \times U(1)_{Y}$ gauge theory as the correct description of Electro-Weak (EW) interactions. I will first describe in brief the idea of EW unification as well as basic aspects of the Higgs mechanism of spontaneous symmetry breaking. After this I will discuss anomaly cancellation, custodial symmetry and implications of the high energy behavior of scattering amplitudes for the particle spectrum of the EW theory. This will be followed up by a discussion of the 'indirect' constraints on the SM particle masses such as $M_{c}, M_{t}$ and $M_{h}$ from various precision EW measurements. I will end by discussing the theoretical limits on $M_{h}$ and implications of the observed Higgs mass for the SM and beyond.
    Standard Model, Electroweak, Weak neutral current interaction, Precision, Higgs boson, Gauge theory, Spontaneous symmetry breaking, Charged current, Large Electron-Positron Collider, Higgs boson mass...
  • We present a method to measure the small-scale matter power spectrum using high-resolution measurements of the gravitational lensing of the Cosmic Microwave Background (CMB). To determine whether small-scale structure today is suppressed on scales below 10 kiloparsecs (corresponding to M < 10^9 M_sun), one needs to probe CMB-lensing modes out to L ~ 35,000, requiring a CMB experiment with about 20 arcsecond resolution or better. We show that a CMB survey covering 4,000 square degrees of sky, with an instrumental sensitivity of 0.5 uK-arcmin at 18 arcsecond resolution, could distinguish between cold dark matter and an alternative, such as 1 keV warm dark matter or 10^(-22) eV fuzzy dark matter with about 4-sigma significance. A survey of the same resolution with 0.1 uK-arcmin noise could distinguish between cold dark matter and these alternatives at better than 20-sigma significance; such high-significance measurements may also allow one to distinguish between a suppression of power due to either baryonic effects or the particle nature of dark matter, since each impacts the shape of the lensing power spectrum differently. CMB temperature maps yield higher signal-to-noise than polarization maps in this small-scale regime; thus, systematic effects, such as from extragalactic astrophysical foregrounds, need to be carefully considered. However, these systematic concerns can likely be mitigated with known techniques. Next-generation CMB lensing may thus provide a robust and powerful method of measuring the small-scale matter power spectrum.
    Cosmic microwave background, CMB lensing, Cold dark matter, Signal to noise ratio, Statistical estimator, Dark matter, Fuzzy dark matter, Small-Scale Power Spectrum, Covariance matrix, Cosmic infrared background...
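    A back-of-the-envelope translation of the quoted multipole into an angular scale (not from the paper): $\theta\sim\pi/L=\pi/35{,}000~{\rm rad}\approx9\times10^{-5}~{\rm rad}\approx18.5''$, which is why probing $L\sim35{,}000$ calls for roughly 20-arcsecond or better resolution, consistent with the 18-arcsecond survey configuration discussed above.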
  • With the development of Connected Vehicle (CV) technology, temporal variation of roadway traffic can be captured by sharing Basic Safety Messages (BSMs) from each vehicle using communication between vehicles as well as with transportation roadside infrastructure (e.g., traffic signals) and traffic management centers. However, the penetration of connected vehicles in the near future will be limited. BSMs from a limited number of CVs could provide an inaccurate estimation of the current speed or space headway. This inaccuracy in the estimated current average speed and average space headway data is termed noise. This noise in the traffic data significantly reduces the prediction accuracy of a machine learning model, such as a long short-term memory (LSTM) model, in predicting traffic conditions. To improve real-time prediction accuracy with low penetration of CVs, we developed a traffic data prediction model that combines the LSTM with a noise reduction model (the standard Kalman filter or the Kalman-filter-based Rauch-Tung-Striebel (RTS) smoother). The average speed and space headway used in this study were generated from the Enhanced Next Generation Simulation (NGSIM) dataset, which contains vehicle trajectory data for every one-tenth of a second. Compared to a baseline LSTM model without any noise reduction, for 5 percent penetration of CVs, the analyses revealed that the combined LSTM-RTS model reduced the mean absolute percentage error (MAPE) from 19 percent to 5 percent for speed prediction and from 27 percent to 9 percent for space headway prediction. The overall reduction in MAPE ranged from 1 percent to 14 percent for speed and 2 percent to 18 percent for space headway prediction compared to the baseline model.
    Long short term memory, Kalman filter, Neural network, Connected Vehicle, Machine learning, Simulations, Communication, Trajectory...
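    A compact sketch of the noise-reduction stage described above, assuming a simple random-walk state model: a 1D Kalman filter followed by an RTS backward pass, applied to a noisy average-speed series before it is fed to the LSTM. The state model and noise variances are illustrative, not the paper's calibrated values.

```python
# 1D Kalman filter + Rauch-Tung-Striebel smoother for a noisy speed series.
import numpy as np

def kalman_rts_smooth(z, q=0.1, r=4.0):
    """Random-walk Kalman filter + RTS smoother for a 1D measurement series z."""
    n = len(z)
    x_f = np.zeros(n); p_f = np.zeros(n)           # filtered mean / variance
    x, p = z[0], r
    for k in range(n):
        p = p + q                                   # predict (random-walk model)
        gain = p / (p + r)                          # Kalman gain
        x = x + gain * (z[k] - x)                   # update with measurement
        p = (1 - gain) * p
        x_f[k], p_f[k] = x, p
    x_s = x_f.copy()                                # RTS backward smoothing pass
    for k in range(n - 2, -1, -1):
        c = p_f[k] / (p_f[k] + q)
        x_s[k] = x_f[k] + c * (x_s[k + 1] - x_f[k])
    return x_s

true_speed = 20 + 5 * np.sin(np.linspace(0, 6, 600))    # smooth "ground truth"
noisy = true_speed + np.random.normal(0, 2.0, 600)       # low-CV-penetration noise
smoothed = kalman_rts_smooth(noisy)                      # input series for the LSTM
```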
  • We study production of self-interacting dark matter (DM) during an early matter-dominated phase. As a benchmark scenario, we consider a model where the DM consists of singlet scalar particles coupled to the visible Standard Model (SM) sector via the Higgs portal. We consider scenarios where the initial DM abundance is set by either the usual thermal freeze-out or an alternative freeze-in mechanism, where DM was never in thermal equilibrium with the SM sector. For the first time, we take the effect of self-interactions within the hidden sector into account in determining the DM abundance, reminiscent of the Strongly Interacting Massive Particle (SIMP) scenario. In all cases, the number density of DM may change considerably compared to the standard radiation-dominated case, having important observational and experimental ramifications.
    Dark matter, Standard Model, Freeze-out, Dark matter abundance, Freeze-in, Self-interacting dark matter, Hidden sector, Cosmological parameters, Matter-dominated epoch, Relic abundance...
  • Visible signals from the decays of light long-lived hidden sector particles have been extensively searched for at beam dump, fixed-target, and collider experiments. If such hidden sectors couple to the Standard Model through mediators heavier than $\sim 10$ GeV, their production at low-energy accelerators is kinematically suppressed, leaving open significant pockets of viable parameter space. We investigate this scenario in models of inelastic dark matter, which give rise to visible signals at various existing and proposed LHC experiments, such as ATLAS, CMS, LHCb, CODEX-b, FASER, and MATHUSLA. These experiments can leverage the large center of mass energy of the LHC to produce GeV-scale dark matter from the decays of dark photons in the cosmologically motivated mass range of $\sim 1-100$ GeV. We also provide a detailed calculation of the radiative dark matter-nucleon/electron elastic scattering cross section, which is relevant for estimating rates at direct detection experiments.
    Dark matter, Large Hadron Collider, Hidden photon, Standard Model, ForwArd Search ExpeRiment, MATHUSLA experiment, LHCb experiment, CMS experiment, ATLAS Experiment at CERN, Muon...
  • We consider tritium beta decay with additional emission of light pseudoscalar or vector bosons coupling to electrons or neutrinos. The electron energy spectrum for all cases is evaluated and shown to be well estimated by approximated analytical expressions. We give the statistical sensitivity of KATRIN to the mass and coupling of the new bosons, both in the standard setup of the experiment as well as for future modifications in which the full energy spectrum of tritium decay is accessible.
    Neutrino, Light boson, Pseudoscalar, Neutrino mass, Vector boson, Decay rate, Standard Model, Tritium beta-decay, Supernova, Cosmic microwave background...
  • Long-lived light particles (LLLPs) appear in many extensions of the standard model. LLLPs are usually motivated by the observed small neutrino masses, by dark matter or both. Typical examples for fermionic LLLPs (a.k.a. heavy neutral fermions, HNFs) are sterile neutrinos or the lightest neutralino in R-parity violating supersymmetry. The high luminosity LHC is expected to deliver up to 3/ab of data. Searches for LLLPs in dedicated experiments at the LHC could then probe the parameter space of LLLP models with unprecedented sensitivity. Here, we compare the prospects of several recent experimental proposals, FASER, CODEX-b and MATHUSLA, to search for HNFs and discuss their relative merits.
    ForwArd Search ExpeRiment, Neutralino, Long-lived light particle, MATHUSLA experiment, Sterile neutrino, R-parity violation, Large Hadron Collider, D meson, R-parity, B meson...
  • Recently Gligorov $\textit{et al.}$ [arXiv:1810.03636] proposed to build a cylindrical detector named '$\texttt{AL3X}$' close to the $\texttt{ALICE}$ experiment at interaction point (IP) 2 of the LHC, aiming for discovery of long-lived particles (LLPs) during Run 5 of the HL-LHC. We investigate the potential sensitivity reach of this detector in the parameter space of different new-physics models with long-lived fermions namely heavy neutral leptons (HNLs) and light supersymmetric neutralinos, which have both not previously been studied in this context. Our results show that the $\texttt{AL3X}$ reach can be complementary or superior to that of other proposed detectors such as $\texttt{CODEX-b}$, $\texttt{FASER}$, $\texttt{MATHUSLA}$ and $\texttt{SHiP}$.
    Neutralino, Sterile neutrino, Long Lived Particle, Branching ratio, Large Hadron Collider, Light neutralino, R-parity violation, AL3X experiment, Standard Model, Supersymmetry...
  • A persistence of several anomalies in muon physics, such as the muon anomalous magnetic moment and the muonic hydrogen Lamb shift, hints at new light particles beyond the Standard Model. We address a subset of these models that have a new light scalar state with sizable couplings to muons and suppressed couplings to electrons. A novel way to search for such particles would be through muon beam-dump experiments by (1) missing momentum searches; (2) searches for decays with displaced vertices. The muon beams available at CERN and Fermilab present attractive opportunities for exploring the new scalar with a mass below the di-muon threshold, and potentially covering a range of relevant candidate models. For the models considered in this paper, both types of signals, muon missing momentum and anomalous energy deposition at a distance, can probe a substantial fraction of the unexplored parameter space of the new light scalar, including a region that can explain the muon anomalous magnetic moment discrepancy.
    Muon, Muon beam, Fermilab, Muon beam-dump experiment, Beam dump, Light scalar, Muon anomalous magnetic moment, Kaon, CERN, Missing energy...
  • Run 5 of the HL-LHC era (and beyond) may provide new opportunities to search for physics beyond the standard model (BSM) at interaction point 2 (IP2). In particular, taking advantage of the existing ALICE detector and infrastructure provides an opportunity to search for displaced decays of beyond standard model long-lived particles (LLPs). While this proposal may well be preempted by ongoing ALICE physics goals, examination of its potential new physics reach provides a compelling comparison with respect to other LLP proposals. In particular, full event reconstruction and particle identification could be possible by making use of the existing L3 magnet and ALICE time projection chamber. For several well-motivated portals, the reach competes with or exceeds the sensitivity of MATHUSLA and SHiP, provided that a total integrated luminosity of approximately $100\, \text{fb}^{-1}$ could be delivered to IP2.
    Long Lived Particle, ALICE experiment, Time projection chamber, L3, MATHUSLA experiment, Large Hadron Collider, AL3X experiment, SHiP experiment, Luminosity, Beamline...
  • Dark matter interactions with massless or very light Standard Model particles, as photons or neutrinos, may lead to a suppression of the matter power spectrum at small scales and of the number of low mass haloes. Bounds on the dark matter scattering cross section with light degrees of freedom in such interacting dark matter (IDM) scenarios have been obtained from e.g. early time cosmic microwave background physics and large scale structure observations. Here we scrutinize dark matter microphysics in light of the claimed 21 cm EDGES 78 MHz absorption signal. IDM is expected to delay the 21 cm absorption features due to collisional damping effects. We identify the astrophysical conditions under which the existing constraints on the dark matter scattering cross section could be largely improved due to the IDM imprint on the 21 cm signal, providing also an explicit comparison to the WDM scenario.
    Interacting dark matter, Hydrogen 21 cm line, Dark matter, Scattering cross section, Cold dark matter, Warm dark matter, EDGES experiment, Halo mass function, Cosmic microwave background, Molecular cooling...
  • A nonlinear-dynamics semi-classical model is used to show that standard quantum spin analysis can be obtained. The model includes a classically driven nonlinear differential equation with dissipation and a semi-classical interpretation of the torque on a spin magnetic moment in the presence of a realistic magnetic field, which represents two equilibrium positions. The highly complicated driven nonlinear dissipative semi-classical model is used to introduce chaos, which is necessary to produce the correct statistical quantum results. The resemblance between this semi-classical spin model and the thoroughly studied classical driven-damped nonlinear pendulum is shown and discussed.
    Chaos, Dissipation, Spin, Magnetic moment, Magnetic field, Differential equations...
  • The von Neumann trace form of quantum statistical mechanics is transformed to an integral over classical phase space. Formally exact expressions for the resultant position-momentum commutation function are given. A loop expansion for wave function symmetrization is also given. The method is tested for quantum harmonic oscillators. For both the boson and fermion cases, the grand potential and the average energy obtained by numerical quadrature over classical phase space are shown to agree with the known analytic results. A mean field approximation is given which is suitable for condensed matter, and which allows the quantum statistical mechanics of interacting particles to be obtained in classical phase space.
    Phase space, Statistical mechanics, Harmonic oscillator, Eigenfunction, Partition function, Quantum harmonic oscillator, Permutation, High-temperature expansions, Maxwell-Boltzmann statistics, Hamiltonian...
  • Almost a third of the cosmic baryons are "missing" at low redshifts, as they reside in the invisible warm-hot intergalactic medium (WHIM). The thermal Sunyaev-Zeldovich (tSZ) effect, which measures the line-of-sight integral of the plasma pressure, can potentially detect this WHIM, although its expected signal is hidden below the noise. Extragalactic dispersion measures (DMs)---obtained through observations of fast radio bursts (FRBs)---are excellent tracers of the WHIM, as they measure the column density of plasma, regardless of its temperature. Here we propose cross correlating DMs and tSZ maps as a new way to find and characterize the missing baryons in the WHIM. Our method relies on the precise ($\sim$ arcminute) angular localization of FRBs to assign each burst a DM and a $y$ parameter. We forecast that the signal from the WHIM should be confidently detected in a cross-correlation analysis of $\sim10^4$ FRBs, expected to be gathered in a year of operation of the upcoming CHIME and HIRAX radio arrays, confirming the recent tentative detections of filamentary WHIM. Using this technique, future CMB probes (which might lower the tSZ noise) could determine both the temperature of the WHIM and its evolution to within tens of percent. Altogether, DM-tSZ cross correlations hold great promise for studying the baryons in the local Universe.
    Dispersion measure, Fast Radio Bursts, Warm hot intergalactic medium, Cross-correlation, Intergalactic medium, Missing baryons, Thermal Sunyaev-Zel'dovich effect, Galaxy, Line of sight, Local Universe...
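    A minimal sketch of the proposed DM x tSZ cross-correlation, assuming healpy is available: look up the Compton-$y$ value at each FRB position in a HEALPix map and correlate it with the burst's dispersion measure. The mock positions, mock DMs, and the simple zero-lag estimator are stand-ins for the paper's forecast machinery.

```python
# Correlate FRB dispersion measures with the Compton-y map value at each burst position.
import numpy as np
import healpy as hp

rng = np.random.default_rng(1)
nside = 512
y_map = rng.normal(1e-6, 1e-7, hp.nside2npix(nside))   # stand-in Compton-y map
ra = rng.uniform(0, 360, 10_000)                        # mock FRB positions (degrees)
dec = rng.uniform(-30, 30, 10_000)
dm = rng.normal(800, 300, 10_000)                       # mock extragalactic DMs (pc/cm^3)

pix = hp.ang2pix(nside, ra, dec, lonlat=True)           # map pixel under each FRB
y_at_frb = y_map[pix]
xcorr = np.mean((dm - dm.mean()) * (y_at_frb - y_at_frb.mean()))   # zero-lag estimator
print(xcorr)
```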
  • We test a method to reduce unwanted sample variance when predicting Lyman-$\alpha$ (ly$\alpha$) forest power spectra from cosmological hydrodynamical simulations. Sample variance arises due to sparse sampling of modes on large scales and propagates to small scales through non-linear gravitational evolution. To tackle this, we generate initial conditions in which the density perturbation amplitudes are {\it fixed} to the ensemble average power spectrum -- and are generated in {\it pairs} with exactly opposite phases. We run $50$ such simulations ($25$ pairs) and compare their performance against $50$ standard simulations by measuring the ly$\alpha$ 1D and 3D power spectra at redshifts $z=2$, 3, and 4. Both ensembles use periodic boxes of $40$ Mpc/h containing $512^3$ particles each of dark matter and gas. As a typical example of improvement, for wavenumbers $k=0.25$ h/Mpc at $z=3$, we find estimates of the 1D and 3D power spectra converge $34$ and $12$ times faster in a paired-fixed ensemble compared with a standard ensemble. We conclude that, by reducing the computational time required to achieve fixed accuracy on predicted power spectra, the method frees up resources for exploration of varying thermal and cosmological parameters -- ultimately allowing the improved precision and accuracy of statistical inference.
    Matter power spectrum, Sample variance, Hydrodynamical simulations, Cosmological parameters, Cosmological hydrodynamic simulation, Cold dark matter, Dark matter, Neutral hydrogen gas, Cosmological hydrodynamical simulations, Precision...
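    A small sketch of the "paired-fixed" idea in one dimension: mode amplitudes are fixed to $\sqrt{P(k)}$ instead of being Rayleigh-drawn, and each realization is paired with a partner whose phases are shifted by $\pi$. Real initial conditions are 3D and enforce the Hermitian reality condition; the power law below is a toy assumption.

```python
# Generate paired-fixed Fourier modes: fixed amplitudes, opposite phases.
import numpy as np

def paired_fixed_modes(k, pk, rng):
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.shape)
    amp = np.sqrt(pk)                               # fixed amplitude, no Rayleigh scatter
    delta_a = amp * np.exp(1j * phase)              # first member of the pair
    delta_b = amp * np.exp(1j * (phase + np.pi))    # opposite phase: delta_b = -delta_a
    return delta_a, delta_b

rng = np.random.default_rng(42)
k = np.linspace(0.05, 5.0, 200)                     # toy wavenumbers [h/Mpc]
pk = k ** -1.5                                      # toy power spectrum
d1, d2 = paired_fixed_modes(k, pk, rng)
assert np.allclose(d1 + d2, 0.0)                    # paired phases cancel at linear order
```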
  • We combine available constraints on the local CII 158 $\mu$m line luminosity function from galaxy observations (Hemmati et al. 2017), with the evolution of the star-formation rate density and the recent CII intensity mapping measurement in Pullen et al. (2018, assuming detection), to derive the evolution of the CII luminosity - halo mass relation over $z \sim 0-6$. We develop convenient fitting forms for the evolution of the CII luminosity - halo mass relation, and forecast constraints on the CII intensity mapping power spectrum and its associated uncertainty across redshifts. We predict the sensitivities to detect the power spectrum for upcoming PIXIE-, STARFIRE-, EXCLAIM-, CONCERTO-, TIME- and CCAT-p-like surveys, as well as possible future intensity mapping observations with the ALMA facility.
    Galaxy, Intensity, Luminosity, Virial mass, Reionization, Luminosity function, Star formation rate, PIXIE experiment, Telescopes, Interstellar medium...
  • Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the `negative' (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the `positive' case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the `positive' and `negative' samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.
    Generative model, Generative Adversarial Net, Anomaly detection, Architecture, Receiver operating characteristic, Neural network, Optimization, Latent variable, Backpropagation, Statistical estimator...
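    A hedged sketch of the scoring step only: estimate a per-sample likelihood proxy (an ELBO with a Gaussian decoder) under a generative model trained on "negative" data, and flag low scores as anomalies. The encoder/decoder stand-ins, the noise scale, and the threshold are placeholders, not the paper's trained models.

```python
# Score unseen data by an ELBO-style likelihood proxy and threshold it.
import numpy as np

def elbo_score(x, encode, decode, sigma=0.1):
    """Single-sample ELBO proxy: Gaussian reconstruction term minus KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = mu + np.exp(0.5 * logvar) * np.random.normal(size=mu.shape)   # reparameterize
    recon = decode(z)
    log_px_z = -0.5 * np.sum((x - recon) ** 2) / sigma ** 2
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return log_px_z - kl                               # higher = more "normal"

# Stand-ins for a trained VAE on flattened 28x28 images (placeholders only):
encode = lambda x: (np.zeros(8), np.zeros(8))
decode = lambda z: np.full(784, 0.5)

x_new = np.random.rand(784)
is_anomaly = elbo_score(x_new, encode, decode) < -500.0   # threshold set on held-out data
```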
  • We use mock interferometric HI measurements and a conventional tilted-ring modelling procedure to estimate circular velocity curves of dwarf galaxy discs from the APOSTLE suite of {\Lambda}CDM cosmological hydrodynamical simulations. The modelling yields a large diversity of rotation curves for an individual galaxy at fixed inclination, depending on the line-of-sight orientation. The diversity is driven by non-circular motions in the gas; in particular, by strong bisymmetric fluctuations in the azimuthal velocities that the tilted-ring model is ill-suited to account for and that are difficult to detect in model residuals. Large misestimates of the circular velocity arise when the kinematic major axis coincides with the extrema of the fluctuation pattern, in some cases mimicking the presence of kiloparsec-scale density 'cores', when none are actually present. The thickness of APOSTLE discs compounds this effect: more slowly-rotating extra-planar gas systematically reduces the average line-of-sight speeds. The recovered rotation curves thus tend to underestimate the true circular velocity of APOSTLE galaxies in the inner regions. Non-circular motions provide an appealing explanation for the large apparent cores observed in galaxies such as DDO 47 and DDO 87, where the model residuals suggest that such motions might have affected estimates of the inner circular velocities. Although residuals from tilted ring models in the simulations appear larger than in observed galaxies, our results suggest that non-circular motions should be carefully taken into account when considering the evidence for dark matter cores in individual galaxies.
    Galaxy, Rotation Curve, Circular velocity, Kinematics, Milky Way, Inclination, Line of sight, Orientation, Major axis, Cold dark matter...
  • We use cosmological hydrodynamical simulations of the APOSTLE project along with high-quality rotation curve observations to examine the fraction of baryons in {\Lambda}CDM haloes that collect into galaxies. This 'galaxy formation efficiency' correlates strongly and with little scatter with halo mass, dropping steadily towards dwarf galaxies. The baryonic mass of a galaxy may thus be used to place a lower limit on total halo mass and, consequently, on its asymptotic maximum circular velocity. A number of observed dwarfs seem to violate this constraint, having baryonic masses up to ten times higher than expected from their rotation speeds, or, alternatively, rotating at only half the speed expected for their mass. Taking the data at face value, either these systems have formed galaxies with extraordinary efficiency - highly unlikely given their shallow potential wells - or their dark matter content is much lower than expected from {\Lambda}CDM haloes. This 'missing dark matter' is reminiscent of the inner mass deficit of galaxies with slowly-rising rotation curves, but cannot be explained away by star formation-induced 'cores' in the dark mass profile, since the anomalous deficit applies to regions larger than the luminous galaxies themselves. We argue that explaining the structure of these galaxies would require either substantial modification of the standard Lambda cold dark matter paradigm or else significant revision to the uncertainties in their inferred mass profiles, which should be much larger than reported. Systematic errors in inclination may provide a simple resolution to what would otherwise be a rather intractable problem for the current paradigm.
    Galaxy, Rotation Curve, Dark matter, APOSTLE simulation, Inclination, Galaxy Formation, Circular velocity, Dwarf galaxy, Virial mass, Baryonic Tully-Fisher relation...
  • Blazar observations point toward the possible presence of magnetic fields over intergalactic scales of the order of up to $\sim1\,$Mpc, with strengths of at least $\sim10^{-16}\,$G. Understanding the origin of these large-scale magnetic fields is a challenge for modern astrophysics. Here we discuss the cosmological scenario, focussing on the following questions: (i) How and when was this magnetic field generated? (ii) How does it evolve during the expansion of the universe? (iii) Are the amplitude and statistical properties of this field such that they can explain the strengths and correlation lengths of observed magnetic fields? We also discuss the possibility of observing primordial turbulence through direct detection of stochastic gravitational waves in the mHz range accessible to LISA.
    Turbulence, The early Universe, Magnetism, Gravitational wave, Laser Interferometer Space Antenna, Phase transitions, Primordial turbulence, Observed magnetic fields, Long-range magnetic fields, Expansion of the Universe...
  • Heavy Neutral Leptons (HNLs) are hypothetical particles predicted by many extensions of the Standard Model. These particles can, among other things, explain the origin of neutrino masses, generate the observed matter-antimatter asymmetry in the Universe and provide a dark matter candidate. The SHiP experiment will be able to search for HNLs produced in decays of heavy mesons and travelling distances ranging between $\mathcal{O}(50\text{ m})$ and tens of kilometers before decaying. We present the sensitivity of the SHiP experiment to a number of HNL benchmark models and provide a way to calculate SHiP's sensitivity to HNLs for arbitrary patterns of flavour mixing. The corresponding tools and data files are also made publicly available.
    Sterile neutrino, SHiP experiment, Flavour, Muon, Decay volume, Detector fiducial volume, Hidden sector, Sterile neutrino production, Branching ratio, Spectrometers...
  • Cosmological $N$-body simulations are typically purely run with particles using Newtonian equations of motion. However, such simulations can be made fully consistent with general relativity using a well-defined prescription. Here, we extend the formalism previously developed for $\Lambda$CDM cosmologies with massless neutrinos to include the effects of massive, but light neutrinos. We have implemented the method in two different $N$-body codes, CONCEPT and PKDGRAV, and demonstrate that they produce consistent results. We furthermore show that we can recover all appropriate limits, including the full GR solution in linear perturbation theory at the per mille level of precision.
    Neutrino, General relativity, Massive neutrino, Neutrino mass, Perturbation theory, Cold dark matter, Precision, Cosmology, Newtonian gauge, Weak field limit...
  • In this Letter of Intent (LOI) we propose the construction of MATHUSLA (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles), a dedicated large-volume displaced vertex detector for the HL-LHC on the surface above ATLAS or CMS. Such a detector, which can be built using existing technologies with a reasonable budget in time for the HL-LHC upgrade, could search for neutral long-lived particles (LLPs) with up to several orders of magnitude better sensitivity than ATLAS or CMS, while also acting as a cutting-edge cosmic ray telescope at CERN to explore many open questions in cosmic ray and astro-particle physics. We review the physics motivations for MATHUSLA and summarize its LLP reach for several different possible detector geometries, as well as outline the cosmic ray physics program. We present several updated background studies for MATHUSLA, which help inform a first detector-design concept utilizing modular construction with Resistive Plate Chambers (RPCs) as the primary tracking technology. We present first efficiency and reconstruction studies to verify the viability of this design concept, and we explore some aspects of its total cost. We end with a summary of recent progress made on the MATHUSLA test stand, a small-scale demonstrator experiment currently taking data at CERN Point 1, and finish with a short comment on future work.
    MATHUSLA experiment, ATLAS Experiment at CERN, Displaced vertices, Long Lived Particle, Cosmic ray, Resistive plate chambers, CERN, Large Hadron Collider, Telescopes, Order of magnitude...
  • Large-scale cosmological simulations of galaxy formation currently do not resolve the densities at which molecular hydrogen forms, implying that the atomic-to-molecular transition must be modeled either on the fly or in postprocessing. We present an improved postprocessing framework to estimate the abundance of atomic and molecular hydrogen and apply it to the IllustrisTNG simulations. We compare five different models for the atomic-to-molecular transition, including empirical, simulation-based, and theoretical prescriptions. Most of these models rely on the surface density of neutral hydrogen and the ultraviolet (UV) flux in the Lyman-Werner band as input parameters. Computing these quantities on the kiloparsec scales resolved by the simulations emerges as the main challenge. We show that the commonly used Jeans length approximation to the column density of a system can be biased and exhibits large cell-to-cell scatter. Instead, we propose to compute all surface quantities in face-on projections and perform the modeling in two dimensions. In general, the two methods agree on average, but their predictions diverge for individual galaxies and for models based on the observed midplane pressure of galaxies. We model the UV radiation from young stars by assuming a constant escape fraction and optically thin propagation throughout the galaxy. With these improvements, we find that the five models for the atomic-to-molecular transition roughly agree on average but that the details of the modeling matter for individual galaxies and the spatial distribution of molecular hydrogen. We emphasize that the estimated molecular fractions are approximate due to the significant systematic uncertainties.
    Galaxy, Star formation, IllustrisTNG simulation, Star formation rate, Metallicity, Stellar mass, Jeans length, Mass function, Simulations of structure formation, Milky Way...
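    A rough sketch of the "compute surface quantities in a face-on projection" step described above: rotate gas elements into the frame defined by the disc's angular-momentum vector and deposit their masses on a 2D grid to obtain a surface density. The grid size, units, and toy disc are assumptions; the paper's framework additionally handles UV fields and the atomic-to-molecular models themselves.

```python
# Face-on surface-density map from particle positions, velocities, and masses.
import numpy as np

def spin_aligned_basis(pos, vel, mass):
    """Orthonormal basis whose z-axis points along the total angular momentum."""
    L = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    z = L / np.linalg.norm(L)
    x = np.cross(z, [0.0, 0.0, 1.0])
    if np.linalg.norm(x) < 1e-8:                    # spin already along the z-axis
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.vstack([x, y, z])

def face_on_surface_density(pos, vel, mass, half_size=30.0, n_cells=60):
    """Deposit mass on a grid perpendicular to the spin axis; returns Msun/kpc^2."""
    basis = spin_aligned_basis(pos, vel, mass)
    xy = pos @ basis[:2].T                          # in-plane coordinates
    edges = np.linspace(-half_size, half_size, n_cells + 1)
    grid, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[edges, edges], weights=mass)
    return grid / (2 * half_size / n_cells) ** 2

# Toy gas disc (positions in kpc, masses in Msun, rotation about the z-axis):
rng = np.random.default_rng(3)
pos = rng.normal(0, 5, (5000, 3)) * [1, 1, 0.1]
vel = np.cross([0, 0, 1.0], pos)
mass = np.full(5000, 1e5)
sigma_gas = face_on_surface_density(pos, vel, mass)
```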