- Magnetic topology, by Emmanouil Markoulakis (19 Sep 2020 15:08)
- Magnetization, by Emmanouil Markoulakis (18 Sep 2020 03:17)
- Display calculus, by Valentin D. Richard (16 Jul 2020 21:35)
- Benjamin-Ono equation, by Prof. Alexander Abanov (03 Nov 2009 21:51)
- Neutrino Minimal Standard Model, by Prof. Mikhail Shaposhnikov (15 Nov 2016 16:20)
- Geometric flattening, by Dr. Ganna Ivashchenko (05 Dec 2010 22:14)
- Dzyaloshinskii-Moriya interaction, by Dr. George Jackeli (28 Aug 2009 09:41)
- Universal Conductance Fluctuations, by Prof. Carlo Beenakker (08 Dec 2010 13:33)

- Following ideas of Szilard, Mandelbrot and Hill, we show that a statistical thermodynamic structure can emerge purely in the infinitely-large-data limit, under a probabilistic framework independent of the systems' underlying details. Systems with distinct values of a set of observables are identified as different thermodynamic states, parameterized by the entropic forces conjugate to the observables. The ground state, with zero entropic forces, usually has a probabilistic model equipped with a symmetry of interest. The entropic forces lead to symmetry breaking for each particular system that produces the data, e.g., the emergence of time correlations and the breakdown of detailed balance. Probabilistic models for the excited states are predicted by the Maximum Entropy Principle for sequences of i.i.d. and correlated Markov samples. Asymptotically equivalent models are also found by the Maximum Caliber Principle. With a novel derivation of Maximum Caliber, conceptual differences between the two principles are clarified. The emergent thermodynamics in the infinite-data limit has a mesoscopic origin in Maximum Caliber. In the canonical probabilistic models of Maximum Caliber, the variances of the observables and of their conjugate forces satisfy an asymptotic thermodynamic uncertainty principle, which stems from the reciprocal-curvature relation between the "entropy" and "free energy" functions in the theory of large deviations. The mesoscopic origin of this reciprocity is identified. As a consequence of limit theorems in probability theory, phenomenological statistical thermodynamics is universal, without the need for mechanics.
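The tilted-distribution structure described above can be made concrete with a standard maximum-entropy form (the symbols below are chosen here for illustration and may differ from the paper's own notation): a ground-state model $p_0$ is exponentially tilted by entropic forces $\lambda$ conjugate to observables $A(x)$.

```latex
% Ground-state model p_0 tilted by entropic forces \lambda conjugate to A(x):
p_\lambda(x) \;=\; \frac{p_0(x)\, e^{-\lambda \cdot A(x)}}{Z(\lambda)},
\qquad
Z(\lambda) \;=\; \sum_x p_0(x)\, e^{-\lambda \cdot A(x)} .
% The curvature of \ln Z gives the fluctuations of A, the seed of the
% asymptotic thermodynamic uncertainty principle:
\operatorname{Var}_\lambda[A] \;=\; \frac{\partial^2 \ln Z}{\partial \lambda^2} .
```

The reciprocal-curvature relation mentioned in the abstract is the Legendre duality between this "free energy" $\ln Z(\lambda)$ and the large-deviation "entropy" function of the observables.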
- We derive astroparticle constraints in different dark matter scenarios alternative to cold dark matter (CDM): thermal relic warm dark matter, WDM; fuzzy dark matter, $\psi$DM; self-interacting dark matter, SIDM; sterile neutrino dark matter, $\nu$DM. Our framework is based on updated determinations of the high-redshift UV luminosity functions for primordial galaxies out to redshift $z\sim 10$, on redshift-dependent halo mass functions in the above DM scenarios from numerical simulations, and on robust constraints on the reionization history of the Universe from recent astrophysical and cosmological datasets. First, we build up an empirical model of cosmic reionization characterized by two parameters, namely the escape fraction $f_{\rm esc}$ of ionizing photons from primordial galaxies, and the limiting UV magnitude $M_{\rm UV}^{\rm lim}$ down to which the extrapolated UV luminosity functions are steeply increasing. Second, we perform standard abundance matching of the UV luminosity function and the halo mass function, obtaining a relationship between UV luminosity and halo mass whose shape depends on an astroparticle quantity $X$ specific to each DM scenario (e.g., the WDM particle mass); we exploit such a relation to introduce in the analysis a constraint from primordial galaxy formation, in terms of the threshold halo mass above which primordial galaxies can efficiently form stars. Third, we implement a sequential updating Bayesian MCMC technique to perform joint inference on the three parameters $f_{\rm esc}$, $M_{\rm UV}^{\rm lim}$, $X$, and to compare the outcomes of different DM scenarios on the reionization history.
Finally, we highlight the relevance of our astroparticle estimates in predicting the behavior of the high-redshift UV luminosity function at faint, yet unexplored magnitudes, which may be tested with the advent of the James Webb Space Telescope.
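The abundance-matching step in the second point can be illustrated with a toy numerical sketch (the power-law cumulative functions and every parameter value below are invented for illustration, not the paper's actual luminosity or halo mass functions): one equates cumulative number densities, $n(>M) = \Phi(>L)$, and solves for the halo mass $M(L)$ associated with each luminosity.

```python
import math

def cum_lf(L):
    """Toy cumulative UV luminosity function, Phi(>L) = L^-1.5 (invented)."""
    return L ** -1.5

def cum_hmf(M):
    """Toy cumulative halo mass function, n(>M) = 10 * M^-0.9 (invented)."""
    return 10.0 * M ** -0.9

def match_mass(L, lo=1e-3, hi=1e9):
    """Abundance matching: find M such that n(>M) = Phi(>L),
    by bisection in log space (both cumulative functions are monotone)."""
    target = cum_lf(L)
    for _ in range(200):
        mid = math.sqrt(lo * hi)        # geometric midpoint = bisection in log M
        if cum_hmf(mid) > target:       # still too many halos: move to larger mass
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

For these pure power laws the matching is analytic, $M(L) = (10\,L^{1.5})^{1/0.9}$, which the bisection reproduces; with realistic tabulated functions the same monotone matching is done numerically.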
- The present white paper is submitted as part of the "Snowmass" process to help inform the long-term plans of the United States Department of Energy and the National Science Foundation for high-energy physics. It summarizes the science questions driving the Ultra-High-Energy Cosmic-Ray (UHECR) community and provides recommendations on the strategy to answer them in the next two decades.
- TeV halos are regions of enhanced photon emissivity surrounding pulsars. While multiple sources have been discovered, a self-consistent explanation of their radial profile and spherically-symmetric morphology remains elusive due to the difficulty in confining high-energy electrons and positrons within ~20 pc regions of the interstellar medium. One proposed solution utilizes anisotropic diffusion to confine the electron population within a "tube" that is auspiciously oriented along the line of sight. In this work, we show that while such models may explain a unique source such as Geminga, the phase space of such solutions is very small and they are unable to simultaneously explain the size and approximate radial symmetry of the TeV halo population.
#### The strange side of LHCb (ver. 2)

We provide general effective-theory arguments relating present-day discrepancies in semi-leptonic $B$-meson decays to signals in kaon physics, in particular lepton-flavour-violating ones of the kind $K \to (\pi) e^\pm \mu^\mp$. We show that $K$-decay branching ratios of around $10^{-12} - 10^{-13}$ are possible, for effective-theory cutoffs around $5-15$ TeV compatible with the discrepancies in $B\to K^{(\ast)} \mu\mu$ decays. We perform a feasibility study of the reach for such decays at LHCb, taking $K^+ \to \pi^+ \mu^\pm e^\mp$ as a benchmark. In spite of the long lifetime of the $K^+$ compared to the detector size, the huge anticipated statistics as well as the overall detector performance translate into encouraging results. These include the possibility of reaching the $10^{-12}$ ballpark, and thereby significantly improving current limits. Our results advocate LHCb's high-luminosity Upgrade phase, and support analogous sensitivity studies at other facilities. Given the performance uncertainties inherent in the Upgrade phase, our conclusions are based on a range of assumptions we deem realistic on the particle-identification performance as well as on the kinematic reconstruction thresholds for the signal candidates.
- The LHCb Collaboration's measurement of $R_K = \mathcal{B}(B^+ \to K^+ \mu^+ \mu^-)/\mathcal{B}(B^+ \to K^+ e^+ e^-)$ lies $2.6\sigma$ below the Standard Model prediction. Several groups suggest this deficit results from new lepton-nonuniversal interactions of muons. But nonuniversal leptonic interactions imply lepton flavor violation in $B$ decays at rates much larger than expected in the Standard Model. A simple model shows that these rates could lie just below current limits. An interesting consequence of our model, that $\mathcal{B}(B_s \to \mu^+ \mu^-)_{\rm exp}/\mathcal{B}(B_s \to \mu^+ \mu^-)_{\rm SM} \cong R_K \cong 0.75$, is compatible with recent measurements of these rates. We stress the importance of searches for lepton flavor violation, especially for $B \to K \mu e$, $K \mu \tau$, and $B_s \to \mu e$, $\mu \tau$.
- We summarize the presentations made within Working Group 3 of the CKM2021 workshop. This working group is devoted to rare $B$, $D$ and $K$ decays, radiative and electroweak-penguin decays, including constraints on $V_{\rm td}/V_{\rm ts}$ and $\epsilon^\prime / \epsilon$. The working group has thus a very broad scope, and includes very topical subjects such as the coherent array of discrepancies in semi-leptonic $B$ decays. Each contribution is here summarized very succinctly with the aim of providing an overview of the main results. The reader interested in fuller details is referred to the individual contributions.
- We prove a generalization of Tur\'{a}n's theorem proposed by Balogh and Lidick\'{y}.
- Runko is a new open-source plasma simulation framework implemented in C++ and Python. It is designed to function as an easy-to-extend general toolbox for simulating astrophysical plasmas with different theoretical and numerical models. Computationally intensive low-level kernels are written in modern C++ taking advantage of polymorphic classes, multiple inheritance, and template metaprogramming. High-level functionality is operated with Python scripts. The hybrid program design ensures good code performance together with ease of use. The framework has a modular object-oriented design that allows the user to easily add new numerical algorithms to the system. The code can be run on various computing platforms ranging from laptops (shared-memory systems) to massively parallel supercomputer architectures (distributed-memory systems). The framework supports heterogeneous multiphysics simulations in which different physical solvers can be combined and run simultaneously. Here we showcase the framework's relativistic particle-in-cell (PIC) module by presenting (i) 1D simulations of relativistic Weibel instability, (ii) 2D simulations of relativistic kinetic turbulence in a suddenly stirred magnetically-dominated pair plasma, and (iii) 3D simulations of collisionless shocks in an unmagnetized medium.
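To give a flavor of the kind of low-level kernel a PIC module contains, here is a minimal, non-relativistic Boris particle push in pure Python (an illustrative sketch only: this is not Runko's actual C++ kernel or API, and Runko's PIC module is relativistic):

```python
def boris_push(v, E, B, qm_dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    v, E, B are 3-vectors (lists); qm_dt = (q/m) * dt in consistent units."""
    def add(a, b):
        return [a[i] + b[i] for i in range(3)]
    def scale(a, s):
        return [x * s for x in a]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    v_minus = add(v, scale(E, 0.5 * qm_dt))        # first half electric kick
    t = scale(B, 0.5 * qm_dt)                      # rotation vector
    t2 = sum(x * x for x in t)
    v_prime = add(v_minus, cross(v_minus, t))      # auxiliary rotated vector
    s = scale(t, 2.0 / (1.0 + t2))
    v_plus = add(v_minus, cross(v_prime, s))       # exact-norm magnetic rotation
    return add(v_plus, scale(E, 0.5 * qm_dt))      # second half electric kick
```

With a pure magnetic field the Boris rotation conserves the particle's speed exactly, which is one reason this scheme is the standard pusher in PIC codes.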
- Relativistic magnetized jets, such as those from AGN, GRBs and XRBs, are susceptible to current- and pressure-driven MHD instabilities that can lead to particle acceleration and non-thermal radiation. Here we investigate the development of these instabilities through 3D kinetic simulations of cylindrically symmetric equilibria involving toroidal magnetic fields with electron-positron pair plasma. Generalizing recent treatments by Alves et al. (2018) and Davelaar et al. (2020), we consider a range of initial structures in which the force due to toroidal magnetic field is balanced by a combination of forces due to axial magnetic field and gas pressure. We argue that the particle energy limit identified by Alves et al. (2018) is due to the finite duration of the fast magnetic dissipation phase. We find a rather minor role of electric fields parallel to the local magnetic fields in particle acceleration. In all investigated cases a kink mode arises in the central core region with a growth timescale consistent with the predictions of linearized MHD models. In the case of a gas-pressure-balanced (Z-pinch) profile, we identify a weak local pinch mode well outside the jet core. We argue that pressure-driven modes are important for relativistic jets, in regions where sufficient gas pressure is produced by other dissipation mechanisms.
- In this work, we study the magnetic field morphology of selected star-forming clouds spread over the Galactic latitude ($b$) range $-10^\circ$ to $10^\circ$. Polarimetric observations of the clouds CB24, CB27 and CB188 are conducted from ARIES, Manora Peak, Nainital, India, to study their magnetic field geometry. These observations are combined with those of 14 further low-latitude clouds available in the literature. Analyzing the polarimetric data of 17 clouds, we find that the alignment between the envelope magnetic field ($\theta_{B}^{env}$) and the Galactic plane ($\theta_{GP}$) of the low-latitude clouds varies with their Galactic longitude ($l$). We observe a strong correlation between the longitude ($l$) and the offset ($\theta_{off}=|\theta_B^{env}-\theta_{GP}|$), which shows that $\theta_{B}^{env}$ is parallel to the Galactic plane (GP) when the clouds are situated in the region $115^\circ<l<250^\circ$. However, $\theta_{B}^{env}$ has its own local deflection, irrespective of the orientation of $\theta_{GP}$, when the clouds are at $l<100^\circ$ or $l>250^\circ$. To check the consistency of our results, the stellar polarization data available in the Heiles (2000) catalogue are overlaid on DSS images of the clouds, showing the mean polarization vectors of field stars; the results are almost consistent with the Heiles data. The turbulence of the clouds, which may play an important role in causing the observed misalignment between $\theta_{B}^{env}$ and $\theta_{GP}$, is also studied. We use \textit{Herschel} \textit{SPIRE} 500 $\mu m$ and \textit{SCUBA} 850 $\mu m$ dust continuum emission maps to understand the density structure of the clouds.
- In the past decade, electroweak penguin decays have provided a number of precision measurements, becoming one of the most competitive ways to search for New Physics describing phenomena beyond the Standard Model. An overview of the measurements made at the $B$ factories and at hadron colliders is given, and the experimental methods are presented. Experimental measurements required to provide further insight into present indications of New Physics are discussed.
- Antimatter is one of the most fascinating aspects of Particle Physics and one of the most unknown ones too. In this article we concisely explain what antimatter is and its distinction between primordial and secondary, how it is produced, where it can be found, the experiments carried out at CERN to create and analyze antiatoms, the problem of the matter-antimatter asymmetry, and the medical and technological applications of antimatter in our society.
- In laser-wakefield acceleration, an ultra-intense laser pulse is focused into an underdense plasma in order to accelerate electrons to relativistic velocities. In most cases, the pulses consist of multiple optical cycles and the interaction is well described in the framework of the ponderomotive force, where only the envelope of the laser has to be considered. But when using single-cycle pulses, the ponderomotive approximation breaks down, and the actual waveform of the laser has to be taken into account. In this paper, we use near-single-cycle laser pulses to drive a laser-wakefield accelerator. We observe variations of the electron beam pointing on the order of 10 mrad in the polarisation direction, as well as 30% variations of the beam charge, locked to the value of the controlled laser carrier-envelope phase, in both nitrogen and helium plasma. These findings are explained through particle-in-cell simulations indicating that low-emittance, ultra-short electron bunches are periodically injected off-axis by the transversally oscillating bubble associated with the slipping carrier-envelope phase.
- Blockchains provide environments where parties can interact transparently and securely peer-to-peer, without needing a trusted third party. Parties can trust the integrity and correctness of transactions and the verifiable execution of binary code on the blockchain (smart contracts) inside the system. Including information from outside the blockchain remains challenging. One challenge is data privacy: in a public system, shared data becomes public and, coming from a single source, often lacks credibility; a private system gives the parties control over their data and sources, but sacrifices positive aspects such as transparency. Often the most critical information is not the data itself but the result of a computation performed on it. An example is research data certification. To keep data private but still prove data provenance, researchers can store a hash value of that data on the blockchain. This hash value is either calculated locally on private data, without the chance for validation, or is calculated on the blockchain, meaning that the data must be published and stored on the blockchain -- a problem for the overall amount of data stored on and distributed with the ledger. A system we call moving smart contracts bypasses this problem: data remain local, but trusted nodes can access them and execute trusted smart contract code stored on the blockchain. This method avoids the system-wide distribution of research data and makes it accessible and verifiable with trusted software.
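The hash-based certification described above fits in a few lines (a minimal illustration of the idea, not the paper's moving-smart-contract implementation): the digest, not the data, is what gets stored on-chain, and anyone holding the data can later recompute and compare.

```python
import hashlib

def certify(data: bytes) -> str:
    """Return a SHA-256 digest: a compact, tamper-evident fingerprint of
    off-chain data, suitable for storing on a blockchain."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, on_chain_digest: str) -> bool:
    """Recompute the digest locally and compare with the stored one."""
    return certify(data) == on_chain_digest

# Any change to the data changes the digest, so provenance is provable
# without ever publishing the data itself.
record = b"raw measurement series v1"
digest = certify(record)
assert verify(record, digest)
assert not verify(b"tampered data", digest)
```

The moving-smart-contract scheme adds the missing validation step: trusted nodes compute this digest on the locally held data, so the on-chain value is verified rather than self-reported.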
- The past decades have witnessed the flourishing of non-Hermitian physics in non-conservative systems, leading to unprecedented phenomena such as unidirectional invisibility, enhanced sensitivity and, more recently, novel topological features such as bulk Fermi arcs. Among them, growing effort has been devoted to an intriguing phenomenon known as the non-Hermitian skin effect (NHSE). Here, we review the recent progress in this emerging field. Starting from the one-dimensional (1D) case, the fundamental concepts of the NHSE, its minimal model, and its physical meaning and consequences are elaborated in detail. In particular, we discuss the NHSE enriched by lattice symmetries, which gives rise to unique non-Hermitian topological properties with a revised bulk-boundary correspondence (BBC) and new definitions of topological invariants. We then extend the discussion to two and higher dimensions, where dimensional surprises enable even more versatile NHSE phenomena. Extensions of the NHSE assisted by extra degrees of freedom such as long-range coupling, pseudospins, magnetism, non-linearity and crystal defects are also reviewed. This is followed by the contemporary experimental progress on the NHSE. Finally, we provide an outlook on possible future directions and developments.
- The paper describes practical work for students that visually clarifies the mechanism of the Monte Carlo method as applied to approximating the value of Pi. Considering a traditional quadrant (circular sector) inscribed in a square, we demonstrate an original algorithm for generating random points on paper: arbitrarily tear a paper blank into small pieces (the first experiment). In a similar way, a second experiment (with a preliminary staining procedure using bright colors) can be used to demonstrate the quadratic dependence of the area of a circle on its radius. Manipulations with torn paper as a random-sampling algorithm can be applied to other teaching problems in physics.
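The torn-paper experiment maps directly onto the usual computational version (a minimal sketch; the function name, seed, and point count are our own): each paper scrap plays the role of a uniform random point in the unit square, and the fraction landing inside the quarter circle estimates $\pi/4$.

```python
import random

def estimate_pi(n_points, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting those that fall inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:     # the scrap landed inside the quadrant
            inside += 1
    # area ratio: (quarter circle) / (square) = pi/4
    return 4.0 * inside / n_points
```

The second experiment follows the same logic: the fraction of stained scraps inside a circle of radius $r$ scales as $r^2$, exhibiting the quadratic area law.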
- We explore the assumption, widely used in many astrophysical calculations, that the stellar initial mass function (IMF) is universal across all galaxies. By considering both a canonical Salpeter-like IMF and a non-universal IMF, we are able to compare the effect of different IMFs on multiple observables and derived quantities in astrophysics. Specifically, we consider a non-universal IMF which varies as a function of the local star formation rate, and explore the effects on the star formation rate density (SFRD), the extragalactic background light, the supernova (both core-collapse and thermonuclear) rates, and the diffuse supernova neutrino background. Our most interesting result is that our adopted varying IMF leads to much greater uncertainty on the SFRD at $z \approx 2-4$ than is usually assumed. Indeed, we find a SFRD (inferred using observed galaxy luminosity distributions) that is a factor of $\gtrsim 3$ lower than canonical results obtained using a universal Salpeter-like IMF. Secondly, the non-universal IMF we explore implies a reduction in the supernova core-collapse rate of a factor of $\sim2$, compared against a universal IMF. The other potential tracers are only slightly affected by changes to the properties of the IMF. We find that currently available data do not provide a clear preference for universal or non-universal IMF. However, improvements to measurements of the star formation rate and core-collapse supernova rate at redshifts $z \gtrsim 2$ may offer the best prospects for discernment.
- When interpreted within the standard framework of Newtonian gravity and dynamics, the kinematics of stars and gas in dwarf galaxies reveals that most of these systems are completely dominated by their dark matter halos. These dwarf galaxies are thus among the best astrophysical laboratories to study the structure of dark halos and the nature of dark matter. We review the properties of the dwarf galaxies of the Local Group from the point of view of stellar dynamics. After describing the observed kinematics of their stellar components and providing an overview of the dynamical modelling techniques, we look into the dark matter content and distribution of these galaxies, as inferred from the combination of observed data and dynamical models. We also briefly touch upon the prospects of using nearby dwarf galaxies as targets for indirect detection of dark matter via annihilation or decay emission.
- Constraints on dark matter halo masses from weak gravitational lensing can be improved significantly by using additional information about the morphology of their density distribution, leading to tighter cosmological constraints derived from the halo mass function. This work is the first of two in which we investigate the accuracy of halo morphology and mass measurements in 2D and 3D. To this end, we determine several halo physical properties in the MICE-Grand Challenge dark matter only simulation. We present a public catalogue of these properties that includes density profiles and shape parameters measured in 2D and 3D, the halo centre at the peak of the 3D density distribution as well as the gravitational and kinetic energies and angular momentum vectors. The density profiles are computed using spherical and ellipsoidal radial bins, taking into account the halo shapes. We also provide halo concentrations and masses derived from fits to 2D and 3D density profiles using NFW and Einasto models for halos with more than $1000$ particles ($\gtrsim 3 \times 10^{13} h^{-1} M_{\odot}$). We find that the Einasto model provides better fits compared to NFW, regardless of the halo relaxation state and shape. The mass and concentration parameters of the 3D density profiles derived from fits to the 2D profiles are in general biased. Similar biases are obtained when constraining mass and concentrations using a weak-lensing stacking analysis. We show that these biases depend on the radial range and density profile model adopted in the fitting procedure, but not on the halo shape.
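For reference, the NFW profile used in such fits has a closed-form enclosed mass, sketched here in Python (units and parameter values are arbitrary illustrations, not the catalogue's conventions):

```python
import math

def nfw_density(r, rho_s, r_s):
    """NFW density profile: rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r, rho_s, r_s):
    """Analytic mass within radius r:
    M(<r) = 4 pi rho_s r_s^3 * [ln(1 + x) - x / (1 + x)],  x = r/r_s."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))
```

Fitting masses and concentrations, as in the paper, amounts to adjusting the two profile parameters (or equivalently the concentration $c = R_{\rm vir}/r_s$) against the measured 2D or 3D profiles; the Einasto model replaces this form with a rolling-slope exponential.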
- Feedback to the interstellar medium (ISM) from ionising radiation, stellar winds and supernovae is central to regulating star formation in galaxies. Due to their low mass ($M_{*} < 10^{9}$\,M$_\odot$), dwarf galaxies are particularly susceptible to such processes, making them ideal sites to study the detailed physics of feedback. In this perspective, we summarise the latest observational evidence for feedback from star forming regions and how this drives the formation of 'superbubbles' and galaxy-wide winds. We discuss the important role of external ionising radiation -- 'reionisation' -- for the smallest galaxies. And, we discuss the observational evidence that this feedback directly impacts galaxy properties such as their star formation histories, metal content, colours, sizes, morphologies and even their inner dark matter densities. We conclude with a look to the future, summarising the key questions that remain unanswered and listing some of the outstanding challenges for galaxy formation theories.
- In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters. However, it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. It has received rapidly increasing attention from the community. This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison and applications. Furthermore, challenges in knowledge distillation are briefly reviewed and directions for future research are discussed.
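The core of knowledge distillation, the soft-target loss, fits in a few lines (a minimal pure-Python sketch with an illustrative temperature value; real implementations operate on batched tensors and combine this term with the ordinary cross-entropy on ground-truth labels):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions:
    the classic soft-target term the student is trained to minimize."""
    p = softmax(teacher_logits, temperature)   # teacher = target distribution
    q = softmax(student_logits, temperature)   # student = model distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The softened teacher probabilities carry "dark knowledge" about inter-class similarity that hard labels discard, which is why the student can match the teacher with far fewer parameters.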
- Large-amplitude Alfv\'en waves are subject to parametric decays which can have important consequences in space, astrophysical, and fusion plasmas. Though this Alfv\'en wave parametric decay instability was predicted decades ago, its observational evidence has not been well established, stimulating considerable interest in laboratory demonstration of the instability and associated numerical modeling. Here, we report on novel hybrid simulation modeling of the Alfv\'en wave parametric decay instability in a laboratory plasma (based on the Large Plasma Device), including collisionless ion kinetics. Using realistic wave injection and wave-plasma parameters we identify the threshold Alfv\'en wave amplitudes and frequencies required for triggering the instability in the bounded plasma. These threshold behaviors are corroborated by simple theoretical considerations. Compounding effects such as finite source sizes and ion-neutral collisions are briefly discussed. These hybrid simulations represent a promising tool for investigating laboratory Alfv\'en wave dynamics and our results may help to guide the first laboratory demonstration of the parametric decay instability.
- Deep neural networks (DNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past five years, tremendous progress has been made in this area. In this paper, we review the recent techniques for compacting and accelerating DNN models. In general, these techniques are divided into four categories: parameter pruning and quantization, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and quantization are described first; after that, the other techniques are introduced. For each category, we also provide insightful analysis about the performance, related applications, advantages, and drawbacks. Then we go through some very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating the model performance, and recent benchmark efforts. Finally, we conclude this paper by discussing the remaining challenges and possible directions for future work.
  Keywords: Deep Neural Networks, Quantization, Rank, Convolution Neural Network, Fully connected layer, Architecture, Distillation, Classification, Neural network, Sparsity, ...
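Of the four categories, low-rank factorization is the easiest to demonstrate end to end: a dense layer's weight matrix is replaced by the product of two thin matrices obtained via truncated SVD. A sketch under our own naming (not code from this survey):

```python
import numpy as np

def low_rank_factorize(W, rank):
    # Truncated SVD: approximate W (m x n) by A @ B with A (m x rank)
    # and B (rank x n), cutting parameters from m*n to rank*(m + n).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# A weight matrix whose true rank is at most 32: factorizing at rank 32
# reconstructs it essentially exactly while reducing the parameter count
# from 64*128 = 8192 to 32*(64+128) = 6144.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32)) @ rng.standard_normal((32, 128))
A, B = low_rank_factorize(W, 32)
```

For a trained layer whose spectrum decays more slowly, the rank is chosen to trade accuracy against compression.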
- We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization is essential for efficient inference on modern deep learning hardware. However, quantizing models to run in 8-bit is a non-trivial task, frequently leading to either significant performance reduction or engineering time spent on training a network to be amenable to quantization. Our approach relies on equalizing the weight ranges in the network by making use of a scale-equivariance property of activation functions. In addition, the method corrects biases in the error that are introduced during quantization. This improves quantization accuracy, and can be applied to many common computer vision architectures with a straightforward API call. For common architectures, such as the MobileNet family, we achieve state-of-the-art quantized model performance. We further show that the method also extends to other computer vision architectures and tasks such as semantic segmentation and object detection.
  Keywords: Quantization, Architecture, Image Processing, Activation function, Deep learning, Semantic segmentation, Inference, Object detection, Hyperparameter, Application programming interface, ...
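The scale-equivariance trick can be shown concretely for two dense layers with a ReLU in between: since relu(s*x) = s*relu(x) for s > 0, rescaling consecutive weight matrices per channel leaves the network function unchanged while balancing their weight ranges. A simplified sketch (our own variant of the idea; the paper's exact scaling formula may differ):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def equalize(W1, W2):
    # Choose per-channel scales s so that the output-channel range of
    # layer 1 equals the matching input-channel range of layer 2; the
    # network W2 @ relu(W1 @ x) is exactly invariant under this rescaling.
    r1 = np.abs(W1).max(axis=1)   # range of each output channel of W1
    r2 = np.abs(W2).max(axis=0)   # range of each input channel of W2
    s = np.sqrt(r2 / r1)
    return W1 * s[:, None], W2 / s[None, :]
```

After equalization both layers share the range sqrt(r1*r2) per channel, which makes a single per-tensor quantization grid far less lossy.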
- We present a post-training weight pruning method for deep neural networks that achieves accuracy levels tolerable for the production setting and that is sufficiently fast to be run on commodity hardware such as desktop CPUs or edge devices. We propose a data-free extension of the approach for computer vision models based on automatically-generated synthetic fractal images. We obtain state-of-the-art results for data-free neural network pruning, with ~1.5% top@1 accuracy drop for a ResNet50 on ImageNet at 50% sparsity rate. When using real data, we are able to get a ResNet50 model on ImageNet with 65% sparsity rate in 8-bit precision in a post-training setting with a ~1% top@1 accuracy drop. We release the code as a part of the OpenVINO(TM) Post-Training Optimization tool.
  Keywords: Sparsity, Deep Neural Networks, Quantization, Image Processing, Optimization, Fractal, Training set, Calibration, Inference, Distillation, ...
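The usual baseline against which such methods are measured is plain magnitude pruning: zero the smallest-magnitude weights until a target sparsity is reached. A generic sketch (this is the baseline criterion, not the authors' method):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    # Zero the fraction `sparsity` of entries with the smallest magnitude.
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)
```

Post-training methods refine this by re-optimizing the surviving weights layer by layer against calibration (or, here, synthetic) data.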
- Lately, post-training quantization methods have gained considerable attention, as they are simple to use and require only a small unlabeled calibration set. This small dataset cannot be used to fine-tune the model without significant over-fitting. Instead, these methods only use the calibration set to set the activations' dynamic ranges. However, such methods have always resulted in significant accuracy degradation when used below 8 bits (except on small datasets). Here we aim to break the 8-bit barrier. To this end, we minimize the quantization errors of each layer separately by optimizing its parameters over the calibration set. We empirically demonstrate that this approach is: (1) much less susceptible to over-fitting than the standard fine-tuning approaches, and can be used even on a very small calibration set; and (2) more powerful than previous methods, which only set the activations' dynamic ranges. Furthermore, we demonstrate how to optimally allocate the bit-widths for each layer, while constraining accuracy degradation or model compression, by proposing a novel integer programming formulation. Finally, we suggest model global statistics tuning, to correct biases introduced during quantization. Together, these methods yield state-of-the-art results for both vision and text models. For instance, on ResNet50, we obtain less than 1\% accuracy degradation --- with 4-bit weights and activations in all layers but the smallest two. We open-sourced our code.
  Keywords: Quantization, Calibration, Statistics, Overfitting, Programming, Optimization, Training set, Attention, Inference, Mean squared error, ...
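Minimizing each layer's quantization error can be previewed in miniature: even a plain grid search over the quantization scale beats the naive max-range choice at 4 bits, because clipping a few outliers shrinks the rounding error on the bulk of the weights. An illustrative sketch with a symmetric uniform quantizer (our simplification, not the paper's optimizer):

```python
import numpy as np

def quantize(W, scale, bits=4):
    # Symmetric uniform quantizer with integer levels in [-2^(b-1), 2^(b-1)-1].
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(W / scale), -qmax - 1, qmax) * scale

def best_scale(W, bits=4, n_grid=100):
    # Grid-search the scale that minimizes the layer's quantization MSE.
    wmax = np.abs(W).max()
    qmax = 2 ** (bits - 1) - 1
    candidates = np.linspace(0.2, 1.0, n_grid) * wmax / qmax
    errs = [np.mean((W - quantize(W, s, bits)) ** 2) for s in candidates]
    return candidates[int(np.argmin(errs))]
```

The paper goes further by optimizing the layer parameters themselves over the calibration set rather than only the scales.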
- Overparameterized networks trained to convergence have shown impressive performance in domains such as computer vision and natural language processing. Pushing state of the art on salient tasks within these domains corresponds to these models becoming larger and more difficult for machine learning practitioners to use, given the increasing memory and storage requirements, not to mention the larger carbon footprint. Thus, in recent years there has been a resurgence in model compression techniques, particularly for deep convolutional neural networks and self-attention based networks such as the Transformer. Hence, this paper provides a timely overview of both old and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation and combinations thereof. We assume a basic familiarity with deep learning architectures\footnote{For an introduction to deep learning, see ~\citet{goodfellow2016deep}}, namely, Recurrent Neural Networks~\citep[(RNNs)][]{rumelhart1985learning,hochreiter1997long}, Convolutional Neural Networks~\citep{fukushima1980neocognitron}~\footnote{For an up to date overview see~\citet{khan2019survey}} and Self-Attention based networks~\citep{vaswani2017attention}\footnote{For a general overview of self-attention networks, see ~\citet{chaudhari2019attentive}.},\footnote{For more detail and their use in natural language processing, see~\citet{hu2019introductory}}. Most of the papers discussed are proposed in the context of at least one of these DNN architectures.
  Keywords: Quantization, Architecture, Distillation, Sparsity, Convolution Neural Network, Attention, Rank, Regularization, Hidden layer, Neural network, ...
- Firehose-like instabilities (FIs) are cited in multiple astrophysical applications. Of particular interest are the kinetic manifestations in weakly-collisional or even collisionless plasmas, where these instabilities are expected to contribute to the evolution of macroscopic parameters. Relatively recent studies have initiated a realistic description of FIs, as induced by the interplay of both species, electrons and protons, dominant in the solar wind plasma. This work complements the current knowledge with new insights from linear theory and the first disclosures from 2D PIC simulations, identifying the fastest growing modes near the instability thresholds and their long-run consequences on the anisotropic distributions. Thus, unlike previous setups, these conditions are favorable to those aperiodic branches that propagate obliquely to the uniform magnetic field, with (maximum) growth rates higher than those of the periodic, quasi-parallel modes. Theoretical predictions are, in general, confirmed by the simulations. The aperiodic electron FI (a-EFI) remains unaffected by the proton anisotropy, and saturates rapidly at low-level fluctuations. Regarding the firehose instability at proton scales (PFI), we see a stronger competition between the periodic and aperiodic branches. For the parameters chosen in our analysis, the aperiodic PFI (a-PFI) is excited earlier than the periodic PFI (p-PFI), with the latter reaching a significantly higher fluctuation power. However, both branches are significantly enhanced by the presence of anisotropic electrons. The interplay between EFIs and PFIs also produces a more pronounced proton isotropization.
  Keywords: Instability, Anisotropy, Firehose instability, Solar wind, Mass ratio, Numerical simulation, Temperature anisotropy, Relaxation, Reduced mass, Plasma Beta, ...
- The spatial distribution of cosmic ray (CR) particles in the interstellar medium (ISM) is of major importance in radio astronomy, where its knowledge is essential for the interpretation of observations, and in theoretical astrophysics, where CR contribute to the structure and dynamics of the ISM. Local inhomogeneities in interstellar magnetic field strength and structure can affect the local diffusivity and ensemble dynamics of the cosmic ray particles. Magnetic traps (regions between magnetic mirrors located on the same magnetic line) can lead to especially strong and persistent features in the CR spatial distribution. Using test particle simulations, we study the spatial distribution of an ensemble of CR particles (both protons and electrons) in various magnetic field configurations, from an idealized axisymmetric trap to those that emerge in intermittent (dynamo-generated) random magnetic fields. We demonstrate that both the inhomogeneity in the CR sources and the energy losses by the CR particles can lead to persistent local inhomogeneities in the CR distribution and that the protons and electrons have different spatial distributions. Our results can have profound implications for the interpretation of the synchrotron emission from astronomical objects, and in particular its random fluctuations.
  Keywords: Cosmic ray, Magnetic trap, Magnetic field strength, Interstellar medium, Magnetic mirror, Larmor radius, Liouville theorem, Synchrotron radiation, Pitch angle, Galactic magnetic field, ...
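Test-particle studies of this kind integrate charged-particle orbits in a prescribed magnetic field; the standard workhorse is the Boris pusher, whose velocity update is a pure rotation and therefore conserves particle speed when the electric field vanishes. A minimal sketch in dimensionless units (uniform field only; the field configurations in the paper are far richer):

```python
import numpy as np

def boris_push(x, v, qm, B, dt, steps):
    # Boris algorithm for a test particle in a static magnetic field (E = 0);
    # qm is the charge-to-mass ratio. The rotation preserves |v| exactly.
    for _ in range(steps):
        t = 0.5 * qm * dt * B
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v = v + np.cross(v + np.cross(v, t), s)
        x = x + v * dt
    return x, v

# Gyration about a uniform field along z: v_z and |v| are conserved.
x1, v1 = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.1]),
                    qm=1.0, B=np.array([0.0, 0.0, 1.0]), dt=0.05, steps=500)
```

Magnetic mirrors and traps appear once B varies along the field line, so that the conserved magnetic moment reflects particles of sufficiently large pitch angle.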
- Straining of magnetic fields by large-scale shear flow, generally assumed to lead to intensification and generation of small scales, is re-examined in light of the persistent observation of large-scale magnetic fields in astrophysics. It is shown that in magnetohydrodynamic turbulence, unstable shear flows have the unexpected effect of sequestering magnetic energy at large scales, due to counteracting straining motion of nonlinearly excited large-scale stable eigenmodes. This effect is quantified via dissipation rates, energy transfer rates, and visualizations of magnetic field evolution by artificially removing the stable modes.
  Keywords: Shear flow, Magnetic energy, Instability, Turbulence, Dissipation, Sheared, Magnetohydrodynamic turbulence, Magnetic Prandtl number, Long-range magnetic fields, Current density, ...
- The nearby M dwarf WX UMa has recently been detected at radio wavelengths with LOFAR. The combination of its observed brightness temperature and circular polarisation fraction suggests that the emission is generated via the electron-cyclotron maser instability. Two distinct mechanisms have been proposed to power such emission from low-mass stars: either a sub-Alfv\'enic interaction between the stellar magnetic field and an orbiting planet, or reconnection at the edge of the stellar magnetosphere. In this paper, we investigate the feasibility of both mechanisms, utilising the information about the star's surrounding plasma environment obtained from modelling its stellar wind. Using this information, we show that a Neptune-sized exoplanet with a magnetic field strength of 10-100 G orbiting at ~0.034 au can accurately reproduce the observed radio emission from the star, with a corresponding orbital period of 7.4 days. Due to the stellar inclination, a planet in an equatorial orbit is unlikely to transit the star. While such a planet could induce radial velocity semi-amplitudes from 7 to 396 m s$^{-1}$, it is unlikely that this signal could be detected with current techniques due to the activity of the host star. The application of our planet-induced radio emission model here illustrates its exciting potential as a new tool for identifying planet-hosting candidates from long-term radio monitoring. We also develop a model to investigate the reconnection-powered emission scenario. While this approach produces less favourable results than the planet-induced scenario, it nevertheless serves as a potential alternative emission mechanism which is worth exploring further.
  Keywords: Planet, Star, Stellar wind, Magnetic field strength, Stellar rotation, Alfvén wave, Maser, Neptune, Jupiter, Extrasolar planet, ...
- This paper is concerned with the taxonomy of finitely complete categories, based on 'matrix properties', a particular type of exactness property that can be represented by integer matrices. In particular, the main result of the paper gives an algorithm for deciding whether a conjunction of such properties implies another such property. Computer implementation of this algorithm allows one to peer into the complex structure of the poset of 'matrix classes', i.e., the poset of all collections of finitely complete categories determined by matrix properties. Among elements of this poset are the collections of Mal'tsev categories, majority categories, (finitely complete) arithmetical categories, as well as finitely complete extensions of various classes of varieties defined by a special type of Mal'tsev conditions found in the literature.
  Keywords: Morphism, Partially ordered set, Injective object, Epimorphism, Monomorphism, Subcategory, Factorisation, Taxonomy, Lattice (order), Conjunction, ...
- It is well-known that the category of presheaf functors is complete and cocomplete, and that the Yoneda embedding into the presheaf category preserves products. However, the Yoneda embedding does not preserve coproducts. It is perhaps less well-known that if we restrict the codomain of the Yoneda embedding to the full subcategory of limit-preserving functors, then this embedding preserves colimits, while still enjoying most of the other useful properties of the Yoneda embedding. We call this modified embedding the Lambek embedding. The category of limit-preserving functors is known to be a reflective subcategory of the category of all functors, i.e., there is a left adjoint for the inclusion functor. In the literature, the existence of this left adjoint is often proved non-constructively, e.g., by an application of Freyd's adjoint functor theorem. In this paper, we provide an alternative, more constructive proof of this fact. We first explain the Lambek embedding and why it preserves coproducts. Then we review some concepts from multi-sorted algebras and observe that there is a one-to-one correspondence between product-preserving presheaves and certain multi-sorted term algebras. We provide a construction that freely turns any presheaf functor into a product-preserving one, hence giving an explicit definition of the left adjoint functor of the inclusion. Finally, we sketch how to extend our method to prove that the subcategory of limit-preserving functors is also reflective.
  Keywords: Embedding, Subcategory, Term algebra, Morphism, Homomorphism, Programming Language, Axiom of choice, Engineering, Functional programming, Quantum programming, ...
- Although it has been a well-known fact, for more than two decades, that category theory is needed for the study of topological orders, it is still a non-trivial challenge for students and working physicists to master the abstract language of category theory. In this work, for those who have no background in category theory, we explain in great detail how the structure of a (braided) fusion category naturally emerges from lattice models and physical intuitions. Moreover, we show that nearly all mathematical notions and constructions in fusion categories and their representation theory, such as (monoidal) functors, Drinfeld center, module categories, Morita equivalence, condensation completion and fusion 2-categories, naturally emerge from lattice models and physical intuitions. In this process, we also introduce some basic notions and important results of topological orders.
  Keywords: Category theory, Topological order, Lattice model, Condensation, Morita equivalence, Representation theory, Language, ...
- We survey the theory of Hopf monads on monoidal categories, and present new examples and applications. As applications, we utilise this machinery to present a new theory of cross products, as well as analogues of the Fundamental Theorem of Hopf algebras and Radford's biproduct Theorem for Hopf algebroids. Additionally, we describe new examples of Hopf monads which arise from Galois and Ore extensions of bialgebras. We also classify Lawvere theories whose corresponding monads on the category of sets and functions become Hopf, as well as Hopf monads on the poset of natural numbers.
  Keywords: Morphism, Hopf algebra, Isomorphism, Duality, Monoid, Crossed product, Universal property, Tensor product, Partially ordered set, Finitary, ...
- Magnitude and (co)weightings are quite general constructions in enriched categories, yet they have been developed almost exclusively in the context of Lawvere metric spaces. We construct a meaningful notion of magnitude for flow graphs based on the observation that topological entropy provides a suitable map into the max-plus semiring, and we outline its utility. Subsequently, we identify a separate point of contact between magnitude and topological entropy in digraphs that yields an analogue of volume entropy for geodesic flows. Finally, we sketch the utility of this construction for feature engineering in downstream applications with generic digraphs.
  Keywords: Graph, Topological entropy in physics, Semiring, Metric space, Engineering, Entropy, Enriched category, Arithmetic, Morphism, Geodesic, ...
- The interstellar medium (ISM) of galaxies very often contains a gas component that reaches the temperature of several million degrees, whose physical and chemical properties can be investigated through imaging and spectroscopy in the X-rays. We review the current knowledge on the origin and retention of the hot ISM in star-forming and early-type galaxies, from a combined theoretical and observational standpoint. As a complex interplay between gravitational processes, environmental effects, and feedback mechanisms contributes to its physical conditions, the hot ISM represents a key diagnostic of the evolution of galaxies.
  Keywords: Early-type galaxy, Interstellar medium, Hot ISM, Galaxy, Hot gas, Milky Way, Cooling, Star formation, Accretion, Active Galactic Nuclei, ...
- Cosmic rays are mostly composed of protons accelerated to relativistic speeds. When those protons encounter interstellar material, they produce neutral pions which in turn decay into gamma rays. This offers a compelling way to identify the acceleration sites of protons. A characteristic hadronic spectrum, with a low-energy break around 200 MeV, was detected in the gamma-ray spectra of four Supernova Remnants (SNRs), IC 443, W44, W49B and W51C, with the Fermi Large Area Telescope. This detection provided direct evidence that cosmic-ray protons are (re-)accelerated in SNRs. Here, we present a comprehensive search for low-energy spectral breaks among 311 4FGL catalog sources located within 5 degrees from the Galactic plane. Using 8 years of data from the Fermi Large Area Telescope between 50 MeV and 1 GeV, we find and present the spectral characteristics of 56 sources with a spectral break confirmed by a thorough study of systematic uncertainty. Our population of sources includes 13 SNRs for which the proton-proton interaction is enhanced by the dense target material; the high-mass gamma-ray binary LS~I +61 303; the colliding wind binary eta Carinae; and the Cygnus star-forming region. This analysis better constrains the origin of the gamma-ray emission and enlarges our view to potential new cosmic-ray acceleration sites.
  Keywords: Spectral break, Cosmic ray, Molecular cloud, Galactic plane, Systematic error, Star-forming region, Region of interest, Point source, Proton-proton interaction, Pulsar wind nebula, ...
- We analyse the kinematical and dynamical state of the galaxy cluster RXCJ1230.7+3439, at z=0.332, using 93 new spectroscopic redshifts of galaxies acquired at the 3.6m TNG telescope and from SDSS DR16 public data. We find that RXCJ1230 appears as a clearly isolated peak in the redshift space, with a global line-of-sight velocity dispersion of $1004_{-122}^{+147}$ km s$^{-1}$, and showing a very complex structure with the presence of three subclusters. Our analyses confirm that the three substructures detected are in a pre-merger phase, where the main interaction takes place with the south-west subclump. We compute a velocity dispersion of $\sigma_\textrm{v} \sim 1000$ and $\sigma_\textrm{v} \sim 800$ km s$^{-1}$ for the main cluster and the south-west substructure, respectively. The central main body and south-west substructure differ by $\sim 870$ km s$^{-1}$ in the LOS velocity. From these data, we estimate a dynamical mass of $M_{200}= 9.0 \pm 1.5 \times 10^{14}$ M$_{\odot}$ and $4.4 \pm 3.3 \times 10^{14}$ M$_{\odot}$ for the RXCJ1230 main body and south-west clump, respectively, which reveals that the cluster will undergo a merger characterized by a 2:1 mass ratio. We solve a two-body problem for this interaction and find that the most likely solution suggests that the merging axis lies almost in the plane of the sky and the subcluster will fully interact in $\sim0.3$ Gyr. The comparison between the dynamical masses and those derived from X-ray data reveals a good agreement within errors (differences $\sim 15$\%), which suggests that the innermost regions ($<r_{500}$) of the galaxy clumps are almost in hydrostatic equilibrium. To summarize, RXCJ1230 is a young but also massive cluster in a pre-merging phase, accreting other galaxy systems from its environment.
  Keywords: Cluster of galaxies, Galaxy, Velocity dispersion, Sloan Digital Sky Survey, Milky Way, Spectroscopic redshift, Intra-cluster medium, Mass ratio, Brightest cluster galaxy, Line of sight velocity, ...
- We probe the environmental properties of X-ray supernova remnants (SNRs) at various points along their evolutionary journey, especially the Sedov-Taylor (S-T) phase, and their conformance with theoretically derived models of SNR evolution. The remnant size is used as a proxy for the age of the remnant. Our data set includes 34 Milky Way, 59 Large Magellanic Cloud (LMC), and 5 Small Magellanic Cloud (SMC) SNRs. We select remnants that have been definitively typed as either core-collapse (CC) or Type Ia supernovae, with well-defined size estimates, and a thermal X-ray flux measured over the entire remnant. A catalog of SNR size and X-ray luminosity is presented and plotted, with ambient density and age estimates from the literature. Model remnants with a given density, in the S-T phase, are overplotted on the diameter-vs-luminosity plot, allowing the evolutionary state and physical properties of SNRs to be compared to each other, and to theoretical models. We find that small, young remnants are predominantly Type Ia remnants or high luminosity CCs, suggesting that many CC SNRs are not detected until after they have emerged from the progenitor's wind-blown bubble. An examination of the distribution of SNR diameters in the Milky Way and LMC reveals that LMC SNRs must be evolving in an ambient medium which is 30% as dense as that in the Milky Way. This is consistent with ambient density estimates for the Galaxy and LMC.
  Keywords: Supernova remnant, Large Magellanic Cloud, Milky Way, Luminosity, X-ray luminosity, Ejecta, Core collapse, X-ray spectrum, Small Magellanic Cloud, Star, ...
- Nowadays, new branches of research propose the use of non-traditional data sources for the study of migration trends, in order to find original methodologies to answer open questions about human mobility. In this context, we present the Multi-aspect Integrated Migration Indicators (MIMI) dataset, a new dataset of migration drivers resulting from the acquisition, transformation and merging of official data about international flows and stocks with original indicators not typically used in migration studies, such as those derived from online social networks. This work describes the process of gathering, embedding and merging traditional and novel features, resulting in a new multidisciplinary dataset that we believe could significantly contribute to nowcasting present and forecasting future bilateral migration trends.
  Keywords: Mobility, Facebook, Noise-equivalent temperature, Python, Social network, Statistics, COVID 19, Embedding, Pearson's correlation, Orientation, ...
- The $\Lambda$CDM model faces several tensions with recent cosmological data and their increased accuracy. The mismatch between the values of the Hubble constant $H_0$ obtained from direct distance ladder measurements and from the cosmic microwave background (CMB) is the most statistically significant, but the amplitude of the matter fluctuations is also regarded as a serious concern, leading to the investigation of a plethora of models. We first show that the combination of several recent measurements from local probes leads to a tight constraint on the present-day matter density $\Omega_M$ as well as on the amplitude of the matter fluctuations, both acceptably consistent with the values inferred from the CMB. Secondly, we show that the data on cosmic chronometers allow us to derive an accurate value of the Hubble constant $H_0$ for $\Lambda$CDM models: $H_0 = 67.4 \pm 1.34$ km/s/Mpc. This implies that, within $\Lambda$CDM, some determinations of $H_0$ are biased. Considering a bias on the Hubble constant as a nuisance parameter within $\Lambda$CDM, we examine such a $\Lambda$CDM$ + H_0$ bias model on the same statistical grounds as alternative cosmological models. We show that the former statistically supersedes most existing extended models proposed up to now. In a third step, we show that the value of $\Omega_M$ we obtained, combined with $H_0$ from SH0ES, leads to an accurate measurement of $\omega_M$, providing an additional low-redshift test for cosmological models. From this test, most extensions seem to be confronted with a new tension, whereas the $\Lambda$CDM with $H_0 \sim 67$ has none. We conclude that a standard $\Lambda$CDM model with an unknown bias in the Cepheids distance calibration represents a model that reaches a remarkable agreement, statistically better than previously proposed extensions with $H_0 \sim 73$ for which such a comparison can be performed. (abridged)
  Keywords: Hubble constant, Cosmic microwave background, Lambda-CDM model, Supernovae H0 for the Equation of State, Planck mission, Baryon acoustic oscillations, Calibration, Cepheid, Cosmological parameters, Hubble constant measurement, ...
- The rotation curves of some star forming massive galaxies at redshift two decline over the radial range of a few times the effective radius, indicating a significant deficit of dark matter (DM) mass in the galaxy centre. The DM mass deficit is interpreted as the existence of a DM density core rather than the cuspy structure predicted by the standard cosmological model. A recent study proposed that a galaxy merger, in which the smaller satellite galaxy is significantly compacted by dissipative contraction of the galactic gas, can heat the centre of the host galaxy and help make a large DM core. By using an $N$-body simulation, we find that a large amount of DM mass is imported to the centre by the merging satellite, making this scenario an unlikely solution for the DM mass deficit. In this work, we consider giant baryonic clumps in high redshift galaxies as an alternative heating source for creating the baryon dominated galaxies with a DM core. Due to dynamical friction, the orbit of the clumps decays in a few Gyr and the baryons condense at the galactic centre. As a back-reaction, the halo centre is heated up and the density cusp is flattened out. The combination of the baryon condensation and core formation makes the galaxy baryon dominated in the central 2-5 kpc, comparable to the effective radius of the observed galaxies. Thus, the dynamical heating by giant baryonic clumps is a viable mechanism for explaining the observed dearth of DM in high redshift galaxies.
  Keywords: Dark matter, Galaxy, Dark matter particle mass, Dynamical friction, Milky Way, Galaxy merger, Massive galaxies, Satellite galaxy, Ellipticity, Rotation Curve, ...
- We present a new investigation of the intergalactic medium (IGM) near reionization using dark gaps in the Lyman-$\beta$ (Ly$\beta$) forest. With its lower optical depth, Ly$\beta$ offers a potentially more sensitive probe to any remaining neutral gas compared to the commonly used Ly$\alpha$ line. We identify dark gaps in the Ly$\beta$ forest using spectra of 42 QSOs at $z_{\rm em}>5.5$, including new data from the XQR-30 VLT Large Programme. Approximately $40\%$ of these QSO spectra exhibit dark gaps longer than $10h^{-1}{\rm Mpc}$ at $z\simeq5.8$. By comparing the results to predictions from simulations, we find that the data are broadly consistent both with models where fluctuations in the Ly$\alpha$ forest are caused solely by ionizing ultraviolet background (UVB) fluctuations and with models that include large neutral hydrogen patches at $z<6$ due to a late end to reionization. Of particular interest is a very long ($L=28h^{-1}{\rm Mpc}$) and dark ($\tau_{\rm eff} \gtrsim 6$) gap persisting down to $z\simeq 5.5$ in the Ly$\beta$ forest of the $z_{\rm em}=5.85$ QSO PSO J025$-$11. This gap may support late reionization models with a volume-weighted average neutral hydrogen fraction of $ \langle x_{\rm HI}\rangle \gtrsim 5\%$ by $z=5.6$. Finally, we infer constraints on $\langle x_{\rm HI}\rangle$ over $5.5 \lesssim z \lesssim 6.0$ based on the observed Ly$\beta$ dark gap length distribution and a conservative relationship between gap length and neutral fraction derived from simulations. We find $\langle x_{\rm HI}\rangle \leq 0.05$, 0.17, and 0.29 at $z\simeq 5.55$, 5.75, and 5.95, respectively. These constraints are consistent with models where reionization ends significantly later than $z = 6$.
  Keywords: Quasar, Reionization, Reionization models, Neutral hydrogen gas, Ultraviolet background, UVB fluctuations, Galaxy, VLT telescope, Line of sight, Principal component analysis, ...
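The dark-gap statistic itself is algorithmically simple: scan the forest for contiguous stretches where the transmitted flux stays below a threshold for at least some minimum length. An illustrative pixel-based version (the paper defines gaps in comoving $h^{-1}$Mpc with its own thresholds; the names here are our own):

```python
import numpy as np

def dark_gaps(flux, threshold, min_len):
    # Return (start, end) pixel index pairs of runs with flux < threshold
    # lasting at least min_len pixels (end index is exclusive).
    below = np.asarray(flux) < threshold
    gaps, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_len:
                gaps.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_len:
        gaps.append((start, len(below)))
    return gaps
```

The gap-length distribution over many sightlines is then compared against the same statistic measured in simulated spectra.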
- While a quantum spin liquid (QSL) phase has been identified in the $J_1$-$J_2$ Heisenberg model on a triangular lattice via numerical calculations, debate persists over whether such a QSL is gapped or gapless, with contradictory conclusions from different techniques. Moreover, information about excitations and dynamics is crucial for the experimental detection of such a phase. In this work, we use exact diagonalization to characterize signatures of a QSL phase on the triangular lattice through the dynamical spin structure factor $\mathcal{S}(q,\omega)$ and Raman susceptibility $\mathcal{\chi}(\omega)$. We find that spectra for the QSL phase show distinct features compared to those of neighboring phases, and both the Raman spectra and spin structure factor show gapped behaviour in the QSL phase. Interestingly, there is a prominent excitation mode in the Raman $A_2$ channel, indicating a strong subleading tendency toward a chiral spin liquid phase.
  Keywords: Triangular lattice, Nearest-neighbor site, Stripe phases, Spin structure, Spin liquid, Raman scattering, Hamiltonian, Brillouin zone, Density matrix renormalization group, Intensity, ...
- We use cosmological simulations from the FIRE (Feedback In Realistic Environments) project to study the baryon cycle and galaxy mass assembly for central galaxies in the halo mass range $M_{\rm halo} \sim 10^{10} - 10^{13} M_{\odot}$. By tracing cosmic inflows, galactic outflows, gas recycling, and merger histories, we quantify the contribution of physically distinct sources of material to galaxy growth. We show that in situ star formation fueled by fresh accretion dominates the early growth of galaxies of all masses, while the re-accretion of gas previously ejected in galactic winds often dominates the gas supply for a large portion of every galaxy's evolution. Externally processed material contributes increasingly to the growth of central galaxies at lower redshifts. This includes stars formed ex situ and gas delivered by mergers, as well as smooth intergalactic transfer of gas from other galaxies, an important but previously under-appreciated growth mode. By $z=0$, wind transfer, i.e. the exchange of gas between galaxies via winds, can dominate gas accretion onto $\sim L^{*}$ galaxies over fresh accretion and standard wind recycling. Galaxies of all masses re-accrete >50% of the gas ejected in winds and recurrent recycling is common. The total mass deposited in the intergalactic medium per unit stellar mass formed increases in lower mass galaxies. Re-accretion of wind ejecta occurs over a broad range of timescales, with median recycling times ($\sim 100-350$ Myr) shorter than previously found. Wind recycling typically occurs at the scale radius of the halo, independent of halo mass and redshift, suggesting a characteristic recycling zone around galaxies that scales with the size of the inner halo and the galaxy's stellar component.
  Keywords: Galaxy, Accretion, Interstellar medium, Virial mass, Stellar mass, Star formation, Star, Milky Way, Intergalactic medium, FIRE simulations, ...
- Fighting the ongoing COVID-19 infodemic has been declared one of the most important focus areas by the World Health Organization since the onset of the COVID-19 pandemic. The information consumed and disseminated ranges from the promotion of fake cures, rumors, and conspiracy theories to content spreading xenophobia and panic; at the same time, there is information (e.g., advice, promotion of genuine cures) that can help different stakeholders such as policy-makers. Social media platforms enable the infodemic, and there have been efforts to curate the content on such platforms and to analyze and debunk it. While most research efforts consider only one or two aspects of such information (e.g., detecting factuality), in this study we take a multifaceted approach, including an API (\url{https://app.swaggerhub.com/apis/yifan2019/Tanbih/0.8.0/}) and a demo system (\url{https://covid19.tanbih.org}), which we have made freely and publicly available. We believe this will be useful to researchers and other stakeholders. A screencast of the API services and the demo is available at \url{https://youtu.be/zhbcSvxEKMk}. Keywords: COVID-19, Social media, Application programming interface, Conspiracy theory, ...
- The spread of fake news, propaganda, misinformation, disinformation, and harmful content online has raised concerns among social media platforms, government agencies, policymakers, and society as a whole, because such harmful or abusive content can lead to physical, emotional, relational, and financial harm. One form of harmful content is \textit{trolling-based} online content, where the idea is to post a message that is provocative, offensive, or menacing with the intent to mislead the audience. The content can be textual, visual, a combination of both, or a meme. In this study, we provide a comparative analysis of troll-based meme classification using textual, visual, and multimodal content. We report several interesting findings regarding code-mixed text, the multimodal setting, and the use of an additional dataset, which show improvements over the majority baseline. Keywords: Social media
- Harmful or abusive online content has been increasing over time, raising concerns for social media platforms, government agencies, and policymakers. Such content can have a major negative impact on society: e.g., cyberbullying can lead to suicides, rumors about COVID-19 can cause vaccine hesitancy, and the promotion of fake cures for COVID-19 can cause health harms and deaths. The content that is posted and shared online can be textual, visual, or a combination of both, e.g., in a meme. Here, we describe our experiments in detecting the roles of the entities (hero, villain, victim) in harmful memes, which was part of the CONSTRAINT-2022 shared task, as well as our system for the task. We further provide a comparative analysis of different experimental settings (i.e., unimodal, multimodal, attention, and augmentation). For reproducibility, we make our experimental code publicly available at \url{https://github.com/robi56/harmful_memes_block_fusion}. Keywords: COVID-19, Social media, Attention, Vaccine, ...
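To make the multimodal setting concrete, here is a toy late-fusion sketch: it is not the authors' block-fusion system, and the embedding dimensions (768 for a text encoder, 2048 for an image encoder), the random "features", and the untrained linear head are all illustrative assumptions. It only shows the shape of the pipeline: concatenate per-meme text and image embeddings, then map to the three role classes (hero, villain, victim).

```python
# Toy late-fusion sketch for entity-role classification in memes.
# Assumptions: random stand-ins for text/image embeddings, untrained head;
# real systems would use learned encoders and a trained fusion layer.
import numpy as np

rng = np.random.default_rng(0)

n_memes = 4
text_emb = rng.normal(size=(n_memes, 768))    # e.g. a text encoder's pooled output
img_emb = rng.normal(size=(n_memes, 2048))    # e.g. a CNN's pooled visual features

# late fusion: concatenate the two modalities per example
fused = np.concatenate([text_emb, img_emb], axis=1)   # shape (4, 2816)

# untrained linear classification head over 3 roles: hero / villain / victim
W = rng.normal(size=(2816, 3)) * 0.01
logits = fused @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
pred = probs.argmax(axis=1)   # predicted role index per meme
```

A unimodal baseline drops one of the two embedding blocks before the head; attention-based fusion would instead weight the modalities per example rather than concatenating them blindly.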