- We study the data for the cumulative as well as daily number of cases in the Covid-19 outbreak in China. The cumulative data can be fit to an empirical form obtained from a Susceptible-Infected-Removed (SIR) model studied previously on a Euclidean network. Plotting the number of cases against the distance from the epicenter for both China and Italy, we find an approximate power-law variation with an exponent $\sim 1.85$, showing strongly that the spatial dependence plays a key role, a factor included in the model. We report here that the SIR model on the Euclidean network can reproduce the data for China with high accuracy for given parameter values, and can also predict when the epidemic, at least locally, can be expected to be over.
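The power-law fit of case counts against distance described above can be sketched as a least-squares fit in log-log space. The data below are synthetic, generated with an assumed exponent of $-1.85$ purely for illustration; they are not the paper's data.

```python
import math

def fit_power_law_exponent(distances, cases):
    """Least-squares slope of log(cases) vs. log(distance),
    i.e. the exponent k in cases ~ distance^k."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(c) for c in cases]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Synthetic check: data generated with exponent -1.85 is recovered.
ds = [10.0 * 1.5 ** i for i in range(12)]
cs = [2.0e4 * d ** -1.85 for d in ds]
k = fit_power_law_exponent(ds, cs)
```

In practice one would fit binned counts with uncertainties, but the log-log slope already captures the exponent reported in the abstract.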
- Measuring the primordial power spectrum on small scales is a powerful tool in inflation model building, yet constraints from Cosmic Microwave Background measurements alone are insufficient to place bounds stringent enough to be appreciably effective. For the very small scale spectrum, on scales which subtend angles of less than 0.3 degrees on the sky, an upper bound can be extracted from the astrophysical constraints on the possible production of primordial black holes in the early universe. A recently discovered observational by-product of an enhanced power spectrum on small scales, induced gravitational waves, has been shown to be within the range of proposed space-based gravitational wave detectors, such as NASA's LISA and BBO detectors and the Japanese DECIGO detector. In this paper we explore the impact such a detection would have on models of inflation known to lead to an enhanced power spectrum on small scales, namely the Hilltop-type and running mass models. We find that the Hilltop-type model can produce observable induced gravitational waves within the range of BBO and DECIGO for integral and fractional powers of the potential within a reasonable number of e-folds. We also find that the running mass model can produce a spectrum within the range of these detectors, but requires that inflation terminates after an unreasonably small number of e-folds. Finally, we argue that if the thermal history of the Universe were to accommodate such a small number of e-folds, the running mass model can produce primordial black holes within a mass range compatible with dark matter, i.e. within the range $10^{20}\,\mathrm{g} < M_{BH} < 10^{27}\,\mathrm{g}$.
- Electron correlations play a central role in iron-based superconductors. In these systems, multiple Fe $3d$-orbitals are active in the low-energy physics, and they are not all degenerate. For these reasons, the role of orbital-selective correlations has been an active topic in the study of the iron-based systems. In this paper, we survey the recent developments on the subject. For the normal state, we emphasize the orbital-selective Mott physics that has been extensively studied, especially in the iron chalcogenides, in the case of electron filling $n \sim 6$. In addition, the interplay between orbital selectivity and electronic nematicity is addressed. For the superconducting state, we summarize the initial ideas for orbital-selective pairing, and discuss the recent explosive activities along this direction. We close with some perspectives on several emerging topics. These include the evolution of the orbital-selective correlations, magnetic and nematic orders and superconductivity as the electron filling factor is reduced from $6$ to $5$, as well as the interplay between electron correlations and topological bandstructure in iron-based superconductors.
- Using the tensor Radon transform and related numerical methods, we study how bulk geometries can be explicitly reconstructed from boundary entanglement entropies in the specific case of $\mathrm{AdS}_3/\mathrm{CFT}_2$. We find that, given the boundary entanglement entropies of a $2$d CFT, this framework provides a quantitative measure that detects whether the bulk dual is geometric in the perturbative (near AdS) limit. In the case where a well-defined bulk geometry exists, we explicitly reconstruct the unique bulk metric tensor once a gauge choice is made. We then examine the emergent bulk geometries for static and dynamical scenarios in holography and in many-body systems. Apart from the physics results, our work demonstrates that numerical methods are feasible and effective in the study of bulk reconstruction in AdS/CFT.
- We define the "kink transform" as a one-sided boost of bulk initial data about the Ryu-Takayanagi surface of a boundary cut. For a flat cut, we conjecture that the resulting Wheeler-DeWitt patch is the bulk dual to the boundary state obtained by Connes cocycle (CC) flow across the cut. The bulk patch is glued to a precursor slice related to the original boundary slice by a one-sided boost. This evades ultraviolet divergences and distinguishes our construction from one-sided modular flow. We verify that the kink transform is consistent with known properties of operator expectation values and subregion entropies under CC flow. CC flow generates a stress tensor shock at the cut, controlled by a shape derivative of the entropy; the kink transform reproduces this shock holographically by creating a bulk Weyl tensor shock. We also go beyond known properties of CC flow by deriving novel shock components from the kink transform.
- We draw a picture of physical systems that allows us to recognize what this thing called "time" is, by requiring consistency not only with our notion of time but also with the way time enters the fundamental laws of Physics, independently of whether one uses a classical or a quantum description. Elements of the picture are two non-interacting and yet entangled quantum systems, one of which acts as a clock while the other is doomed to evolve. The setting is based on the so-called "Page and Wootters (PaW) mechanism", updated with tools from Lie-group and large-$N$ quantum approaches. The overall scheme is quantum, but the theoretical framework allows us to take the classical limit, either of the clock only, or of the clock and the evolving system altogether; we thus derive the Schr\"odinger equation in the first case, and the Hamilton equations of motion in the second one. Suggestions about possible links with general relativity and gravity are also put forward.
- We study density matrices in quantum gravity, focusing on topology change. We argue that the inclusion of bra-ket wormholes in the gravity path integral is not a free choice, but is dictated by the specification of a global state in the multi-universe Hilbert space. Specifically, the Hartle-Hawking (HH) state does not contain bra-ket wormholes. It has recently been pointed out that bra-ket wormholes are needed to avoid potential bags-of-gold and strong subadditivity paradoxes, suggesting a problem with the HH state. Nevertheless, in regimes with a single large connected universe, approximate bra-ket wormholes can emerge by tracing over the unobserved universes. More drastic possibilities are that the HH state is non-perturbatively gauge equivalent to a state with bra-ket wormholes, or that the third-quantized Hilbert space is one-dimensional. Along the way we draw some helpful lessons from the well-known relation between worldline gravity and Klein-Gordon theory. In particular, the commutativity of boundary-creating operators, which is necessary for constructing the alpha states and having a dual ensemble interpretation, is subtle. For instance, in the worldline gravity example, the Klein-Gordon field operators do not commute at timelike separation.
- Within the framework of the AdS/CMT correspondence asymptotically anti-de Sitter black holes in four space-time dimensions can be used to analyse transport properties in two space dimensions. A non-linear renormalisation group equation for the conductivity in two dimensions is derived in this model and, as an example of its application, both the Ohmic and Hall DC and AC conductivities are studied in the presence of a magnetic field, using a bulk dyonic solution of the Einstein-Maxwell equations in asymptotically AdS$_4$ space-time. The ${\cal Q}$-factor of the cyclotron resonance is shown to decrease as the temperature is increased and increase as the charge density is increased in a fixed magnetic field. Likewise the dissipative Ohmic conductivity at resonance increases as the temperature is decreased and as the charge density is increased.
- Weak gravitational lensing of background galaxies provides a direct probe of the projected matter distribution in and around galaxy clusters. Here we present a self-contained pedagogical review of cluster--galaxy weak lensing, covering a range of topics relevant to its cosmological and astrophysical applications. We begin by reviewing the theoretical foundations of gravitational lensing from first principles, with special attention to the basics and advanced techniques of weak gravitational lensing. We summarize and discuss key findings from recent cluster--galaxy weak-lensing studies on both observational and theoretical grounds, with a focus on cluster mass profiles, the concentration--mass relation, the splashback radius, and implications from extensive mass calibration efforts for cluster cosmology.
- JWST will provide moderate resolution transit spectra with continuous wavelength coverage from the optical to the mid-infrared for the first time. In this paper, we illustrate how different aerosol species, size-distributions, and spatial distributions encode information in JWST transit spectra. We use the transit spectral modeling code METIS, along with Mie theory and several flexible treatments of aerosol size and spatial distributions to perform parameter sensitivity studies, calculate transit contribution functions, compute Jacobians, and retrieve parameters with Markov Chain Monte Carlo. The broader wavelength coverage of JWST will likely encompass enough non-gray aerosol behavior to recover information about the species and size-distribution of particles, especially if distinct resonance features arising from the aerosols are present. Within the JWST wavelength range, the optical and mid-infrared typically provide information about 0.1-1 $\mu$m sized aerosols, while the near-infrared to mid-infrared wavelengths usually provide information about gaseous absorption, even if aerosols are present. Strong gaseous absorption features in the infrared often remain visible, even when clouds and hazes are flattening the optical and NIR portion of the spectrum that is currently observable. For some combinations of aerosol properties, temperature, and surface gravity, one can make a precise measure of metallicity despite the presence of aerosols, but more often the retrieved metallicity of a cloudy or hazy atmosphere has significantly lower precision than for a clear atmosphere with otherwise similar properties. 
Future efforts to securely link aerosol properties to atmospheric metallicity and temperature in a physically motivated manner will ultimately enable a robust physical understanding of the processes at play in cloudy, hazy exoplanet atmospheres.
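The Markov Chain Monte Carlo retrieval mentioned above rests on samplers such as Metropolis-Hastings. A minimal, self-contained sketch follows; this is not the METIS retrieval code, and a toy Gaussian posterior stands in for a real transit-spectrum likelihood, which would compare a forward-modelled spectrum to data inside `log_post`.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step_size=1.0, seed=0):
    """Minimal Metropolis sampler: propose Gaussian steps, accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step_size)
        lp_prop = log_post(x_prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Toy posterior: a standard normal in one parameter.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
```

Real retrievals sample many parameters (metallicity, temperature, aerosol properties) at once, but the accept/reject core is the same.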
- Recently, several groups have resolved a gap that bifurcates planets between the size of Earth and Neptune into two populations. The location and depth of this feature is an important signature of the physical processes that form and sculpt planets. In particular, planets residing in the radius gap are valuable probes of these processes as they may be undergoing the final stages of envelope loss. Here, we discuss two views of the radius gap by Fulton & Petigura (2018; F18) and Van Eylen et al. (2018; V18). In V18, the gap is wider and more devoid of planets. This is due, in part, to V18's more precise measurements of planet radius $R_p$. Thanks to Gaia, uncertainties in stellar radii $R_\star$ are no longer the limiting uncertainties in determining $R_p$ for the majority of Kepler planets; instead, errors in $R_p/R_\star$ dominate. V18's analysis incorporated short-cadence photometry along with constraints on mean stellar density that enabled more accurate determinations of $R_p/R_\star$. In the F18 analysis, less accurate $R_p/R_\star$ blurs the boundary of the radius gap. The differences in $R_p/R_\star$ are largest at high impact parameter ($b \gtrsim 0.8$) and often exceed 10%. This motivates excluding high-$b$ planets from demographic studies, but identifying such planets from long-cadence photometry alone is challenging. We show that transit duration can serve as an effective proxy, and we leverage this information to enhance the contrast between the super-Earth and sub-Neptune populations.
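The duration-based proxy for impact parameter can be illustrated under simple assumptions (circular orbit, small-planet limit), where the observed transit duration scales as $\sqrt{1-b^2}$ relative to a central ($b=0$) transit. This is an illustrative sketch, not V18's actual procedure; in practice `t_central` would be derived from the orbital period and the mean stellar density.

```python
import math

def impact_parameter_from_duration(t_obs, t_central):
    """Estimate b assuming t_obs ~ t_central * sqrt(1 - b^2)
    (circular orbit, small-planet limit); t_central is the expected
    duration of a central (b = 0) transit."""
    ratio = min(t_obs / t_central, 1.0)  # guard against noise pushing ratio > 1
    return math.sqrt(1.0 - ratio * ratio)

# A transit lasting 60% of the central-transit duration implies b = 0.8.
b = impact_parameter_from_duration(0.6, 1.0)
```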
- The Kepler DR25 planet candidate catalog was produced using an automated method of planet candidate identification based on various tests. These tests were tuned to obtain a reasonable but arbitrary balance between catalog completeness and reliability. We produce new catalogs with differing balances of completeness and reliability by varying these tests, and study the impact of these alternative catalogs on occurrence rates. We find that if there is no correction for reliability, different catalogs give statistically inconsistent occurrence rates, while if we correct for both completeness and reliability, we get statistically consistent occurrence rates. This is a strong indication that correction for completeness and reliability is critical for the accurate computation of occurrence rates. Additionally, we find that this result is the same whether using Bayesian Poisson likelihood MCMC or Approximate Bayesian Computation methods. We also examine the use of a Robovetter disposition score cut as an alternative to reliability correction, and find that while a score cut does increase the reliability of the catalog, it is not as accurate as performing a full reliability correction. We get the same result when performing a reliability correction with and without a score cut. Therefore removing low-score planets removes data without providing any advantage, and should be avoided when possible. We make our alternative catalogs publicly available, and propose that these should be used as a test of occurrence rate methods, with the requirement that a method should provide statistically consistent occurrence rates for all these catalogs.
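The completeness-and-reliability correction can be sketched at first order: the detected count is scaled down by reliability (the fraction of catalog candidates that are real) and up by completeness (the fraction of real planets that are detected). The numbers below are hypothetical; the paper's actual corrections are applied within full Poisson-likelihood MCMC or ABC frameworks.

```python
def corrected_planet_count(n_detected, completeness, reliability):
    """First-order occurrence correction of a raw catalog count:
    n_true ~ n_detected * reliability / completeness."""
    return n_detected * reliability / completeness

# 100 candidates with 90% reliability and 50% completeness -> ~180 real planets.
n_true = corrected_planet_count(100, 0.5, 0.9)
```

Omitting the reliability factor, as the abstract notes, biases counts in a catalog-dependent way, which is why uncorrected catalogs disagree with each other.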
- In solar energetic particle (SEP) events, the power-law dependence of element abundance enhancements on their mass-to-charge ratios A/Q provides a new tool that measures the combined rigidity dependences from both acceleration and transport. Distinguishing these two processes can be more challenging. However, the effects of acceleration dominate when SEP events are small or when the ions even propagate scatter-free, and transport can dominate the time evolution of large events with streaming-limited intensities. Magnetic reconnection in solar jets produces positive powers of A/Q from +2 to +7 and shock acceleration produces mostly negative powers from -2 to +1 in small and moderate SEP events where transport effects are minimal. This variation in the rigidity dependence of shock acceleration may reflect the non-planar structure, complexity, and time variation of coronal shocks themselves. Wave amplification by streaming protons in the largest SEP events suppresses the escape of ions with low A/Q, creating observed powers of A/Q from +1 to +3 upstream of the accelerating shock, decreasing to small negative powers downstream.
- We consider an elastic/viscoelastic transmission problem for the Bresse system with fully Dirichlet or Dirichlet-Neumann-Neumann boundary conditions. The physical model consists of three wave equations coupled in a certain pattern. The system is damped directly or indirectly by global or local Kelvin-Voigt damping. In fact, the number of dampings, the nature of their distribution (local or global), and the smoothness of the damping coefficient at the interface play a crucial role in the type of stabilization of the corresponding semigroup. Indeed, using a frequency-domain approach combined with multiplier techniques and the construction of a new multiplier function, we establish different types of energy decay rate (see the table of stability results below). Our results generalize and improve many earlier ones in the literature, in particular some studies done on the Timoshenko system with Kelvin-Voigt damping.
- These notes are an overview of effective field theory (EFT) methods. I discuss toy model EFTs, chiral perturbation theory, Fermi liquid theory, and non-relativistic QED, and use these examples to introduce a variety of EFT concepts, including: matching at tree and loop level; summation of large logarithms; naturalness problems; spurion fields; coset construction; Wess-Zumino-Witten terms; chiral and gauge anomalies; field rephasings; and method of regions. These lecture notes were prepared for the 2nd Joint ICTP-Trieste/ICTP-SAIFR school on Particle Physics, Sao Paulo, Brazil, June 22 - July 3, 2020.
- For the first time, a gravitational calculation was recently shown to yield the Page curve for the entropy of Hawking radiation, consistent with unitary evolution. However, the calculation takes as essential input Hawking's result that the radiation entropy becomes large at late times. We call this apparent contradiction the state paradox. We exhibit its manifestations in standard and doubly-holographic settings, with and without an external bath. We clarify which version(s) of the Ryu-Takayanagi prescription apply in each setting. We show that the two possible homology rules in the presence of a braneworld generate a bulk dual of the state paradox. The paradox is resolved if the gravitational path integral computes averaged quantities in a suitable ensemble of unitary theories, a possibility supported independently by several recent developments.
- Studying the UV dust attenuation, as well as its relation to other galaxy parameters such as the stellar mass, plays an important role in multi-wavelength research. This work relates the dust attenuation to the stellar mass of star-forming galaxies, and its evolution with redshift. A sample of galaxies with an estimate of the dust attenuation computed from the infrared excess was used. The dust attenuation vs. stellar mass data, separated into redshift bins, were modelled by a single-parameter linear function, assuming a nonzero constant apparent dust attenuation for low-mass galaxies. The origin of this effect is still to be determined, and several possibilities are explored (an actual high dust content, variation of the dust-to-metal ratio, variation of the star-dust geometry). The best-fitting parameter of this model is then used to study the redshift evolution of the cosmic dust attenuation and is found to be in agreement with results from the literature. This work also gives evidence for a redshift evolution of the dust attenuation--stellar mass relationship, as suggested by recent works in the highest redshift range.
- We investigate the role of disorder on the various topological magnonic phases present in deformed honeycomb ferromagnets. To this end, we introduce a bosonic Bott index to characterize the topology of magnon spectra in finite, disordered systems. The consistency between the Bott index and Chern number is numerically established in the clean limit. We demonstrate that topologically protected magnon edge states are robust to moderate disorder and, as anticipated, localized in the strong regime. We predict a disorder-driven topological phase transition, a magnonic analog of the "topological Anderson insulator" in electronic systems, where the disorder is responsible for the emergence of the nontrivial topology. Combining the results for the Bott index and transport properties, we show that bulk-boundary correspondence holds for disordered topological magnons. Our results open the door for research on topological magnonics as well as other bosonic excitations in finite and disordered systems.
- A theorem of Grove and Searle directly establishes that positive curvature $2d$-manifolds $M$ with an effective circle symmetry group, of dimension 8 or less, have positive Euler characteristic $\chi(M)$: the fixed point set $N$ consists of even-dimensional positive curvature manifolds and has Euler characteristic $\chi(N)=\chi(M)$. It is not empty by Berger. If $N$ has a codimension-2 component, Grove-Searle forces $M$ to be in $\{RP^{2d}, S^{2d}, CP^d\}$. By Frankel, there cannot be two codimension-2 cases. In the remaining cases, Gauss-Bonnet-Chern forces all components to have positive Euler characteristic. This simple proof does not quite reach the record of dimension 10 or less, which uses methods of Wilking, but it motivates analyzing the structure of the fixed point components $N$, and in particular looking at positive curvature manifolds which admit a $U(1)$ or $SU(2)$ symmetry with connected or almost connected fixed point set $N$. They have amazing geodesic properties: the fixed point manifold $N$ agrees with the caustic of each of its points, and the geodesic flow is integrable. In full generality, the Lefschetz fixed point property $\chi(N)=\chi(M)$ and Frankel's dimension theorem $\dim(M) < \dim(A) + \dim(B)$, for two different connectivity components $A, B$ of $N$, already produce heavy constraints on building up $M$ from smaller components. It is possible that $S^{2d}, RP^{2d}, CP^d, HP^d, OP^2, W^6, E^6, W^{12}, W^{24}$ is actually a complete list of even-dimensional positive curvature manifolds admitting a continuum symmetry. Aside from the projective spaces, the Euler characteristic of the known cases is always 1, 2 or 6, where the jump from 2 to 6 happened with the Wallach or Eschenburg manifolds $W^6, E^6$, which have four fixed point components $N=S^2 + S^2 + S^0$, the only known cases which are not of the Grove-Searle form $N=N_1$ or $N=N_1 + \{p\}$ with connected $N_1$.
- Positive curvature and bosons: Compact positive curvature Riemannian manifolds $M$ with symmetry group $G$ allow Conner-Kobayashi reductions from $M$ to $N$, where $N$ is the fixed point set of the symmetry $G$. The set $N$ is a union of smaller-dimensional totally geodesic positive curvature manifolds, each with even codimension. By Berger, $N$ is not empty. By Lefschetz, $M$ and $N$ have the same Euler characteristic. By Frankel, the sum of the dimensions of any two components in $N$ is smaller than the dimension of $M$. Reverting the process from $N$ to $M$ allows one to build up positive curvature manifolds from smaller ones using division algebras and the geodesic flow. From dimension 6 to 24, only four exceptional manifolds have appeared so far, some of them being flag manifolds related to the special unitary group in three dimensions. We can now draw a periodic system of elements of the known even-dimensional positive curvature manifolds and observe that this list has an affinity with the list of known force carriers in physics. Positive mass of the boson matches up with the existence of two linearly independent harmonic $k$-forms on the manifold. This motivates computing more quantities of the exceptional positive curvature manifolds, like the lowest non-zero eigenvalues of the Hodge Laplacian $L=dd^*+d^*d$, or properties of the pairs $(u,v)$ of harmonic 2-, 4- or 8-forms in the positive mass case.
- Some intensive observables of the electronic ground state in condensed matter have a geometrical or even topological nature. In this Review I present the geometrical observables whose expression is known in a full many-body framework, beyond band-structure theory. The formalism allows dealing with the general case of disordered and/or correlated many-electron systems.
- Electronic transport properties for single-molecule junctions have been widely measured by several techniques, including mechanically controllable break junctions, electromigration break junctions or by means of scanning tunneling microscopes. In parallel, many theoretical tools have been developed and refined for describing such transport properties and for obtaining numerical predictions. Most prominent among these theoretical tools are those based upon density functional theory. In this review, theory and experiment are critically compared and this confrontation leads to several important conclusions. The theoretically predicted trends nowadays reproduce the experimental findings quite well for series of molecules with a single well-defined control parameter, such as the length of the molecules. The quantitative agreement between theory and experiment usually is less convincing, however. Many reasons for quantitative discrepancies can be identified, from which one may decide that qualitative agreement is the best one may expect with present modeling tools. For further progress, benchmark systems are required that are sufficiently well-defined by experiment to allow quantitative testing of the approximation schemes underlying the theoretical modeling. Several key experiments can be identified suggesting that the present description may even be qualitatively incomplete in some cases. Such key experimental observations and their current models are also discussed here, leading to several suggestions for extensions of the models towards including dynamic image charges, electron correlations, and polaron formation.
- We construct a theoretical framework to study Population III (Pop III) star formation in the post-reionization epoch ($z\lesssim 6$) by combining cosmological simulation data with semi-analytical models. We find that due to radiative feedback (i.e. Lyman-Werner and ionizing radiation) massive haloes ($M_{\rm halo}\gtrsim 10^{9}\ \rm M_{\odot}$) are the major ($\gtrsim 90$\%) hosts for potential Pop III star formation at $z\lesssim 6$, where dense pockets of metal-poor gas may survive to form Pop III stars, under inefficient mixing of metals released by supernovae. Metal mixing is the key process that determines not only when Pop III star formation ends, but also the total mass, $M_{\rm PopIII}$, of \textit{active} Pop III stars per host halo, which is a crucial parameter for direct detection and identification of Pop III hosts. Both aspects are still uncertain due to our limited knowledge of metal mixing during structure formation. Current predictions range from early termination at the end of reionization ($z\sim 5$) to continuous Pop III star formation extended to $z=0$ at a non-negligible rate $\sim 10^{-7}\ \rm M_{\odot}\ yr^{-1}\ Mpc^{-3}$, with $M_{\rm PopIII}\sim 10^{3}-10^{6}\ \rm M_{\odot}$. This leads to a broad range of redshift limits for direct detection of Pop III hosts, $z_{\rm PopIII}\sim 0.5-12.5$, with detection rates $\lesssim 0.1-20\ \rm arcmin^{-2}$, for current and future space telescopes (e.g. HST, WFIRST and JWST). Our model also predicts that the majority ($\gtrsim 90$\%) of the cosmic volume is occupied by metal-free gas. Measuring the volume filling fractions of this metal-free phase can constrain metal mixing parameters and Pop III star formation.
- We describe an algorithm for predicting the concentrations of dark matter halos via a random walk in energy space. Given a full merger tree for a halo, the total internal energy of each halo in that tree is determined by summing the internal and orbital energies of progenitor halos. For halos described by single-parameter density profiles (such as the NFW profile) the energy can be directly mapped to a scale radius, and so to a concentration. We show that this model can accurately reproduce the mean of the concentration mass relation measured in N-body simulations, and reproduces more of the scatter in that relation than previous models. However, our model underpredicts the kurtosis of the distribution of N-body concentrations. We test this model by examining both the autocorrelation of scale radii across time, and the correlations between halo concentration and spin, and comparing to results measured from cosmological N-body simulations. In both cases we find that our model closely matches the N-body results. Our model is implemented within the open source Galacticus toolkit.
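The energy-to-concentration mapping described above can be sketched generically: for a single-parameter profile the total internal energy is monotonic in the scale radius, so it can be inverted numerically. The energy function below is a toy monotonic stand-in, not the NFW binding-energy expression used in Galacticus.

```python
def invert_monotonic(f, target, lo, hi, tol=1e-12):
    """Bisection inversion of a monotonically increasing f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy energy(concentration) relation; a real model would place the
# NFW binding energy here and sum progenitor energies first.
energy = lambda c: c ** 3
c = invert_monotonic(energy, target=8.0, lo=0.0, hi=10.0)
```

Because bisection needs only monotonicity, the same inversion works for any single-parameter profile whose energy-scale-radius relation is monotone.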
- We examine the assumptions behind Noether's theorem connecting symmetries and conservation laws. To compare classical and quantum versions of this theorem, we take an algebraic approach. In both classical and quantum mechanics, observables are naturally elements of a Jordan algebra, while generators of one-parameter groups of transformations are naturally elements of a Lie algebra. Noether's theorem holds whenever we can map observables to generators in such a way that each observable generates a one-parameter group that preserves itself. In ordinary complex quantum mechanics this mapping is multiplication by $\sqrt{-1}$. In the more general framework of unital JB-algebras, Alfsen and Shultz call such a mapping a "dynamical correspondence", and show its presence allows us to identify the unital JB-algebra with the self-adjoint part of a complex C*-algebra. However, to prove their result, they impose a second, more obscure, condition on the dynamical correspondence. We show this expresses a relation between quantum and statistical mechanics, closely connected to the principle that "inverse temperature is imaginary time".
- Let $T$ be a bounded linear operator on $L^p$. We study the rate of growth of the norms of the powers of $T$ under resolvent conditions or Ces\`aro boundedness assumptions. Actually the relevant properties of $L^p$ spaces in our study are their type and cotype, and for $1<p<\infty$, the fact that they are UMD. Some of the proofs make use of Fourier multipliers on Banach spaces, which explains why UMD spaces come into play.
- In this paper, we study the stability of a star-shaped network of elastic strings with local Kelvin-Voigt damping. Under the assumption that the damping coefficients have singularities near the transmission point, we prove that the semigroup corresponding to the system is polynomially stable, with a decay rate that depends on the speed of the degeneracy. This improves the decay rate obtained earlier by Z.~Liu and Q.~Zhang in \cite{LZ} for the wave equation with local Kelvin-Voigt damping and a non-smooth coefficient at the interface.
- We present an analysis of coronal mass ejections (CMEs) observed by the Heliospheric Imagers (HIs) on board NASA's Solar Terrestrial Relations Observatory (STEREO) spacecraft. Between August 2008 and April 2014 we identify 273 CMEs that are observed simultaneously by the HIs on both spacecraft. For each CME, we track the observed leading edge as a function of time from both vantage points, and apply the Stereoscopic Self-Similar Expansion (SSSE) technique to infer their propagation throughout the inner heliosphere. The technique is unable to accurately locate CMEs when their observed leading edge passes between the spacecraft; however, we are able to successfully apply the technique to 151 of them, most of which occur once the spacecraft separation angle exceeds 180 degrees, during solar maximum. We find that using a small half-width to fit a CME can result in observed acceleration to unphysically high velocities, and that using a larger half-width can fail to accurately locate the CMEs close to the Sun because the method does not account for CME over-expansion in this region. Observed velocities from SSSE are found to agree well with single-spacecraft (SSEF) analysis techniques applied to the same events. CME propagation directions derived from SSSE and SSEF analysis agree poorly because of known limitations present in the latter. This work was carried out as part of the EU FP7 HELCATS (Heliospheric Cataloguing, Analysis and Techniques Service) project (http://www.helcats-fp7.eu/).
- Feature warping is a core technique in optical flow estimation; however, the ambiguity caused by occluded areas during warping is a major problem that remains unsolved. In this paper, we propose an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision. The proposed module can be easily integrated into end-to-end network architectures and enjoys performance gains while introducing negligible computational cost. The learned occlusion mask can be further fed into a subsequent network cascade with dual feature pyramids with which we achieve state-of-the-art performance. At the time of submission, our method, called MaskFlownet, surpasses all published optical flow methods on the MPI Sintel, KITTI 2012 and 2015 benchmarks. Code is available at https://github.com/microsoft/MaskFlownet.
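The core idea of masking warped features can be illustrated in one dimension. This is a minimal sketch of "warp, then filter occluded positions with a mask", not the MaskFlownet module itself (which learns the mask end-to-end on 2-D feature pyramids):

```python
def warp_with_mask(features, flow, mask):
    """Warp a 1-D feature array by an integer flow field, then multiply by
    an occlusion mask in [0, 1] that zeroes out unreliable (occluded)
    positions. Illustrative sketch of the occlusion-aware warping idea."""
    n = len(features)
    warped = []
    for i in range(n):
        src = i + flow[i]
        # Positions that map outside the frame carry no information.
        value = features[src] if 0 <= src < n else 0.0
        warped.append(value * mask[i])
    return warped
```

In the paper's setting the mask is produced by a learned module and applied immediately after warping, so downstream matching never sees the ambiguous occluded regions.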
- Ellerman Bomb-like brightenings of the hydrogen Balmer line wings in the quiet Sun (QSEBs) are a signature of the fundamental process of magnetic reconnection at the smallest observable scale in the solar lower atmosphere. We analyze high spatial resolution observations (0.1 arcsec) obtained with the Swedish 1-m Solar Telescope to explore signatures of QSEBs in the H$\beta$ line. We find that QSEBs are ubiquitous and uniformly distributed throughout the quiet Sun, predominantly occurring in intergranular lanes. We find up to 120 QSEBs in the FOV for a single moment in time; this is more than an order of magnitude higher than the number of QSEBs found in earlier H$\alpha$ observations. This suggests that about half a million QSEBs could be present in the lower solar atmosphere at any given time. The QSEB brightening found in the H$\beta$ line wings also persists in the line core, with a temporal delay and a spatial offset towards the nearest solar limb. Our results suggest that QSEBs emanate through magnetic reconnection along vertically extended current sheets in the solar lower atmosphere. The apparent omnipresence of small-scale magnetic reconnection may play an important role in the energy balance of the solar chromosphere.
- We have performed microwave diagnostics of the magnetic field strengths in solar flare loops based on the theory of gyrosynchrotron emission. From Nobeyama Radioheliograph observations of three flare events at 17 and 34 GHz, we obtained the degree of circular polarization and the spectral index of microwave flux density, which were then used to map the magnetic field strengths in post-flare loops. Our results show that the magnetic field strength typically decreases from ~800 G near the loop footpoints to ~100 G at a height of 10--25 Mm. Comparison of our results with magnetic field modeling using a flux rope insertion method is also discussed. Our study demonstrates the potential of microwave imaging observations, even at only two frequencies, in diagnosing the coronal magnetic field of flaring regions.
- A zero initial velocity of the axion field is assumed in the conventional misalignment mechanism. We propose an alternative scenario where the initial velocity is nonzero, which may arise from an explicit breaking of the PQ symmetry in the early Universe. We demonstrate that, depending on the specifics of the initial velocity and on whether the PQ symmetry breaking occurs before or after inflation, this new scenario can alter the conventional prediction for the axion relic abundance in different, potentially significant ways. As a result, new viable parameter regions for axion dark matter may open up.
- In the conventional misalignment mechanism, the axion field has a constant initial field value in the early universe and later begins to oscillate. We present an alternative scenario where the axion field has a nonzero initial velocity, allowing an axion decay constant much below the conventional prediction from axion dark matter. This axion velocity can be generated by explicit breaking of the axion shift symmetry in the early universe, which may occur because this symmetry is only approximate.
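The qualitative difference between the conventional (zero-velocity) and kinetic (nonzero-velocity) scenarios can be seen by integrating the toy equation of motion $\ddot{\theta} + 3H\dot{\theta} + m^2\sin\theta = 0$. This is a deliberately simplified sketch with a constant Hubble rate and fixed mass (in a realistic cosmology both evolve with time), purely to illustrate that a large initial velocity lets the field roll over many potential barriers before Hubble friction traps it:

```python
import math

def evolve_axion(theta0, vel0, m=1.0, hubble=0.05, dt=1e-3, steps=20000):
    """Forward-Euler integration of theta'' + 3H theta' + m^2 sin(theta) = 0
    with constant H and m -- a toy model of misalignment dynamics, not a
    cosmological computation. Returns the final (theta, theta')."""
    theta, vel = theta0, vel0
    for _ in range(steps):
        acc = -3.0 * hubble * vel - m * m * math.sin(theta)
        vel += acc * dt
        theta += vel * dt
    return theta, vel
```

With `vel0 = 0` the field simply undergoes damped oscillations about the minimum; with a large `vel0` it first rolls through the periodic potential until friction dissipates its kinetic energy, delaying the onset of oscillations.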
- Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and the military, which have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and legal concerns and undermining trust. Explainable Artificial Intelligence (XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, this paper provides mathematical summaries of seminal work. We start by proposing a taxonomy and categorizing XAI techniques based on their scope of explanation, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. We then describe the main principles used in XAI research and present the historical timeline of landmark studies in XAI from 2007 to 2020. After explaining each category of algorithms and approaches in detail, we evaluate the explanation maps generated by eight XAI algorithms on image data, discuss the limitations of this approach, and suggest potential future directions for improving XAI evaluation.
- Instagram has become a great venue for amateur and professional photographers alike to showcase their work. It has, in other words, democratized photography. Generally, photographers take thousands of photos in a session, from which they pick a few to showcase their work on Instagram. Photographers trying to build a reputation on Instagram have to strike a balance between maximizing their followers' engagement with their photos and maintaining their artistic style. We used transfer learning to adapt Xception, a model for object recognition trained on the ImageNet dataset, to the task of engagement prediction, and utilized Gram matrices generated from VGG19, another object recognition model trained on ImageNet, for the task of style similarity measurement on photos posted on Instagram. Our models can be trained on individual Instagram accounts to create personalized engagement prediction and style similarity models. Once trained on their accounts, users can have new photos sorted by predicted engagement and by style similarity to their previous work, enabling them to upload photos that not only have the potential to maximize engagement from their followers but also maintain their style of photography. We trained and validated our models on several Instagram accounts, showing them to be adept at both tasks and outperforming several baseline models and human annotators.
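The style representation mentioned above is the Gram matrix of a convolutional feature map: for a map with $C$ channels flattened over spatial positions, $G = F F^{\mathsf T}$, so $G_{ij}$ is the inner product of channels $i$ and $j$. A minimal sketch (plain lists standing in for VGG19 activations):

```python
def gram_matrix(feature_map):
    """Gram matrix G = F F^T of a (channels x positions) feature map.
    G[i][j] is the correlation between channels i and j -- the style
    representation that style-similarity comparisons are built on.
    Illustrative sketch; real feature maps come from a network like VGG19."""
    n_pos = len(feature_map[0])
    return [[sum(row_i[k] * row_j[k] for k in range(n_pos))
             for row_j in feature_map] for row_i in feature_map]
```

Style similarity between two photos can then be scored by comparing their Gram matrices (e.g. an elementwise distance), since the Gram matrix discards spatial layout and keeps channel correlations.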
- In this paper we present Spektral, an open-source Python library for building graph neural networks with TensorFlow and the Keras application programming interface. Spektral implements a large set of methods for deep learning on graphs, including message-passing and pooling operators, as well as utilities for processing graphs and loading popular benchmark datasets. The purpose of this library is to provide the essential building blocks for creating graph neural networks, focusing on the guiding principles of user-friendliness and quick prototyping on which Keras is based. Spektral is, therefore, suitable for absolute beginners and expert deep learning practitioners alike. In this work, we present an overview of Spektral's features and report the performance of the methods implemented by the library in scenarios of node classification, graph classification, and graph regression.
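The message-passing primitive that such libraries wrap in layer objects can be sketched in a few lines. This is an illustrative sum-aggregation step over an adjacency matrix, not Spektral's API:

```python
def message_pass(adj, features):
    """One round of sum-aggregation message passing: each node's new feature
    vector is the adjacency-weighted sum of its neighbours' features.
    Illustrative sketch of the primitive behind graph neural network layers
    (a real layer would add self-loops, normalization, and learned weights)."""
    n, d = len(features), len(features[0])
    return [[sum(adj[i][j] * features[j][k] for j in range(n))
             for k in range(d)] for i in range(n)]
```

A graph convolution layer composes this propagation with a learned linear transform and a nonlinearity; pooling operators then coarsen the graph between such layers.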
- We present a causal speech enhancement model working on the raw waveform that runs in real-time on a laptop CPU. The proposed model is based on an encoder-decoder architecture with skip-connections. It is optimized on both time and frequency domains, using multiple loss functions. Empirical evidence shows that it is capable of removing various kinds of background noise, including stationary and non-stationary noises, as well as room reverb. Additionally, we suggest a set of data augmentation techniques applied directly on the raw waveform which further improve model performance and its generalization abilities. We perform evaluations on several standard benchmarks, both using objective metrics and human judgements. The proposed model matches state-of-the-art performance of both causal and non-causal methods while working directly on the raw waveform.
- Normalization has become one of the most fundamental components of many deep neural networks for machine learning tasks, and deep neural networks are also widely used in the CTR estimation field. Among the proposed deep neural network models, however, few utilize normalization approaches. Although some works, such as Deep & Cross Network (DCN) and Neural Factorization Machine (NFM), use Batch Normalization in the MLP part of the structure, no work has thoroughly explored the effect of normalization on DNN ranking systems. In this paper, we conduct a systematic study of widely used normalization schemes by applying various normalization approaches to both the feature embedding and the MLP part of the DNN model. Extensive experiments are conducted on three real-world datasets, and the results demonstrate that the correct normalization significantly enhances the model's performance. We also propose a new and effective normalization approach based on LayerNorm, named variance-only LayerNorm (VO-LN). A normalization-enhanced DNN model, named NormDNN, is also proposed based on these observations. As to why normalization works for DNN models in CTR estimation, we find that the variance component of normalization plays the main role, and we offer an explanation in this work.
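The "variance-only" idea is that the vector is rescaled by its standard deviation without re-centering by the mean. A minimal sketch on a plain Python list (the paper's VO-LN operates on embedding/MLP tensors with learned scale parameters; the exact parameterization here is assumed for illustration):

```python
import math

def vo_layernorm(x, gamma=1.0, eps=1e-8):
    """Variance-only LayerNorm sketch: divide each element by the standard
    deviation of the vector (no mean subtraction), then apply a scale gamma.
    Compare standard LayerNorm, which also re-centers by the mean."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * v / math.sqrt(var + eps) for v in x]
```

After this transform the output has unit variance but keeps its (rescaled) mean, which is the component the paper identifies as doing the useful work.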
- Element-wise activation functions play a critical role in deep neural networks by affecting the expressive power and the learning dynamics. Learning-based activation functions have recently gained increasing attention and success. We propose a new perspective on learnable activation functions by formulating them with an element-wise attention mechanism. In each network layer, we devise an attention module which learns an element-wise, sign-based attention map for the pre-activation feature map. The attention map scales an element based on its sign. Adding the attention module to a rectified linear unit (ReLU) results in an amplification of positive elements and a suppression of negative ones, both with learned, data-adaptive parameters. We coin the resulting activation function Attention-based Rectified Linear Unit (AReLU). The attention module essentially learns an element-wise residue of the activated part of the input, since ReLU can be viewed as an identity transformation. This makes network training more resistant to vanishing gradients. The learned attentive activation leads to well-focused activation of relevant regions of a feature map. Through extensive evaluations, we show that AReLU significantly boosts the performance of most mainstream network architectures with only two extra learnable parameters per layer. Notably, AReLU facilitates fast network training under small learning rates, which makes it especially suited to transfer learning. Our source code has been released (https://github.com/densechen/AReLU).
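The sign-based scaling can be sketched as a plain function of two scalar parameters. This follows the spirit of the abstract (suppress negatives, amplify non-negatives, two learnable parameters per layer); the clamping range and exact parameterization are assumptions for illustration and may differ from the released code:

```python
import math

def arelu(x, alpha=0.9, beta=2.0):
    """Sketch of a sign-based attention activation in the spirit of AReLU:
    negative elements are suppressed by a learnable factor (here clamped to
    (0.01, 0.99)); non-negative elements are amplified by 1 + sigmoid(beta).
    alpha and beta are the two per-layer learnable parameters."""
    a = min(max(alpha, 0.01), 0.99)
    amp = 1.0 + 1.0 / (1.0 + math.exp(-beta))
    return [a * v if v < 0.0 else amp * v for v in x]
```

Because negative inputs are scaled rather than zeroed, gradients still flow through the suppressed branch, which is one way to read the abstract's claim of resistance to vanishing gradients.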
- This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on a concurrently introduced self-supervised model which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to the strongest comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages.
- We introduce Bi-GNN for modeling biological link prediction tasks such as drug-drug interaction (DDI) and protein-protein interaction (PPI). Taking drug-drug interaction as an example, existing methods using machine learning either only utilize the link structure between drugs without using the graph representation of each drug molecule, or only leverage the individual drug compound structures without using graph structure for the higher-level DDI graph. The key idea of our method is to fundamentally view the data as a bi-level graph, where the highest level graph represents the interaction between biological entities (interaction graph), and each biological entity itself is further expanded to its intrinsic graph representation (representation graphs), where the graph is either flat like a drug compound or hierarchical like a protein with amino acid level graph, secondary structure, tertiary structure, etc. Our model not only allows the usage of information from both the high-level interaction graph and the low-level representation graphs, but also offers a baseline for future research opportunities to address the bi-level nature of the data.
- In this work, we propose three explainable deep learning architectures to automatically detect patients with Alzheimer's disease based on their language abilities. The architectures use: (1) only part-of-speech features; (2) only language embedding features; and (3) both of these feature classes, via a unified architecture. We use self-attention mechanisms and an interpretable 1-dimensional Convolutional Neural Network (CNN) to generate two types of explanations of the model's actions: intra-class explanations and inter-class explanations. The intra-class explanation captures the relative importance of each of the different features within a class, while the inter-class explanation captures the relative importance between the classes. Note that although we have considered two classes of features in this paper, the architecture is easily expandable to more classes because of its modularity. Extensive experimentation and comparison with several recent models show that our method outperforms them, with an accuracy of 92.2% and an F1 score of 0.952 on the DementiaBank dataset, while being able to generate explanations. We show, by example, how to generate these explanations using attention values.
- Bitcoin is the first decentralized digital cryptocurrency, and it has shown a significant increase in market capitalization in recent years. The objective of this paper is to determine the predictable price direction of Bitcoin in USD using machine learning techniques and sentiment analysis. Twitter and Reddit have attracted a great deal of attention from researchers studying public sentiment. We apply sentiment analysis and supervised machine learning to tweets extracted from Twitter and to Reddit posts, and we analyze the correlation between Bitcoin price movements and sentiment in tweets. We explored several supervised learning algorithms to develop a prediction model and provide an informative analysis of future market prices. Because it is difficult to evaluate the exact nature of a time-series (ARIMA) model, it is often very difficult to produce appropriate forecasts. We therefore implement recurrent neural networks (RNNs) with long short-term memory (LSTM) cells, analyze the resulting time-series predictions of Bitcoin prices, and compare the predictability of Bitcoin price and the sentiment of Bitcoin tweets against the standard ARIMA method. The RMSE (root-mean-square error) of the LSTM is 198.448 (single feature) and 197.515 (multi-feature), whereas the ARIMA model's RMSE is 209.263, showing that the multi-feature LSTM gives the more accurate result.
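The metric used to compare the models above is straightforward to state explicitly:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between a forecast and the realized series --
    the metric by which the LSTM and ARIMA forecasts are compared."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
```

Lower is better, so on this metric the multi-feature LSTM (197.515) edges out the single-feature LSTM (198.448) and both beat ARIMA (209.263).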
- We present the largest self-driving dataset for motion prediction to date, with over 1,000 hours of data. This was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California over a four-month period. It consists of 170,000 scenes, where each scene is 25 seconds long and captures the perception output of the self-driving system, which encodes the precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. On top of this, the dataset contains a high-definition semantic map with 15,242 labelled elements and a high-definition aerial view over the area. Together with the provided software kit, this collection forms the largest, most complete and detailed dataset to date for the development of self-driving, machine learning tasks such as motion forecasting, planning and simulation. The full dataset is available at http://level5.lyft.com/.
- Generative Adversarial Networks are notoriously challenging to train. The underlying minimax optimization is highly susceptible to the variance of the stochastic gradient and the rotational component of the associated game vector field. We empirically demonstrate the effectiveness of the Lookahead meta-optimization method for optimizing games, originally proposed for standard minimization. The backtracking step of Lookahead naturally handles the rotational game dynamics, which in turn enables the gradient ascent descent method to converge on challenging toy games often analyzed in the literature. Moreover, it implicitly handles high variance without using large mini-batches, known to be essential for reaching state of the art performance. Experimental results on MNIST, SVHN, and CIFAR-10, demonstrate a clear advantage of combining Lookahead with Adam or extragradient, in terms of performance, memory footprint, and improved stability. Using 30-fold fewer parameters and 16-fold smaller minibatches we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining FID of $13.65$ \emph{without} using the class labels, bringing state-of-the-art GAN training within reach of common computational resources.
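The Lookahead scheme itself is simple to state: run $k$ "fast" inner-optimizer steps from the current slow weights, then move the slow weights a fraction $\alpha$ toward the fast result. A minimal single-parameter sketch (the paper applies this to the two players of a GAN with Adam or extragradient as the inner optimizer; here a toy gradient-descent inner step is assumed):

```python
def lookahead(phi0, inner_step, k=5, alpha=0.5, outer_steps=50):
    """Lookahead meta-optimization sketch for a scalar parameter:
    k fast inner updates, then interpolate the slow weights toward the
    fast result (the backtracking step that damps rotational dynamics)."""
    phi = phi0
    for _ in range(outer_steps):
        theta = phi
        for _ in range(k):
            theta = inner_step(theta)
        phi = phi + alpha * (theta - phi)
    return phi

# Toy inner optimizer: one gradient-descent step on f(x) = x^2.
gd_step = lambda x: x - 0.1 * (2.0 * x)
```

In the game setting the interpolation averages out the rotational component of the joint vector field, which is why it stabilizes gradient ascent-descent where plain simultaneous updates cycle.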
- There have been several reports of a detection of an unexplained excess of X-ray emission at $\simeq$ 3.5 keV in astrophysical systems. One interpretation of this excess is the decay of sterile neutrino dark matter. The most influential study to date analysed 73 clusters observed by the XMM-Newton satellite. We explore evidence for a $\simeq$ 3.5 keV excess in the XMM-PN spectra of 117 redMaPPer galaxy clusters ($0.1 < z < 0.6$). In our analysis of individual spectra, we identify three systems with an excess of flux at $\simeq$ 3.5 keV. In one case (XCS J0003.3+0204) this excess may result from a discrete emission line. None of these systems are the most dark matter dominated in our sample. We group the remaining 114 clusters into four temperature ($T_{\rm X}$) bins to search for an increase in $\simeq$ 3.5 keV flux excess with $T_{\rm X}$ - a reliable tracer of halo mass. However, we do not find evidence of a significant excess in flux at $\simeq$ 3.5 keV in any $T_{\rm X}$ bins. To maximise sensitivity to a potentially weak dark matter decay feature at $\simeq$ 3.5 keV, we jointly fit 114 clusters. Again, no significant excess is found at $\simeq$ 3.5 keV. We estimate the upper limit of an undetected emission line at $\simeq$ 3.5 keV to be $2.41 \times 10^{-6}$ photons cm$^{-2}$ s$^{-1}$, corresponding to a mixing angle of $\sin^2(2\theta)=4.4 \times 10^{-11}$, lower than previous estimates from cluster studies. We conclude that a flux excess at $\simeq$ 3.5 keV is not a ubiquitous feature in clusters and therefore unlikely to originate from sterile neutrino dark matter decay.
- We use operads, algebraic devices abstracting the notion of composition of combinatorial objects, to build pairs of graded graphs. For this, we first construct a pair of graded graphs where vertices are syntax trees, the elements of free nonsymmetric operads. This pair of graphs is dual for a new notion of duality called $\phi$-diagonal duality, similar to the ones introduced by Fomin. We also provide a general way to build pairs of graded graphs from operads, wherein underlying posets are analogs to the Young lattice. Some examples of operads leading to new pairs of graded graphs involving integer compositions, Motzkin paths, and $m$-trees are considered.
- To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.
- The Coronal Global Evolutionary Model (CGEM) provides data-driven simulations of the magnetic field in the solar corona to better understand the build-up of magnetic energy that leads to eruptive events. The CGEM project has developed six capabilities. CGEM modules (1) prepare time series of full-disk vector magnetic field observations to (2) derive the changing electric field in the solar photosphere over active-region scales. This local electric field is (3) incorporated into a surface flux transport model that reconstructs a global electric field that evolves magnetic flux in a consistent way. These electric fields drive a (4) 3D spherical magneto-frictional (SMF) model, either at high-resolution over a restricted range of solid angle or at lower resolution over a global domain, to determine the magnetic field and current density in the low corona. An SMF-generated initial field above an active region and the evolving electric field at the photosphere are used to drive (5) detailed magneto-hydrodynamic (MHD) simulations of active regions in the low corona. SMF or MHD solutions are then used to compute emissivity proxies that can be compared with coronal observations. Finally, a lower-resolution SMF magnetic field is used to initialize (6) a global MHD model that is driven by an SMF electric-field time series to simulate the outer corona and heliosphere, ultimately connecting Sun to Earth. As a demonstration, this report features results of CGEM applied to observations of the evolution of NOAA Active Region 11158 in February 2011.
- The COVID-19 pandemic started in Wuhan, China, and led to the worldwide spread of the RNA virus SARS-CoV-2, the causative agent of COVID-19. Because of its mutation rate, wide geographical distribution, and variance in host response, this coronavirus is currently evolving into an array of strains with increasing genetic diversity. Most variants apparently have neutral effects on disease spread and symptom severity. However, in the viral Spike protein, which is responsible for host-cell attachment and invasion, an emergent variant containing the amino acid substitution D to G at position 614 (D614G) was suggested to increase viral infection capability. To test whether this variant has an epidemiological impact, the temporal distributions of SARS-CoV-2 samples bearing D or G at position 614 were compared in the USA, Asia, and Europe. The epidemiological curves were compared at early and late epidemic stages. At early stages, when containment measures were still not fully implemented, the viral variants are presumed to be unconstrained, and their growth curves might approximate the free viral dynamics. Our analysis shows that D614G prevalence and the growth rates of COVID-19 epidemic curves are correlated in the USA, Asia, and Europe. Our results suggest a selective sweep that can be explained, at least in part, by a propagation advantage of this variant; in other words, the molecular-level effects of D614G have sufficient impact on population transmission dynamics to be detected as differences in the rate coefficients of epidemic growth curves.
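The rate coefficient compared between the D614 and G614 curves is the slope $r$ of a log-linear fit to an exponentially growing case series, $N(t)\sim e^{rt}$. A minimal estimator (a generic log-linear least-squares sketch, not the paper's exact fitting procedure):

```python
import math

def growth_rate(cumulative_cases):
    """Log-linear least-squares estimate of the exponential rate r in
    N(t) ~ exp(r t) from a case series at unit time steps (sketch).
    Assumes all counts are positive so the log is defined."""
    xs = list(range(len(cumulative_cases)))
    ys = [math.log(c) for c in cumulative_cases]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den  # slope of log(N) vs t

# A series that doubles every time step has rate r = ln 2.
doubling = [100 * 2 ** t for t in range(6)]
```

Comparing this coefficient between regions dominated by the D or G variant at early stages is what allows the propagation advantage to show up as a difference in growth rates.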