Recently bookmarked papers

with concepts:
  • I describe the recently proposed quantization of the bosonic string about the mean-field ground state, paying special attention to the differences from the usual quantization about the classical vacuum, which turns out to be unstable for $d>2$. In particular, the string susceptibility index $\gamma_{\rm str}$ is 1 in the usual perturbation theory, but equals 1/2 in the mean-field approximation that applies for $2<d<26$. I show that the total central charge equals zero for the mean field, which justifies the Alvarez-Arvis spectrum of the effective string, and argue that fluctuations about it do not spoil conformal invariance.
    Mean field, Mean-field approximation, Effective action, Propagator, Quantization, Central charge, Path integral, Conformal invariance, Saddle point, Regularization...
  • A family of quantum Hamiltonians is said to be universal if any other finite-dimensional Hamiltonian can be approximately encoded within the low-energy space of a Hamiltonian from that family. If the encoding is efficient, universal families of Hamiltonians can be used as universal analogue quantum simulators and universal quantum computers, and the problem of approximately determining the ground-state energy of a Hamiltonian from a universal family is QMA-complete. One natural way to categorise Hamiltonians into families is in terms of the interactions they are built from. Here we prove universality of some important classes of interactions on qudits ($d$-level systems): (1) We completely characterise the $k$-qudit interactions which are universal, if augmented with arbitrary 1-local terms. We find that, for all $k \geqslant 2$ and all local dimensions $d \geqslant 2$, almost all such interactions are universal aside from a simple stoquastic class. (2) We prove universality of generalisations of the Heisenberg model that are ubiquitous in condensed-matter physics, even if free 1-local terms are not provided. We show that the $SU(d)$ and $SU(2)$ Heisenberg interactions are universal for all local dimensions $d \geqslant 2$ (spin $\geqslant 1/2$), implying that a quantum variant of the Max-$d$-Cut problem is QMA-complete. We also show that for $d=3$ all bilinear-biquadratic Heisenberg interactions are universal. One example is the general AKLT model. (3) We prove universality of any interaction proportional to the projector onto a pure entangled state.
    Hamiltonian, Qubit, Rank, Isometry, Interference, Heisenberg model, Irreducible representation, Local Unitary, Casimir operator, Graph...
  • This work is devoted to a critical analysis of the theoretical prediction and astronomical observation of GR effects, first of all Mercury's perihelion advance. In the first part, the methodological issues of observations are discussed, including the practice of observations, a method of recognizing the relativistic properties of the effect and recovering it from the bulk of raw data, a parametric observational model, and finally, methods of assessing the value of the effect and its statistical level of confidence. In the second part, Mercury's perihelion advance and other theoretical problems are discussed in relation to the physical foundations of GR. Controversies in the literature devoted to the GR tests are analyzed. The unified GR approach to particles and photons is discussed with emphasis on the GR classical tests. Finally, an alternative theory of the treatment of relativistic effects is presented.
    General relativity, Perihelion, Mercury, Planet, Ephemerides, Thomas precession, Statistics, N-body problem, Proper time, Circular orbit...
  • J.S. Bell's work has convinced many that correlations in violation of CHSH inequalities show that the world itself is non-local, and that there is an apparently essential conflict between any sharp formulation of quantum theory and relativity. Against this consensus, this paper argues that there is no conflict between quantum theory and relativity. Quantum theory itself helps us explain such (otherwise) puzzling correlations in a way that contradicts neither Bell's intuitive locality principle nor his local causality condition. The argument depends on understanding quantum theory along pragmatist lines, and on a more general view of how that theory helps us explain. Quantum theory is compatible with Bell's intuitive locality principle and with his local causality condition not because it conforms to them, but because they are simply inapplicable to quantum theory, as so understood.
    Causality, Locality principle, CHSH inequality, Contradiction...
  • A systematic study of the spinor representation by means of the fermionic physical space is carried out. The spinor representation space is shown to be constrained by the Fierz-Pauli-Kofink identities among the spinor bilinear covariants. A robust geometric and topological structure is manifested by the spinor space, wherein, for instance, the first and second homotopy groups play prominent roles in the underlying physical properties associated with the fermionic fields.
    Spinor representation, Classification, Multivector, Topological invariant, Manifold, Clifford algebra, Fierz identity, Dirac spinor, Bundle, Fermionic field...
  • The radiative cooling timescales at the centers of hot atmospheres surrounding elliptical galaxies, groups, and clusters are much shorter than their ages. Therefore, hot atmospheres are expected to cool and to form stars. Cold gas and star formation are observed in central cluster galaxies but at levels below those expected from an unimpeded cooling flow. X-ray observations have shown that wholesale cooling is being offset by mechanical heating from radio-loud active galactic nuclei. Feedback is widely considered to be an important and perhaps unavoidable consequence of the evolution of galaxies and supermassive black holes. We show that cooling X-ray atmospheres and the ensuing star formation and nuclear activity are probably coupled to a self-regulated feedback loop. While the energetics are now reasonably well understood, other aspects of feedback are not. We highlight the problems of atmospheric heating and transport processes, accretion, and nuclear activity, and we discuss the potential role of black hole spin. We discuss X-ray imagery showing that the chemical elements produced by central galaxies are being dispersed on large scales by outflows launched from the vicinity of supermassive black holes. Finally, we comment on the growing evidence for mechanical heating of distant cluster atmospheres by radio jets and its potential consequences for the excess entropy in hot halos and a possible decline in the number of distant cooling flows.
    Active Galactic Nuclei, Cooling, Accretion, Cooling timescale, Galaxy, Cooling flow, Star formation, Optical bursts, Entropy, Black hole...
  • We use the conformal bootstrap to perform a precision study of 3d maximally supersymmetric ($\mathcal{N}=8$) SCFTs that describe the IR physics on $N$ coincident M2-branes placed either in flat space or at a $\mathbb{C}^4/\mathbb{Z}_2$ singularity. First, using the explicit Lagrangians of ABJ(M) and recent supersymmetric localization results, we calculate certain half and quarter-BPS OPE coefficients, both exactly at small $N$, and approximately in a large $N$ expansion that we perform to all orders in $1/N$. Comparing these values with the numerical bootstrap bounds leads us to conjecture that these theories obey an OPE coefficient minimization principle. We then use this conjecture as well as the extremal functional method to reconstruct the first few low-lying scaling dimensions and OPE coefficients for both protected and unprotected multiplets that appear in the OPE of two stress tensor multiplets for all values of $N$. We also calculate the half and quarter-BPS operator OPE coefficients in the $SU(2)_k \times SU(2)_{-k}$ BLG theory for all values of the Chern-Simons coupling $k$, and show that generically they do not obey the same OPE coefficient minimization principle.
    OPE coefficients, Supersymmetric CFT, Conformal Bootstrap, Scaling dimension, M-theory, Partition function, Conformal field theory, Operator product expansion, Flavour symmetry, ABJM theory...
  • A solution to the infinite coupling problem for $N=2$ conformal supersymmetric gauge theories in four dimensions is presented. The infinitely-coupled theories are argued to be interacting superconformal field theories (SCFTs) with weakly gauged flavor groups. Consistency checks of this proposal are found by examining some low-rank examples. As part of these checks, we show how to compute new exact quantities in these SCFTs: the central charges of their flavor current algebras. Also, the isolated rank 1 $E_6$ and $E_7$ SCFTs are found as limits of Lagrangian field theories.
    Central charge, Ranking, Flavour symmetry, Euler beta function, S-duality, Gauge coupling constant, Current algebra, Gauge theory, Vacuum expectation value, Scale invariance...
  • We consider a general gauge theory with a closed irreducible gauge algebra possessing a non-anomalous global (super)symmetry in the case when the gauge-fixing procedure violates the global invariance of the classical action. The theory is quantized in the framework of the BRST-BV approach in the form of a functional integral over all fields of the configuration space. It is shown that the global symmetry transformations are deformed in the process of quantization, and that the full quantum action is invariant under such deformed global transformations in the configuration space. The deformed global transformations are calculated in explicit form in the one-loop approximation.
    Global symmetry, Gauge theory, Batalin-Vilkovisky formalism, Quantization, Effective action, Supersymmetry, Gauge fixing, Gauge transformation, Super Yang-Mills theory, Jacobi identity...
  • We consider the open superstring action in the AdS$_4 \times \mathbf{CP}^3$ background and investigate suitable boundary conditions for the open superstring describing 1/2-BPS D-branes by imposing the $\kappa$-symmetry of the action. This results in a classification of 1/2-BPS D-branes from the covariant open superstring. It is shown that the 1/2-BPS D-brane configurations are restricted considerably by the Kähler structure on $\mathbf{CP}^3$. We consider only D-branes without worldvolume fluxes.
    D-brane, Anti de Sitter space, Superstring, Open string theory, Classification, Wess-Zumino term, Type IIA string theory, Supersymmetry, Plane wave, Covariant derivative...
  • We investigate geometric aspects of double field theory (DFT) and its formulation as a doubled membrane sigma-model. Starting from the standard Courant algebroid over the phase space of an open membrane, we determine a splitting and a projection to a subbundle that sends the Courant algebroid operations to the corresponding operations in DFT. This describes precisely how the geometric structure of DFT lies in between two Courant algebroids and is reconciled with generalized geometry. We construct the membrane sigma-model that corresponds to DFT, and demonstrate how the standard T-duality orbit of geometric and non-geometric flux backgrounds is captured by its action functional in a unified way. This also clarifies the appearance of noncommutative and nonassociative deformations of geometry in non-geometric closed string theory. Gauge invariance of the DFT membrane sigma-model is compatible with the flux formulation of DFT and its strong constraint, whose geometric origin is explained. Our approach leads to a new generalization of a Courant algebroid, which we call a DFT algebroid and relate to other known generalizations, such as pre-Courant algebroids and symplectic nearly Lie 2-algebroids. We also describe the construction of a gauge-invariant doubled membrane sigma-model that does not require imposing the strong constraint.
    Courant algebroid, Sigma model, Membrane, Courant bracket, Manifold, T-duality, Worldsheet, Gauge invariance, Tangent bundle, Vector bundle...
  • The spectrum of IIB supergravity on AdS${}_5 \times S^5$ contains a number of bound states described by long double-trace multiplets in $\mathcal{N}=4$ super Yang-Mills theory at large 't Hooft coupling. At large $N$ these states are degenerate and to obtain their anomalous dimensions as expansions in $\tfrac{1}{N^2}$ one has to solve a mixing problem. We conjecture a formula for the leading anomalous dimensions of all long double-trace operators which exhibits a large residual degeneracy whose structure we describe. Our formula can be related to conformal Casimir operators which arise in the structure of leading discontinuities of supergravity loop corrections to four-point correlators of half-BPS operators.
    Supergravity, Anomalous dimension, Operator product expansion, Casimir operator, N=4 supersymmetric Yang-Mills theory, Super Yang-Mills theory, Bound state, Anti de Sitter space, Unitarity, Completeness...
  • We investigate the phase structure and conductivity of a relativistic fluid in a circulating electric field with a transverse magnetic field. This system exhibits behavior similar to other driven systems such as strongly coupled driven CFTs [Rangamani2015] or a simple anharmonic oscillator. We identify distinct regions of fluid behavior as a function of driving frequency, and argue that a "phase" transition will occur. Such a transition could be measurable in graphene, and may be characterized by a sudden, discontinuous increase in the Hall conductivity. The presence of the discontinuity depends on how the boundary is approached as the frequency or amplitude is dialed. In the region where two solutions exist, the measured conductivity will depend on how the system is prepared.
    Phase transitions, Graphene, Constitutive relation, Hydrodynamic description, Instability, Fluid dynamics, Shear viscosity, Time-reversal symmetry, Perfect fluid, Laplace transform...
  • Generally, quantum field theories can be thought of as deformations away from conformal field theories. In this article, with a simple bottom-up model assumed to possess a holographic description, we study a putative large N quantum field theory with a large and arbitrary number of adjoint and fundamental degrees of freedom and a non-vanishing chiral anomaly, in the presence of an external magnetic field and with a non-vanishing density. Motivated by the richness of quantum chromodynamics under similar conditions, we explore the solution space to find an infinite class of scale-invariant, but not conformal, field theories that may play a pivotal role in defining the corresponding physics. In particular, we find two classes of geometries: Schrödinger-isometric and warped AdS$_3$ geometries with an $SL(2,R) \times U(1)$ isometry. We find hints of spontaneous breaking of translational symmetry, at low temperatures, around the warped backgrounds.
    Chern-Simons term, Conformal field theory, Quantum chromodynamics, Field theory, Instability, Event horizon, Degree of freedom, Born-Infeld action, D-brane, Quantum field theory...
  • Using the holographic model for spontaneous symmetry breaking, we study some properties of the dual superfluid, such as the thermodynamic exponents, the Joule-Thomson coefficient, and the compressibility. Our focus is on how these properties vary with the scaling dimension and the charge of the operator that undergoes condensation.
    Entropy, Scaling dimension, Black hole, Condensation, Scalar field, Graph, Horizon, Charged black hole, Field theory, Superfluid...
  • During reionization, neutral hydrogen in the intergalactic medium (IGM) imprints a damping wing absorption feature on the spectrum of high-redshift quasars. A detection of this signature provides compelling evidence for a significantly neutral Universe, and enables measurements of the hydrogen neutral fraction $x_{\rm HI}(z)$ at that epoch. Obtaining reliable quantitative constraints from this technique, however, is challenging due to stochasticity induced by the patchy inside-out topology of reionization, degeneracies with quasar lifetime, and the unknown unabsorbed quasar spectrum close to rest-frame Ly$\alpha$. We combine a large-volume semi-numerical simulation of reionization topology with 1D radiative transfer through high-resolution hydrodynamical simulations of the high-redshift Universe to construct models of quasar transmission spectra during reionization. Our state-of-the-art approach captures the distribution of damping wing strengths in biased quasar halos that should have reionized earlier, as well as the erosion of neutral gas in the quasar environment caused by its own ionizing radiation. Combining this detailed model with our new technique for predicting the quasar continuum and its associated uncertainty, we introduce a Bayesian statistical method to jointly constrain the neutral fraction of the Universe and the quasar lifetime from individual quasar spectra. We apply this methodology to the spectra of the two highest redshift quasars known, ULAS J1120+0641 and ULAS J1342+0928, and measure volume-averaged neutral fractions $\langle x_{\rm HI} \rangle(z=7.09)=0.48^{+0.26}_{-0.26}$ and $\langle x_{\rm HI} \rangle(z=7.54)=0.60^{+0.20}_{-0.23}$ (posterior medians and 68% credible intervals) when marginalized over quasar lifetimes of $10^3 \leq t_{\rm q} \leq 10^8$ years.
    Quasar, Damping Wing of Gunn-Peterson Trough, Reionization, Intergalactic medium, Ionizing radiation, Principal component analysis, Hydrodynamical simulations, Training set, Ionization, History of the reionization...
  • The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted in the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skill, effort or risk taking. Sometimes we are willing to admit that a certain degree of luck could also play a role in achieving significant material success. But, as a matter of fact, it is rather common to underestimate the importance of external forces in individual success stories. It is very well known that intelligence or talent exhibits a Gaussian distribution among the population, whereas the distribution of wealth (considered a proxy of success) typically follows a power law (Pareto law). Such a discrepancy between a normal distribution of inputs, with a typical scale, and the scale-invariant distribution of outputs suggests that some hidden ingredient is at work behind the scenes. In this paper, with the help of a very simple agent-based model, we suggest that such an ingredient is just randomness. In particular, we show that while some degree of talent is necessary to be successful in life, the most talented people almost never reach the highest peaks of success, being overtaken by mediocre but considerably luckier individuals. To our knowledge, this counterintuitive result, although implicitly suggested between the lines in a vast literature, is quantified here for the first time. It sheds new light on the effectiveness of assessing merit on the basis of the level of success reached, and underlines the risk of distributing excessive honors or resources to people who, at the end of the day, could simply have been luckier than others. With the help of this model, several policy hypotheses are also addressed and compared, to show the most efficient strategies for public funding of research in order to improve meritocracy, diversity and innovation.
    Gaussian distribution, Scientific impact, Agent-based model, Ranking, Market, Abundance, Winner-take-all, Network science, Social systems, Ellipticity...
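    A minimal sketch of this kind of agent-based simulation (event rates, talent distribution and horizon are illustrative placeholders, not the authors' values): agents with Gaussian talent experience random lucky and unlucky events; a lucky event doubles an agent's capital only with probability equal to their talent, an unlucky one halves it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 1000, 80                                    # agents, time steps (assumed)
    talent = np.clip(rng.normal(0.6, 0.1, N), 0, 1)    # Gaussian-distributed talent
    capital = np.full(N, 10.0)                         # everyone starts equal

    for _ in range(T):
        event = rng.random(N)
        lucky = event < 0.03                           # assumed per-step event rates
        unlucky = (event >= 0.03) & (event < 0.06)
        exploit = rng.random(N) < talent               # luck pays off only if exploited
        capital = np.where(lucky & exploit, capital * 2, capital)
        capital = np.where(unlucky, capital / 2, capital)

    print("most talented agent:", talent.argmax(), "| richest agent:", capital.argmax())
    ```

    Across random seeds the richest agent is rarely the most talented one, and final capital spans orders of magnitude despite the narrow talent distribution, which is the paper's central point.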
  • We examine a pair of dynamical systems on the plane induced by a pair of spanning trees in the Cayley graph of the Super-Apollonian group of Graham, Lagarias, Mallows, Wilks and Yan. The dynamical systems compute Gaussian rational approximations to complex numbers and are "reflective" versions of the complex continued fractions of A. L. Schmidt. They also describe a reduction algorithm for Lorentz quadruples, in analogy to work of Romik on Pythagorean triples. For these dynamical systems, we produce an invertible extension and an invariant measure, which we conjecture is ergodic. We consider some statistics of the related continued fraction expansions, and we also examine the restriction of these systems to the real line, which gives a reflective version of the usual continued fraction algorithm. Finally, we briefly consider an alternate setup corresponding to a tree of Lorentz quadruples ordered by arithmetic complexity.
    Geodesic, Curvature, Graph, Cayley graph, Spanning tree, Ergodicity, Isometry, Arithmetic, Complex number, Apollonian sphere packing...
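    For reference, the real-line system the authors compare against is the usual continued fraction algorithm; a quick sketch of that classical algorithm and its convergents (not the paper's reflective complex variant):

    ```python
    import math
    from fractions import Fraction

    def cf_expansion(x, n):
        """First n partial quotients of the usual continued fraction algorithm (x > 0)."""
        quotients = []
        for _ in range(n):
            a = int(x)                 # floor for positive x
            quotients.append(a)
            frac = x - a
            if frac == 0:
                break
            x = 1 / frac
        return quotients

    def convergents(quotients):
        """Convergents p_k/q_k via p_k = a_k p_{k-1} + p_{k-2} (same for q)."""
        p0, q0, p1, q1 = 1, 0, quotients[0], 1
        yield Fraction(p1, q1)
        for a in quotients[1:]:
            p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
            yield Fraction(p1, q1)

    print(list(convergents(cf_expansion(math.pi, 6))))   # 3, 22/7, 333/106, 355/113, ...
    ```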
  • As long as people have studied mathematics, they have wanted to know how many primes there are. Getting precise answers is a notoriously difficult problem, and the first suitable technique, due to Riemann, inspired an enormous amount of great mathematics, the techniques and insights permeating many different fields. In this article we will review some of the best techniques for counting primes, centering our discussion around Riemann's seminal paper. We will go on to discuss its limitations, and then recent efforts to replace Riemann's theory with one that is significantly simpler.
    Field
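    As a toy illustration of the counting problem itself (not of Riemann's method): sieve the primes and compare $\pi(x)$ with the crude estimate $x/\log x$ and with the logarithmic integral that Riemann's theory refines.

    ```python
    import numpy as np
    from scipy.special import expi     # li(x) = Ei(log x)

    def primes_up_to(n):
        sieve = np.ones(n + 1, dtype=bool)
        sieve[:2] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = False
        return np.flatnonzero(sieve)

    N = 10 ** 6
    pi_N = len(primes_up_to(N))
    print(pi_N, N / np.log(N), expi(np.log(N)))   # 78498 vs ~72382 vs ~78628
    ```

    The logarithmic integral overshoots by only ~130 here, while $x/\log x$ is off by ~6000, which is why the refinements discussed in the article matter.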
  • We present a framework to link and describe AGN variability on a wide range of timescales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different timescales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio ($L/L_{\rm Edd}$) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the $L/L_{\rm Edd}$ distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different timescales, therefore providing new insights into AGN variability and black hole growth phenomena.
    Active Galactic Nuclei, Power spectral density, Light curve, Galaxy, Accretion, Supermassive black hole, Luminosity, Black hole, Accretion disk, Quasar...
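    A common way to realise a light curve from an assumed PSD, in the spirit of the framework (the paper's full ERDF+PSD machinery is richer), is the standard Timmer & König recipe; the power-law PSD and structure-function lags below are placeholders.

    ```python
    import numpy as np

    def simulate_lightcurve(psd, n, dt, rng):
        """Gaussian light curve with a given one-sided PSD (Timmer & Koenig 1995),
        up to an overall normalization."""
        freqs = np.fft.rfftfreq(n, dt)[1:]            # positive Fourier frequencies
        amp = np.sqrt(0.5 * psd(freqs))
        spec = np.concatenate(([0.0 + 0j],
                               rng.normal(size=freqs.size) * amp
                               + 1j * rng.normal(size=freqs.size) * amp))
        if n % 2 == 0:
            spec[-1] = spec[-1].real                  # Nyquist bin must be real
        return np.fft.irfft(spec, n)

    rng = np.random.default_rng(1)
    lc = simulate_lightcurve(lambda f: f ** -2.0, n=4096, dt=1.0, rng=rng)  # red noise
    sf = [np.mean((lc[tau:] - lc[:-tau]) ** 2) for tau in (1, 4, 16, 64)]   # SF(tau)
    print(sf)
    ```

    Computing the structure function of the simulated curve at several lags, as in the last line, is the same diagnostic the paper assembles across timescales from days to Gyr.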
  • Galaxy cluster cores are pervaded by hot gas which radiates at far too high a rate to maintain any semblance of a steady state; this is referred to as the cooling flow problem. Of the many heating mechanisms that have been proposed to balance radiative cooling, one of the most attractive is dissipation of acoustic waves generated by Active Galactic Nuclei (AGN). Fabian (2005) showed that if the waves are nearly adiabatic, wave damping due to heat conduction and viscosity must be well below standard Coulomb rates in order to allow the waves to propagate throughout the core. Because of the importance of this result, we have revisited wave dissipation under galaxy cluster conditions in a way that accounts for the self limiting nature of dissipation by electron thermal conduction, allows the electron and ion temperature perturbations in the waves to evolve separately, and estimates kinetic effects by comparing to a semi-collisionless theory. While these effects considerably enlarge the toolkit for analyzing observations of wavelike structures and developing a quantitative theory for wave heating, the drastic reduction of transport coefficients proposed in Fabian (2005) remains the most viable path to acoustic wave heating of galaxy cluster cores.
    Dissipation, Acoustic wave, Viscosity, Cluster of galaxies, Active Galactic Nuclei, Radiative cooling, Damping rate, Turbulence, Cosmic ray, Cooling flow...
  • Cluster cool cores frequently possess networks of line-emitting filaments. These filaments are thought to have originated most likely via uplift of cold gas from cluster centers by buoyant bubbles inflated by the central active galactic nuclei (AGN), or via local thermal instability in the hot intracluster medium (ICM). Therefore the filaments are either the signatures of black hole feedback or feeding of the supermassive black holes. Despite being characterized by cooling times much shorter than cool core dynamical times, the filaments are significant H$\alpha$ emitters, which suggests that some process continuously powers these structures. Many cluster cool cores host diffuse radio mini halos and AGN injecting radio plasma into the ambient ICM, suggesting that some cosmic rays (CRs) and magnetic fields are present in the ICM. We argue that the excitation of Alfvén waves by CR streaming, and the replenishment of CR energy via accretion onto the filaments of high plasma-$\beta$ ICM characterized by low CR pressure support, can provide the adequate amount of heating to power and sustain the emission from these filaments. This mechanism does not require the CRs to penetrate the filaments even if the filaments are magnetically isolated from the ambient ICM, and it may operate irrespective of whether the filaments are dredged up from the center or form in situ in the ICM. This picture is qualitatively consistent with non-thermal line ratios seen in the cold filament gas. Future X-ray observations of the iron line complex with XARM, Lynx, or Athena could help to test this model by providing constraints on the amount of CRs in the hot plasma that is cooling and accreting onto the filaments.
    Cosmic ray, Accretion, Active Galactic Nuclei, Cool core galaxy cluster, Pressure support, Cooling, Cooling timescale, Instability, Speed of sound, Two-stream instability...
  • A pair of natural numbers (a,b) such that a is both squarefree and coprime to b is called a carefree couple. A result on carefree couples conjectured by Manfred Schroeder (in his book 'Number theory in science and communication') and a variant of it are established using standard arguments from elementary analytic number theory. A related conjecture of Schroeder on triples of integers that are pairwise coprime is also proved.
    Coprime, Counting, Analytic number theory, Pairwise coprime, Number theory, Statistics, Riemann zeta function, Euler's totient function, Euler product, Complex number...
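    A quick numerical check of the statement, assuming the carefree constant has the Euler-product form $\prod_p \left(1 - \frac{1}{p(p+1)}\right)$: count carefree couples by brute force and compare the density with a truncated product (both cut-offs are arbitrary).

    ```python
    from math import gcd, isqrt, prod

    def squarefree(n):
        return all(n % (d * d) for d in range(2, isqrt(n) + 1))

    N = 2000
    sqf = [False] + [squarefree(a) for a in range(1, N + 1)]
    count = sum(1 for a in range(1, N + 1) if sqf[a]
                  for b in range(1, N + 1) if gcd(a, b) == 1)

    def primes_up_to(n):                      # simple sieve for the Euler product
        s = [True] * (n + 1); s[0] = s[1] = False
        for p in range(2, isqrt(n) + 1):
            if s[p]:
                s[p * p::p] = [False] * len(s[p * p::p])
        return [i for i, v in enumerate(s) if v]

    K = prod(1 - 1 / (p * (p + 1)) for p in primes_up_to(10 ** 5))
    print(count / N ** 2, K)                  # both approach ~0.7044
    ```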
  • The exceptional precision attainable using modern spectroscopic techniques provides a promising avenue to search for signatures of physics beyond the Standard Model in tiny shifts of the energy levels of atoms and molecules. We briefly review three categories of new-physics searches based on precision measurements: tests of QED using measurements of the anomalous magnetic moment of the electron and the value of the fine-structure constant, searches for time variation of the fundamental constants, and searches for a permanent electric dipole moment of the electron or of an atomic nucleus.
    Precision, Beyond the Standard Model, Anomalous magnetic dipole moment, Fine structure constant, Atomic nucleus, Electric dipole moment, Quantum electrodynamics, Atoms and molecules, Spectroscopy, Measurement...
  • We propose a model with a $U(1)_{B-L}$ gauge symmetry in which small neutrino masses, dark matter and the matter-antimatter asymmetry of the Universe can be simultaneously explained. In particular, the neutrino masses are generated radiatively, while the matter-antimatter asymmetry is produced by the leptogenesis mechanism, both at the TeV scale. We also explore the allowed regions of the model parameters and discuss some phenomenological effects, including lepton flavor violating processes.
    Dark matter, Neutrino mass, Gauge symmetry, Leptogenesis, Baryon asymmetry of the Universe, Sterile neutrino, TeV scale, Lepton flavour violation, Active neutrino, Standard Model...
  • Contrary to what would be predicted on the basis of Cramér's model concerning the distribution of prime numbers, we develop evidence that the distribution of $\psi(x+H)- \psi(x)$, for $0\le x\le N$, is approximately normal with mean $\sim H$ and variance $\sim H\log(N/H)$, when $N^\delta \le H \le N^{1-\delta}$.
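    A small numerical experiment at modest $N$ (far below the ranges where the asymptotics bite, so only suggestive): tabulate the Chebyshev function $\psi$ via a sieve of von Mangoldt's $\Lambda$ and compare the sample mean and variance of $\psi(x+H)-\psi(x)$ with $H$ and $H\log(N/H)$.

    ```python
    import numpy as np

    def von_mangoldt(n_max):
        """Lambda(n) via a smallest-prime-factor sieve: log p if n = p^k, else 0."""
        lam = np.zeros(n_max + 1)
        spf = np.zeros(n_max + 1, dtype=np.int64)
        for p in range(2, n_max + 1):
            if spf[p] == 0:                         # p is prime
                spf[p::p] = np.where(spf[p::p] == 0, p, spf[p::p])
        for n in range(2, n_max + 1):
            p, m = spf[n], n
            while m % p == 0:
                m //= p
            if m == 1:                              # n is a prime power
                lam[n] = np.log(p)
        return lam

    N, H = 10 ** 6, 10 ** 3
    psi = np.cumsum(von_mangoldt(N + H))            # psi[x] = psi(x)
    diffs = psi[H:N + H + 1] - psi[:N + 1]          # psi(x+H) - psi(x), x = 0..N
    print(diffs.mean(), H)                          # mean ~ H
    print(diffs.var(), H * np.log(N / H))           # variance ~ H log(N/H)
    ```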
  • The general concept of symmetry is realized in manifold ways in different realms of reality, such as plants, animals, minerals, mathematical objects or human artefacts in literature, fine arts and society. In order to arrive at common ground for this variety, a very general conceptualization of symmetry is proposed: the existence of substitutions which, in the given context, do not lead to an essential change. This simple definition has multiple consequences: (i) the context dependence of the notion of symmetry is evident in the humanities but by no means irrelevant, yet often neglected, in science, and the subtle problematic of concept formation and the ontological status of similarities opens up; (ii) in general, the substitutions underlying the concept of symmetry are not actually performed but remain in a state of virtuality, so counterfactuality, freedom and creativity come into focus, and the detection of previously hidden symmetries may provide deep and surprising insights; (iii) related to this, due attention is devoted to the aesthetic dimension of symmetry and of its breaking; (iv) finally, we point out to what extent life is based on the interplay between order and freedom, between full and broken symmetry.
    Manifold, Symmetry, Dimensions, Object...
  • The axiomatic structure of the electromagnetic theory is outlined. We will base classical electrodynamics on (1) electric charge conservation, (2) the Lorentz force, (3) magnetic flux conservation, and (4) on the Maxwell-Lorentz spacetime relations. This yields the Maxwell equations. The consequences will be drawn, inter alia, for the interpretation and the dimension of the electric and magnetic fields.
    Electrodynamics, Charge conservation, Differential form of degree three, Two-form, Rank, Magnetic field strength, Current density, International System of Units, Differential form, Hodge star...
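    In exterior-calculus form, following the premetric-electrodynamics literature this entry and the two below belong to, the axioms condense to two metric-free field equations plus one constitutive relation (conventions assumed from that literature):

    $$ \mathrm{d}H = J, \qquad \mathrm{d}F = 0, \qquad H = \lambda_0 \, {}^{\star}F, \quad \lambda_0 = \sqrt{\varepsilon_0/\mu_0}, $$

    where the excitation $H$ packages $(\mathcal{D}, \mathcal{H})$, the field strength $F$ packages $(E, B)$, and only the last (Maxwell-Lorentz) relation requires a metric, through the Hodge star.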
  • We will display the fundamental structure of classical electrodynamics. Starting from the axioms of (1) electric charge conservation, (2) the existence of a Lorentz force density, and (3) magnetic flux conservation, we will derive Maxwell's equations. They are expressed in terms of the field strengths $(E,{\cal B})$, the excitations $({\cal D},H)$, and the sources $(\rho,j)$. This fundamental set of four microphysical equations has to be supplemented by somewhat less general constitutive assumptions in order to make it a fully determined system with a well-posed initial value problem. It is only at this stage that a distance concept (metric) is required for space-time. We will discuss one set of possible constitutive assumptions, namely ${\cal D}\sim E$ and $H\sim {\cal B}$.
  • We sketch the foundations of classical electrodynamics, in particular the transition that took place when Einstein, in 1915, succeeded in formulating general relativity. In 1916 Einstein demonstrated that, with a choice of suitable variables for the electromagnetic field, it is possible to put Maxwell's equations into a form that is covariant under general coordinate transformations. This unfolded, through basic contributions of Kottler, Cartan, van Dantzig, Schouten & Dorgelo, Toupin & Truesdell, and Post, into what one may call premetric classical electrodynamics. This framework will be described briefly. An analysis is given of the physical dimensions involved in electrodynamics, and subsequently the question of units is addressed. It will be pointed out that these results are untouched by the generalization of classical to quantum electrodynamics (QED). We critically compare our results with those of L.B. Okun, which he presented at a recent conference.
    Electrodynamics, General relativity, Quantum electrodynamics, General covariance, Electromagnetism, Covariance, Physical dimensions, Fine structure constant, Charge conservation, Electricity and magnetism...
  • In this manuscript we review the theoretical foundations of gravitational waves in the framework of Albert Einstein's theory of general relativity. Following Einstein's early efforts, we first derive the linearised Einstein field equations and work out the corresponding gravitational wave equation. Moreover, we present the gravitational potentials in the far-away wave zone (field point) approximation obtained from the relaxed Einstein field equations. We close this review by taking a closer look at the radiative losses of gravitating $n$-body systems and present some aspects of the current interferometric gravitational-wave detectors. Each section possesses a separate appendix where further computational details are displayed. To conclude, we summarize the main results and present a brief outlook on current ongoing efforts to build a space-based gravitational-wave observatory.
    Gravitational wave, Gravitational radiation, Binary star, Interferometers, Einstein field equations, Lasers, Laser Interferometer Gravitational-Wave Observatory, Double pulsar, Earth, Laser Interferometer Space Antenna...
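    The standard derivation the review follows can be summarized in a few lines of textbook material: expand about flat spacetime, pass to the trace-reversed perturbation in the Lorenz gauge,

    $$ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h, \qquad \partial^{\mu}\bar{h}_{\mu\nu} = 0, $$

    whereupon the linearised Einstein field equations reduce to a wave equation sourced by the stress-energy tensor:

    $$ \Box\,\bar{h}_{\mu\nu} = -\frac{16\pi G}{c^{4}}\,T_{\mu\nu}. $$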
  • The mainstream view of meaning is that it is emergent, not fundamental, but some have disputed this, asserting that there is a more fundamental level of reality than that addressed by current physical theories, and that matter and meaning are in some way entangled. In this regard there are intriguing parallels between the quantum and biological domains, suggesting that there may be a more fundamental level underlying both. I argue that the organisation of this fundamental level is already to a considerable extent understood by biosemioticians, who have fruitfully integrated Peirce's sign theory into biology; things will happen there resembling what happens with familiar life, but the agencies involved will differ in ways reflecting their fundamentality, in other words they will be less complex, but still have structures complex enough for what they have to do. According to one approach, developed in a collaboration in which I have been involved, a part of what they have to do, along with the need to survive and reproduce, is to stop situations becoming too chaotic, a concept that accords with familiar 'edge of chaos' ideas. Such an extension of sign theory (semiophysics?) needs to be explored by physicists, possible tools being computational models, existing insights into complexity, and dynamical systems theory. Such a theory will not be mathematical in the same way that conventional physics theories are mathematical: rather than being foundational, mathematics will be 'something that life does', something that sufficiently evolved life does because in the appropriate context so doing is of value to life.
    Computational modelling, Chaos, Theory, Dynamical systems...
  • The solar chemical composition is an important ingredient in our understanding of the formation, structure and evolution of both the Sun and our solar system. Furthermore, it is an essential reference standard against which the elemental contents of other astronomical objects are compared. In this review we evaluate the current understanding of the solar photospheric composition. In particular, we present a re-determination of the abundances of nearly all available elements, using a realistic new 3-dimensional (3D), time-dependent hydrodynamical model of the solar atmosphere. We have carefully considered the atomic input data and selection of spectral lines, and accounted for departures from LTE whenever possible. The end result is a comprehensive and homogeneous compilation of the solar elemental abundances. Particularly noteworthy findings are significantly lower abundances of carbon, nitrogen, oxygen and neon compared with the widely-used values of a decade ago. The new solar chemical composition is supported by a high degree of internal consistency between available abundance indicators, and by agreement with values obtained in the solar neighborhood and from the most pristine meteorites. There is, however, a stark conflict with standard models of the solar interior according to helioseismology, a discrepancy that has yet to find a satisfactory resolution.
    Meteorites, Sun, Photosphere, Solar abundances, Ionization, Fluid dynamics, Star, Solar photosphere, Solar system, Solar wind...
  • We demonstrate that a Meijer-G-function-based resummation approach can be successfully applied to approximate the Borel sum of divergent series, and thus to approximate the Borel-Écalle summation of resurgent transseries in quantum field theory (QFT). The proposed method is shown to vastly outperform the conventional Borel-Padé and Borel-Padé-Écalle summation methods. The resulting Meijer-G approximants are easily parameterized by means of a hypergeometric ansatz and can be thought of as a generalization to arbitrary order of the Borel-Hypergeometric method [Mera et al., Phys. Rev. Lett. 115, 143001 (2015)]. Here we illustrate the power of this technique with various examples from QFT traditionally employed as benchmark models for resummation, such as: 0-dimensional $\phi^4$ theory, $\phi^4$ with degenerate minima, self-interacting QFT in 0 dimensions, and the computation of one- and two-instanton contributions in the quantum-mechanical double-well problem.
    Instanton, Quantum field theory, Resummation, Hypergeometric function, Meijer G-function, Rational function, Partition function, Radius of convergence, Asymptotic expansion, Perturbation theory...
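    A sketch of the conventional Borel-Padé baseline the paper improves upon, on its first benchmark (the 0-dimensional $\phi^4$ partition function, whose asymptotic coefficients are $c_n = (-1/24)^n (4n-1)!!/n!$); the Meijer-G construction would replace the Padé step with a hypergeometric ansatz. Truncation order and coupling below are arbitrary.

    ```python
    import mpmath as mp

    mp.mp.dps = 30
    # Z(g) = int dx/sqrt(2 pi) exp(-x^2/2 - g x^4/4!)  ~  sum_n c_n g^n (divergent)
    def c(n):
        return (mp.mpf(-1) / 24) ** n * mp.fac2(4 * n - 1) / mp.fac(n)

    N = 20
    borel = [c(n) / mp.fac(n) for n in range(N + 1)]   # Borel-transform coefficients
    p, q = mp.pade(borel, N // 2, N // 2)              # diagonal Pade approximant

    def B(t):                                          # rational model of the Borel sum
        return (sum(pi * t ** i for i, pi in enumerate(p))
                / sum(qi * t ** i for i, qi in enumerate(q)))

    g = mp.mpf("0.3")
    Z_resummed = mp.quad(lambda t: mp.exp(-t) * B(g * t), [0, mp.inf])  # Laplace integral
    Z_exact = mp.quad(lambda x: mp.exp(-x * x / 2 - g * x ** 4 / 24),
                      [-mp.inf, mp.inf]) / mp.sqrt(2 * mp.pi)
    print(Z_resummed, Z_exact)   # agree to many digits while the raw series diverges
    ```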
  • Shannon's mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon's theory. An annotated reading list is provided for further reading.
    Entropy, Information theory, Mutual information, Gaussian noise, Signal to noise ratio, Gaussian distribution, Noise power, Precision, Inference, Uniform distribution...
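    A tiny worked instance of the central quantity, mutual information, for the binary symmetric channel (the crossover probability is illustrative): with a uniform input, $I(X;Y)$ equals the channel capacity $1 - H_2(\epsilon)$.

    ```python
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))                      # bits

    def bsc_mutual_information(p_x1, eps):
        """I(X;Y) = H(X) + H(Y) - H(X,Y) for a binary symmetric channel."""
        px = np.array([1 - p_x1, p_x1])
        pygx = np.array([[1 - eps, eps], [eps, 1 - eps]])   # rows: x, cols: y
        pxy = px[:, None] * pygx
        return entropy(px) + entropy(pxy.sum(axis=0)) - entropy(pxy.ravel())

    print(bsc_mutual_information(0.5, 0.1))                 # ~0.531 bits = 1 - H2(0.1)
    ```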
  • We discuss fields whose internal degrees of freedom are expressed by either the Grassmann or the Clifford "coordinates". Since both kinds of "coordinates" fulfill anticommutation relations, both kinds of fields can be second quantized so that their creation and annihilation operators fulfill the requirements of the commutation relations for fermion fields. However, while the Clifford objects $S^{ab}$ and $\tilde{S}^{ab}$ (in the spin-charge-family theory $S^{ab}$ determine the spin degrees of freedom and $\tilde{S}^{ab}$ the family degrees of freedom) carry half-integer spin, the Grassmann generators ${\cal {\bf S}}^{ab}$ (expressible as $S^{ab} + \tilde{S}^{ab}$) carry integer spin. Nature obviously "made" a choice of the Clifford algebra, at least in the so-far observed part of our universe. We discuss the first and second quantization of the fields whose internal degrees of freedom are functions of the Grassmann coordinates $\theta$ and their conjugate momenta, and of the fields whose internal degrees of freedom are functions of the Clifford $\gamma^{a}$. Inspiration comes from the spin-charge-family theory, in which the action for fermions in $d$-dimensional space is $ \int \; d^dx \; E\;\frac{1}{2}\, (\bar{\psi} \, \gamma^a p_{0a} \psi) + h.c.$, with $p_{0a} = f^{\alpha}{}_a p_{0\alpha} +\frac{1}{2E}\, \{ p_{\alpha}, E f^{\alpha}{}_a\}_- $ and $ p_{0\alpha} = p_{\alpha} - \frac{1}{2} S^{ab} \omega_{ab \alpha} - \frac{1}{2} \tilde{S}^{ab} \tilde{\omega}_{ab \alpha}$. We write the basic states of the Grassmann fields and the Clifford fields as products of either Grassmann or Clifford objects, trying to understand "the choice of nature". We look for the action for free fields in Grassmann and Clifford space to understand why the Clifford algebra "wins" the competition for the physical degrees of freedom.
    Vacuum state, Degree of freedom, Clifford algebra, Nilpotent, Creation and annihilation operators, Lorentz transformation, Vector space, Gauge field, Second quantization, Superposition...
  • A central claim in modern network science is that real-world networks are typically "scale free," meaning that the fraction of nodes with degree $k$ follows a power law, decaying like $k^{-\alpha}$, often with $2 < \alpha < 3$. However, empirical evidence for this belief derives from a relatively small number of real-world networks. We test the universality of scale-free structure by applying state-of-the-art statistical tools to a large corpus of nearly 1000 network data sets drawn from social, biological, technological, and informational sources. We fit the power-law model to each degree distribution, test its statistical plausibility, and compare it via a likelihood ratio test to alternative, non-scale-free models, e.g., the log-normal. Across domains, we find that scale-free networks are rare, with only 4% exhibiting the strongest-possible evidence of scale-free structure and 52% exhibiting the weakest-possible evidence. Furthermore, evidence of scale-free structure is not uniformly distributed across sources: social networks are at best weakly scale free, while a handful of technological and biological networks can be called strongly scale free. These results undermine the universality of scale-free networks and reveal that real-world networks exhibit a rich structural diversity that will likely require new ideas and mechanisms to explain.
    Scale-free, Graph, Degree sequence, Degree distribution, Simple graph, Multigraph, Likelihood-ratio test, Social network, Preferential attachment, Network science...
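    A stripped-down version of the kind of test used (the full recipe of Clauset et al. also selects $x_{\min}$ by a Kolmogorov-Smirnov criterion and uses a Vuong test; the synthetic data and $x_{\min}$ below are placeholders):

    ```python
    import numpy as np
    from scipy import stats

    def fit_power_law(x, xmin):
        """Continuous MLE for alpha on the tail x >= xmin (Clauset et al. 2009)."""
        tail = x[x >= xmin]
        alpha = 1 + tail.size / np.sum(np.log(tail / xmin))
        return alpha, tail

    def loglik_ratio(tail, xmin, alpha):
        """Normalized log-likelihood ratio, power law vs lognormal, on the tail.
        (For brevity the lognormal is fit but not renormalized to the tail.)"""
        lpl = np.log(alpha - 1) - np.log(xmin) - alpha * np.log(tail / xmin)
        mu, sigma = np.mean(np.log(tail)), np.std(np.log(tail))
        lln = stats.lognorm.logpdf(tail, s=sigma, scale=np.exp(mu))
        d = lpl - lln
        return d.sum() / (d.std() * np.sqrt(d.size))   # > 0 favours the power law

    rng = np.random.default_rng(0)
    x = stats.lognorm.rvs(s=1.2, scale=50, size=5000, random_state=rng)
    xmin = np.median(x)
    alpha, tail = fit_power_law(x, xmin)
    print(alpha, loglik_ratio(tail, xmin, alpha))       # negative: lognormal wins
    ```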
  • Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely-many-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce Hyperband for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with state-of-the-art methods on a suite of hyperparameter optimization problems. We observe that Hyperband provides a five- to thirty-fold speedup over state-of-the-art Bayesian optimization algorithms on a variety of deep-learning and kernel-based learning problems.
    Hyperparameter, Optimization, Bayesian, Horizon, Rank, Early stopping, Two-photon exchange, Regularization, Deep learning, Machine learning...
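    A compact sketch of the bracket structure (the toy objective and parameters are made up; the precise algorithm is specified in the paper): each bracket trades off the number of sampled configurations against the budget each one receives, then prunes by successive halving.

    ```python
    import math, random

    def hyperband(get_config, run_config, max_iter=81, eta=3):
        s_max = round(math.log(max_iter, eta))        # assumes max_iter = eta^k
        best_loss, best_cfg = float("inf"), None
        for s in range(s_max, -1, -1):                # one bracket per s
            n = math.ceil((s_max + 1) * eta ** s / (s + 1))   # configs in bracket
            r = max_iter * eta ** (-s)                        # initial budget each
            configs = [get_config() for _ in range(n)]
            for i in range(s + 1):                    # successive halving rounds
                budget = int(r * eta ** i)
                losses = [run_config(c, budget) for c in configs]
                for l, c in zip(losses, configs):
                    if l < best_loss:
                        best_loss, best_cfg = l, c
                order = sorted(range(len(configs)), key=losses.__getitem__)
                configs = [configs[j] for j in order[: len(configs) // eta]]
                if not configs:
                    break
        return best_loss, best_cfg

    # toy usage: loss of a 1-d "hyperparameter" improves with training budget
    print(hyperband(lambda: random.uniform(-3, 3),
                    lambda c, b: (c - 1.0) ** 2 + 1.0 / b))
    ```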
  • First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient based methods, and in certain settings improve upon the overall running time over popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.
    Statistical estimator, Optimization, Machine learning, Rank, Stochastic optimization, Convex set, Regularization, Stochastic gradient descent, Completeness, Regression...
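    A hedged toy in the spirit of such methods (not the paper's algorithm): Newton-type steps for least squares where the inverse-Hessian-vector product is estimated from mini-batch Hessian-vector products via a truncated Neumann series, so each inner step costs about as much as a mini-batch gradient.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 2000, 10
    A = rng.normal(size=(n, d))
    y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    def grad(w, idx):                   # mini-batch gradient of 0.5*||Aw - y||^2/n
        return A[idx].T @ (A[idx] @ w - y[idx]) / idx.size

    def hvp(v, idx):                    # mini-batch Hessian-vector product
        return A[idx].T @ (A[idx] @ v) / idx.size

    scale = 0.9 * n / np.linalg.norm(A, 2) ** 2   # keeps the scaled Hessian norm < 1

    w = np.zeros(d)
    for t in range(50):
        idx = rng.choice(n, 64, replace=False)
        g = scale * grad(w, idx)
        u = g.copy()
        for _ in range(20):             # Neumann series: u <- g + (I - H_scaled) u
            sub = rng.choice(n, 64, replace=False)
            u = g + u - scale * hvp(u, sub)
        w -= u                          # approximate Newton step
    print(np.linalg.norm(A @ w - y) / np.sqrt(n))   # residual near the noise level
    ```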
  • Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions, and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning. In this monograph, we present the theory of submodular functions from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems. In particular, we show how submodular function minimization is equivalent to solving a wide variety of convex optimization problems. This allows the derivation of new efficient algorithms for approximate and exact submodular function minimization with theoretical guarantees and good practical performance. By listing many examples of submodular functions, we review various applications to machine learning, such as clustering, experimental design, sensor placement, graphical model structure learning or subset selection, as well as a family of structured sparsity-inducing norms that can be derived from submodular functions.
    Greedy algorithm, Polytope, Regularization, Extreme point, Graph, Duality, Convex hull, Linear functional, Modularity, Relaxation...
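    The greedy (Edmonds) evaluation of the Lovász extension mentioned in point (2) is short enough to show in full; the graph-cut example is ours, and $F(\emptyset)=0$ is assumed:

    ```python
    import numpy as np

    def lovasz_extension(F, w):
        """Evaluate the Lovasz extension of a normalized set function F at w:
        sort coordinates in decreasing order and telescope F along the chain."""
        order = np.argsort(-w)
        value, prev, S = 0.0, 0.0, set()
        for k in order:
            S.add(int(k))
            fS = F(S)
            value += w[k] * (fS - prev)
            prev = fS
        return value

    # example: the cut function of a 3-cycle, a classic submodular function
    edges = [(0, 1), (1, 2), (0, 2)]
    cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))

    print(lovasz_extension(cut, np.array([0.9, 0.2, -0.4])))
    # on indicator vectors the extension agrees with F itself:
    print(lovasz_extension(cut, np.array([1.0, 0.0, 1.0])), cut({0, 2}))
    ```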
  • A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but often such problems are NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy, and it nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques: popular heuristics include projected gradient descent and alternating minimization. However, these are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. The monograph will lead the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
    Optimization, Rank, EM algorithm, Machine learning, Convex set, Latent variable, NP-hard problem, Linear regression, Maximum likelihood estimate, Likelihood function...
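    One of the heuristics named above, alternating minimization, in its simplest setting, low-rank matrix approximation (our toy sizes): each half-step is an ordinary least-squares problem even though the joint problem is non-convex.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 60, 50, 3
    M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix

    # alternating minimization for min_{U,V} ||M - U V^T||_F^2
    U = rng.normal(size=(m, r))
    for _ in range(30):
        V = np.linalg.lstsq(U, M, rcond=None)[0].T     # fix U, solve for V
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T   # fix V, solve for U
    print(np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))  # ~ machine precision
    ```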
  • A thesis submitted for the degree of Doctor of Philosophy of The Australian National University. In this work we introduce several new optimisation methods for problems in machine learning. Our algorithms broadly fall into two categories: optimisation of finite sums and of graph-structured objectives. The finite sum problem is simply the minimisation of objective functions that are naturally expressed as a summation over a large number of terms, where each term has a similar or identical weight. Such objectives most often appear in machine learning in the empirical risk minimisation framework in the non-online learning setting. The second category, that of graph-structured objectives, consists of objectives that result from applying maximum likelihood to Markov random field models. Unlike the finite sum case, all the non-linearity is contained within a partition function term, which does not readily decompose into a summation. For the finite sum problem, we introduce the Finito and SAGA algorithms, as well as variants of each. For graph-structured problems, we take three complementary approaches. We look at learning the parameters for a fixed structure, learning the structure independently, and learning both simultaneously. Specifically, for the combined approach, we introduce a new method for encouraging graph structures with the "scale-free" property. For the structure learning problem, we establish SHORTCUT, an $O(n^{2.5})$ expected-time approximate structure-learning method for Gaussian graphical models. For problems where the structure is known but the parameters unknown, we introduce an approximate maximum likelihood learning algorithm that is capable of learning a useful subclass of Gaussian graphical models.
    Machine learning, Graph, Graphical model, Maximum likelihood, Structured learning, Random Field, Scale-free, Partition function, Objective, Algorithms...
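    The SAGA update itself is a few lines; a sketch for a least-squares finite sum (the problem and step size are illustrative; the general algorithm is in the thesis and the SAGA paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 500, 20
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = A @ w_true + 0.01 * rng.normal(size=n)

    # SAGA for (1/n) sum_i f_i(w) with f_i(w) = 0.5 * (a_i . w - y_i)^2
    w = np.zeros(d)
    table = A * (A @ w - y)[:, None]            # stored gradient of each f_i
    avg = table.mean(axis=0)
    step = 1.0 / (3 * np.max(np.sum(A * A, axis=1)))   # ~ 1/(3L), L = max_i ||a_i||^2

    for t in range(20 * n):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ w - y[i])
        w -= step * (g_new - table[i] + avg)    # unbiased, variance-reduced step
        avg += (g_new - table[i]) / n           # keep the running average in sync
        table[i] = g_new
    print(np.linalg.norm(w - w_true))           # small: converged near w_true
    ```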
  • This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.
    Optimization, Convex set, Polytope, Regularization, Machine learning, Saddle point, Relaxation, Stochastic optimization, Bregman divergence, Random walk...
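    One of the non-Euclidean methods mentioned, entropic mirror descent over the probability simplex, reduces to a multiplicative-weights step; a sketch on a linear objective (step size and problem are placeholders):

    ```python
    import numpy as np

    def mirror_descent_simplex(grad, x0, steps, eta):
        """Mirror descent with the negative-entropy mirror map: the update is
        multiplicative, followed by a Bregman projection (renormalization)."""
        x = x0.copy()
        for _ in range(steps):
            x = x * np.exp(-eta * grad(x))
            x /= x.sum()                 # projection back onto the simplex
        return x

    # minimize <c, x> over the probability simplex
    c = np.array([0.3, 0.1, 0.7, 0.5])
    x = mirror_descent_simplex(lambda x: c, np.full(4, 0.25), steps=200, eta=0.5)
    print(x)    # mass concentrates on argmin c (index 1)
    ```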
  • We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
    Long short term memory, Architecture, Computational linguistics, Word vectors, Word Sense Disambiguation, Part-of-speech, Training set, Recurrent neural network, Regularization, Nearest-neighbor site...
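    The way such representations are "easily added to existing models" is a learned softmax-weighted combination of the biLM layers; a sketch of just that mixing step with toy shapes (in practice the weights and scale are learned with the downstream task, not fixed as here):

    ```python
    import numpy as np

    def elmo_combine(layer_states, s_logits, gamma):
        """ELMo_t = gamma * sum_j softmax(s)_j * h_{t,j} over the L biLM layers."""
        s = np.exp(s_logits - s_logits.max())
        s /= s.sum()
        # layer_states: (L, seq_len, dim) stack of biLM layer outputs
        return gamma * np.tensordot(s, layer_states, axes=1)

    L, T, D = 3, 5, 8   # layers, tokens, hidden size (toy numbers)
    h = np.random.default_rng(0).normal(size=(L, T, D))
    print(elmo_combine(h, s_logits=np.zeros(L), gamma=1.0).shape)   # (5, 8)
    ```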
  • This article reviews recent developments in tests of fundamental physics using atoms and molecules, including the subjects of parity violation, searches for permanent electric dipole moments, tests of the CPT theorem and Lorentz symmetry, searches for spatiotemporal variation of fundamental constants, tests of quantum electrodynamics, tests of general relativity and the equivalence principle, searches for dark matter, dark energy and extra forces, and tests of the spin-statistics theorem. Key results are presented in the context of potential new physics and in the broader context of similar investigations in other fields. Ongoing and future experiments of the next decade are discussed.
    Precision, Axion, Electric dipole moment, Standard Model, Quantum electrodynamics, Earth, Lasers, Hidden photon, Axion-like particle, Torsion tensor...
  • The observed surface densities of dark matter halos are known to follow a simple scaling law, ranging from dwarf galaxies to galaxy clusters, with a weak dependence on their virial mass. Here we point out that this can be used not only to determine the standard relation between halo mass and concentration, but also to place constraints on dark matter self-interactions from large samples of objects, constraints that can be more robust than those derived from individual objects. We demonstrate our method by considering a sample of about 50 objects distributed across the whole halo mass range, and by modelling the effect of self-interactions in a way similar to what has been previously done in the literature. Using additional input from simulations then results in a constraint on the self-interaction cross section per unit dark matter mass of about $\sigma/m_\chi\lesssim 0.3$ cm$^2$/g. We expect that these constraints can be significantly improved in the future, and made more robust, by i) improved modelling of the effect of self-interactions, both theoretically and by comparison with simulations, ii) taking into account a larger sample of objects, and iii) reducing the currently still relatively large uncertainties that we conservatively assign to the surface densities of individual objects. The latter can be achieved in particular by using kinematic observations to directly constrain the average halo mass inside a given radius, rather than fitting the data to a pre-selected profile and then reconstructing the mass. For a velocity-independent cross-section, our current result is formally already somewhat smaller than the range $0.5-5$ cm$^2$/g that has been invoked to explain potential inconsistencies between small-scale observations and expectations in the standard collisionless cold dark matter paradigm.
    Dark matter, Self-interacting dark matter, Navarro-Frenk-White profile, Virial mass, Cored dark matter density profile, Core radius, Scaling law, Self-interaction cross-section, Dark Matter Density Profile, Cosmology...
  • We point out the existence of a new general relativistic contribution to the perihelion advance of Mercury that, while smaller than the contributions arising from the solar quadrupole moment and angular momentum, is 100 times larger than the second-post-Newtonian contribution. It arises in part from relativistic "cross-terms" in the post-Newtonian equations of motion between Mercury's interaction with the Sun and with the other planets, and in part from an interaction between Mercury's motion and the gravitomagnetic field of the moving planets. At a few parts in $10^6$ of the leading general relativistic precession of 42.98 arcseconds per century, these effects are likely to be detectable by the BepiColombo mission to place and track two orbiters around Mercury, scheduled for launch around 2018.
    Mercury, Perihelion, Planet, Sun, General relativity, Scheduling, Quadrupole, Periapsis, Lunar Laser Ranging experiment, Helioseismology...
  • On the verge of the centenary of dimensional analysis (DA), we present a generalisation of the theory and a methodology for the discovery of empirical laws from observational data. It is well known that DA: a) reduces the number of free parameters, b) guarantees scale invariance through dimensional homogeneity, and c) extracts functional information encoded in the dimensionless grouping of variables. Less well known are the results of Rudolph and co-workers showing that DA also gives rise to a new pair of transforms: the similarity transform (S), which converts physical dimensional data into dimensionless space, and its inverse (S'). Here, we present a new matrix generalisation of the Buckingham Theorem, made possible by recent developments in the theory of inverse non-square matrices, and show how the transform pair arises naturally. We demonstrate that the inverse transform S' is non-unique and show how this casts doubt on scaling relations obtained in cases where observational data has not been referred to in order to break the degeneracy inherent in transforming back to dimensional (physical) space. As an example, we show how the underlying functional form of the Planck Radiation Law can be deduced in only a few lines using the matrix method and without appealing to first principles, thus demonstrating the possibility of a priori knowledge discovery, although subsequent data analysis is still required in order to identify the exact causal law. It is hoped that the proof presented here will give theoreticians confidence to pursue inverse problems in physics using DA.
    Scaling law, Homogenization, Planck mission, Knowledge discovery, Block matrix, Intensity, Scale invariance, Ranking, Causality, Dimensional Reduction...
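    The classical (non-matrix-generalised) computation behind DA is a null-space calculation on the dimension matrix; for instance, for a pendulum with period $t$, mass $m$, length $l$ and gravity $g$ (our example, not the paper's):

    ```python
    import sympy

    # columns: t, m, l, g; rows: fundamental dimensions M, L, T
    dim_matrix = sympy.Matrix([
        [0, 1, 0,  0],   # mass
        [0, 0, 1,  1],   # length
        [1, 0, 0, -2],   # time
    ])
    for vec in dim_matrix.nullspace():   # each null vector is a dimensionless group
        print(vec.T)                     # [2, 0, -1, 1]  =>  Pi = t^2 g / l
    ```

    The single group $\Pi = t^2 g/l$ recovers $t \propto \sqrt{l/g}$ and shows the mass dropping out, which is exactly the "functional information" point (c) above; the paper's contribution concerns inverting such transforms when they are not square.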
  • We extend the colored Zee-Babu model with a gauged $U(1)_{B-L}$ symmetry and a scalar singlet dark matter (DM) candidate $S$. The spontaneous breaking of $U(1)_{B-L}$ leaves a residual $Z_2$ symmetry that stabilizes the DM and generates tiny neutrino masses at the two-loop level with the color seesaw mechanism. We investigate the imprint of this model on two cosmic-ray anomalies: the Fermi-LAT gamma-ray excess at the Galactic Center (GCE) and the PeV ultra-high energy (UHE) neutrino events at the IceCube. We find that the Fermi-LAT GCE spectrum can be well fitted by DM annihilation into a pair of on-shell singlet Higgs mediators while being compatible with the constraints from relic density, direct detection, as well as dwarf spheroidal galaxies in the Milky Way. Although the UHE neutrino events at the IceCube could be accounted for by resonance production of a TeV-scale leptoquark, the relevant Yukawa couplings have been severely constrained by current low-energy flavor physics. We then derive further limits on the Yukawa couplings by employing the six-year IceCube data.
    IceCube Neutrino Observatory, Dark matter, Neutrino, Neutrino mass, Yukawa coupling, Standard Model, FERMI telescope, Ultra-high energy neutrino, Relic abundance, Dark matter annihilation...