Recently bookmarked papers

with concepts:
  • We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert this data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's "entanglement equilibrium" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime.
    Entanglement, Radon transform, Entanglement entropy, Einstein field equations, Entropy, Hamiltonian, AdS/CFT correspondence, Mutual information, Quantum error correction, Degree of freedom...
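The inversion step mentioned above rests on the Radon transform: line integrals of a density over a family of lines. A minimal numerical sketch of the forward transform for a toy 2D density (not the paper's bulk construction; the grid size and angle set are arbitrary illustrative choices):

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Line-integral projections of `image` at the given angles (degrees)."""
    return np.stack([
        rotate(image, theta, reshape=False, order=1).sum(axis=0)
        for theta in angles_deg
    ])

# Toy density: a centered Gaussian bump on a 64x64 grid.
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
img = np.exp(-(X**2 + Y**2) / 0.1)

sino = radon(img, angles_deg=range(0, 180, 10))
print(sino.shape)  # (18, 64): one projection per angle
```

Inverting such projection data back to the density (and, in the paper's setting, to a spatial metric) is the classical Radon-inversion problem.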
  • The self-similarity properties of fractals are studied in the framework of the theory of entire analytical functions and the $q$-deformed algebra of coherent states. Self-similar structures are related to dissipation and to noncommutative geometry in the plane. The examples of the Koch curve and logarithmic spiral are considered in detail. It is suggested that the dynamical formation of fractals originates from the coherent boson condensation induced by the generators of the squeezed coherent states, whose (fractal) geometrical properties thus become manifest. The macroscopic nature of fractals appears to emerge from microscopic coherent local deformation processes.
    Fractal, Self-similarity, Dissipation, Noncommutative geometry, Squeezed coherent state, Interference, Condensation, Time-reversal symmetry, Degree of freedom, Hamiltonian...
  • The Legendre transform is an important tool in theoretical physics, playing a critical role in classical mechanics, statistical mechanics, and thermodynamics. Yet in typical undergraduate or graduate courses, the motivation for and elegance of the method are often missing, unlike the treatments frequently enjoyed by Fourier transforms. We review and modify the presentation of Legendre transforms in a way that explicates the formal mathematics, resulting in manifestly symmetric equations and thereby clarifying the structure of the transform algebraically and geometrically. We then bring in the physics to motivate the transform as a way of choosing independent variables that are more easily controlled. We demonstrate how the Legendre transform arises naturally from statistical mechanics and show how the use of dimensionless thermodynamic potentials leads to more natural and symmetric relations.
    Statistical mechanics, Thermodynamic potentials, Entropy, Hamiltonian, Helmholtz Free Energy, Laplace transform, Partition function, Enthalpy, Non-convexity, Saddle point...
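As a concrete companion to the abstract above, here is the Legendre transform in its symmetric "sup" form, g(p) = max_v [p·v - f(v)], evaluated on a grid for f(v) = v²/2, whose transform is analytically p²/2. The grid bounds and sample points are arbitrary choices:

```python
import numpy as np

def legendre(f_vals, v, p):
    """Discrete Legendre transform of samples f_vals = f(v) at slopes p."""
    # g(p) = max over v of [p*v - f(v)], done by brute force on the grid.
    return np.max(p[:, None] * v[None, :] - f_vals[None, :], axis=1)

v = np.linspace(-5, 5, 2001)      # sampling grid for v
p = np.linspace(-2, 2, 5)         # slopes at which to evaluate g
g = legendre(v**2 / 2, v, p)
print(np.round(g, 3))             # approximately p**2/2: [2, 0.5, 0, 0.5, 2]
```

The brute-force maximization makes the "choose the more controllable independent variable" idea explicit: for each slope p, the transform picks out the point where f has that slope.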
  • The centuries-long practice of teaching turned mechanics into an academic construct detached from its underlying science, the physics of macroscopic bodies. In particular, the regularities that delineate the scope of validity of Newtonian mechanics were never used as premises in that construct. Instead, its logical structure has been built with the sole purpose of easing the presentation of mechanics as an application of calculus and algebra. This leaves no room for an explicit physical description of the fundamental notions of the construct, such as the ability of (a system of) physical bodies to keep a spatial form, symbolized by a rigid measuring rod, and the possibility of counting out equal temporal intervals, symbolized by a standard clock, let alone the origin of the basic mechanical quantities. Comparison between (states of) physical objects is possible insofar as natural regularities enable a researcher to define an equivalence relation on a set of such objects. The class of macroscopic (or macroscopically perceived) objects is the one in which certain composition rules hold, so that the constituents of any object are macroscopic too. The properties of compositions of macroscopic objects, in conjunction with the equivalence relations between them, then yield a numerical characterization of each macroscopic object, i.e. a basic physical quantity. The technique of logically arranged feasible/thought experiments is used to formulate logically impeccable, induction-based definitions for two quantities: the length of a line segment, as an instructive toy example, and the mass of a macroscopic body, as an example applicable to the real world.
    Regularization, Physical body, Liquids, Conjunction, Quantum mechanics, Surface tension, Arithmetic, Inclination, Orientation, Physics Education...
  • Gauge invariance is the basis of the modern theory of electroweak and strong interactions (the so-called Standard Model). The roots of gauge invariance go back to the year 1820, when electromagnetism was discovered and the first electrodynamic theory was proposed. Subsequent developments led to the discovery that different forms of the vector potential result in the same observable forces. The partial arbitrariness of the vector potential A brought forth various restrictions on it: div A = 0 was proposed by J. C. Maxwell; 4-div A = 0 was proposed by L. V. Lorenz in the mid-1860s. In most modern texts the latter condition is attributed to H. A. Lorentz, who half a century later was one of the key figures in the final formulation of classical electrodynamics. In 1926 a relativistic quantum-mechanical equation for charged spinless particles was formulated by E. Schrodinger, O. Klein, and V. Fock. The latter discovered that this equation is invariant with respect to multiplication of the wave function by a phase factor exp(ieX/hc), with the accompanying additions of -dX/cdt to the scalar potential and grad X to the vector potential. In 1929 H. Weyl proclaimed this invariance as a general principle, calling it Eichinvarianz in German and gauge invariance in English. The present era of non-abelian gauge theories started in 1954 with the paper by C. N. Yang and R. L. Mills.
    Gauge invariance, Electrodynamics, Strong interactions, Quantum mechanics, Yang-Mills theory, Standard Model, Potential, Vector, Electromagnet, Charge...
  • The concept of gauge invariance in classical electrodynamics tacitly assumes that Maxwell's equations have unique solutions. By calculating the electromagnetic field of a moving particle both in Lorenz and in Coulomb gauge and directly from the field equations, however, we obtain contradictory solutions. We conclude that the tacit assumption of uniqueness is not justified. The reason for this failure is traced back to the inhomogeneous wave equations, which connect the propagating fields and their sources at the same time.
    Wave equation, Electrodynamics, Gauge invariance, Point source, Regularization, Gauge transformation, Near-field, Hydrodynamic description, Field theory, Acoustic wave...
  • Within the past fifteen years the use of the concept of "relativistic mass" has been on the decline, replaced by the concept of "proper mass" (aka "rest mass"), simply referred to as "mass" and labeled "m" by its proponents. This decline in usage appears to be due to arguments presented in several journal articles over the last thirty-five years, as well as to standard practices in the field of particle physics. The debate consists of arguments as to how the term "mass" should be defined to maximize logic and to be less confusing to the layman and to people just starting to learn relativity. Lacking in the debate is a clear definition of all types of mass and all their usages in a wide variety of cases. The purpose of this article is to bring a unifying perspective to the subject. In doing so I explore points omitted from previous articles on this subject, including the importance of point particles vs. extended objects, open vs. closed systems, and gravitational mass. Although I argue for the usage of relativistic mass, I do "not" argue that proper mass is not an important tool in relativistic dynamics.
    Mass, Object, Gravitation, Objective, Particles, Particle physics, Rest mass, Field...
  • A lengthy bibliography of books referring to special and/or general relativity is provided to give a background for discussions on the historical use of the concept of relativistic mass.
  • The concept of velocity-dependent mass, relativistic mass, is examined and is found to be inconsistent with the geometrical formulation of special relativity. This is not a novel result; however, many continue to use this concept, and some have even attempted to establish it as the basis for special relativity. It is argued that the oft-held view that formulations of relativity with and without relativistic mass are equivalent is incorrect. Even when relativistic mass is left as a heuristic device, a preliminary study of first-time learners suggests that misconceptions can develop when the concept is introduced without basis. In order to gauge the extent and nature of the use of relativistic mass, a survey of the literature on relativity has been undertaken. The varied and at times self-contradictory use of this concept points to the lack of a clear consensus on the formulation of relativity. As geometry lies at the heart of all modern representations of relativity, it is urged, once again, that the use of the concept be abandoned at all levels.
    Special relativity, Lorentz transformation, Kinematics, Time dilation, Quantum mechanics, Covariance, Hyperbolic geometry, Primary star in a binary system, General relativity, Dilation...
  • Various facets of the concept of mass are discussed: the masses of elementary particles and the search for the Higgs boson; the masses of hadrons; the pedagogical virus of relativistic mass.
    Earth, Methane, International Linear Collider, Neutrino, Gravitational fields, General relativity, Gravitational force, Hydrogen atom, Gravitational interaction, Electrodynamics...
  • We study the enhancement of cooperativity in the atom-light interface near a nanophotonic waveguide for application to quantum nondemolition (QND) measurement of atomic spins. Here the cooperativity per atom is determined by the ratio between the measurement strength and the decoherence rate. Counterintuitively, we find that by placing the atoms at an azimuthal position where the guided probe mode has the lowest intensity, we increase the cooperativity. This arises because the QND measurement strength depends on the interference between the probe and scattered light guided into an orthogonal polarization mode, while the decoherence rate depends on the local intensity of the probe. Thus, by proper choice of geometry, the ratio of good to bad scattering can be strongly enhanced for highly anisotropic modes. We apply this to study spin squeezing resulting from QND measurement of spin projection noise via the Faraday effect in two nanophotonic geometries, a cylindrical nanofiber and a square waveguide. We find, with about 2500 atoms using realistic experimental parameters, $ \sim 6.3 $ dB and $ \sim 13 $ dB of squeezing can be achieved on the nanofiber and square waveguide, respectively.
    Nanofiber, Nanophotonics, Optical pumping, Intensity, Master equation, Spontaneous emission, Faraday effect, Excited state, Coherent state, Polarimeters...
  • A new formalism for the perturbative construction of algebraic quantum field theory is developed. The formalism allows the treatment of low-dimensional theories and of non-polynomial interactions. We discuss the connection between the Stueckelberg-Petermann renormalization group, which describes the freedom in the perturbative construction, and the Wilsonian idea of theories at different scales. In particular, we relate the approach to renormalization in terms of Polchinski's flow equation to the Epstein-Glaser method. We also show that the renormalization group in the sense of Gell-Mann and Low (which characterizes the behaviour of the theory under a change of all scales) is a one-parameter subfamily of the Stueckelberg-Petermann group, and that this subfamily is in general only a cocycle. Since the algebraic structure of the Stueckelberg-Petermann group does not depend on global quantities, the group can be formulated in the (algebraic) adiabatic limit without meeting any infrared divergences. In particular, we derive an algebraic version of the Callan-Symanzik equation and define the beta function in a state-independent way.
    Renormalization group, Renormalization, Homogenization, Regularization, Algebraic quantum field theory, Quantum field theory, Functional derivative, Effective potential, Covariance, Scale invariance...
  • It is shown that the most important effects of the special and general theory of relativity can be understood in a simple and straightforward way. The system of units in which the speed of light $c$ is the unit of velocity makes it possible to cast all formulas in a very simple form. The Pythagorean theorem graphically relates energy, momentum and mass. The paper is addressed to those who teach and popularize the theory of relativity.
    Gravitational fields, Sun, Speed of light, Neutrino, Graviton, Deuteron, Large Hadron Collider, Earth, Kaon, Dark energy...
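The "Pythagorean theorem" the abstract alludes to is the relation E² = p² + m² in units with c = 1: mass and momentum are the legs of a right triangle and energy is the hypotenuse. A one-line illustration (the numbers are illustrative, not from the paper):

```python
import math

def energy(m, p):
    """Total energy of a free particle with mass m and momentum p (c = 1)."""
    return math.hypot(m, p)  # sqrt(m**2 + p**2): the hypotenuse

E = energy(3.0, 4.0)
print(E)  # 5.0: the 3-4-5 right triangle
```

The massless limit m = 0 gives E = |p| (light), and p = 0 gives E = m (the rest energy), both as degenerate triangles.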
  • This note is an attempt to explain in simple words why the famous relation $E=mc^2$ misrepresents the essence of Einstein's relativity theory. The note is addressed to high-school teachers, and a part of it - to those university professors who permit themselves to say that the mass of a body increases with its velocity or momentum and thus mislead the teachers and their students. ----- Contains both English and Russian versions.
    Special relativity, Large Hadron Collider, Earth, Large Electron-Positron Collider, Magellanic Clouds, Star, General relativity, Quantum mechanics, Einstein field equations, CERN...
  • We review the history of the road to a manifestly covariant perturbative calculus within quantum electrodynamics from the early semi-classical results of the mid-twenties to the complete formalism of Stueckelberg in 1934. We chose as our case study the calculation of the cross-section of the Compton effect. We analyse Stueckelberg's paper extensively. This is our first contribution to a study of his fundamental contributions to the theoretical physics of twentieth century.
  • This is a review of ($q$-)hypergeometric orthogonal polynomials and their relation to the representation theory of quantum groups, to matrix models, to integrable theory, and to knot theory. We discuss both continuous and discrete orthogonal polynomials and consider their various generalizations. The review also places the orthogonal polynomials within a generic framework of ($q$-)hypergeometric functions and their integral representations. In particular, this gives rise to relations with conformal blocks of the Virasoro algebra.
    Orthogonal polynomials, Hypergeometric function, Random matrix theory, Askey-Wilson polynomials, Racah polynomials, Knot polynomial, Knot invariant, Partition function, Jacobi polynomials, Modular transformation...
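The defining property shared by all the families reviewed above is orthogonality with respect to a weight. A minimal numerical check for the classical Legendre case (weight 1 on [-1, 1]), using Gauss-Legendre quadrature; the quadrature order is an arbitrary choice that is exact for the degrees involved:

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = L.leggauss(20)                      # 20-node rule, exact to degree 39
P = lambda n: L.legval(x, [0] * n + [1])   # values of P_n at the nodes

inner = lambda m, n: np.sum(w * P(m) * P(n))   # weighted inner product
print(abs(inner(2, 3)) < 1e-12)   # True: P_2 and P_3 are orthogonal
print(inner(3, 3))                # norm squared, analytically 2/(2*3+1) = 2/7
```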
  • In 1938, Stueckelberg introduced a scalar field which makes an Abelian gauge theory massive but preserves gauge invariance. The Stueckelberg mechanism is the introduction of new fields to reveal a symmetry of a gauge-fixed theory. We first review the Stueckelberg mechanism in the massive Abelian gauge theory. We then extend this idea to the Standard Model, stueckelberging the hypercharge U(1) and thus giving a mass to the physical photon. This introduces an infrared regulator for the photon in the standard electroweak theory, along with a modification of the weak mixing angle accompanied by a plethora of new effects. Notably, neutrinos couple to the photon, and charged leptons also acquire a pseudo-vector coupling. Finally, we review the historical influence of Stueckelberg's 1938 idea, which led to applications in many areas not anticipated by the author, such as strings. We describe the numerous proposals to generalize the Stueckelberg trick to the non-Abelian case with the aim of finding alternatives to the Standard Model. Nevertheless, the Higgs mechanism of spontaneous symmetry breaking remains the only presently known way to give masses to non-Abelian vector fields in a renormalizable and unitary theory.
    Quantum electrodynamics, Unitarity, Gauge fixing, Scalar field, Gauge invariance, Standard Model, Nilpotent, Gauge transformation, Yang-Mills theory, Higgs mechanism...
  • Quantum mechanics is not about 'quantum states': it is about values of physical variables. I give a short fresh presentation and update on the $relational$ perspective on the theory, and a comment on its philosophical implications.
    Quantum mechanics, Quantum theory, Phase space, Foundation of Physics, Planck's constant, Interference, Degree of freedom, Orientation, Interpretations of quantum mechanics, Quantum gravity...
  • The choice of electromagnetic potentials employed to represent electric and magnetic fields is shown to carry with it the possibility of violating basic symmetries in both classical and quantum physical problems. This can lead to incorrect results in dynamical calculations with an improper set of potentials. Preservation of symmetries constitutes a robust reason to regard electromagnetic potentials as more fundamental than electromagnetic fields, representing a major extension of the quantum-only example of the Aharonov-Bohm effect for accepting the primacy of potentials over fields. The demonstration of symmetry violation caused by an inappropriate gauge choice bears on such fundamental matters as the fact that gauge transformations are not necessarily unitary, since they do not ensure preservation of the values of physical observables.
    Gauge transformation, Lasers, Plane wave, Unitary transformation, Aharonov-Bohm effect, Hamiltonian, Quantum mechanics, Light cones, Wave equation, Charged particle...
  • We study a Dirac fermion model with three kinds of disorder as well as a marginal interaction which forms the critical line of $c=1$ conformal field theory. Computing scaling equations by the use of a perturbative renormalization group method, we investigate how such an interaction affects the universality classes of disordered systems with non-interacting fermions. We show that some specific fixed points are stable against an interaction, whereas others are unstable and flow to new random critical points with a finite interaction.
    Density of states, Dirac fermion, Universality class, Coupling constant, Hamiltonian, Critical line, Renormalization group, Ising model, Two-point correlation function, Renormalisation group equations...
  • We compute the expected value of the cosmological constant in our universe from the Causal Entropic Principle. Since observers must obey the laws of thermodynamics and causality, the principle asserts that physical parameters are most likely to be found in the range of values for which the total entropy production within a causally connected region is maximized. Despite the absence of more explicit anthropic criteria, the resulting probability distribution turns out to be in excellent agreement with observation. In particular, we find that dust heated by stars dominates the entropy production, demonstrating the remarkable power of this thermodynamic selection criterion. The alternative approach - weighting by the number of observers per baryon - is less well-defined, requires problematic assumptions about the nature of observers, and yet prefers values larger than present experimental bounds.
    Entropy production, Diamond, Star, Entropy, Star formation rate, Cosmological constant, Galaxy Formation, Luminosity, Vacuum energy, Vacuum state...
  • We study the time evolution of an early universe driven by a cosmological constant $\Lambda_4$ and supersymmetric Yang-Mills (SYM) fields in the Friedmann-Robertson-Walker (FRW) space-time. The renormalized vacuum expectation value of the energy-momentum tensor of the SYM theory is obtained in a holographic way. It includes a radiation component of the SYM field, parametrized as $C$. The evolution is controlled by this radiation $C$ and the cosmological constant $\Lambda_4$. For positive $\Lambda_4$, an inflationary solution is obtained at late time. When $C$ is added, the quantum-mechanical situation at early time changes substantially. We perform the early-time analysis in terms of two different approaches: (i) the Wheeler-DeWitt equation and (ii) the Lorentzian path integral with the Picard-Lefschetz method, introducing an effective action. The results of the two methods are compared.
    Super Yang-Mills theory, Effective action, Path integral, Saddle point, Propagator, Cosmological constant, Quantum cosmology, Friedmann Robertson Walker, Friedmann-Lemaitre-Robertson-Walker metric, Dark Radiation...
  • We demonstrate the existence of static, spherically symmetric globally regular, i.e. solitonic solutions of a shift-symmetric scalar-tensor gravity model with negative cosmological constant. The norm of the Noether current associated to the shift symmetry is finite in the full space-time. We also discuss the corresponding black hole solutions and demonstrate that the interplay between the scalar-tensor coupling and the cosmological constant leads to the existence of new branches of solutions. To linear order in the scalar-tensor coupling, the asymptotic space-time corresponds to an Anti-de Sitter space-time with a non-trivial scalar field on its conformal boundary. This allows the interpretation of our solutions in the context of the AdS/CFT correspondence. Finally, we demonstrate that - for physically relevant, small values of the scalar-tensor coupling - solutions with positive cosmological constant do not exist in our model.
    Black hole, Cosmological constant, Scalar field, Soliton, Anti de Sitter space, Horizon, General relativity, AdS/CFT correspondence, Gravitational wave, Shift symmetry...
  • High-intensity proton beams impinging on a fixed target or beam dump make it possible to probe new physics via the production of new weakly-coupled particles in hadron decays. The CERN SPS provides opportunities to do so with the running NA62 experiment and the planned SHiP experiment. Reconstruction of kaon decay kinematics (beam mode) allows NA62 to probe for the existence of right-handed neutrinos and dark photons with masses below 0.45 GeV. Direct reconstruction of displaced vertices from the decays of new neutral particles (dump mode) will allow NA62 and SHiP to probe right-handed neutrinos with masses up to 5 GeV and mixings down to several orders of magnitude smaller than current constraints, in regions favoured by models which explain at once neutrino masses, the matter-antimatter asymmetry, and dark matter.
    Sterile neutrino, SHiP experiment, NA62 experiment, Hidden photon, Hidden sector, Proton beam, Super Proton Synchrotron, Neutrino, Muon, Intensity...
  • Horizon Quantum Mechanics is an approach that allows one to analyse the gravitational radius of spherically symmetric systems and compute the probability that a given quantum state is a black hole. We first review the (global) formalism and show how it reproduces a gravitationally inspired GUP relation. This result leads to unacceptably large fluctuations in the horizon size of astrophysical black holes if one insists on describing them as (smeared) central singularities. On the other hand, if they are extended systems, as in the corpuscular models, no such issue arises, and one can in fact extend the formalism to include asymptotic mass and angular momentum with the harmonic model of rotating corpuscular black holes. Horizon Quantum Mechanics then shows that, in simple configurations, the appearance of the inner horizon is suppressed and extremal (macroscopic) geometries seem disfavoured.
    Horizon, Black hole, Schwarzschild radius, Quantum mechanics, Graviton, Hamiltonian, Arnowitt-Deser-Misner, Generalized uncertainty principle, Classical black holes, Spectral decomposition...
  • Some observations about the local and global generality of gradient Kahler Ricci solitons are made, including the existence of a canonically associated holomorphic volume form and vector field, the local generality of solutions with a prescribed holomorphic volume form and vector field, and the existence of Poincare coordinates in the case that the Ricci curvature is positive and the vector field has a fixed point.
    Soliton, Holomorph, Ricci tensor, Ricci flow, Periodic orbit, Biholomorphism, Scalar curvature, Kahler potential, Subgroup, Diagonal subgroup...
  • Analog condensed matter systems present an exciting opportunity to simulate early Universe models in table-top experiments. We consider a recent proposal for an analog condensed matter experiment to simulate the relativistic quantum decay of the false vacuum. In the proposed experiment, two ultra-cold condensates are coupled via a time-varying radio-frequency field. The relative phase of the two condensates in this system is approximately described by a relativistic scalar field with a potential possessing a series of false and true vacuum local minima. If the system is set up in a false vacuum, it would then decay to a true vacuum via quantum mechanical tunnelling. Should such an experiment be realized, it would be possible to answer a number of open questions regarding non-perturbative phenomena in quantum field theory and early Universe cosmology. In this paper, we illustrate a possible obstruction: the time-varying coupling that is invoked to create a false vacuum for the long-wavelength modes of the condensate leads to a destabilization of shorter wavelength modes within the system via parametric resonance. We focus on an idealized setup in which the two condensates have identical properties and identical background densities. Describing the system by the coupled Gross-Pitaevskii equations (GPE), we use the machinery of Floquet theory to perform a linear stability analysis, calculating the wavenumber associated with the first instability band for a variety of experimental parameters. However, we demonstrate that, by tuning the frequency of the time-varying coupling, it may be possible to push the first instability band outside the validity of the GPE, where dissipative effects are expected to damp any instabilities. This provides a viable range of experimental parameters to perform analog experiments of false vacuum decay.
    False vacuum, Instability, Hamiltonian, Bose-Einstein condensate, Vacuum decay, Effective Lagrangian, Scalar field, Zero mode, Effective action, Cosmology...
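The Floquet stability analysis described above can be sketched on the closely related Mathieu equation x'' + (a + 2q cos 2t) x = 0 rather than the full coupled GPE: evolve two independent solutions over one period, form the monodromy matrix M, and flag parametric-resonance instability when |tr M| > 2. The parameter values below are illustrative, not the experimental ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(a, q, period=np.pi):
    """Trace of the one-period monodromy matrix of the Mathieu equation."""
    def rhs(t, y):
        return [y[1], -(a + 2 * q * np.cos(2 * t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):  # two independent initial conditions
        sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.trace(np.column_stack(cols))

# a = 1, q = 0.5 lies inside the first instability tongue (tongues open
# at a = n**2); a = 3, q = 0.1 lies in a stable region between tongues.
print(abs(monodromy_trace(1.0, 0.5)) > 2.0)  # unstable: True
print(abs(monodromy_trace(3.0, 0.1)) > 2.0)  # stable: False
```

Scanning this criterion over wavenumber (which sets a in the reduced mode equation) is what locates the first instability band the abstract refers to.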
  • Exactly 500 years ago, Nicolaus Copernicus drew a lattice of lines on a panel above the doorway to his rooms at Olsztyn Castle, then in the Bishopric of Warmia. Although its design has long been regarded as some kind of reflecting vertical sundial, the exact astronomical designation of the lines and related measuring techniques remained unknown. Surprisingly, Copernicus did not refer to his new observational methods in his principal work, \textit{De Revolutionibus}. A data analysis of a 3D model of the panel has, at last, solved the mystery: Copernicus created a new type of measuring device -- a heliograph with a non-local reference meridian -- to precisely measure ecliptic longitudes of the Sun around the time of the equinoxes. The data, 3D model and modeling results of our analysis are open access and available in the form of digital (Jupyter) notebooks.
    Sun, Meridian, Equinox, Ecliptic longitude, Orientation, Measuring devices, Hour angle, Calendars, Counting, Sunlight...
  • We present a detailed analysis of the Bc form factors in the BSW framework, by investigating the effects of the flavor dependence on the average transverse quark momentum inside a meson. Branching ratios of two body decays of Bc meson to pseudoscalar and vector mesons are predicted.
    Form factor, Branching ratio, Vector meson, Pseudoscalar, Mesons, Quarks, Momentum...
  • We consider the question of identifying the bulk space-time of the SYK model. Focusing on the signature of emergent space-time of the (Euclidean) model, we explain the need for non-local (Radon-type) transformations on external legs of $n$-point Green's functions. This results in a dual theory with Euclidean AdS signature with additional leg-factors. We speculate that these factors incorporate the coupling of additional bulk states similar to the discrete states of 2d string theory.
    Sachdev-Ye-Kitaev model, Propagator, Anti de Sitter space, Eigenfunction, De Sitter space, Higher spin, Two-dimensional String Theory, Degree of freedom, Completeness, Modified Bessel Function...
  • We present results for different observables measured in semileptonic and non-leptonic decays of the $B_c^-$ meson. The calculations have been done within the framework of a nonrelativistic constituent quark model. In order to check the sensitivity of our results to the inter-quark interaction, we use five different quark-quark potentials. We obtain form factors, decay widths and asymmetry parameters for semileptonic $B_c^-\to c\bar c$ and $B_c^-\to \bar B$ decays. In the limit of infinite heavy quark mass our model reproduces the constraints of heavy quark spin symmetry. For the actual heavy quark masses we nonetheless find large corrections to that limiting situation for some form factors. We also analyze exclusive non-leptonic two-meson decay channels within the factorization approximation.
    Form factor, Heavy quark, Helicity, Hadronization, Center of mass frame, Branching ratio, Constituent quark, Pseudoscalar, Charged lepton, Semileptonic decay...
  • Inspired by the recent measurement of the ratio of $B_c$ branching fractions to $J/\psi \pi^+$ and $J/\psi \mu^+\nu_{\mu}$ final states at the LHCb detector, we study the semileptonic decays of the $B_c$ meson to the S-wave ground and radially excited 2S and 3S charmonium states with the perturbative QCD approach. After evaluating the form factors for the transitions $B_c\rightarrow P,V$, where $P$ and $V$ denote pseudoscalar and vector S-wave charmonia, respectively, we calculate the branching ratios for all these semileptonic decays. The theoretical uncertainties from hadronic input parameters are reduced by utilizing the light-cone wave function for the $B_c$ meson. It is found that the predicted branching ratios range from $10^{-6}$ up to $10^{-2}$ and could be measured by the future LHCb experiment. Our prediction for the ratio of branching fractions $\frac{\mathcal {BR}(B_c^+\rightarrow J/\Psi \pi^+)}{\mathcal {BR}(B_c^+\rightarrow J/\Psi \mu^+\nu_{\mu})}$ is in good agreement with the data. For $B_c\rightarrow V l \nu_l$ decays, the relative contributions of the longitudinal and transverse polarizations are discussed in different regions of momentum transfer squared. These predictions will be tested in ongoing and forthcoming experiments.
    Branching ratio, Form factor, Semileptonic decay, Light cones, S-wave, Perturbative QCD, LHCb, Excited state, Perturbation theory, Quantum chromodynamics...
  • A recent publication (J.D. Anderson et al., EPL 110, 1002) presented a strong correlation between the measured values of the gravitational constant $G$ and the 5.9-year oscillation of the length of day. Here, we provide a compilation of all published measurements of $G$ taken over the last 35 years. A least-squares regression to a sine with a period of 5.9 years still yields a better fit than a straight line. However, our additions and corrections to the $G$ data reported by Anderson {\it et al.} significantly weaken the correlation.
    Torsion tensor, Regression, Least squares, Systematic error, Electrostatics, Mass distribution, Calibration, Maryland, Solar activity, Interferometers...
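The model comparison described above, a least-squares fit of a straight line versus a fixed-period 5.9-year sinusoid, can be sketched as follows. The data here are synthetic stand-ins with an injected sinusoid, not the published $G$ measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1980, 2015, 30)   # measurement epochs in years (synthetic)
P = 5.9                           # fixed period in years
# Synthetic "measurements": a small sinusoidal modulation plus noise.
y = 6.674e-11 * (1 + 2e-4 * np.sin(2 * np.pi * t / P)) \
    + rng.normal(0, 5e-15, t.size)

def fit_residual(design, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.sum((design @ coef - y) ** 2)

line = np.column_stack([np.ones_like(t), t])
sine = np.column_stack([np.ones_like(t),
                        np.sin(2 * np.pi * t / P),
                        np.cos(2 * np.pi * t / P)])

# The sinusoid captures the injected signal, so its residual is smaller.
print(fit_residual(sine, y) < fit_residual(line, y))
```

With the real compilation, the paper's point is that such a comparison is sensitive to which measurements are included and how they are corrected.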
  • We study the one-dimensional complex conformal manifold that controls the infrared dynamics of a three-dimensional $\mathcal{N}=2$ supersymmetric theory of three chiral superfields with a cubic superpotential. Two special points on this conformal manifold are the well-known XYZ model and three decoupled copies of the critical Wess-Zumino model. The conformal manifold enjoys a discrete duality group isomorphic to $S_4$ and can be thought of as an orbifold of $\mathbf{CP}^1$. We use the $4-\varepsilon$ expansion and the numerical conformal bootstrap to calculate the spectrum of conformal dimensions of low-lying operators and their OPE coefficients, and find a very good quantitative agreement between the two approaches.
    Manifold · Scaling dimension · Duality · OPE coefficients · Superpotential · Conformal field theory · Conformal Bootstrap · Flavour symmetry · Coupling constant · Superconformal field theory · ...
  • We recently used Virasoro symmetry considerations to propose an exact formula for a bulk proto-field $\phi$ in AdS$_3$. In this paper we study the propagator $\langle \phi \phi \rangle$. We show that many techniques from the study of conformal blocks can be generalized to compute it, including the semiclassical monodromy method and both forms of the Zamolodchikov recursion relations. When the results from recursion are expanded at large central charge, they match gravitational perturbation theory for a free scalar field coupled to gravity in our chosen gauge. We find that although the propagator is finite and well-defined at long distances, its perturbative expansion in $G_N = \frac{3}{2c}$ exhibits UV/IR mixing effects. If we nevertheless interpret $\langle \phi \phi \rangle$ as a probe of bulk locality, then when $G_N m_\phi \ll 1$ locality breaks down at the new short-distance scale $\sigma_* \sim \sqrt[4]{G_N R_{AdS}^3}$. For $\phi$ with very large bulk mass, or at small central charge, bulk locality fails at the AdS length scale. In all cases, locality `breakdown' manifests as singularities or branch cuts at spacelike separation arising from non-perturbative quantum gravitational effects.
    Propagator · Anti de Sitter space · Monodromy · Conformal field theory · Graviton · Perturbation theory · Geodesic · Radius of convergence · Central charge · Operator product expansion · ...
  • Most object detection methods operate by applying a binary classifier to sub-windows of an image, followed by a non-maximum suppression step where detections on overlapping sub-windows are removed. Since the number of possible sub-windows in even moderately sized image datasets is extremely large, the classifier is typically learned from only a subset of the windows. This avoids the computational difficulty of dealing with the entire set of sub-windows; however, as we will show in this paper, it leads to sub-optimal detector performance. In particular, the main contribution of this paper is the introduction of a new method, Max-Margin Object Detection (MMOD), for learning to detect objects in images. This method does not perform any sub-sampling, but instead optimizes over all sub-windows. MMOD can be used to improve any object detection method which is linear in the learned parameters, such as HOG or bag-of-visual-word models. Using this approach we show substantial performance gains on three publicly available datasets. Strikingly, we show that a single rigid HOG filter can outperform a state-of-the-art deformable part model on the Face Detection Data Set and Benchmark when the HOG filter is learned via MMOD.
    Object detection · Optimization · Support vector machine · Orientation · Training set · Feature vector · Receiver operating characteristic · Feature extraction · Binary star · Software · ...
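The non-maximum suppression step described above can be sketched as a greedy pass over score-ranked windows. The box format `(x1, y1, x2, y2)`, the IoU threshold, and the example values are assumptions for illustration, not details from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, suppress overlaps above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: the two overlapping windows collapse to one
```

MMOD's contribution is upstream of this step: it trains the linear filter by optimizing over all sub-windows jointly, rather than on a sampled subset, while the suppression logic itself stays standard.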
  • About a dozen measurements of Newton's gravitational constant, G, since 1962 have yielded values that differ by far more than their reported random plus systematic errors. We find that these values for G are oscillatory in nature, with a period of P = 5.899 +/- 0.062 yr, an amplitude of (1.619 +/- 0.103) x 10^{-14} m^3 kg^{-1} s^{-2}, and mean-value crossings in 1994 and 1997. However, we do not suggest that G is actually varying by this much, this quickly, but instead that something in the measurement process varies. Of other recently reported results, to the best of our knowledge, the only measurement with the same period and phase is the Length of Day (LOD - defined as a frequency measurement such that a positive increase in LOD values means slower Earth rotation rates and therefore longer days). The aforementioned period is also about half of a solar activity cycle, but the correlation is far less convincing. The 5.9 year periodic signal in LOD has previously been interpreted as due to fluid core motions and inner-core coupling. We report the G/LOD correlation, whose statistical significance is 0.99764 assuming no difference in phase, without claiming to have any satisfactory explanation for it. Least unlikely, perhaps, are currents in the Earth's fluid core that change both its moment of inertia (affecting LOD) and the circumstances in which the Earth-based experiments measure G. In this case, there might be correlations with terrestrial magnetic field measurements.
    Earth · Sine wave · Solar activity · Astronomical Unit · Weighted least squares method · Solar system · Quantum measurement · Random walk · Wolf number · Orbital angular momentum of light · ...
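As a quick plausibility check, the quoted oscillation amplitude can be compared against a nominal CODATA-scale value of $G$ (the nominal value is an assumption for illustration, not a number from the paper):

```python
# Amplitude reported above vs. a nominal G, to get the fractional modulation.
A = 1.619e-14      # reported amplitude, m^3 kg^-1 s^-2
G = 6.674e-11      # nominal CODATA-scale value of G (assumed), same units
frac = A / G
print(f"{frac:.2e}")  # prints "2.43e-04", i.e. a ~240 ppm peak modulation
```

A fractional swing of a few parts in $10^4$ dwarfs the ~$10^{-5}$-level uncertainties individual experiments claim, which is why the authors attribute the oscillation to the measurement process rather than to $G$ itself.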
  • This work extends methods of dealing with superposition rules known for systems of ordinary differential equations to systems of partial differential equations. Special attention is devoted to $t$-dependent superposition rules and the so-called quasi-Lie systems, i.e. systems of partial differential equations which can be related via a generalized flow to Lie systems. We develop a procedure for constructing quasi-Lie systems by means of what we call quasi-Lie schemes. Our techniques are illustrated with Wess-Zumino-Novikov-Witten models, Abel differential equations, and other examples of physical and mathematical relevance.
    Partial differential equation · Autonomous system · Diffeomorphism · Bundle · Manifold · Ordinary differential equations · Foliation · Riccati equation · Integral curve · Quadrature · ...
  • The LHCb collaboration has systematically measured the rates of $B_c\to J/\psi K$, $B_c\to J/\psi D_s$, $B_c\to J/\psi D_s^*$ and $B_c\to \psi(2S) \pi$. The new data enable us to study relevant theoretical models and further determine the model parameters. In this work, we calculate the form factors for the transitions $B_c\to J/\psi$ and $B_c\to \psi(2S)$ numerically, then determine the partial widths of the semi-leptonic and non-leptonic decays. The theoretical predictions for the ratios $\Gamma(B_c\to J/\psi K)/\Gamma(B_c\to J/\psi \pi)$, $\Gamma(B_c\to J/\psi D_s)/\Gamma(B_c\to J/\psi \pi)$ and $\Gamma(B_c\to J/\psi D_s^*)/\Gamma(B_c\to J/\psi \pi)$ are consistent with data within $1\sigma$. In particular, $\Gamma(B_c\to \psi(2S)X)$ is calculated with the modified harmonic oscillator wave function (HOWF) developed in our earlier works, and the results indicate that the modified HOWF works better than the traditional one.
    Harmonic oscillator · Form factor · Semileptonic decay · Luminosity · Hadronization · Decay mode · Light front · Heavy quark · Constituent quark · Covariance · ...
  • The production of the three normal neutrinos via $e^-e^+$ collisions at the $Z$-boson peak (neutrino production in a Z-factory) is investigated thoroughly. The differences between $\nu_e$-pair production and $\nu_\mu$- and $\nu_\tau$-pair production are presented in various aspects, namely the total cross sections, relevant differential cross sections, the forward-backward asymmetry, etc., given as figures as well as numerical tables. The constraints on the mixing of the three light-neutrino species with possible extra states (heavy neutral leptons and/or sterile neutrinos), obtained from refined measurements of the invisible width of the $Z$-boson, are discussed.
    Neutrino · Neutrino production · Z boson · Standard Model · Bosonization · Pair production · Differential cross section · Forward-backward asymmetry · Luminosity · Sterile neutrino · ...
  • In (1+1)-d CFTs, the 4-point function on the plane can be mapped to the pillow geometry, whereby crossing symmetry is translated into a modular property. We use these modular features to derive a universal asymptotic formula for OPE coefficients in which one of the operators is averaged over heavy primaries. The coarse-grained heavy channel then reproduces features of the gravitational 2-to-2 S-matrix with black holes as intermediate states.
    Conformal field theory · Black hole · OPE coefficients · Coarse graining · Torus · Operator product expansion · Virasoro blocks · Statistics · Scaling dimension · Inverse Laplace transform · ...
  • An exciting frontier in quantum information science is the realization of complex many-body systems whose interactions are designed quanta by quanta. Hybrid nanophotonic systems with cold atoms have emerged as the paradigmatic platform for engineering long-range spin models from the bottom up with unprecedented complexity. Here, we develop a toolbox for realizing fully programmable complex spin networks with neutral atoms in the vicinity of 1D photonic crystal waveguides. The enabling platform synthesizes strongly interacting quantum materials mediated by Bogoliubov phonons from the underlying collective motion of the atoms. In a complementary fashion, phononic quantum magnets can be designed through the coupling to the magnonic excitations of the atomic medium. We generalize our approach to long-range lattice models for interacting SU(n)-magnons mediated by local gauge constraints. Universal open q-body dynamics with q > 2 can be built from Floquet driven dissipation, and the dynamics of arbitrary quantum materials can be constructed with minimal overheads.
    Quantum electrodynamics · Quantum materials · Quantum information · Quantum magnet · SU(N) · Nanophotonics · Dissipation · Lattice model · Ultracold atom · Phonon · ...
  • The Nonlinear SUSY approach to the preparation of quantum systems with pre-planned spectral properties is reviewed. Possible multidimensional extensions of Nonlinear SUSY are described. The full classification of ladder-reducible and irreducible chains of SUSY algebras in one-dimensional QM is given. The emergence of hidden symmetries and spectrum-generating algebras is elucidated in the context of Nonlinear SUSY in one- and two-dimensional QM.
    Supersymmetry · Hamiltonian · Supercharge · Quantum mechanics · Zero mode · Separation of variables · Superpotential · Bound state · Eigenfunction · Superalgebra · ...
  • The Christoffel connection did not enter gravity as an axiom of minimal length for the free fall of particles (where in any case a length action is not defined for massless particles), nor out of economy, but from the weak equivalence principle (gravitational force is equivalent to acceleration according to Einstein) together with the identification of the local inertial frame with the local Lorentz one. This identification implies that the orbits of all particles are given by the geodesics of the Christoffel connection. Here, we show that in the presence of only massless particles (absence of massive particles) the above identification is inconsistent and does not lead to any connection. The proof is based on the existence of projectively equivalent connections and the absence of proper time for null particles. If a connection derived from some kinematical principles for the particles is to be applied in the world, it is preferable that these principles hold throughout the relevant spacetime rather than having different principles yield different connections in different spacetime regions. Therefore, our result stated above may imply a conceptual insufficiency of the use of the Christoffel connection in the early universe, where only massless particles are expected to be present (whenever at least some notions, like orbits, are meaningful), and thus of its use altogether. If in the early universe the notion of a massive particle, which appears later in time, cannot be used, then in an analogous way the same conclusions could be extracted in a causally disconnected high-energy region (perhaps the deep interior of astrophysical objects or black holes) where only massless particles are present.
    Frames · Geodesic · Proper time · Equivalence principle · The early Universe · Theories of gravity · Kinematics · Black hole · Gravitational force · Einstein equivalence principle · ...
  • We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter equations, before any of the two theories can be meaningfully compared to experimental data.
    Master equation · Tensor field · Covariance · Quantization · Kinematics · Degree of freedom · Ricci tensor · Ranking · Partial differential equation · Covariant derivative · ...
  • Which geometries on a smooth manifold (apart from Lorentzian metrics) can serve as a spacetime structure? This question is comprehensively addressed from first principles in eight lectures, exploring the kinematics and gravitational dynamics of all tensorial geometries on a smooth manifold that can carry predictive matter equations, are time-orientable, and allow one to distinguish positive from negative particle energies.
  • This work is a theoretical investigation of spin-polariton (polarized single photon) entanglement in nitrogen vacancy (NV) centers in diamond, aiming to interpret the results of two landmark experiments reported by the teams of Buckley and Togan in Science and Nature. A Jaynes-Cummings model is applied to analyze the off- and on-resonant dynamics of the electronic spin and polarized photon system. Combined with an analysis of the NV center's electronic structure and transition rules, this model consistently explains the Faraday effect, the optical Stark effect, pulse-echo techniques, and energy-level engineering techniques used to realize spin-polariton entanglement in diamond. All theoretical results agree well with the reported phenomena and data. This essay essentially aims at applying the fundamental skills the author has learned in Quantum Optics and Nonlinear Optics, especially to interesting materials not covered in class, assignments, and examinations, such as calculating the matrix form of the Hamiltonian, quantum-optical dynamics with dressed-state analysis, and entanglement.
    Polariton · Hamiltonian · Lasers · Quantum optics · Faraday effect · Stark effect · Density matrix · Rabi frequency · Quantum computer · Nonlinear optics · ...
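The Jaynes-Cummings model invoked above has, in the rotating-wave approximation, the standard textbook Hamiltonian (quoted here as background, not as a formula from the essay itself):

```latex
H = \hbar\omega\, a^{\dagger} a
  + \frac{\hbar\omega_0}{2}\,\sigma_z
  + \hbar g \left( a\,\sigma_{+} + a^{\dagger}\sigma_{-} \right),
```

where $a$ is the cavity-mode annihilation operator, $\sigma_{\pm}$ are the two-level raising/lowering operators, and $g$ is the coupling strength. With detuning $\Delta = \omega_0 - \omega$, the dressed-state splitting in the $n$-excitation manifold is the generalized Rabi frequency $\hbar\Omega_n = \hbar\sqrt{\Delta^{2} + 4g^{2}(n+1)}$, which underlies the dressed-state analysis mentioned in the abstract.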
  • We study the strong coupling between photons and atoms that can be achieved in an optical nanofiber geometry when the interaction is dispersive. While the Purcell enhancement factor for spontaneous emission into the guided mode does not reach the strong-coupling regime for individual atoms, one can obtain high cooperativity for ensembles of a few thousand atoms due to the tight confinement of the guided modes and constructive interference over the entire chain of trapped atoms. We calculate the dyadic Green's function, which determines the scattering of light by atoms in the presence of the fiber, and thus the phase shift and polarization rotation induced on the guided light by the trapped atoms. The Green's function is related to a full Heisenberg-Langevin treatment of the dispersive response of the quantized field to tensor-polarizable atoms. We apply our formalism to quantum nondemolition (QND) measurement of the atoms via polarimetry. We study shot-noise-limited detection of atom number for atoms in a completely mixed spin state and the squeezing of projection noise for atoms in clock states. Compared with squeezing of atomic ensembles in free space, we capitalize on unique features that arise in the nanofiber geometry, including anisotropy of both the intensity and polarization of the guided modes. We use a first-principles stochastic master equation to model the squeezing as a function of time in the presence of decoherence due to optical pumping. We find a peak metrological squeezing of ~5 dB is achievable with current technology for ~2500 atoms trapped 180 nm from the surface of a nanofiber with radius a=225 nm.
    Nanofiber · Optical pumping · Quantization · Green's function · Polarizability · Hamiltonian · Metrology · Birefringence · Optimization · Spontaneous emission · ...
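The ~5 dB figure quoted above can be translated into a projection-noise variance ratio; the decibel convention below (variance relative to the standard quantum limit) is a common definition assumed here, not one stated in the abstract:

```python
import math

# Assumed convention: squeezing in dB = -10 * log10(var / var_SQL),
# where var_SQL is the projection-noise variance at the standard quantum limit.
def squeezing_db(var_ratio):
    return -10.0 * math.log10(var_ratio)

ratio = 10 ** (-5.0 / 10.0)  # variance ratio corresponding to 5 dB
print(round(ratio, 3))       # prints 0.316: noise variance reduced ~3.2x
```

Under this convention, 5 dB of metrological squeezing means the spin-projection noise variance is suppressed to about a third of its coherent-state value.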