Recently bookmarked papers

with concepts:
  • We study properties of heavy-light-heavy three-point functions in two-dimensional CFTs by using the modular invariance of two-point functions on a torus. We show that our result is non-trivially consistent with the condition of ETH (Eigenstate Thermalization Hypothesis). We also study the open-closed duality of cylinder amplitudes and derive behaviors of disk one-point functions.
    Conformal field theory, Modular invariance, Duality, Open string theory, Torus, Scaling dimension, Eigenstate Thermalization Hypothesis, Conformal Bootstrap, Central charge, Thermalisation, ...
  • We seek to understand how the technical definition of Lehmer pair can be related to more analytic properties of the Riemann zeta function, particularly the location of the zeros of $\zeta^\prime(s)$. Because we are interested in the connection between Lehmer pairs and the de Bruijn-Newman constant $\Lambda$, we assume the Riemann Hypothesis throughout. We define strong Lehmer pairs via an inequality on the derivative of the pre-Schwarzian of Riemann's function $\Xi(t)$, evaluated at consecutive zeros. Theorem 1 shows that strong Lehmer pairs are Lehmer pairs. Theorem 2 describes the derivative of the pre-Schwarzian in terms of $\zeta^\prime(\rho)$. Theorem 3 expresses the criteria for strong Lehmer pairs in terms of nearby zeros $\rho^\prime$ of $\zeta^\prime(s)$. We examine 114661 pairs of zeros of $\zeta(s)$ around height t=10^6, finding 855 strong Lehmer pairs. These are compared to the corresponding zeros of $\zeta^\prime(s)$ in the same range.
    Riemann hypothesis, Riemann zeta function, Critical line, Regularization, Reciprocity, Heat equation, Cauchy's integral formula, Orientation, Riemann Xi function, Borel-Carathéodory theorem, ...
  • Motivated by the connection to the pair correlation of the Riemann zeros, we investigate the second derivative of the logarithm of the Riemann zeta function, in particular the zeros of this function. Theorem 1 gives a zero-free region. Theorem 2 gives an asymptotic estimate for the number of nontrivial zeros to height T. Theorem 3 is a zero density estimate.
    Dirichlet series, Stieltjes constants, Riemann zeta function, Divisor function, Argument principle, Riemann hypothesis, Critical line, Triangle inequality, Upper half-plane, Right half-plane, ...
  • I discuss various aspects of the role of the conformal anomaly number c in 2- and 1+1-dimensional critical behaviour: its appearance as the analogue of Stefan's constant, its fundamental role in conformal field theory, in the classification of 2d universality classes, and as a measure of quantum entanglement, among other topics.
    Entanglement, Quantum mechanics, Trace anomaly, Conformal field theory, Stefan-Boltzmann law, Field theory, Critical phenomena, Universality class, Classification, Quantum entanglement, ...
  • Functional conjugation methods are used to analyze the global structure of various renormalization group trajectories, and to gain insight into the interplay between continuous and discrete rescaling. With minimal assumptions, the methods produce continuous flows from step-scaling {\sigma} functions, and lead to exact functional relations for the local flow {\beta} functions, whose solutions may have novel, exotic features, including multiple branches. As a result, fixed points of {\sigma} are sometimes not true fixed points under continuous changes in scale, and zeroes of {\beta} do not necessarily signal fixed points of the flow, but instead may only indicate turning points of the trajectories.
    Renormalization group, Perturbation theory, Renormalization, Infrared fixed point, Renormalisation group flow, Hamiltonian, Lattice gauge theory, Graph, Closed-form expression, Time-reversal symmetry, ...
  • Wigner's quasi-probability distribution function in phase-space is a special (Weyl) representation of the density matrix. It has been useful in describing quantum transport in quantum optics; nuclear physics; decoherence (eg, quantum computing); quantum chaos; "Welcher Weg" discussions; semiclassical limits. It is also of importance in signal processing. Nevertheless, a remarkable aspect of its internal logic, pioneered by Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of Quantum Mechanics, independent of the conventional Hilbert Space, or Path Integral formulations. In this logically complete and self-standing formulation, one need not choose sides--coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle. This is an introductory overview of the formulation with simple illustrations.
    Phase space, Quantum mechanics, Wigner distribution function, Density matrix, Uncertainty principle, Quantization, Order operator, Hamiltonian, Classical limit, Expectation Value, ...
  • In recent years the umbral calculus has emerged from the shadows to provide an elegant correspondence framework that automatically gives systematic solutions of ubiquitous difference equations --- discretized versions of the differential cornerstones appearing in most areas of physics and engineering --- as maps of well-known continuous functions. This correspondence deftly sidesteps the use of more traditional methods to solve these difference equations. The umbral framework is discussed and illustrated here, with special attention given to umbral counterparts of the Airy, Kummer, and Whittaker equations, and to umbral maps of solitons for the Sine-Gordon, Korteweg--de Vries, and Toda systems.
    Hypergeometric function, Whittaker function, Wave propagation, Graph, Regularization, Confluent hypergeometric function, Translational invariance, Phase space, Wolfram Mathematica, Continuous symmetry, ...
  • Ever since Werner Heisenberg's 1927 paper on uncertainty, there has been considerable hesitancy in simultaneously considering positions and momenta in quantum contexts, since these are incompatible observables. But this persistent discomfort with addressing positions and momenta jointly in the quantum world is not really warranted, as was first fully appreciated by Hilbrand Groenewold and Jos\'e Moyal in the 1940s. While the formalism for quantum mechanics in phase space was wholly cast at that time, it was not completely understood nor widely known --- much less generally accepted --- until the late 20th century.
    Quantum mechanics, Phase space, Uncertainty principle, Quantization, Expectation Value, Poisson bracket, Path integral formulation, Statistical mechanics, Moyal bracket, Isomorphism, ...
  • Consider the sequence $\mathcal{V}(2,n)$ constructed in a greedy fashion by setting $a_1 = 2$, $a_2 = n$ and defining $a_{m+1}$ as the smallest integer larger than $a_m$ that can be written as the sum of two (not necessarily distinct) earlier terms in exactly one way; the sequence $\mathcal{V}(2,3)$, for example, is given by $$ \mathcal{V}(2,3) = 2,3,4,5,9,10,11,16,22,\dots$$ We prove that if $n \geqslant 5$ is odd, then the sequence $\mathcal{V}(2,n)$ has exactly two even terms $\left\{2,2n\right\}$ if and only if $n-1$ is not a power of 2. We also show that in this case, $\mathcal{V}(2,n)$ eventually becomes a union of arithmetic progressions. If $n-1$ is a power of 2, then there is at least one more even term $2n^2 + 2$ and we conjecture there are no more even terms. In the proof, we display an interesting connection between $\mathcal{V}(2,n)$ and the Sierpinski triangle. We prove several other results, discuss a series of striking phenomena and pose many problems. This relates to existing results of Finch, Schmerl & Spiegel, and a classical family of sequences defined by Ulam.
    Arithmetic progression, Sierpinski triangle, Fractal, Homomorphism, Ulam number, Quasiperiodicity, Indicator function, Constellations, Classification, Coset, ...
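The greedy construction in this abstract is easy to sketch. A minimal brute-force Python version (my own illustration, not the paper's method) that reproduces the quoted prefix of $\mathcal{V}(2,3)$:

```python
def V(a, b, count=9):
    """Greedily extend the sequence: each new term is the smallest integer
    beyond the last that is a sum of two (not necessarily distinct) earlier
    terms in exactly one way."""
    seq = [a, b]
    while len(seq) < count:
        m = seq[-1] + 1
        while True:
            # count representations m = x + y with x <= y, both earlier terms
            reps = sum(1 for i, x in enumerate(seq)
                       for y in seq[i:] if x + y == m)
            if reps == 1:
                seq.append(m)
                break
            m += 1
    return seq

print(V(2, 3))  # [2, 3, 4, 5, 9, 10, 11, 16, 22], matching the paper's example
```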
  • In this review we attempt to present an overview of some of the better known quantization techniques found in the current literature and used both by physicists and mathematicians. The treatment is more descriptive than rigorous, for we aim to reach both physicists and mathematicians, including non-specialists in the field. It is our hope that an overview such as this will put into perspective the relative successes as well as shortcomings of the various techniques that have been developed and, besides delineating their usefulness in understanding the nature of the quantum regime, will also demonstrate the mathematical richness of the attendant structures.
    Quantization, Bundle, Covariance, Symplectization, Symplectic manifold, Star product, Geometric quantization, Line bundle, Manifold, Hamiltonian, ...
  • Expository notes which combine a historical survey of the development of quantum physics with a review of selected mathematical topics in quantization theory (addressed to students that are not complete novices in quantum mechanics). After recalling in the introduction the early stages of the quantum revolution, and recapitulating in Sect. 2.1 some basic notions of symplectic geometry, we survey in Sect. 2.2 the so called prequantization thus preparing the ground for an outline of geometric quantization (Sect. 2.3). In Sect. 3 we apply the general theory to the study of basic examples of quantization of Kaehler manifolds. In Sect. 4 we review the Weyl and Wigner maps and the work of Groenewold and Moyal that laid the foundations of quantum mechanics in phase space, ending with a brief survey of the modern development of deformation quantization. Sect. 5 provides a review of second quantization and its mathematical interpretation. We point out that the treatment of (nonrelativistic) bound states requires going beyond the neat mathematical formalization of the concept of second quantization. An appendix is devoted to Pascual Jordan, the least known among the creators of quantum mechanics and the chief architect of the "theory of quantized matter waves".
    Quantization, Phase space, Quantum mechanics, Second quantization, Symplectic form, Hamiltonian, Manifold, Poisson bracket, Symplectic manifold, Star product, ...
  • The paper puts together some loosely connected observations, old and new, on the concept of a quantum field and on the properties of Feynman amplitudes. We recall, in particular, the role of (exceptional) elementary induced representations of the quantum mechanical conformal group SU(2,2) in the study of gauge fields and their higher spin generalization. A recent revival of the (Bogolubov-)Epstein-Glaser approach to position space renormalization is reviewed including an application to the calculation of residues of primitively divergent graphs. We end up with an optimistic outlook of current developments of analytic methods in perturbative QFT which combine the efforts of theoretical physicists, algebraic geometers and number theorists.
    Renormalization, Permutation, Graph, Quantum field theory, Conformal group, Homogenization, Gauge field, Conformal field theory, Higher spin, Subgroup, ...
  • Expository notes on Clifford algebras and spinors with a detailed discussion of Majorana, Weyl, and Dirac spinors. The paper is meant as a review of background material, needed, in particular, in now fashionable theoretical speculations on neutrino masses. It has a more mathematical flavour than the over twenty-seven-year-old "Introduction to Majorana masses" by P.D. Mannheim and includes historical notes and biographical data on past participants in the story.
    Clifford algebra, Quaternions, Odd dimensional, Orthogonal group, Automorphism, Spin group, Dirac spinor, Neutrino mass, Flavour, Majorana fermion, ...
  • During his first St. Petersburg period, Leonhard Euler, then in his early twenties, became interested in the Basel problem: summing the series of inverse squares (posed by Pietro Mengoli in the mid 17th century). In the words of Andre Weil (1989), "as with most questions that ever attracted his attention, he never abandoned it". Euler introduced on the way the alternating "phi-series", the better converging companion of the zeta function, the first example of a polylogarithm at a root of unity. He realized - empirically! - that odd zeta values appear to be new (transcendental?) numbers. It is amazing to see how, a quarter of a millennium later, the numbers Euler played with, "however repugnant" this game might have seemed to his contemporary lovers of the "higher kind of calculus", reappeared in the analytic calculation of the anomalous magnetic moment of the electron, the most precisely calculated and measured physical quantity. Mathematicians, inspired by ideas of Grothendieck, are reviving the dream of Galois of uncovering a group structure in the ring of periods (which includes the multiple zeta values) - applied to the study of Feynman amplitudes.
    Graph, Anomalous magnetic dipole moment, Quantum field theory, Polylogarithm, Quantum electrodynamics, Precision, Hopf algebra, Basel problem, Number theory, Algebraic geometry, ...
  • We analyze the classical approximations made in "The general relativistic effects to the magnetic moment in the Earth's gravity", originally published as "Post-Newtonian effects of Dirac particle in curved spacetime - I : magnetic moment in curved spacetime", and work out precisely where in the argument the mistakes are made. We show explicitly that any difference vanishes when properly distinguishing between coordinate and physical distance. In doing this, we illustrate some of the pitfalls in using GR to make predictions.
    Speed of light, Muon, Earth, General relativity, Hamiltonian, Christoffel symbols, Charged particle, Einstein equivalence principle, Electromagnetism, Cyclic permutation, ...
  • Following Verlinde's conjecture, we show that Tsallis' classical free-particle distribution at temperature $T$ can generate the $r^{-2}$ distance dependence of Newton's gravitational force. If we want to repeat the concomitant argument by appealing to either the Boltzmann-Gibbs or the Renyi distribution, the attempt fails and one needs to modify the conjecture. Keywords: Tsallis, Boltzmann-Gibbs, and Renyi distributions; classical partition function; entropic force.
    Verlinde formula, Entropy, Partition function, Keyphrase, Gravitational force, Tsallis entropy, Earth, Jupiter, Sun, Quantum gravity effect, ...
  • We propose that the consistent renormalization of gravity requires a scale transformation of the gravitational field. The only scale transformation consistent with an invariant spacetime interval is uniquely determined. The derived transformation is applied to two key problems in quantum gravity, its non-conformal scaling and non-renormalizability.
    Quantum gravity, Quantum field theory, Renormalization, Conformal field theory, Geodesic, Scale factor, Gravitational fields, Entropy, Speed of light, Dimensional Reduction, ...
  • We present a short and intuitive argument explaining why gravity is non-renormalizable. The argument is based on black-hole domination of the high energy spectrum of gravity and not on the standard perturbative irrelevance of the gravitational coupling. This is a pedagogical note, containing textbook material that is widely appreciated by experts and is by no means original.
    Quantum field theory, Conformal field theory, Black hole, Renormalization group, Renormalization, Gaussian fixed point, Quantum mechanics, Anti de Sitter space, Horizon, General relativity, ...
  • We make a scalar extension of the B-L gauge model where the ${\bf S}_{3}$ non-abelian discrete group drives mainly the Yukawa sector. Motivated by the large and small hierarchies among the quark and active neutrino masses, respectively, the quark and lepton families are not treated on the same footing under the assignment of the discrete group. As a consequence, the Nearest Neighbor Interactions (NNI) textures appear in the quark sector, leading to the CKM mixing matrix, whereas in the lepton sector a soft breaking of the $\mu \leftrightarrow \tau$ symmetry in the effective neutrino mass matrix that comes from the type I see-saw mechanism provides a non-maximal atmospheric angle and a non-zero reactor angle.
    Neutrino mass, Cabibbo-Kobayashi-Maskawa matrix, Nearest-neighbor site, Quark mass, Flavour symmetry, Fermion mass, Seesaw mechanism, Soft symmetry breaking, Mixing angle, Discrete symmetry, ...
  • In The Hitchhiker's Guide to the Galaxy, by Douglas Adams, the Answer to the Ultimate Question of Life, the Universe, and Everything is found to be 42 -- but the meaning of this is left open to interpretation. We take it to mean that there are 42 fundamental questions which must be answered on the road to full enlightenment, and we attempt a first draft (or personal selection) of these ultimate questions, on topics ranging from the cosmological constant and origin of the universe to the origin of life and consciousness.
    Standard Model, Black hole, Earth, String theory, Grand unification theory, Cosmological constant, Dark matter, Antimatter, Star, Galaxy, ...
  • We report the experimental study of a harmonic oscillator in the relativistic regime. The oscillator is composed of Bose-condensed lithium atoms in the third band of an optical lattice, which have an energy-momentum relation nearly identical to that of a massive relativistic particle, with a reduced effective mass and speed of light. Imaging the shape of oscillator worldlines at velocities up to 98% of the effective speed of light reveals a crossover from sinusoidal to nearly photon-like propagation. Effective time dilation causes the measured period of oscillations to increase with energy; our measurements reveal beyond-leading-order contributions to this relativistic anharmonicity. We observe an intrinsic relativistic dephasing of oscillator ensembles, and a breathing mode with exactly the opposite phase of that predicted for non-relativistic harmonic motion. All observed dynamics are in quantitative agreement with longstanding but hitherto-untested relativistic predictions.
    Harmonic oscillator, Speed of light, Dephasing, Effective mass, Bloch oscillation, Evaporation, Confinement, Time dilation, Curvature, Lorentz factor, ...
  • Operator scrambling is a crucial ingredient of quantum chaos. Specifically, in the quantum chaotic system, a simple operator can become increasingly complicated under unitary time evolution. This can be diagnosed by various measures such as square of the commutator (out-of-time-ordered correlator), operator entanglement entropy etc. In this paper, we discuss operator scrambling in three representative models: a chaotic spin-$1/2$ chain with spatially local interactions, a 2-local spin model and the quantum linear map. In the first two examples, although the speeds of scrambling are quite different, a simple Pauli spin operator can eventually approach a "highly entangled" operator with operator entanglement entropy taking a volume law value (close to the Page value). Meanwhile, the spectrum of the operator reduced density matrix develops a universal spectral correlation which can be characterized by the Wishart random matrix ensemble. In the second example, we further connect the 2-local model into a one dimensional chain and briefly discuss the operator scrambling there. In contrast, in the quantum linear map, although the square of commutator can increase exponentially with time, a simple operator does not scramble but performs chaotic motion in the operator basis space determined by the classical linear map. We show that once we modify the quantum linear map such that operator can mix in the operator basis, the operator entanglement entropy can grow and eventually saturate to its Page value, thus making it a truly quantum chaotic model.
    Quantum chaos, Entanglement entropy, Reduced density matrix, Hamiltonian, Random matrix, Sachdev-Ye-Kitaev model, Superposition, Quantum dots, Classical limit, Out of Time...
  • We aim to carry out an assessment of the scientific value of Oppenheimer's research on black holes in order to determine and weigh possible factors to explain its neglect by the scientific community, and even by Oppenheimer himself. Dealing primarily with the science and looking closely at the scientific culture and the scientific conceptual belief system of the 1930s, the present article seeks to supplement the existent literature on the subject by enriching the explanations and possibly complicating the guiding questions. We suggest a rereading of Oppenheimer as a more intriguing, ahead-of-his-time figure.
    Black hole, Star, General relativity, Dark matter, Schwarzschild radius, EPR paradox, Foundation of Physics, Of stars, Cosmology, Freezing, ...
  • The Gemini theorem asserts that, given certain reasonable assumptions, no physical system can be certainly aware of its own existence. The theorem can be proved algorithmically, but the proof is somewhat obscure, and there exists very little literature on it. The purpose of this article is to provide a brief non-technical summary of the theorem and its proof, with a view to stimulating critical discussion of the proof and its implications. Since the theorem implies that a violation of the conservation of energy will take place within the brains of conscious human beings, it has obvious implications for any physical theory.
    Theory, Conservation of energy
  • This is a partly non-technical introduction to selected topics on tensor network methods, based on several lectures and introductory seminars given on the subject. It should be a good place for newcomers to get familiarized with some of the key ideas in the field, especially regarding the numerics. After a very general introduction we motivate the concept of tensor network and provide several examples. We then move on to explain some basics about Matrix Product States (MPS) and Projected Entangled Pair States (PEPS). Selected details on some of the associated numerical methods for 1d and 2d quantum lattice systems are also discussed.
    Matrix product states, Tensor network state, Entanglement, Hamiltonian, Expectation Value, Many-body systems, Two-point correlation function, Entanglement entropy, Renormalization group, Translational invariance, ...
  • One of the exciting results in flavor physics in recent times is the $R_D$/$R_{D^*}$ puzzle. The measurements of these flavor ratios performed by the B-factory experiments, BaBar and Belle, and the LHCb experiment are about $4\sigma$ away from the Standard Model expectation. These measurements indicate that the mechanism of $b\rightarrow c\tau\bar{\nu}$ decay is not identical to that of $b\rightarrow c(\mu/e)\bar{\nu}$. This charged lepton universality violation is particularly intriguing because these decays occur at tree level in the Standard Model. In particular, we expect a moderately large new physics contribution to $b\rightarrow c\tau\bar{\nu}$. The different types of new physics amplitudes, which can explain the $R_D$/$R_{D^*}$ puzzle, have been identified previously. In this letter, we show that the polarization fractions of $\tau$ and $D^*$ and the angular asymmetries $A_{FB}$ and $A_{LT}$ in $B\rightarrow D^*\tau\bar{\nu}$ decay have the capability to uniquely identify the Lorentz structure of the new physics. A measurement of these four observables will lead to such an identification.
    Standard Model, LHCb, Lepton universality of gauge couplings, Charged lepton, B-factory, Hamiltonian, Form factor, Neutrino, Forward-backward asymmetry, Flavour physics, ...
  • The study aims to identify the institutional flaws of the current EU waste management model by analysing the economic model of extended producer responsibility and collective waste management systems, and to create a model for measuring the transaction costs borne by waste recovery organizations. The model was validated by analysing the Bulgarian collective waste management systems that have been complying with the EU legislation for the last 10 years. The analysis focuses on waste oils because of their economic importance and the limited number of studies and analyses in this field, as the predominant body of research to date has mainly addressed packaging waste, mixed household waste or discarded electrical and electronic equipment. The study aims to support the process of establishing a circular economy in the EU, which was initiated in 2015.
    Economic models, Oil, Field
  • The experimental advance of light-matter interaction into strong couplings has invalidated the Jaynes-Cummings model and brought the quantum Rabi model (QRM) into greater relevance. The QRM involves only linear coupling via a single-photon process (SPP), while the nonlinear two-photon process (TPP) is weaker and conventionally neglected. However, we find a contrary trend: enhancing the linear coupling might not further suppress the nonlinear effect but can backfire and trigger a collapse of linear characteristics. Indeed, at strong SPP couplings a tiny TPP strength may dramatically change the properties of the system, like a spontaneous symmetry breaking. By extracting the ground-state phase diagram including both SPP and TPP, we find that TPP in the low-frequency limit induces a quantum phase transition with continuity-discontinuity double faces, which split into two distinct transitions at finite frequencies and yield a triple point. Our analysis unveils a subtle SPP-TPP entanglement.
    Quantum phase transition, Quantum Rabi model, Triple point, Spontaneous symmetry breaking, Phase diagram, Entanglement, Wave packet, Jaynes-Cummings model, Interaction of light and matter, Hamiltonian, ...
  • We extend the results of two of our papers [Phys. Rev. A 94, 041603R (2016) and Phys. Rev. B 97, 060303R (2018)] that touch upon the intimately connected topics of quantum chaos and thermalization. In the first, we argued that when the initial state of isolated lattice many-body quantum systems is chaotic, the power-law decay of the survival probability is caused by the bounds in the spectrum, and thus anticipates thermalization. In the current work, we provide stronger numerical support for the onset of these algebraic behaviors. In the second paper, we demonstrated that the correlation hole, which is a direct signature of quantum chaos revealed by the evolution of the survival probability at times beyond the power-law decay, appears also for other observables. In the present work, we investigate the correlation hole in the chaotic regime and in the vicinity of a many-body localized phase for the spin density imbalance, which is an observable studied experimentally.
    Survival probability, Thermalisation, Local density of states, Gaussian orthogonal ensemble, Quantum chaos, Chaos, Hamiltonian, Level repulsion, Density of states, Statistics, ...
  • In the purely gravitational dark matter scenario, the dark matter particle does not have any interaction except for the gravitational one. We study the gravitational particle production of the dark matter particle in such a minimal setup and show that the correct amount of dark matter can be produced, depending on the inflation model and the dark matter mass. In particular, we carefully evaluate the particle production rate from the transition epoch to the inflaton oscillation epoch in a realistic inflation model, and point out that gravitational particle production is efficient even if the dark matter mass is much larger than the Hubble scale during inflation, as long as it is smaller than the inflaton mass.
    Inflaton, Inflation, Dark matter, Model of inflation, Dark matter particle mass, Scale factor, Inflaton mass, Dark matter particle, Hubble scale, Abundance, ...
  • Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but it is expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.
    Deep learning, Networks
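The partial-convolution rule the abstract describes (responses renormalized over valid pixels only, plus a mask update) can be sketched in a few lines of NumPy. This toy single-channel version with a 3x3 kernel is my illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def partial_conv3x3(x, mask, w):
    """Valid 3x3 convolution conditioned only on unmasked pixels.
    mask is 1 for valid pixels, 0 for holes; each output is renormalized
    by (kernel size / number of valid pixels), and the updated mask marks
    every window that saw at least one valid pixel."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    new_mask = np.zeros_like(out)
    for i in range(H - 2):
        for j in range(W - 2):
            m = mask[i:i + 3, j:j + 3]
            valid = m.sum()
            if valid > 0:
                patch = x[i:i + 3, j:j + 3]
                out[i, j] = (w * patch * m).sum() * (w.size / valid)
                new_mask[i, j] = 1.0
    return out, new_mask
```

With a fully valid mask this reduces to an ordinary convolution; with holes, the renormalization keeps the response scale independent of how many pixels were masked, and the hole shrinks layer by layer through the mask update.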
  • We discuss an article by Steven Weinberg expressing his discontent with the usual ways to understand quantum mechanics. We examine the two solutions that he considers and criticizes and propose another one, which he does not discuss, the pilot wave theory or Bohmian mechanics, for which his criticisms do not apply.
    Bohmian mechanics, Quantum mechanics, Measuring devices, Foundation of Physics, Pilot wave, Superposition, Many-worlds interpretation, Hamiltonian, Wavefunction, Uncertainty principle, ...
  • A century after its formulation by Einstein, it is time to incorporate special relativity early in the physics curriculum. The approach advocated here employs a simple algebraic extension of vector formalism that generates Minkowski spacetime, displays covariant symmetries, and enables calculations of boosts and spatial rotations without matrices or tensors. The approach is part of a comprehensive geometric algebra with applications in many areas of physics, but only an intuitive subset is needed at the introductory level. The approach and some of its extensions are given here and illustrated with insights into the geometry of spacetime.
    Relativistic astrophysics, Crossed product, Plane wave, Geometric algebra, Dilation, Special relativity, Lorentz transformation, Speed of light, Complex number, Lorentz invariant, ...
  • Using basic algebra and simple calculus, the analytical solution to the memristor model of Strukov et al., published in Nature, is derived. Lissajous figures of the current responding to a sinusoidal voltage are presented.
    Ohm's law, Resistor, Semiconductor, Thin films, Hysteresis, Constitutive relation, Eccentricity, Capacitor, Regularization, Pinch, ...
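The closed form this kind of derivation yields can be sketched in dimensionless units (parameters below are illustrative, not the paper's): with a charge-dependent memristance M(q) = R0 - lam*q and drive v(t) = sin(t), integrating M dq = v dt gives i(t) = v(t)/sqrt(R0^2 - 2*lam*phi(t)), where phi(t) = 1 - cos(t) is the flux. A quick cross-check against direct integration:

```python
import math

# Illustrative dimensionless parameters (not from the paper).
R0, lam = 10.0, 4.0
v = math.sin
phi = lambda t: 1.0 - math.cos(t)              # flux of v(t) = sin(t)
i_exact = lambda t: v(t) / math.sqrt(R0**2 - 2.0 * lam * phi(t))

# Forward-Euler integration of dq/dt = v / (R0 - lam*q) over one period,
# tracking the worst deviation from the closed form.
dt, q, t, max_err = 1e-4, 0.0, 0.0, 0.0
while t < 2.0 * math.pi:
    i_num = v(t) / (R0 - lam * q)
    max_err = max(max_err, abs(i_num - i_exact(t)))
    q += dt * i_num
    t += dt
print(max_err)
```

Plotting i against v over the period traces the pinched hysteresis loop (the Lissajous figure) the abstract refers to.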
  • Most introductory physics textbooks neglect air resistance in situations where an astute student can observe that it dominates the dynamics. We give examples from many books. Using dimensional analysis we discuss how to estimate the relative importance of air resistance and gravity. The discussion can be used to mitigate the baleful influence of these textbooks. Incorrectly neglecting air resistance is one of their many unphysical teachings. Shouldn't a physics textbook teach correct physics?
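The dimensional estimate the abstract calls for is a one-liner: compare quadratic drag 0.5*rho*Cd*A*v^2 to the weight m*g. The numbers below (a ping-pong ball at 10 m/s, with an assumed drag coefficient of 0.5) are my own illustration, not taken from the paper:

```python
import math

def drag_to_weight(m, r, v, rho=1.2, Cd=0.5, g=9.81):
    """Ratio of quadratic air drag (0.5*rho*Cd*A*v^2) to weight (m*g)
    for a sphere of mass m (kg) and radius r (m) moving at speed v (m/s)."""
    A = math.pi * r**2
    return 0.5 * rho * Cd * A * v**2 / (m * g)

# Ping-pong ball: m = 2.7 g, r = 2 cm, v = 10 m/s
print(drag_to_weight(0.0027, 0.02, 10.0))  # ratio > 1: drag dominates gravity
```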
  • This paper offers a solution method that allows one to find exact values for a large class of convergent series of rational terms. Sums of this form arise often in problems dealing with Quantum Field Theory.
    Laplace transform, Polygamma function, Quantum field theory, Digamma function, Conjunction, Telescopes, Zeta function, Gamma function, Mathematics (under construction), Polynomial, ...
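A hedged illustration of the kind of closed form involved (my example, not one from the paper): partial fractions turn 1/(n*(n+1)) into 1/n - 1/(n+1), so the partial sums telescope exactly to 1 - 1/(N+1):

```python
from fractions import Fraction

def partial_sum(N):
    """Exact partial sum of the rational series sum 1/(n*(n+1))."""
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

# The telescoping identity gives the closed form 1 - 1/(N+1) exactly.
assert partial_sum(100) == 1 - Fraction(1, 101)
```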
  • We propose a unified approach to addition of some physical quantities (among which resistors and capacitors are the most well-known) that are usually encountered in introductory physics such that the formulae required to solve problems are always simply additive. This approach has the advantage of being consistent with the intuition of students. To demonstrate the effectiveness of our approach, we propose and solve several problems. We hope that this article can serve as a resource paper for problems on the subject.
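One way to read the proposal: always work in the representation in which the quantity is additive, so every combination rule is a plain sum: resistance in series, conductance in parallel, and the reverse pairing for capacitors. A minimal sketch under that reading (my framing, not necessarily the authors' formulation):

```python
def series_R(*Rs):
    # in series, resistances add directly
    return sum(Rs)

def parallel_R(*Rs):
    # in parallel, the additive quantity is the conductance 1/R
    return 1.0 / sum(1.0 / R for R in Rs)

def series_C(*Cs):
    # for capacitors the roles swap: 1/C adds in series...
    return 1.0 / sum(1.0 / C for C in Cs)

def parallel_C(*Cs):
    # ...and C adds directly in parallel
    return sum(Cs)

print(parallel_R(6.0, 3.0))  # 2.0 ohms
print(series_C(6.0, 3.0))    # 2.0 farads
```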
  • Defined by a single axiom, finite abstract simplicial complexes are among the simplest constructs of mathematics. We look at a few theorems.
    Graph, Curvature, Euler characteristic, Cohomology, Critical point, Polytope, Sectional curvature, Automorphism, Graph theory, Geodesic, ...
  • These are notes and slides from a Pecha-Kucha talk given on March 6, 2013. The presentation tinkered with the question whether calculus on graphs could have emerged by the time of Archimedes, if the concept of a function would have been available 2300 years ago. The text first attempts to boil down discrete single and multivariable calculus to one page each, then presents the slides with additional remarks and finally includes 40 "calculus problems" in a discrete or so-called 'quantum calculus' setting. We also added some sample Mathematica code, gave a short overview over the emergence of the function concept in calculus and included comments on the development of calculus textbooks over time.
    GraphArchimedesEuler characteristicOrientationSimple graphCohomologyDifferential formRiemann sumCritical pointQuantization...
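    The one-page discrete calculus alluded to rests on the discrete fundamental theorem: summing differences telescopes back to the boundary values. A minimal sketch (my own illustration, not from the notes):

```python
# Discrete derivative Df(x) = f(x+1) - f(x); summing it recovers
# f(b) - f(a), the discrete fundamental theorem of calculus.
f = [x * x for x in range(6)]                       # f(x) = x^2 on {0,...,5}
Df = [f[i + 1] - f[i] for i in range(len(f) - 1)]   # the odd numbers 1,3,5,7,9
print(sum(Df) == f[-1] - f[0])  # True: the sum telescopes
```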
  • Occurrences of very close zeros of the Riemann zeta function are strongly connected with Lehmer pairs and with the Riemann Hypothesis. The aim of the present note is to derive a condition for a pair of consecutive simple zeros of the $\zeta$-function to be a Lehmer pair in terms of derivatives of Hardy's $Z$-function. Furthermore, we connect Newman's conjecture with stationary points of the $Z$-function, and present some numerical results.
    Riemann hypothesisRiemann zeta functionCritical lineHurwitz zeta functionTrigonometric integralGamma functionHolomorphic functionPythonNatriumPrime number...
  • We present a somewhat different way of looking at Shannon entropy. This leads to an axiomatisation of Shannon entropy that is essentially equivalent to Faddeev's. In particular, we give a new proof of Faddeev's theorem.
    EntropyPotential
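    The key axiom in Faddeev-style characterisations is the recurrence under splitting an outcome: $H(p_1,\dots,p_n) = H(p_1{+}p_2, p_3,\dots,p_n) + (p_1{+}p_2)\,H\!\bigl(\tfrac{p_1}{p_1+p_2}, \tfrac{p_2}{p_1+p_2}\bigr)$. A numerical sanity check of mine (not the paper's proof):

```python
import math

def H(ps):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Splitting the first outcome of (0.5, 0.3, 0.2) into two costs a weighted
# copy of the binary entropy of the split -- the Faddeev recurrence.
p = [0.5, 0.3, 0.2]
lhs = H(p)
s = p[0] + p[1]
rhs = H([s, p[2]]) + s * H([p[0] / s, p[1] / s])
print(abs(lhs - rhs) < 1e-12)  # True: the recurrence holds
```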
  • We consider the task of text attribute transfer: transforming a sentence to alter a specific attribute (e.g., sentiment) while preserving its attribute-independent content (e.g., changing "screen is just the right size" to "screen is too small"). Our training data includes only sentences labeled with their attribute (e.g., positive or negative), but not pairs of sentences that differ only in their attributes, so we must learn to disentangle attributes from attribute-independent content in an unsupervised way. Previous work using adversarial methods has struggled to produce high-quality outputs. In this paper, we propose simpler methods motivated by the observation that text attributes are often marked by distinctive phrases (e.g., "too small"). Our strongest method extracts content words by deleting phrases associated with the sentence's original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output. On human evaluation, our best method generates grammatical and appropriate responses on 22% more inputs than the best previous system, averaged over three attribute transfer datasets: altering sentiment of reviews on Yelp, altering sentiment of reviews on Amazon, and altering image captions to be more romantic or humorous.
    Recurrent neural networkInductive biasAutoencoderHidden stateEmbeddingWord vectorsLong short term memoryNaive Bayes classifierHyperparameterEuclidean distance...
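    The "delete" step can be caricatured with a simple salience score: a word is an attribute marker if it appears far more often under one label than the other. Everything below (the toy corpus, the smoothing, the threshold) is my hypothetical illustration, not the paper's actual model:

```python
from collections import Counter

# Tiny toy corpus labeled by sentiment attribute.
neg = ["screen is too small", "battery is too small", "the food was bland"]
pos = ["screen is just the right size", "the food was great"]

def counts(sents):
    c = Counter()
    for s in sents:
        c.update(s.split())
    return c

neg_c, pos_c = counts(neg), counts(pos)

def salience(word, smooth=1.0):
    # Smoothed ratio of negative-label to positive-label frequency.
    return (neg_c[word] + smooth) / (pos_c[word] + smooth)

def delete_markers(sentence, threshold=2.0):
    # Keep only attribute-independent content words.
    return " ".join(w for w in sentence.split() if salience(w) < threshold)

print(delete_markers("screen is too small"))  # "screen is"
```

    The retrieved target-attribute phrase and a neural combiner would then fill in the deleted slot, per the abstract's pipeline.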
  • Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable value; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of the space of visual tasks. This is done via finding (first- and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emergent relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3 (compared to training independently) while keeping the performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure, including a solver that users can employ to devise efficient supervision policies for their use cases.
    TaxonomyInductive transferArchitectureClassificationRankingNeural networkStatisticsHypergraphGround truthImage Processing...
  • The current dominant paradigm for imitation learning relies on strong supervision of expert actions to learn both 'what' and 'how' to imitate. We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after seeing just a sequence of images demonstrating the desired task. Our method is 'zero-shot' in the sense that the agent never has access to expert actions during training or for the task demonstration at inference. We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation with a Baxter robot and navigation in previously unseen office environments with a TurtleBot. Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance. Videos, models, and more details are available at https://pathak22.github.io/zeroshot-imitation/
    InferenceGround truthFeature spaceVisual observationOptimizationArchitectureOrientationRoboticsEntropySupervised learning...
  • D. Bennequin and P. Baudot introduced a cohomological construction adapted to information theory, called "information cohomology" (see "The homological nature of Entropy", 2015). Our text serves as a detailed introduction to information cohomology, containing the necessary background in probability theory and homological algebra. It makes explicit the link with topos theory, as introduced by Grothendieck, Verdier and their collaborators in the SGA IV. It also contains several new constructions and results. (1) We define generalized information structures, as categories of finite random variables related by a notion of extension or refinement; probability spaces are models (or representations) for these general structures. Generalized information structures form a category with finite products and coproducts. We prove that information cohomology is invariant under isomorphisms of generalized structures. (2) We prove that the relatively-free bar construction gives a projective object for the computation of cohomology. (3) We provide detailed computations of $H^1$ and describe the "degenerate" cases. (4) We establish the homological nature of Tsallis entropy. (5) We re-interpret Shannon's axioms for a 'measure of choice' in the light of this theory and provide a combinatorial justification for his recurrence formula.
    CohomologyInformation theoryTsallis entropyProjective objectHomological algebraIsomorphismEntropyProbabilityTheoryProbability theory...
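    For reference, the Tsallis entropy whose homological nature is established in point (4) is the standard one-parameter deformation of Shannon entropy:

```latex
S_q(p_1,\dots,p_n) = \frac{1}{q-1}\Bigl(1 - \sum_{i=1}^{n} p_i^{\,q}\Bigr),
\qquad q > 0,\ q \neq 1,
```

    which recovers the Shannon entropy $-\sum_i p_i \ln p_i$ in the limit $q \to 1$.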
  • We study the dynamics of a supersonically expanding ring-shaped Bose-Einstein condensate both experimentally and theoretically. The expansion redshifts long-wavelength excitations, as in an expanding universe. After expansion, energy in the radial mode leads to the production of bulk topological excitations -- solitons and vortices -- driving the production of a large number of azimuthal phonons and, at late times, causing stochastic persistent currents. These complex nonlinear dynamics, fueled by the energy stored coherently in one mode, are reminiscent of a type of "preheating" that may have taken place at the end of inflation.
    PhononBose-Einstein condensateSolitonPersistent currentExpanding universeSpeed of soundWave equationConfinementInflationWinding number...
  • Assuming a closed universe with slight positive curvature, cosmic expansion is modeled as a heat engine where the "system" is defined collectively as those regions of space within the observable universe which will later evolve into voids or empty space, and the "surroundings" are identified collectively as those pockets of space which will eventually develop into matter-filled galaxies, clusters, super-clusters and filament walls. Using this model, we show that the energy needed for cosmic expansion can be found using basic thermodynamic principles, and that cosmic expansion had as its origin a finite initial energy density, pressure, volume, and temperature. Inflation in the traditional sense, with the inflaton field, may also not be required, as it can be argued that homogeneities and in-homogeneities in the WMAP temperature profile can be attributed to quantum mechanical fluctuations about a fixed background temperature in the initial isothermal expansion phase. Fluctuations in temperature can cause certain regions of space to lose heat to other pockets, producing voids and thereby fueling expansion of the latter and creating slightly cooler temperatures in the former, where matter will later congregate. Upon freeze-out, this could produce the observed WMAP signature with its associated CBR fluctuation in magnitude. Finally, we estimate that the freeze-out temperature and time for WMAP in-homogeneities occurred at roughly $3.02 \times 10^{27}$ K and $2.54 \times 10^{-35}$ s, respectively, after first initiation of volume expansion, in line with current estimates for the end of the inflationary epoch. The heat absorbed in the inflationary phase is estimated to be $Q = 1.81 \times 10^{94}$ J, and the system volume increases by a factor of only 5.65. The bubble voids in the observable universe increase, collectively, in volume from about $0.046\ \mathrm{m}^3$ to $0.262\ \mathrm{m}^3$ within this time.
    Expansion of the UniverseInflationVoidWilkinson Microwave Anisotropy ProbeFreeze-outMessier 3Temperature profileClosed universeGalaxyCurvature...
  • In this paper, we propose a simple neural net that requires only $O(n\log_2 k)$ qubits and $O(nk)$ quantum gates: here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net, which involves $O(k^n)$ nodes in the hidden layer. Then, we show that the network uses a periodic activation function of cosine values of the linear combinations of the inputs and weights. The backpropagation is described through gradient descent, and then the Iris and breast-cancer datasets are used for the simulations. The numerical results indicate that the network can be used in machine learning problems and that it may provide exponential speedup over the same structured classical neural net.
    Activation functionQubitHidden layerQuantum circuitNeural networkQuantum gatesMachine learningSuperpositionBackpropagationQuantum machine learning...
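    The described periodic activation has a simple classical analogue: a unit whose output is $\cos(\mathbf{w}\cdot\mathbf{x})$, trainable by gradient descent since $\partial_{w_i}\cos(\mathbf{w}\cdot\mathbf{x}) = -x_i\sin(\mathbf{w}\cdot\mathbf{x})$. This sketch is illustrative only; the paper realizes the function via a quantum circuit.

```python
import math

def cosine_unit(x, w):
    """Output of a single unit with periodic activation: cos(w . x)."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return math.cos(s)

def cosine_unit_grad(x, w):
    """Gradient for gradient descent: d/dw_i cos(w . x) = -x_i sin(w . x)."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return [-xi * math.sin(s) for xi in x]

print(cosine_unit([1.0, 2.0], [0.0, 0.0]))  # cos(0) = 1.0
```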
  • In this paper, we study the homogeneity of the GRB distribution using a subsample of the Greiner GRB catalogue, which contains 314 objects with redshift $0<z<2.5$ (244 of them discovered by the Swift GRB Mission). We try to resolve the tension between the new observations and the current theory of structure formation and growth. To test the results against the possible biases in redshift determination and the incompleteness of the Greiner sample, we also apply our analysis to the 244 GRBs discovered by Swift and the subsample presented by the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (SHOALS). The real space two-point correlation function (2PCF) of GRBs, $\xi(r)$, is calculated using a Landy-Szalay estimator. We perform a standard least-$\chi^2$ fit to the measured 2PCFs of GRBs. We use the best-fit 2PCF to deduce a recently defined homogeneity scale. The homogeneity scale, $R_H$, is defined as the comoving radius of the sphere inside which the number of GRBs $N(<r)$ is proportional to $r^3$ within $1\%$, or equivalently above which the correlation dimension of the sample $D_2$ is within $1\%$ of $D_2=3$. For the Swift subsample of 244 GRBs, the correlation length and slope are $r_0= 387.51 \pm 132.75~h^{-1}$Mpc and $\gamma = 1.57\pm 0.65$ (at $1\sigma$ confidence level). The corresponding scale for a homogeneous distribution of GRBs is $r\geq 7,700~h^{-1}$Mpc. The results help to alleviate the tension between the new discovery of the excess clustering of GRBs and the cosmological principle of large-scale homogeneity. It implies that very massive structures in the relatively local Universe do not necessarily violate the cosmological principle and could conceivably be present.
    HomogenizationTwo-point correlation functionQuasarCorrelation dimensionGamma ray burstCosmological principleReal spaceCosmologyFractal dimensionHost galaxy...
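    Under one common convention (which may differ in detail from the paper's estimator), a power-law 2PCF $\xi(r) = (r/r_0)^{-\gamma}$ gives $N(<r) \propto r^3\bigl[1 + \tfrac{3}{3-\gamma}(r/r_0)^{-\gamma}\bigr]$, and the homogeneity scale solves $D_2(R_H) = 2.97$. A sketch using the quoted best-fit values:

```python
# Homogeneity scale from a power-law 2PCF, xi(r) = (r/r0)^(-gamma).
# D2(r) = d ln N(<r) / d ln r approaches 3 from below as r grows.
r0, gamma = 387.51, 1.57   # best-fit values quoted for the Swift subsample

def D2(r):
    a = (3.0 / (3.0 - gamma)) * (r / r0) ** (-gamma)
    return 3.0 - gamma * a / (1.0 + a)

# Bisect for D2(R_H) = 2.97, i.e. within 1% of D2 = 3.
lo, hi = r0, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if D2(mid) < 2.97 else (lo, mid)
print(round(lo))  # a few thousand h^-1 Mpc, of the order of the quoted scale
```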