Recently bookmarked papers, with concepts:
  • A block cipher is intended to be computationally indistinguishable from a random permutation of appropriate domain and range. But what are the properties of a random permutation? With the aid of exponential and ordinary generating functions, we derive a series of corollaries of interest to the cryptographic community. These follow from the Strong Cycle Structure Theorem of permutations, and are useful in rendering rigorous two attacks on KeeLoq, a block cipher in widespread use. These attacks formerly had only heuristic approximations of their probability of success. Moreover, we delineate an attack against the (roughly) millionth-fold iteration of a random permutation. In particular, we create a distinguishing attack, whereby a cipher iterated a number of times equal to a particularly chosen highly composite number is breakable, while merely one fewer iteration is considerably more secure. We then extend this to a key-recovery attack in a "Triple-DES"-style construction, but using AES-256 and iterating the middle cipher (roughly) a million-fold. It is hoped that these results will showcase the utility of exponential and ordinary generating functions and will encourage their use in cryptanalytic research. (A toy numerical illustration of the cycle-structure distinguisher follows below.)
    Random permutation, Security, Statistics, Highly composite number, Permutation, Periodate, Probability, ...
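A minimal sketch of the cycle-structure effect behind the distinguishing attack (my own illustration, not the paper's code; the choice k = lcm(1..16) = 720720 is a hypothetical stand-in for the "millionth-fold" iteration): an element is a fixed point of sigma^k exactly when its cycle length divides k, so a highly composite k collects far more fixed points than k-1.

```python
import random

def cycle_lengths(p):
    """Cycle lengths of the permutation given as a list p with p[i] = sigma(i)."""
    seen = [False] * len(p)
    lengths = []
    for i in range(len(p)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            lengths.append(length)
    return lengths

def fixed_points_of_power(p, k):
    # i is a fixed point of sigma^k  iff  the cycle containing i has length dividing k
    return sum(L for L in cycle_lengths(p) if k % L == 0)

n, k, trials = 10**4, 720720, 100        # k = lcm(1..16), highly composite
avg = lambda f: sum(f() for _ in range(trials)) / trials
perm = lambda: random.sample(range(n), n)
print("fixed points of sigma^k:    ", avg(lambda: fixed_points_of_power(perm(), k)))
print("fixed points of sigma^(k-1):", avg(lambda: fixed_points_of_power(perm(), k - 1)))
```

The expected count for sigma^k equals the number of divisors of k not exceeding n, so the two averages differ dramatically; statements of exactly this kind are what the paper derives rigorously via generating functions.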
  • In this paper we present FeynRules, a new Mathematica package that facilitates the implementation of new particle physics models. After the user implements the basic model information (e.g. particle content, parameters and Lagrangian), FeynRules derives the Feynman rules and stores them in a generic form suitable for translation to any Feynman diagram calculation program. The model can then be translated to the format specific to a particular Feynman diagram calculator via FeynRules translation interfaces. Such interfaces have been written for CalcHEP/CompHEP, FeynArts/FormCalc, MadGraph/MadEvent and Sherpa, making it possible to write a new model once and have it work in all of these programs. In this paper, we describe how to implement a new model, generate the Feynman rules, use a generic translation interface, and write a new translation interface. We also discuss the details of the FeynRules code.
    Feynman diagrams, Feynman rules, Standard Model, Sherpa, Programming Language, Effective theory, Gamma matrices, Monte Carlo method, Large Hadron Collider, Scalar field, ...
  • We study deformations of N=1 supersymmetric QCD that exhibit a rich landscape of supersymmetric and non-supersymmetric vacua.
    Superpotential, Eigenvalue, Gauge theory, Metastate, Quark, Electricity and magnetism, Expectation Value, F-term, Global symmetry, Ranking, ...
  • The compensation approach is applied to the calculation of parameters of the Standard Model. Examples of sets of compensation equations are considered. The possibility of determining the mass ratios of the fundamental quarks and leptons is demonstrated, as well as of determining the important parameter $\sin^2\theta_W$. If a non-trivial solution of a set of compensation equations corresponding to an effective interaction of the electroweak gauge bosons is realized, a satisfactory value for the electromagnetic fine structure constant $\alpha$ at the scale $M_W$ may be obtained. Arguments are put forward for the possibility of calculating the fundamental parameters of the Standard Model in the framework of the compensation approach.
    Standard Model, Mass ratio, Form factor, Fine structure constant, Quark, Electroweak interaction, Nambu-Jona-Lasinio model, Four-fermion interactions, Coupling constant, Strong interactions, ...
  • In this pedagogical note I discuss one-loop integrals where (i) different regions of the integration domain lead to divergences and (ii) these divergences cancel in the sum over all regions. Such integrals cannot be calculated without regularisation, in spite of the fact that they yield a finite result. A typical example where such integrals occur is the decay H --> gamma gamma. (The standard dimensional-regularisation formula is recalled below.)
    Loop integral, Ultraviolet divergence, Loop momentum, Laurent series, Higgs boson, Segmentation, Factorisation, Metric tensor, Measurement, Taylor series, ...
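For orientation, the standard dimensional-regularisation formula (with $D = 4 - 2\varepsilon$; a textbook result, not taken from this note) that gives meaning to the individually divergent regions:

```latex
\mu^{2\varepsilon}\int \frac{\mathrm{d}^D k}{(2\pi)^D}\,
\frac{1}{(k^2-\Delta+i0)^2}
= \frac{i}{16\pi^2}\left(\frac{1}{\varepsilon}-\gamma_E+\ln 4\pi
-\ln\frac{\Delta}{\mu^2}\right)+\mathcal{O}(\varepsilon)
```

In the sum over all regions the $1/\varepsilon$ poles cancel, leaving only the finite logarithms, which is the mechanism the note illustrates with H --> gamma gamma.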
  • A joint effort of cryogenic microcalorimetry (CM) and high-precision Penning-trap mass spectrometry (PT-MS) in investigating atomic orbital electron capture (EC) can shed light on the possible existence of heavy sterile neutrinos with masses from 0.5 to 100 keV. Sterile neutrinos are expected to perturb the shape of the atomic de-excitation spectrum measured by CM after a capture of the atomic orbital electrons by a nucleus. This effect should be observable in the ratios of the capture probabilities from different orbits. The sensitivity of the ratio values to the contribution of sterile neutrinos strongly depends on how accurately the mass difference between the parent and the daughter nuclides of EC-transitions can be measured by, e.g., PT-MS. A comparison of such probability ratios in different isotopes of a certain chemical element allows one to exclude many systematic uncertainties and thus could make feasible a determination of the contribution of sterile neutrinos on a level below 1%. Several electron capture transitions suitable for such measurements are discussed.
    Electron capture, Sterile neutrino, Atomic orbital, Heavy sterile neutrino, Systematic error, Isotope, Penning trap, Measurement, Mass, Probability, ...
  • We consider chiral liquids, i.e. media consisting of massless fermions with a right-left asymmetry. In such media one expects the existence of an equilibrium electromagnetic current flowing along an external magnetic field. The current is predicted to be dissipation-free. We argue that chiral liquids in the hydrodynamic approximation should actually satisfy further constraints, such as infinite classical conductivity. Inclusion of higher orders in the electromagnetic interactions is crucial for reaching these conclusions.
    Helicity, Fluid dynamics, Chirality, Chiral liquid, Liquids, Quantum anomaly, Dissipation, Vorticity, Chiral anomaly, Magnetic helicity, ...
  • We review recent progress in Bipartite Field Theories. We cover topics such as their gauge dynamics, emergence of toric Calabi-Yau manifolds as master and moduli spaces, string theory embedding, relationships to on-shell diagrams, connections to cluster algebras and the Grassmannian, and applications to graph equivalence and stratification of the Grassmannian.
    Orientation, Graph, Polytope, Bipartite network, Grassmannian, Riemann surface, Gauge theory, Stratification, Superpotential, D-brane, ...
  • We study the moduli spaces of self-dual instantons on CP^2 in a simple group G. When G is a classical group, these instanton solutions can be realised using ADHM-like constructions which can be naturally embedded into certain three dimensional quiver gauge theories with 4 supercharges. The topological data for such instanton bundles and their relations to the quiver gauge theories are described. Based on such gauge theory constructions, we compute the Hilbert series of the moduli spaces of instantons that correspond to various configurations. The results turn out to be equal to the Hilbert series of their counterparts on C^2 upon an appropriate mapping. We check the former against the Hilbert series derived from the blowup formula for the Hirzebruch surface F_1 and find an agreement. The connection between the moduli spaces of instantons on such two spaces is explained in detail.
    Instanton, Quiver, Gauge theory, Chirality, Bundle, ADHM construction, Chern class, Partition function, Supercharge, Holomorph, ...
  • Codimension two defects of the $(0,2)$ six dimensional theory $\mathscr{X}[\mathfrak{g}]$ have played an important role in the understanding of dualities for certain $\mathcal{N}=2$ SCFTs in four dimensions. These defects are typically understood by their behaviour under various dimensional reduction schemes. In their various guises, the defects admit partial descriptions in terms of singularities of Hitchin systems, Nahm boundary conditions or Toda operators. Here, a uniform dictionary between these descriptions is given for a large class of such defects in $\mathscr{X}[\mathfrak{g}], \mathfrak{g} \in A,D,E$.
    Nilpotent, Duality, Codimension, Weyl group, Springer correspondence, S-duality, Subgroup, Representation theory, Partition function, Irreducible representation, ...
  • In this paper we establish relations between three enumerative geometry tau-functions, namely the Kontsevich-Witten, Hurwitz and Hodge tau-functions. The relations allow us to describe the tau-functions in terms of matrix integrals, Virasoro constraints and Kac-Schwarz operators. All constructed operators belong to the algebra (or group) of symmetries of the KP hierarchy.
    Tau-function, Virasoro constraint, Kadomtsev-Petviashvili hierarchy, Algebra, Geometry, Symmetry, ...
  • We study 't Hooft anomalies for discrete global symmetries in bosonic theories in 2, 3 and 4 dimensions. We show that such anomalies may arise in gauge theories with topological terms in the action, if the total symmetry group is a nontrivial extension of the global symmetry by the gauge symmetry. Sometimes the 't Hooft anomaly for a d-dimensional theory with a global symmetry G can be canceled by anomaly inflow from a (d+1)-dimensional topological gauge theory with gauge group G. Such d-dimensional theories can live on the surfaces of Symmetry Protected Topological Phases. We also give examples of theories with more severe 't Hooft anomalies which cannot be canceled in this way.
    Gauge field, Gauge theory, Global symmetry, Cohomology, Gauge transformation, Anomaly inflow, Gauge invariance, Chern-Simons term, Symmetry group, Free fermions, ...
  • The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading order contribution. In the present paper we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses, as input, the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for $\pi\pi$ elastic scattering. The formalism also includes experimental measurements on the modulus in the region 0.65-0.70 GeV, taken from the most recent $e^+e^-\to \pi^+\pi^-$ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR and KLOE, we make the predictions $a_\mu^{\pi\pi,{\rm LO}}[2m_\pi,\,0.30\,{\rm GeV}]=(0.553 \pm 0.004) \times 10^{-10}$ and $a_\mu^{\pi\pi,{\rm LO}}[0.30\,{\rm GeV},\,0.63\,{\rm GeV}]=(133.083 \pm 0.837)\times 10^{-10}$. These are consistent with the other recent determinations, and have slightly smaller errors.
    Form factor, Muon, Pion, Unitarity, Charge radius, Isospin, Hadronization, Elasticity, Statistics, Vacuum polarization, ...
  • We review and analyze the available information for nuclear fusion cross sections that are most important for solar energy generation and solar neutrino production. We provide best values for the low-energy cross-section factors and, wherever possible, estimates of the uncertainties. We also describe the most important experiments and calculations that are required in order to improve our knowledge of solar fusion rates.
    Neutrino, Solar neutrino, Sun, Electron capture, Nuclear fusion, Screening effect, Systematic error, SNO+, CNO cycle, Statistics, ...
  • The flux of papers from electron-positron colliders containing data on the photon structure function ended naturally around 2005. It is thus timely to review the theoretical basis and confront the predictions with a summary of the experimental results. The discussion will focus on the increase of the structure function with x (for x away from the boundaries) and its rise with log Q^2, both characteristics being dramatically different from hadronic structure functions. Comparing the data with a specific QCD prediction, a new determination of the QCD coupling constant is presented. The agreement of the experimental observations with the theoretical calculations of the real and virtual photon structure is a striking success of QCD.
    Quark, Hadronization, Next-to-leading order computation, Positron, Light quark, Parton, Vector meson, Interference, Strong coupling constant, L3, ...
  • There has been substantial progress in recent years in the quantitative understanding of the nonequilibrium time evolution of quantum fields. Important topical applications, in particular in high energy particle physics and cosmology, involve dynamics of quantum fields far away from the ground state or thermal equilibrium. In these cases, standard approaches based on small deviations from equilibrium, or on a sufficient homogeneity in time underlying kinetic descriptions, are not applicable. A particular challenge is to connect the far-from-equilibrium dynamics at early times with the approach to thermal equilibrium at late times. Understanding the "link" between the early- and the late-time behavior of quantum fields is crucial for a wide range of phenomena. For the first time questions such as the explosive particle production at the end of the inflationary universe, including the subsequent process of thermalization, can be addressed in quantum field theory from first principles. The progress in this field is based on efficient functional integral techniques, so-called n-particle irreducible effective actions, for which powerful nonperturbative approximation schemes are available. Here we give an introduction to these techniques and show how they can be applied in practice. Though we focus on particle physics and cosmology applications, we emphasize that these techniques can be equally applied to other nonequilibrium phenomena in complex many body systems.
    Effective action, G2, Evolution equation, Self-energy, Statistics, Density matrix, Renormalization, Two-point correlation function, Graph, Quantum field theory, ...
  • We solve the nonequilibrium dynamics of a 3+1 dimensional theory with Dirac fermions coupled to scalars via a chirally invariant Yukawa interaction. The results are obtained from a systematic coupling expansion of the 2PI effective action to lowest non-trivial order, which includes scattering as well as memory and off-shell effects. The dynamics is solved numerically without further approximation, for different far-from-equilibrium initial conditions. The late-time behavior is demonstrated to be insensitive to the details of the initial conditions and to be uniquely determined by the initial energy density. Moreover, we show that at late time the system is very well characterized by a thermal ensemble. In particular, we are able to observe the emergence of Fermi--Dirac and Bose--Einstein distributions from the nonequilibrium dynamics.
    Evolution equation, Self-energy, Statistics, Effective action, Chirality, Bose-Einstein statistics, Chiral symmetry, Bosonization, Dirac fermion, Fermionic field, ...
  • Majorana fermions in a superconductor hybrid system are charge neutral zero-energy states. For the detection of this unique feature, we propose an interferometry of a chiral Majorana edge channel, formed along the interface between a superconductor and a topological insulator under an external magnetic field. The superconductor is of a ring shape and has a Josephson junction that allows the Majorana state to enclose continuously tunable magnetic flux. Zero-bias differential electron conductance between the Majorana state and a normal lead is found to be independent of the flux at zero temperature, manifesting the Majorana feature of a charge neutral zero-energy state. In contrast, the same setup on graphene has no Majorana state and shows Aharonov-Bohm effects.
    Graphene, Chirality, Majorana fermion, Edge excitations, Aharonov-Bohm effect, Superconductivity, Zeeman Energy, Superconductor, Topological insulator, Interferometry, ...
  • We have analyzed a large sample of clean blazars detected by the Fermi Large Area Telescope (LAT). Using results from the literature and our own calculations, we obtained the intrinsic $\gamma$-ray luminosity (corrected for the beaming effect), the black hole mass, the broad-line luminosity (used as a proxy for disk luminosity), the jet kinetic power from "cavity" power, and the bulk Lorentz factor for parsec-scale radio emission, and studied the distributions of these parameters and the relations between them. Our main results are as follows. (i) After excluding the beaming and redshift effects, the intrinsic $\gamma$-ray luminosity correlates significantly with the broad-line luminosity, the black hole mass and the Eddington ratio. Our results confirm the physical distinction between BL Lacs and FSRQs. (ii) The correlation between broad-line luminosity and jet power is significant, which supports a close link between jet power and accretion. Jet power depends on both the Eddington ratio and the black hole mass. We also obtain $\log L_{\rm BLR} \sim (0.98\pm0.07)\,\log P_{\rm jet}$ for all blazars, which is consistent with the theoretically predicted coefficient. These results support the picture in which jets are powered by energy extraction from both accretion and black hole spin (i.e., not by accretion only). (iii) For almost all BL Lacs, $P_{\rm jet}>L_{\rm disk}$; for most FSRQs, $P_{\rm jet}<L_{\rm disk}$. The "jet-dominance" (parameterized as $\frac{P_{\rm jet}}{L_{\rm disk}}$) is mainly controlled by the bolometric luminosity. Finally, the radiative efficiency of the $\gamma$-ray emission and the properties of TeV blazars detected by Fermi LAT are discussed.
    Astrophysical jet, Luminosity, Blazar, Black hole, Flat spectrum radio quasar, Accretion, Lorentz factor, Broad-line region, FERMI telescope, Active Galactic Nuclei, ...
  • We derive a general criterion that defines all single-field models leading to Starobinsky-like inflation and to universal predictions for the spectral index and tensor-to-scalar ratio, which are in agreement with Planck data. Out of all the theories that satisfy this criterion, we single out a special class of models with the interesting property of retaining perturbative unitarity up to the Planck scale. These models are based on induced gravity, with the Planck mass determined by the vacuum expectation value of the inflaton.
    Unitarity, Attractor, Planck scale, Standard Model, Higgs inflation, Spectral index of power spectrum, Inflaton, Model of inflation, Einstein frame, Planck mission, ...
  • (abridged) Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic foreground (Oppermann et al., 2012), we set out to estimate the extragalactic contributions. We discuss different strategies and the problems involved. In particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We present a few possibilities for improved estimates using the existing foreground map, allowing for imperfectly described observational noise. In this context, we point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise, and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. Finally, we develop a reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome this degeneracy. We demonstrate its performance in a simulation, yielding a high quality reconstruction of the Galactic Faraday depth, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each source. We apply this reconstruction technique to a catalog of Faraday rotation observations. We vary our assumptions about the data, showing that the dispersion of extragalactic contributions to observed Faraday depths is likely lower than 7 rad/m^2, in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly constrained by the data in most cases. Posterior samples for the extragalactic contributions and all results of our fiducial model are provided online. (The standard Faraday-depth relation is recalled below.)
    Faraday, Polar caps, Faraday rotation, Covariance, Angular power spectrum, Galactic latitude, Simulations, Statistics, Expectation Value, Galactic plane, ...
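For reference, the Faraday depth discussed above is the standard line-of-sight integral (textbook convention, not specific to this paper's algorithm), with $n_e$ in cm$^{-3}$, $B_\parallel$ in $\mu$G and $l$ in pc:

```latex
\phi = 0.81 \int_{\rm source}^{\rm observer} n_e\, B_\parallel\, \mathrm{d}l
\quad {\rm rad\,m^{-2}}
```

The observed value decomposes as $\phi_{\rm obs} = \phi_{\rm Gal} + \phi_{\rm extragal} + {\rm noise}$, which is why simply subtracting a foreground map cannot, by itself, isolate the extragalactic term.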
  • Using one-dimensional models, we show that a helical magnetic field with an appropriate sign of helicity can compensate the Faraday depolarization resulting from the superposition of Faraday-rotated polarization planes from a spatially extended source. For radio emission from a helical magnetic field, the polarization as a function of the square of the wavelength becomes asymmetric with respect to zero. Mathematically speaking, the resulting emission then occurs either at observable or at unobservable (imaginary) wavelengths. We demonstrate that rotation measure (RM) synthesis allows the reconstruction of the underlying Faraday dispersion function in the former case, but not in the latter. The presence of positive magnetic helicity can thus be detected by observing positive RM in highly polarized regions in the sky and negative RM in weakly polarized regions. Conversely, negative magnetic helicity can be detected by observing negative RM in highly polarized regions and positive RM in weakly polarized regions. The simultaneous presence of two magnetic constituents with opposite signs of helicity is shown to possess signatures that can be quantified through polarization peaks at specific wavelengths and the gradient of the phase of the Faraday dispersion function. We discuss the possibility of detecting magnetic fields with such properties in external galaxies using the Square Kilometre Array. (The defining relation of the Faraday dispersion function is recalled below.)
    Faraday, Helicity, Magnetic helicity, Helical magnetic field, Rotation measure of the plasma, Galaxy, Line of sight, Amplitude, Orientation, Faraday rotation, ...
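RM synthesis, as used above, inverts the standard relation between the complex polarized intensity and the Faraday dispersion function $F(\phi)$ (the usual definition in the RM-synthesis literature):

```latex
P(\lambda^2) = \int_{-\infty}^{\infty} F(\phi)\, e^{2i\phi\lambda^2}\,\mathrm{d}\phi
```

This Fourier-type transform can only be sampled at observable $\lambda^2 \ge 0$; the "imaginary wavelength" case in the abstract is precisely the regime where this sampling fails.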
  • The mass and composition of dark matter (DM) particles and the shape and damping scales of the power spectrum of density perturbations can be estimated from recent observations of DM dominated relaxed objects -- dwarf galaxies and clusters of galaxies. We confirm that the observed velocity dispersion of dSph galaxies agrees with the possible existence of DM particles with mass $m_w \sim 15-20\,{\rm keV}$. A more complex analysis utilizes the well-known semi-analytical model of the formation of DM halos in order to describe the basic properties of the corresponding objects and to estimate their redshifts of formation. For DM halos this redshift is determined by their masses and the initial power spectrum of density perturbations. This correlation allows us to partly reconstruct the small-scale spectrum of perturbations. We consider the available sample of suitable observed objects, which includes $\sim 40$ DM dominated galaxies and $\sim 40$ clusters of galaxies, and we show that the observed characteristics of these objects are inconsistent with the expectations of the standard $\Lambda$CDM cosmological model. However, they are consistent with a more complex DM model with a significant contribution of a hot DM-like power spectrum with a relatively large damping scale ($\sim 10-30\,{\rm Mpc}$). The HDM component of DM decelerates but does not prevent the formation of low mass objects. These preliminary inferences require confirmation by more representative observational data that should include -- if possible -- DM dominated objects with intermediate masses $M\sim 10^{10} - 10^{12} M_\odot$. Comparison of the observed properties of such objects with numerical simulations will provide a more detailed picture of the process of formation of DM halos.
    Cluster of galaxies, Dark matter, Matter power spectrum, Dark matter particle, Dark matter halo, Simulations, Warm dark matter, Virial mass, Cold dark matter, Galaxy, ...
  • Turbulence is ubiquitous in the interstellar medium and plays a major role in several processes such as the formation of dense structures and stars, the stability of molecular clouds, the amplification of magnetic fields, and the re-acceleration and diffusion of cosmic rays. Despite its importance, interstellar turbulence, like turbulence in general, is far from being fully understood. In this review we present the basics of turbulence physics, focusing on the statistics of its structure and energy cascade. We explore the physics of compressible and incompressible turbulent flows, as well as magnetized cases. The most relevant observational techniques that provide quantitative insights into interstellar turbulence are also presented. We also discuss the main difficulties in developing a three-dimensional view of interstellar turbulence from these observations. Finally, we briefly present what could be the main sources of turbulence in the interstellar medium.
    Turbulence, Interstellar medium, Statistics, Molecular cloud, Compressibility, Magnetohydrodynamic turbulence, Maser, Line of sight, Velocity dispersion, Numerical simulation, ...
  • This paper is devoted to a fast algorithm for Hankel tensor-vector products. For this purpose, we first discuss a special class of Hankel tensors that can be diagonalized by the Fourier matrix, called anti-circulant tensors. Then we obtain a fast algorithm for Hankel tensor-vector products by embedding a Hankel tensor into a larger anti-circulant tensor. The computational complexity is about $\mathcal{O}(m^2 n \log mn)$ for a square Hankel tensor of order $m$ and dimension $n$, and the numerical examples also show the efficiency of this scheme. Moreover, the block version for multi-level block Hankel tensors is discussed as well. Finally, we apply the fast algorithm to exponential data fitting and the block version to 2D exponential data fitting for higher performance. (A matrix-case sketch of the anti-circulant embedding follows below.)
    Fast Fourier transform, Compressibility, Vandermonde determinant, Singular value, Degree of freedom, Circulant matrix, Total least squares, Eigenvalue, Transpose, Exact solution, ...
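A sketch of the embedding idea in its simplest, order m = 2 (matrix) case, written from the abstract alone: an n x n Hankel matrix H[i,j] = h[i+j] embeds into a (2n-1)-dimensional anti-circulant matrix, which the DFT diagonalizes, so the product H x costs O(n log n).

```python
import numpy as np

def hankel_matvec_fft(h, x):
    """Multiply the n x n Hankel matrix H[i, j] = h[i + j] (h has length 2n - 1)
    by x in O(n log n) via the anti-circulant product y[i] = sum_j h[(i+j) % N] x[j]."""
    n = len(x)
    N = 2 * n - 1
    assert len(h) == N
    xt = np.zeros(N)
    xt[:n] = x                               # zero-pad x to the embedding size
    xrev = np.roll(xt[::-1], 1)              # xrev[j] = xt[(-j) % N]
    # anti-circulant product = circular convolution with the reversed vector
    y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(xrev)).real
    return y[:n]

# check against the dense product
n = 5
rng = np.random.default_rng(1)
h, x = rng.random(2 * n - 1), rng.random(n)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
print(np.allclose(H @ x, hankel_matvec_fft(h, x)))   # True
```

The paper's contribution is the order-m tensor generalization of this classical trick, at the quoted cost of about $\mathcal{O}(m^2 n \log mn)$.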
  • We present a new determination of the concentration-mass relation for galaxy clusters based on our comprehensive lensing analysis of 19 X-ray selected galaxy clusters from the Cluster Lensing and Supernova Survey with Hubble (CLASH). Our sample spans a redshift range between 0.19 and 0.89. We combine weak lensing constraints from the Hubble Space Telescope (HST) and from ground-based wide-field data with strong lensing constraints from HST. The results are reconstructions of the surface-mass density for all CLASH clusters on multi-scale grids. Our derivation of NFW parameters yields virial masses between 0.53 x 10^15 and 1.76 x 10^15 M_sol/h, and the halo concentrations are distributed around c_200c ~ 3.7 with a 1-sigma significant negative trend with cluster mass. We find an excellent 4% agreement between our measured concentrations and the expectation from numerical simulations after accounting for the CLASH selection function based on X-ray morphology. The simulations are analyzed in 2D to account for possible biases in the lensing reconstructions due to projection effects. The theoretical concentration-mass (c-M) relation from our X-ray selected set of simulated clusters and the c-M relation derived directly from the CLASH data agree at the 90% confidence level. (The NFW definitions are recalled below.)
    Cluster of galaxies, Cluster Lensing And Supernova survey with Hubble, Simulations, Concentration-mass relation, Strong gravitational lensing, Weak lensing, Hubble Space Telescope, Relaxation, Cosmology, Numerical simulation, ...
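For reference, the profile being fitted is the standard NFW form (standard definitions, not specific to CLASH):

```latex
\rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1+r/r_s)^2},
\qquad c_{200c} \equiv \frac{r_{200c}}{r_s}
```

where $r_{200c}$ is the radius enclosing a mean density of 200 times the critical density at the cluster's redshift.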
  • Topological insulators (TIs) exhibit many exotic properties. In particular, a topological magneto-electric (TME) effect, quantized in units of the fine structure constant, exists in TIs. In this Letter, we study theoretically the scattering properties of electromagnetic waves by TI circular cylinders particularly in the Rayleigh scattering limit. Compared with ordinary dielectric cylinders, the scattering by TI cylinders shows many unusual features due to the TME effect. Two proposals are suggested to determine the TME effect of TIs simply based on measuring the electric-field components of scattered waves in the far field at one or two scattering angles. Our results could also offer a way to measure the fine structure constant.
    Topological insulator, Rayleigh scattering, Electromagnetism, Fine structure constant, Scattering matrix, Quantization, Axion, Magnetic monopole, Amplitude, Constitutive relation, ...
  • Completely positive, trace preserving (CPT) maps and Lindblad master equations are both widely used to describe the dynamics of open quantum systems. The connection between these two descriptions is a classic topic in mathematical physics. One direction was solved by the now famous result due to Lindblad, Kossakowski, Gorini and Sudarshan, who gave a complete characterisation of the master equations that generate completely positive semi-groups. However, the other direction has remained open: given a CPT map, is there a Lindblad master equation that generates it (and if so, can we find its form)? This is sometimes known as the Markovianity problem. Physically, it asks how one can deduce underlying physical processes from experimental observations. We give a complexity-theoretic answer to this problem: it is NP-hard. We also give an explicit algorithm that reduces the problem to integer semi-definite programming, a well-known NP problem. Together, these results imply that resolving the question of which CPT maps can be generated by master equations is tantamount to solving P=NP: any efficiently computable criterion for Markovianity would imply P=NP; whereas a proof that P=NP would imply that our algorithm already gives an efficiently computable criterion. Thus, unless P does equal NP, there cannot exist any simple criterion for determining when a CPT map has a master equation description. However, we also show that if the system dimension is fixed (relevant for current quantum process tomography experiments), then our algorithm scales efficiently in the required precision, allowing an underlying Lindblad master equation to be determined efficiently from even a single snapshot in this case. Our work also leads to similar complexity-theoretic answers to a related long-standing open problem in probability theory. (A toy principal-branch check follows below.)
    NP-hard problem, Eigenvalue, Master equation, Embedding problem, Complexity class, Quantum channel, Markov chain, Density matrix, Positive semi definite, Markov process, ...
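A toy numerical version of the easy direction of the problem above (my own sketch, not the paper's algorithm): take the principal matrix logarithm of a channel's transfer matrix and test conditional complete positivity of the candidate generator. The NP-hardness established in the paper comes from the freedom to add $2\pi i$ branches to the logarithm, which this sketch deliberately ignores.

```python
import numpy as np
from scipy.linalg import logm

d = 2  # qubit

def choi(S):
    # reshuffle a d^2 x d^2 superoperator (row-major vec convention) into its Choi matrix
    return S.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

# depolarizing channel rho -> (1 - p) rho + p I/d, written as a transfer matrix
p = 0.3
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        for k in range(d):
            for l in range(d):
                S[i*d + j, k*d + l] = (1 - p)*(i == k)*(j == l) + p*(i == j)*(k == l)/d

L = logm(S)                              # candidate generator: principal branch only
C = choi(L)
C = (C + C.conj().T) / 2                 # remove numerical noise; C should be Hermitian

omega = np.zeros(d * d)
omega[[k*d + k for k in range(d)]] = 1 / np.sqrt(d)      # maximally entangled state
P = np.eye(d * d) - np.outer(omega, omega)

# a Lindblad generator exists on this branch iff P C P >= 0 (conditional complete positivity)
print("min eigenvalue of P C P:", np.linalg.eigvalsh(P @ C @ P).min())
```

For the depolarizing channel the minimum eigenvalue comes out approximately zero, consistent with Markovianity; a clearly negative value would rule out a generator on the principal branch, though deciding the question over all branches is, per the paper, NP-hard.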
  • Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures, to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., "noiseless", "unitarily correctible", etc.) and introduce two new and natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.
    Alice and Bob, NP-hard problem, Quantum error correction, Decoherence-free subspaces, Classification, Isomorphism, Galaxy, Quantum mechanics, Algorithms, Transformations, ...
  • One of the most challenging open problems in quantum information theory is to clarify and quantify how entanglement behaves when part of an entangled state is sent through a quantum channel. Of central importance in the description of a quantum channel or completely positive map (CP-map) is the dual state associated to it. The present paper is a collection of well-known, less known and new results on quantum channels, presented in a unified way. We will show how this dual state induces nice characterizations of the extremal maps of the convex set of CP-maps, and how normal forms for states defined on a Hilbert space with a tensor product structure lead to interesting parameterizations of quantum channels.
    Entanglement, Quantum channel, Eigenvalue, Tensor product, Ranking, Qubit, Convex set, Quantum information theory, Duality, Quantum mechanics, ...
  • The general stable quantum memory unit is a hybrid consisting of a classical digit with a quantum digit (qudit) assigned to each classical state. The shape of the memory is the vector of sizes of these qudits, which may differ. We determine when N copies of a quantum memory A embed in N(1+o(1)) copies of another quantum memory B. This relationship captures the notion that B is as at least as useful as A for all purposes in the bulk limit. We show that the embeddings exist if and only if for all p >= 1, the p-norm of the shape of A does not exceed the p-norm of the shape of B. The log of the p-norm of the shape of A can be interpreted as the maximum of S(\rho) + H(\rho)/p (quantum entropy plus discounted classical entropy) taken over all mixed states \rho on A. We also establish a noiseless coding theorem that justifies these entropies. The noiseless coding theorem and the bulk embedding theorem together say that either A blindly bulk-encodes into B with perfect fidelity, or A admits a state that does not visibly bulk-encode into B with high fidelity. In conclusion, the utility of a hybrid quantum memory is determined by its simultaneous capacity for classical and quantum entropy, which is not a finite list of numbers, but rather a convex region in the classical-quantum entropy plane.
    Entropy, Hybridization, Quantum information theory, Mixed states, Positive element, Von Neumann algebra, Kelvin-Helmholtz timescale, Von neumann entropy, Quantum channel, Classical capacity, ...
  • Given a list of n complex numbers, when can it be the spectrum of a quantum channel, i.e., a completely positive trace preserving map? We provide an explicit solution for the n=4 case and show that in general the characterization of the non-zero part of the spectrum can essentially be given in terms of its classical counterpart - the non-zero spectrum of a stochastic matrix. A detailed comparison between the classical and quantum case is given. We discuss applications of our findings in the analysis of time-series and correlation functions and provide a general characterization of the peripheral spectrum, i.e., the set of eigenvalues of modulus one. We show that while the peripheral eigensystem has the same structure for all Schwarz maps, the constraints imposed on the rest of the spectrum change immediately if one departs from complete positivity. (A numerical illustration follows below.)
    Eigenvalue, Quantum channel, Complex number, Qubit, Ranking, Two-point correlation function, Permutation, Time Series, Spectral set, Singular value, ...
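A quick numerical illustration of two constraints mentioned above: the spectrum of a CPT map lies in the unit disk and is closed under complex conjugation, just like the non-zero spectrum of a stochastic matrix. (My own conventions; a random channel is drawn via the QR-based isometry trick.)

```python
import numpy as np

def random_channel_transfer(d, k, rng):
    """Transfer matrix T = sum_i A_i (x) conj(A_i) of a random CPT map, with Kraus
    blocks A_i cut from a random isometry V (so sum_i A_i^dag A_i = I)."""
    G = rng.standard_normal((d * k, d)) + 1j * rng.standard_normal((d * k, d))
    V, _ = np.linalg.qr(G)                           # isometry: V^dag V = I_d
    kraus = [V[i * d:(i + 1) * d, :] for i in range(k)]
    return sum(np.kron(A, A.conj()) for A in kraus)

rng = np.random.default_rng(0)
T = random_channel_transfer(d=3, k=4, rng=rng)
ev = np.linalg.eigvals(T)
print("spectral radius:", np.abs(ev).max())          # 1, from trace preservation
print("conjugation-closed:", np.allclose(np.poly(ev).imag, 0, atol=1e-8))
```

The conjugation symmetry follows from Hermiticity preservation; the paper's result is the precise characterization of which such lists of numbers can actually occur.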
  • This article is intended as a reference guide to various notions of monoidal categories and their associated string diagrams. It is hoped that this will be useful not just to mathematicians, but also to physicists, computer scientists, and others who use diagrammatic reasoning. We have opted for a somewhat informal treatment of topological notions, and have omitted most proofs. Nevertheless, the exposition is sufficiently detailed to make it clear what is presently known, and to serve as a starting place for more in-depth study. Where possible, we provide pointers to more rigorous treatments in the literature. Where we include results that have only been proved in special cases, we indicate this in the form of caveats.
    Monoid, Vector space, Commutative diagram, Tensor product, Language, Surveys, Topology, Droplet, Vector, Linear operator, ...
  • We study quantum information and computation from a novel point of view. Our approach is based on recasting the standard axiomatic presentation of quantum mechanics, due to von Neumann, at a more abstract level, of compact closed categories with biproducts. We show how the essential structures found in key quantum information protocols such as teleportation, logic-gate teleportation, and entanglement-swapping can be captured at this abstract level. Moreover, from the combination of the --apparently purely qualitative-- structures of compact closure and biproducts there emerge 'scalars' and a 'Born rule'. This abstract and structural point of view opens up new possibilities for describing and reasoning about quantum systems. It also shows the degrees of axiomatic freedom: we can show what requirements are placed on the (semi)ring of scalars C(I,I), where C is the category and I is the tensor unit, in order to perform various protocols such as teleportation. Our formalism captures both the information-flow aspect of the protocols (see quant-ph/0402014), and the branching due to quantum indeterminism. This contrasts with the standard accounts, in which the classical information flows are 'outside' the usual quantum-mechanical formalism.
    Information flow, Quantum mechanics, Scalar, Tensor, Units, ...
  • Quantum processes can be divided into two categories: unitary and non-unitary ones. For a given quantum process, we can define the degree of unitarity (DU) of the process as the fidelity between it and the closest unitary process. The DU, as an intrinsic property of a given quantum process, quantifies the distance between the process and the group of unitary ones, and is closely related to the noise of the process. We derive analytical results for the DU of qubit unital channels, and obtain lower and upper bounds in general. The lower bound is tight for most quantum processes, and is particularly tight when the corresponding DU is sufficiently large. The upper bound is found to be an indicator of the tightness of the lower bound. Moreover, we study the distribution of the DU in random quantum processes with different environments. In particular, the relationship between the DU of a quantum process and its non-Markovian behavior is also addressed.
    Unitary operator, Quantum channel, Unitarity, Red stars, Expectation Value, Operator space, Interference, Simulations, Mixed states, Ranking, ...
  • Bitcoin is a "crypto currency", a decentralized electronic payment scheme based on cryptography. The bitcoin economy is growing at an incredibly fast rate and is now worth some 10 billion dollars. Bitcoin mining is the activity of creating (minting) the new coins which are later put into circulation. Miners spend electricity on solving cryptographic puzzles, and they are also the gatekeepers who validate the bitcoin transactions of other people. Miners are expected to be honest and have some incentives to behave well. In this paper, however, we look at miner strategies, with particular attention paid to subversive and dishonest strategies and those which could put bitcoin and its reputation in danger. We study in detail several recent attacks in which dishonest miners obtain a higher reward than their relative contribution to the network. In particular, we revisit the concept of block withholding attacks and propose a new, concrete and practical block withholding attack which we show to maximize the advantage gained by rogue miners.
    Cryptography, Security, P2p, Compressibility, Game theory, Strangeness, Fragility, Statistics, Ecosystems, Error function, ...
  • Bitcoin is a "crypto currency", a decentralized electronic payment scheme based on cryptography which has recently gained enormous popularity. Scientific research on bitcoin is less abundant: a paper at the Financial Cryptography 2012 conference explains that it is a system which "uses no fancy cryptography" and is "by no means perfect"; it depends on the well-known cryptographic standard SHA-256. In this paper we revisit the cryptographic process which allows one to make money by producing bitcoins. We reformulate this problem as a Constrained Input Small Output (CISO) hashing problem and reduce it to a pure block cipher problem. We estimate the speed of this process and show that its cost is less than it seems, depending on a certain cryptographic constant which we estimate to be at most 1.86. These optimizations enable bitcoin miners to save tens of millions of dollars per year in electricity bills. Miners who set up mining operations face many economic uncertainties, such as high volatility. In this paper we point out that there are fundamental uncertainties which depend very strongly on the bitcoin specification. The energy efficiency of bitcoin miners has already been improved by a factor of about 10,000, and we claim that further improvements are inevitable: better technology is bound to be invented, be it quantum miners. More importantly, the specification is likely to change. A major change was proposed in May 2013 at the Bitcoin conference in San Diego by Dan Kaminsky. However, any sort of change could be flatly rejected by the community, which has heavily invested in mining with the current technology. Another question is the reward halving scheme in bitcoin: the current specification mandates a strong 4-year cyclic property. We find this property unreasonable and harmful, and explain why and how it needs to be changed. (A toy mining loop illustrating the CISO problem follows below.)
    Security, Compressibility, Cryptography, Abundance, Peer-to-peer network, Software errors, Factorisation, Ecosystems, P2p, Volatiles, ...
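A toy version of the CISO search described above, using only the public bitcoin design (the 76-byte header prefix is zeroed here for illustration; real targets are astronomically harder than the 20 zero bits used here):

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header76: bytes, target: int, max_nonce: int = 2**32):
    """Scan the 4-byte nonce until the double-SHA256 of the 80-byte header
    falls below the target: the Constrained-Input Small-Output search."""
    for nonce in range(max_nonce):
        digest = double_sha256(header76 + struct.pack('<I', nonce))
        if int.from_bytes(digest, 'little') <= target:
            return nonce, digest.hex()
    return None

print(mine(bytes(76), target=1 << (256 - 20)))   # toy difficulty: ~2^20 trials
```

The constrained structure of the input (only the nonce varies between trials) is what lets dedicated hardware amortize part of the SHA-256 computation across attempts, the kind of saving the paper quantifies.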
  • How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, the spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed behave like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection may lead to significant advances in computational social science, social media analytics, and marketing applications.
    Community structure, Social network, Twitter, Community detection, Statistics, Random forest, Entropy, Keyphrase, Statistical significance, Ranking, ...
  • Humanity has the knowledge to solve its problems but lacks the moral insight to implement these ideas on a global scale. New moral insight can be obtained through transformative experiences that allow us to examine and refine our underlying preferences, and the eventual landing of humans on Mars will be of tremendous transformative value. Before such an event, I propose that we liberate Mars from any controlling interests of Earth and allow Martian settlements to develop into a second independent instance of human civilization. Such a designation is consistent with the Outer Space Treaty and allows Mars to serve as a test bed and point of comparison by which to learn new information about the phenomenon of civilization. Rather than develop Mars through a series of government and corporate colonies, let us steer the future by liberating Mars and embracing the concept of planetary citizenship.
    Mars, Earth, Planet, Outer space, Space exploration, Interference, Climate, Natural satellite, Tragedy of the commons, Exergy, ...
  • The smallest dark matter halos are formed first in the early universe. According to recent studies, the central density cusp is much steeper in these halos than in larger halos, scaling as $\rho \propto r^{-\alpha}$ with $\alpha \simeq 1.3$--$1.5$. We present results of very large cosmological $N$-body simulations of the hierarchical formation and evolution of halos over a wide mass range, beginning from the formation of the smallest halos. We confirm earlier findings that the inner density cusps are steeper in halos at the free streaming scale. The cusp slope gradually becomes shallower as the halo mass increases. The slope for halos 50 times more massive than the smallest halo is approximately $-1.3$. No strong correlation exists between the inner slope and the collapse epoch. The cusp slope of halos above the free streaming scale seems to be reduced primarily by major merger processes. The concentration, estimated at the present universe, is predicted to be $60-70$, consistent with theoretical models and earlier simulations, and ruling out simple power-law mass-concentration relations. Microhalos could still exist in the present universe with the same steep density profiles.
    Simulations, Dark matter subhalo, Free streaming, Navarro-Frenk-White profile, Virial mass, Boost factor, Statistics, Cutoff scale, Subhalo mass function, Warm dark matter, ...
  • Based on a set of large-scale cosmological simulations, we investigate baryon effects on the halo mass function (HMF), with emphasis on the role played by AGN feedback. Halo mass functions are computed after identifying halos with both Friends-of-Friends (FoF) and Spherical Overdensity (SO) halo-finding algorithms. We embed the standard SO algorithm into a memory-controlled frame program and present the Python spherIcAl Overdensity code, PIAO. The SO halos are identified at three overdensities $\Delta_c = 2500, 500, 200$, with masses computed within the three corresponding radii. We confirm that hydrodynamical simulations based on radiative cooling, star formation and supernova feedback (CSF) produce mass functions higher than collisionless simulations. In contrast, the effect of AGN feedback is to suppress the HMFs to a level even below that of the dark matter simulations, for both FoF and SO halos. We find that the ratio between the halo mass functions in the AGN and in the DM simulations is ~ 0.8, almost independent of mass, when estimated at overdensity $\Delta_c=500$, a difference that increases at the higher overdensity $\Delta_c=2500$, with no significant redshift dependence of these ratios. We verify that the decrease of the HMF in the AGN simulation is induced by a corresponding decrease of halo masses with respect to the DM case. The shallower inner density profiles of halos in the AGN simulation witness that the mass reduction is caused by the sudden expulsion or displacement of gas driven by AGN energy feedback. We provide fitting functions to describe halo mass variations at different overdensities. We demonstrate that, using these fitting functions, we recover the DM halo mass function starting from that of the hydrodynamical simulations, with a residual random scatter $\leqslant 5$ per cent for halo masses larger than $10^{13} h^{-1} M_{\odot}$.
    Halo mass function, Simulations, Virial mass, AGN feedback, Dark matter, Active Galactic Nuclei, Friends of friends algorithm, Hydrodynamical simulations, Dark matter particle, Star formation, ...
  • 1211.0469, K. Abe et al. (T2K Collaboration):
    The Tokai-to-Kamioka (T2K) experiment studies neutrino oscillations using an off-axis muon neutrino beam with a peak energy of about 0.6 GeV that originates at the J-PARC accelerator facility. Interactions of the neutrinos are observed at near detectors placed at 280 m from the production target and at the far detector -- Super-Kamiokande (SK) -- located 295 km away. The flux prediction is an essential part of the successful prediction of neutrino interaction rates at the T2K detectors and is an important input to T2K neutrino oscillation and cross section measurements. A FLUKA and GEANT3 based simulation models the physical processes involved in the neutrino production, from the interaction of primary beam protons in the T2K target, to the decay of hadrons and muons that produce neutrinos. The simulation uses proton beam monitor measurements as inputs. The modeling of hadronic interactions is re-weighted using thin target hadron production data, including recent charged pion and kaon measurements from the NA61/SHINE experiment. For the first T2K analyses the uncertainties on the flux prediction are evaluated to be below 15% near the flux peak. The uncertainty on the ratio of the flux predictions at the far and near detectors is less than 2% near the flux peak.
    Neutrino, Hadronization, NA61/SHINE, Beamline, Proton beam, Pion, Primary star in a binary system, Neutrino beam, Kaon, Phase space, ...
  • In recent papers we have constructed the conformal theory of metric-torsional gravitation, and in this paper we shall include the gauge fields to study the conformal U(1)×SU(2) Standard Model; we will show that the metric-torsional degrees of freedom give rise to a potential of conformal-gauge dynamical symmetry breaking: consequences are discussed.
    Torsion tensor, Standard Model, Curvature, Scalar field, Vacuum expectation value, Cosmological constant, Gauge field, Degree of freedom, Cosmological constant problem, Covariance, ...
  • We briefly review the formulation of effective field theories (EFTs) in time-dependent situations, with particular attention paid to their domain of validity. Our main interest is the extent to which solutions of the EFT capture the dynamics of the full theory. For a simple model we show by explicit calculation that the low-energy action obtained from a sensible UV completion need not take the restrictive form required to obtain only second-order field equations, and we clarify why runaway solutions are nevertheless typically not a problem for the EFT. Although our results will not be surprising to many, to our knowledge they are only mentioned tangentially in the EFT literature, which (with a few exceptions) largely addresses time-independent situations.
    Effective field theory, UV completion, Effective theory, Graph, Cosmology, One particle irreducible, Exergy, Degree of freedom, Singular perturbation, Cosmological model, ...
  • Relic neutrinos play an important role in the evolution of the Universe, modifying some of the cosmological observables. We summarize the main aspects of cosmological neutrinos and describe how the precision of present cosmological data can be used to learn about neutrino properties. In particular, we discuss how cosmology provides information on the absolute scale of neutrino masses, complementary to beta decay and neutrinoless double-beta decay experiments. We explain why the combination of Planck temperature data with measurements of the baryon acoustic oscillation angular scale provides a strong bound on the sum of neutrino masses, 0.23 eV at the 95% confidence level, while the lensing potential spectrum and the cluster mass function measured by Planck are compatible with larger values. We also review the constraints from current data on other neutrino properties. Finally, we describe the very good prospects from future cosmological measurements, which are expected to be sensitive to neutrino masses close to the minimum values guaranteed by flavour oscillations. (The standard mass-density relation is recalled below.)
    Neutrino, Neutrino mass, Cosmic microwave background, Planck mission, Cosmology, Baryon acoustic oscillations, Sterile neutrino, Free streaming, Hubble Space Telescope, Cluster of galaxies, ...
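The quoted bound translates into an energy density through the standard relation for relic neutrinos (a textbook formula, not derived in this abstract):

```latex
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\ {\rm eV}}
```

so $\sum m_\nu < 0.23$ eV corresponds to $\Omega_\nu h^2 \lesssim 2.5\times 10^{-3}$.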
  • The BICEP2 instrument was designed to measure the polarization of the cosmic microwave background (CMB) on angular scales of 1 to 5 degrees ($\ell$=40-200), near the expected peak of the B-mode polarization signature of primordial gravitational waves from cosmic inflation. Measuring B-modes requires dramatic improvement in sensitivity combined with exquisite control of systematics. We have built on the successful strategy of BICEP1, which achieved the most sensitive limit on B-modes at these scales. The telescope had a 26 cm aperture and cold, on-axis, refractive optics, and it observed from a three-axis mount at the South Pole. BICEP2 adopted a new detector design in which beam-defining slot antenna arrays couple to transition-edge sensor (TES) bolometers, all fabricated monolithically on a common substrate. BICEP2 took advantage of this design's scalable fabrication and multiplexed SQUID readout to field more detectors than BICEP1, improving mapping speed by more than a factor of ten. In this paper we report on the design and performance of the instrument and on the three-year data set. BICEP2 completed three years of observation with 500 detectors at 150 GHz. After optimization of detector and readout parameters, BICEP2 achieved an instrument noise-equivalent temperature of 17.0 $\mu$K$\sqrt{\rm s}$, and the full data set reached Stokes Q and U map depths of 87.8 nK in square-degree pixels (5.3 $\mu$K arcmin) over an effective area of 390.3 square degrees within a 1000 square degree field. These are the deepest CMB polarization maps at degree angular scales. (The unit conversion is spelled out below.)
    Telescopes, Transition edge sensor, Cosmic microwave background, SQUID, Mounting, Simulations, Multidimensional Array, Azimuth, Thermal conductance, Transfer function, ...
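The two quoted depths are the same number in different units: white-noise map depth scales with the pixel side, and a square-degree pixel is 60 arcmin on a side, so

```latex
87.8\ {\rm nK}\times 60\ {\rm arcmin} = 5268\ {\rm nK\,arcmin}\approx 5.3\ \mu{\rm K\,arcmin}
```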
  • Kepler's laws are deduced from those valid for a harmonic oscillator, following the approach of Bohlin, Levi-Civita and Arnold. (A numerical check of the underlying map follows below.)
    Kepler's laws, Harmonic oscillator, Comet, Segmentation, Keplerian orbit, Major axis, Dilation, Complex plane, Ellipticity, Aphelion, ...
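A numerical check of the Bohlin-Levi-Civita map behind this deduction (my own sketch): squaring a harmonic-oscillator orbit, an ellipse centred at the origin, yields an ellipse with one focus at the origin, i.e. a Kepler orbit.

```python
import numpy as np

a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 200)
z = a * np.cos(t) + 1j * b * np.sin(t)     # oscillator orbit, centred at 0
w = z**2                                   # the Bohlin / Levi-Civita squaring map

# The image should be an ellipse with foci at 0 and a^2 - b^2 and major axis a^2 + b^2,
# so the gardener's sum |w - f1| + |w - f2| must be constant along the orbit:
s = np.abs(w) + np.abs(w - (a**2 - b**2))
print(np.allclose(s, a**2 + b**2))         # True: one focus sits at the origin
```

Together with a reparametrization of time, the same change of variable maps the Hooke force $-z$ onto a Newton force $\propto -w/|w|^3$, which is the content of the Bohlin-Arnold duality.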
  • We review the role of massive neutrinos in astrophysics and cosmology, assuming that the oscillation interpretation of solar and atmospheric neutrinos is correct. In particular, we discuss cosmological mass limits, neutrino flavor oscillations in the early universe, leptogenesis, and neutrinos in core-collapse supernovae.
    Neutrino, Supernova neutrinos, Neutrino mass, Cosmology, Neutrino oscillations, Shock wave, Dark matter, Supernova 1987A, Massive neutrino, Supernova, ...
  • Non-linear realizations of spacetime symmetries can be obtained by a generalization of the coset construction valid for internal ones. The physical equivalence of different representations for spacetime symmetries is not obvious, since their relation involves not only a redefinition of the fields but also a field-dependent change of coordinates. A simple and relevant spacetime symmetry is obtained by the contraction of the 4D conformal group that leads to the Galileon group. We analyze two non-linear realizations of this group, focusing in particular on the propagation of signals around non-trivial backgrounds. The aperture of the lightcone is in general different in the two representations, and in particular a free (luminal) massless scalar is mapped into a Galileon theory which admits superluminal propagation. We show that in this theory, if we consider backgrounds that vanish at infinity, there is no asymptotic effect: the displacement of the trajectory integrates to zero, as can be expected since the S-matrix is trivial. Regarding local measurements, we show that the puzzle is solved by taking into account that a local coupling with fixed sources in one theory is mapped into a non-local coupling, and this effect compensates for the different lightcone. Therefore the two theories have a different notion of locality. The same applies to the different non-linear realizations of the conformal group, and we study the particular case of a cosmologically interesting background: the Galilean Genesis scenarios.
    Coset, Coset construction, Goldstone boson, Spacetime symmetries, Conformal group, Dilaton, S-matrix, Infrared Space Observatory, Geodesic, Paradoxism, ...