- Beams of neutrinos have been proposed as a vehicle for communications under unusual circumstances, such as direct point-to-point global communication, communication with submarines, secure communications and interstellar communication. We report on the performance of a low-rate communications link established using the NuMI beam line and the MINERvA detector at Fermilab. The link achieved a decoded data rate of 0.1 bits/sec with a bit error rate of 1% over a distance of 1.035 km, including 240 m of earth.
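As a rough back-of-the-envelope illustration (mine, not from the paper): a channel with a 1% bit error rate, modeled as a binary symmetric channel, still carries most of a bit per channel use by Shannon's formula. The decoded 0.1 bits/sec figure quoted above already includes the experiment's own coding overhead; this sketch only shows how a 1% BER compares with a clean channel.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

print(bsc_capacity(0.01))  # ~0.919 bits per channel use at 1% BER
```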
- We study various modifications to the minimal models of gauge mediated supersymmetry breaking. We argue that, under reasonable assumptions, the structure of the messenger sector is rather restricted. We investigate the effects of possible mixing between messenger and ordinary squark and slepton fields and, in particular, violation of universality. We show that acceptable values for the $\mu$ and $B$ parameters can naturally arise from discrete, possibly horizontal, symmetries. We claim that in models where the supersymmetry breaking parameters $A$ and $B$ vanish at tree level, $\tan\beta$ could be large without fine tuning. We explain how the supersymmetric CP problem is solved in such models.
- We investigate how accurately we can constrain the lepton number asymmetry xi_nu = mu_nu/T_nu in the Universe by using future observations of 21 cm line fluctuations and the cosmic microwave background (CMB). We find that combinations of the 21 cm line and CMB observations can constrain the lepton asymmetry better than big-bang nucleosynthesis (BBN). We also discuss constraints on xi_nu in the presence of some extra radiation, and show that the 21 cm line observations can substantially improve the constraints obtained by the CMB alone and allow us to distinguish the effects of the lepton asymmetry from those of extra radiation.
- The smallest dark matter halos are formed first in the early universe. According to recent studies, the central density cusp is much steeper in these halos than in larger halos and scales as $\rho \propto r^{-(1.5-1.3)}$. We present results of very large cosmological $N$-body simulations of the hierarchical formation and evolution of halos over a wide mass range, beginning from the formation of the smallest halos. We confirm earlier findings that the inner density cusps are steeper in halos at the free streaming scale. The cusp slope gradually becomes shallower as the halo mass increases. The slope of halos 50 times more massive than the smallest halo is approximately $-1.3$. No strong correlation exists between the inner slope and the collapse epoch. The cusp slope of halos above the free streaming scale seems to be reduced primarily by major merger processes. The concentration, estimated for the present universe, is predicted to be $60-70$, consistent with theoretical models and earlier simulations, and ruling out simple power-law mass-concentration relations. Microhalos could still exist in the present universe with the same steep density profiles.
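For orientation (a sketch under my own assumptions, not the paper's code): the logarithmic density slope of the standard NFW profile, $\rho \propto 1/(x(1+x)^2)$ with $x = r/r_s$, runs from $-1$ at the centre to $-3$ far outside, so the $-1.5$ to $-1.3$ inner cusps quoted above are steeper than the NFW inner value.

```python
def nfw_log_slope(x):
    """d ln rho / d ln r for the NFW profile at x = r / r_s."""
    return -1.0 - 2.0 * x / (1.0 + x)

print(nfw_log_slope(1e-6))  # -> ~ -1.0 (inner cusp)
print(nfw_log_slope(1e6))   # -> ~ -3.0 (outer fall-off)
```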
- In this paper, we present a new pipeline which automatically identifies and annotates axoplasmic reticula, which are small subcellular structures present only in axons. We run our algorithm on the Kasthuri11 dataset, which was color corrected using gradient-domain techniques to adjust contrast. We use a bilateral filter to smooth out the noise in this data while preserving edges, which highlights axoplasmic reticula. These axoplasmic reticula are then annotated using a morphological region growing algorithm. Additionally, we perform Laplacian sharpening on the bilaterally filtered data to enhance edges, and repeat the morphological region growing algorithm to annotate more axoplasmic reticula. We track our annotations through the slices to improve precision, and to create long objects to aid in segment merging. This method annotates axoplasmic reticula with high precision. Our algorithm can easily be adapted to annotate axoplasmic reticula in different sets of brain data by changing a few thresholds. The contribution of this work is the introduction of a straightforward and robust pipeline which annotates axoplasmic reticula with high precision, contributing towards advancements in automatic feature annotations in neural EM data.
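A minimal sketch of the pipeline's stages (bilateral smoothing, Laplacian sharpening, then connected-component labelling as a crude stand-in for morphological region growing). The toy image, filter sizes and thresholds here are all invented for illustration; the paper's own parameters are not reproduced.

```python
import numpy as np
from scipy import ndimage

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Naive bilateral filter: Gaussian weights in space and in intensity."""
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Toy image: two bright blobs on a noisy background.
rng = np.random.default_rng(0)
img = 0.1 * rng.standard_normal((64, 64))
img[10:20, 10:20] += 1.0
img[40:50, 40:50] += 1.0

smooth = bilateral(img)                          # denoise, preserving edges
sharp = smooth - 0.5 * ndimage.laplace(smooth)   # Laplacian sharpening
labels, n = ndimage.label(sharp > 0.5)           # stand-in for region growing
print(n)  # number of bright connected regions found
```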
- This is the first in a series of papers in which we measure accurate weak-lensing masses for 51 of the most X-ray luminous galaxy clusters known at redshifts 0.15<z<0.7, in order to calibrate X-ray and other mass proxies for cosmological cluster experiments. The primary aim is to improve the absolute mass calibration of cluster observables, currently the dominant systematic uncertainty for cluster count experiments. Key elements of this work are the rigorous quantification of systematic uncertainties, high-quality data reduction and photometric calibration, and the "blind" nature of the analysis to avoid confirmation bias. Our target clusters are drawn from RASS X-ray catalogs, and provide a versatile calibration sample for many aspects of cluster cosmology. We have acquired wide-field, high-quality imaging using the Subaru and CFHT telescopes for all 51 clusters, in at least three bands per cluster. For a subset of 27 clusters, we have data in at least five bands, allowing accurate photo-z estimates of lensed galaxies. In this paper, we describe the cluster sample and observations, and detail the processing of the SuprimeCam data to yield high-quality images suitable for robust weak-lensing shape measurements and precision photometry. For each cluster, we present wide-field color optical images and maps of the weak-lensing mass distribution, the optical light distribution, and the X-ray emission, providing insights into the large-scale structure in which the clusters are embedded. We measure the offsets between X-ray centroids and Brightest Cluster Galaxies in the clusters, finding these to be small in general, with a median of 20kpc. For offsets <100kpc, weak-lensing mass measurements centered on the BCGs agree well with values determined relative to the X-ray centroids; miscentering is therefore not a significant source of systematic uncertainty for our mass measurements. 
[abridged]
- Last year we argued that if slow-roll inflation followed the decay of a false vacuum in a large landscape, the steepening of the scalar potential between the inflationary plateau and the barrier generically leads to a potentially observable suppression of the scalar power spectrum at large distances. Here we revisit this analysis in light of the recent BICEP2 results. Assuming that both the BICEP2 B-mode signal and the Planck analysis of temperature fluctuations hold up, we find that the data now discriminate more sharply between our scenario and $\Lambda$CDM. Nonzero tensor modes exclude standard $\Lambda$CDM with notable but not yet conclusive confidence: at $\sim 3.8\,\sigma$ if $r\approx0.2$, or at $\sim 3.5\,\sigma$ if $r=0.15$. Of the two steepening models of our previous work, one is now ruled out by existing bounds on spatial curvature. The other entirely reconciles the tension between BICEP2 and Planck. Upcoming $EE$ polarization measurements have the potential to rule out unmodified $\Lambda$CDM decisively. Next generation Large Scale Structure surveys can further increase the significance. More precise measurements of $BB$ at low $\ell$ will help distinguish our scenario from other explanations. If steepening is confirmed, the prospects for detecting open curvature increase but need not be large.
- We introduce a general model of trapping for random walks on graphs. We give the possible scaling limits of these "Randomly Trapped Random Walks" on Z. These scaling limits include the well known Fractional Kinetics process, the Fontes-Isopi-Newman singular diffusion as well as a new broad class we call Spatially Subordinated Brownian Motions. We give sufficient conditions for convergence and illustrate these on two important examples.
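A toy simulation of the basic mechanism (a simplified annealed sketch of my own, not the paper's quenched model, where each visit draws a fresh holding time rather than reusing a trap attached to the site): a walker on Z pays a random heavy-tailed holding time before each jump. When the tail exponent alpha is below 1 the mean trapping time is infinite, the regime in which Fractional-Kinetics-type scaling limits arise.

```python
import numpy as np

def trapped_walk(n_steps, alpha, rng):
    """Return (position, elapsed clock time) after n_steps nearest-neighbour jumps."""
    pos, t = 0, 0.0
    for _ in range(n_steps):
        t += rng.pareto(alpha) + 1.0   # heavy-tailed holding time (>= 1)
        pos += rng.choice((-1, 1))     # unbiased step on Z
    return pos, t

rng = np.random.default_rng(1)
pos, t = trapped_walk(10_000, alpha=0.5, rng=rng)
print(pos, t)  # clock time t is dominated by the few deepest traps
```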
- This paper is a tour of how the laws of nature can distinguish between the past and the future, or be T-violating. I argue that, in terms of the basic argumentative structure, there are really just three approaches currently being explored. I show how each is characterized by a symmetry principle, which provides a template for detecting T-violating laws even without knowing the laws of physics themselves. Each approach is illustrated with an example, and the prospects of each are considered in extensions of particle physics beyond the standard model.
- We propose an order parameter for the Symmetry Protected Topological (SPT) phases which are protected under an Abelian on-site symmetry. This order parameter, called the SPT entanglement, is defined as the entanglement between A and B, two distant regions of the system, given that the total charge (associated with the symmetry) in a third region C is measured and known, where C is a connected region surrounded by the regions A and B and the boundaries of the system. In the case of 1-dimensional systems we prove that at the limit where the regions A and B are large and far from each other compared to the correlation length, the SPT entanglement remains constant throughout a SPT phase, and furthermore, it is zero for the trivial phase while it is nonzero for all the non-trivial phases. Moreover, we show that the SPT entanglement is invariant under the low-depth local quantum circuits which respect the symmetry, suggesting that the SPT entanglement remains constant throughout a SPT phase in the higher dimensions as well. Finally, we discuss the relation between the SPT entanglement and the string order parameters.
- If the initial state of a system and its dynamical equations are both symmetric, which is to say invariant under a symmetry group of transformations, then the final state will also be symmetric. This implies that under symmetric dynamics any symmetry-breaking in the final state must have its origin in the initial state. Specifically, the final state can only break the symmetry in ways in which it was broken by the initial state, and its measure of asymmetry can be no greater than that of the initial state. It follows that for the purpose of understanding the consequences of symmetries of dynamics, in particular, complicated and open-system dynamics, it is useful to introduce the notion of a state's asymmetry properties, which includes the type and measure of its asymmetry. We demonstrate and exploit the fact that the asymmetry properties of a state can also be understood in terms of information-theoretic concepts, for instance in terms of the state's ability to encode information about an element of the symmetry group. We show that the asymmetry properties of a pure state $\psi$ relative to the symmetry group $G$ are completely specified by the characteristic function of the state, defined as $\chi_\psi(g)=\langle\psi|U(g)|\psi\rangle$, where $g\in G$ and $U$ is the unitary representation of interest. Among other results, we show that for a symmetry described by a compact Lie group G, two pure states can be reversibly interconverted one to the other by symmetric dynamics if and only if their characteristic functions are equal up to a 1-dimensional representation of the group.
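A small numerical illustration of the characteristic function (an assumed example, not taken from the paper): take the U(1) phase group acting on a qubit as U(theta) = diag(1, e^{i theta}). For psi = (|0> + |1>)/sqrt(2), chi_psi(theta) = (1 + e^{i theta})/2, and its values encode the state's asymmetry relative to this group.

```python
import numpy as np

def chi(psi, theta):
    """Characteristic function <psi| U(theta) |psi> for U(theta) = diag(1, e^{i theta})."""
    U = np.diag([1.0, np.exp(1j * theta)])
    return np.vdot(psi, U @ psi)   # vdot conjugates the bra

psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(chi(psi, 0.0))         # -> (1+0j): every state gives 1 at the identity
print(abs(chi(psi, np.pi)))  # -> ~0: psi is maximally distinguishable at theta = pi
```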
- I. Introduction (Preface, Basic properties of semiconductor nanostructures). II. Theory of Coulomb-blockade oscillations (Periodicity of the oscillations, Amplitude and lineshape). III. Experiments on Coulomb-blockade oscillations (Quantum dots, Disordered quantum wires, Relation to earlier work on disordered quantum wires). IV. Quantum Hall effect regime (The Aharonov-Bohm effect in a quantum dot, Coulomb blockade of the Aharonov-Bohm effect, Experiments on quantum dots, Experiments on disordered quantum wires).
- The set of prime numbers is analyzed on the basis of its algebraic and arithmetical structure. By obtaining a basic formula for prime numbers, they are identified, and it is shown that prime numbers, under a special and systematic procedure, are combinations (unions and intersections) of certain subsets of the natural numbers with elementary structures. In fact, the logical essence of the obtained formula for prime numbers is quite similar to the formula 2n for even numbers and 2n - 1 for odd numbers. The prime numbers usually appear complex and irregular; here they are described and clarified, and finally specific examples of the obtained formula are presented.
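The general idea can be phrased in set language (my own construction, not the paper's formula): the primes up to N are the naturals >= 2 minus the union of the sets of proper multiples — a sieve written as unions and complements of elementary subsets of N.

```python
def primes_upto(N):
    """Primes <= N as a set difference: naturals minus a union of multiple-sets."""
    naturals = set(range(2, N + 1))
    composites = set()
    for m in range(2, int(N**0.5) + 1):
        composites |= set(range(2 * m, N + 1, m))  # proper multiples of m
    return sorted(naturals - composites)

print(primes_upto(30))  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```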
- We present a new determination of the concentration-mass relation for galaxy clusters based on our comprehensive lensing analysis of 19 X-ray selected galaxy clusters from the Cluster Lensing and Supernova Survey with Hubble (CLASH). Our sample spans a redshift range between 0.19 and 0.89. We combine weak lensing constraints from the Hubble Space Telescope (HST) and from ground-based wide field data with strong lensing constraints from HST. The results are reconstructions of the surface-mass density for all CLASH clusters on multi-scale grids. Our derivation of NFW parameters yields virial masses between 0.53 x 10^15 and 1.76 x 10^15 M_sol/h and the halo concentrations are distributed around c_200c ~ 3.7 with a 1-sigma significant negative trend with cluster mass. We find an excellent 4% agreement between our measured concentrations and the expectation from numerical simulations after accounting for the CLASH selection function based on X-ray morphology. The simulations are analyzed in 2D to account for possible biases in the lensing reconstructions due to projection effects. The theoretical concentration-mass (c-M) relation from our X-ray selected set of simulated clusters and the c-M relation derived directly from the CLASH data agree at the 90% confidence level.
- A block cipher is intended to be computationally indistinguishable from a random permutation of appropriate domain and range. But what are the properties of a random permutation? With the aid of exponential and ordinary generating functions, we derive a series of corollaries of interest to the cryptographic community. These follow from the Strong Cycle Structure Theorem of permutations, and are useful in rendering rigorous two attacks on KeeLoq, a block cipher in widespread use. These attacks formerly had heuristic approximations of their probability of success. Moreover, we delineate an attack against the (roughly) millionth-fold iteration of a random permutation. In particular, we create a distinguishing attack, whereby the iteration of a cipher a number of times equal to a particularly chosen highly-composite number is breakable, but merely one fewer round is considerably more secure. We then extend this to a key-recovery attack in a "Triple-DES" style construction, but using AES-256 and iterating the middle cipher (roughly) a million-fold. It is hoped that these results will showcase the utility of exponential and ordinary generating functions and will encourage their use in cryptanalytic research.
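The structural fact behind the distinguishing attack can be checked numerically (a sketch under my own assumptions, not the paper's attack): iterating a permutation pi exactly N times fixes precisely the elements lying in cycles whose length divides N, so a highly composite N (here lcm(1..20)) fixes every element in a short cycle, while N-1 — coprime to all of 2..20 — generically fixes far fewer.

```python
import numpy as np
from math import gcd

def cycle_lengths(pi):
    """Lengths of the cycles of the permutation pi (array mapping i -> pi[i])."""
    seen = np.zeros(len(pi), dtype=bool)
    lengths = []
    for start in range(len(pi)):
        if not seen[start]:
            ln, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = pi[j]
                ln += 1
            lengths.append(ln)
    return lengths

def lcm_upto(k):
    """lcm(1, 2, ..., k) -- a highly composite iteration count."""
    n = 1
    for i in range(2, k + 1):
        n = n * i // gcd(n, i)
    return n

rng = np.random.default_rng(7)
pi = rng.permutation(4096)
N = lcm_upto(20)
fixed = sum(ln for ln in cycle_lengths(pi) if N % ln == 0)       # fixed by pi^N
fixed_off = sum(ln for ln in cycle_lengths(pi) if (N - 1) % ln == 0)
print(fixed, fixed_off)
```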
- In this paper we present FeynRules, a new Mathematica package that facilitates the implementation of new particle physics models. After the user implements the basic model information (e.g. particle content, parameters and Lagrangian), FeynRules derives the Feynman rules and stores them in a generic form suitable for translation to any Feynman diagram calculation program. The model can then be translated to the format specific to a particular Feynman diagram calculator via FeynRules translation interfaces. Such interfaces have been written for CalcHEP/CompHEP, FeynArts/FormCalc, MadGraph/MadEvent and Sherpa, making it possible to write a new model once and have it work in all of these programs. In this paper, we describe how to implement a new model, generate the Feynman rules, use a generic translation interface, and write a new translation interface. We also discuss the details of the FeynRules code.
- We study deformations of N=1 supersymmetric QCD that exhibit a rich landscape of supersymmetric and non-supersymmetric vacua.
- The compensation approach is applied to the calculation of parameters of the Standard Model. Examples of sets of compensation equations are considered. We demonstrate the possibility, in principle, of determining the mass ratios of the fundamental quarks and leptons, as well as the important parameter $\sin^2\theta_W$. If a non-trivial solution of a set of compensation equations corresponding to an effective interaction of electroweak gauge bosons is realized, a satisfactory value for the electromagnetic fine structure constant $\alpha$ at the scale $M_W$ may be obtained. Arguments are presented in favor of the possibility of calculating the fundamental parameters of the Standard Model within the compensation approach.
- In this pedagogical note I discuss one-loop integrals in which (i) different regions of the integration domain lead to divergences and (ii) these divergences cancel in the sum over all regions. Such integrals cannot be calculated without regularisation, despite the fact that they yield a finite result. A typical example where such integrals occur is the decay H --> gamma gamma.
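A toy one-dimensional analogue of this phenomenon (my own illustration, not the H --> gamma gamma integrals themselves): two cutoff-regulated integrals, A(eps) = int_eps^1 dx/x = -log(eps) and B(eps) = -int_eps^1 dx/(x(1+x)) = log(eps/(1+eps)) + log(2), each diverge as the regulator eps -> 0, yet their sum tends to the finite value log(2). Neither piece is defined without the regulator, even though the total is finite.

```python
from math import log

def A(eps):
    """int_eps^1 dx / x, divergent as eps -> 0."""
    return -log(eps)

def B(eps):
    """-int_eps^1 dx / (x (1 + x)), divergent with the opposite sign."""
    return log(eps / (1.0 + eps)) + log(2.0)

eps = 1e-9
print(A(eps))           # large: this piece alone blows up
print(A(eps) + B(eps))  # -> log(2) up to O(eps): the divergences cancel
```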
- A joint effort of cryogenic microcalorimetry (CM) and high-precision Penning-trap mass spectrometry (PT-MS) in investigating atomic orbital electron capture (EC) can shed light on the possible existence of heavy sterile neutrinos with masses from 0.5 to 100 keV. Sterile neutrinos are expected to perturb the shape of the atomic de-excitation spectrum measured by CM after a capture of the atomic orbital electrons by a nucleus. This effect should be observable in the ratios of the capture probabilities from different orbits. The sensitivity of the ratio values to the contribution of sterile neutrinos strongly depends on how accurately the mass difference between the parent and the daughter nuclides of EC-transitions can be measured by, e.g., PT-MS. A comparison of such probability ratios in different isotopes of a certain chemical element allows one to exclude many systematic uncertainties and thus could make feasible a determination of the contribution of sterile neutrinos on a level below 1%. Several electron capture transitions suitable for such measurements are discussed.
- We consider chiral liquids, i.e. media consisting of massless fermions with an asymmetry between right- and left-handed particles. In such media one expects an equilibrium electromagnetic current flowing along an external magnetic field, and this current is predicted to be dissipation-free. We argue that chiral liquids in the hydrodynamic approximation should in fact satisfy further constraints, such as infinite classical conductivity. Including higher orders in the electromagnetic interaction is crucial for reaching these conclusions.
- We review recent progress in Bipartite Field Theories. We cover topics such as their gauge dynamics, emergence of toric Calabi-Yau manifolds as master and moduli spaces, string theory embedding, relationships to on-shell diagrams, connections to cluster algebras and the Grassmannian, and applications to graph equivalence and stratification of the Grassmannian.
- We study the moduli spaces of self-dual instantons on CP^2 in a simple group G. When G is a classical group, these instanton solutions can be realised using ADHM-like constructions which can be naturally embedded into certain three dimensional quiver gauge theories with 4 supercharges. The topological data for such instanton bundles and their relations to the quiver gauge theories are described. Based on such gauge theory constructions, we compute the Hilbert series of the moduli spaces of instantons that correspond to various configurations. The results turn out to be equal to the Hilbert series of their counterparts on C^2 upon an appropriate mapping. We check the former against the Hilbert series derived from the blowup formula for the Hirzebruch surface F_1 and find an agreement. The connection between the moduli spaces of instantons on such two spaces is explained in detail.
- Codimension two defects of the $(0,2)$ six dimensional theory $\mathscr{X}[\mathfrak{g}]$ have played an important role in the understanding of dualities for certain $\mathcal{N}=2$ SCFTs in four dimensions. These defects are typically understood by their behaviour under various dimensional reduction schemes. In their various guises, the defects admit partial descriptions in terms of singularities of Hitchin systems, Nahm boundary conditions or Toda operators. Here, a uniform dictionary between these descriptions is given for a large class of such defects in $\mathscr{X}[\mathfrak{g}], \mathfrak{g} \in A,D,E$.
- In this paper we establish relations between three enumerative geometry tau-functions, namely the Kontsevich-Witten, Hurwitz and Hodge tau-functions. The relations allow us to describe the tau-functions in terms of matrix integrals, Virasoro constraints and Kac-Schwarz operators. All constructed operators belong to the algebra (or group) of symmetries of the KP hierarchy.
- We study 't Hooft anomalies for discrete global symmetries in bosonic theories in 2, 3 and 4 dimensions. We show that such anomalies may arise in gauge theories with topological terms in the action, if the total symmetry group is a nontrivial extension of the global symmetry by the gauge symmetry. Sometimes the 't Hooft anomaly for a d-dimensional theory with a global symmetry G can be canceled by anomaly inflow from a (d+1)-dimensional topological gauge theory with gauge group G. Such d-dimensional theories can live on the surfaces of Symmetry Protected Topological Phases. We also give examples of theories with more severe 't Hooft anomalies which cannot be canceled in this way.
- The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading order contribution. In the present paper we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for $\pi\pi$ elastic scattering as input. The formalism also includes experimental measurements on the modulus in the region 0.65-0.70 GeV, taken from the most recent $e^+e^-\to \pi^+\pi^-$ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR and KLOE, we make the predictions $a_\mu^{\pi\pi,\,{\rm LO}}\,[2 m_\pi,\, 0.30\ {\rm GeV}]=(0.553 \pm 0.004) \times 10^{-10}$ and $a_\mu^{\pi\pi,\,{\rm LO}}\,[0.30\ {\rm GeV},\, 0.63\ {\rm GeV}]=(133.083 \pm 0.837)\times 10^{-10}$. These are consistent with the other recent determinations, and have slightly smaller errors.
- We review and analyze the available information for nuclear fusion cross sections that are most important for solar energy generation and solar neutrino production. We provide best values for the low-energy cross-section factors and, wherever possible, estimates of the uncertainties. We also describe the most important experiments and calculations that are required in order to improve our knowledge of solar fusion rates.
- The flux of papers from electron-positron colliders containing data on the photon structure function ended naturally around 2005. It is thus timely to review the theoretical basis and confront the predictions with a summary of the experimental results. The discussion focuses on the increase of the structure function with x (for x away from the boundaries) and its rise with log Q^2, both characteristics being dramatically different from hadronic structure functions. Comparing the data with a specific QCD prediction, a new determination of the QCD coupling constant is presented. The agreement of the experimental observations with the theoretical calculations of the real and virtual photon structure is a striking success of QCD.
- There has been substantial progress in recent years in the quantitative understanding of the nonequilibrium time evolution of quantum fields. Important topical applications, in particular in high energy particle physics and cosmology, involve dynamics of quantum fields far away from the ground state or thermal equilibrium. In these cases, standard approaches based on small deviations from equilibrium, or on a sufficient homogeneity in time underlying kinetic descriptions, are not applicable. A particular challenge is to connect the far-from-equilibrium dynamics at early times with the approach to thermal equilibrium at late times. Understanding the "link" between the early- and the late-time behavior of quantum fields is crucial for a wide range of phenomena. For the first time questions such as the explosive particle production at the end of the inflationary universe, including the subsequent process of thermalization, can be addressed in quantum field theory from first principles. The progress in this field is based on efficient functional integral techniques, so-called n-particle irreducible effective actions, for which powerful nonperturbative approximation schemes are available. Here we give an introduction to these techniques and show how they can be applied in practice. Though we focus on particle physics and cosmology applications, we emphasize that these techniques can be equally applied to other nonequilibrium phenomena in complex many-body systems.
- We solve the nonequilibrium dynamics of a 3+1 dimensional theory with Dirac fermions coupled to scalars via a chirally invariant Yukawa interaction. The results are obtained from a systematic coupling expansion of the 2PI effective action to lowest non-trivial order, which includes scattering as well as memory and off-shell effects. The dynamics is solved numerically without further approximation, for different far-from-equilibrium initial conditions. The late-time behavior is demonstrated to be insensitive to the details of the initial conditions and to be uniquely determined by the initial energy density. Moreover, we show that at late time the system is very well characterized by a thermal ensemble. In particular, we are able to observe the emergence of Fermi--Dirac and Bose--Einstein distributions from the nonequilibrium dynamics.
- Majorana fermions in a superconductor hybrid system are charge neutral zero-energy states. For the detection of this unique feature, we propose an interferometry of a chiral Majorana edge channel, formed along the interface between a superconductor and a topological insulator under an external magnetic field. The superconductor is of a ring shape and has a Josephson junction that allows the Majorana state to enclose continuously tunable magnetic flux. Zero-bias differential electron conductance between the Majorana state and a normal lead is found to be independent of the flux at zero temperature, manifesting the Majorana feature of a charge neutral zero-energy state. In contrast, the same setup on graphene has no Majorana state and shows Aharonov-Bohm effects.
- We have analyzed a large sample of clean blazars detected by the Fermi Large Area Telescope (LAT). Using values from the literature and our own calculations, we obtained the intrinsic $\gamma$-ray luminosity excluding the beaming effect, the black hole mass, the broad-line luminosity (used as a proxy for disk luminosity), the jet kinetic power from "cavity" power, and the bulk Lorentz factor for parsec-scale radio emission, and studied the distributions of these parameters and the relations between them. Our main results are as follows. (i) After excluding the beaming effect and the redshift effect, the intrinsic $\gamma$-ray luminosity correlates significantly with broad-line luminosity, black hole mass and Eddington ratio. Our results confirm the physical distinction between BL Lacs and FSRQs. (ii) The correlation between broad-line luminosity and jet power is significant, which supports a close link between jet power and accretion. Jet power depends on both the Eddington ratio and the black hole mass. We also obtain $\log L_{\rm BLR}\sim(0.98\pm0.07)\log P_{\rm jet}$ for all blazars, which is consistent with the theoretically predicted coefficient. These results support the view that jets are powered by energy extraction from both accretion and black hole spin (i.e., not by accretion only). (iii) For almost all BL Lacs, $P_{\rm jet}>L_{\rm disk}$; for most FSRQs, $P_{\rm jet}<L_{\rm disk}$. The "jet dominance" (parameterized as $\frac{P_{\rm jet}}{L_{\rm disk}}$) is mainly controlled by the bolometric luminosity. Finally, we discuss the $\gamma$-ray radiative efficiency and the properties of the TeV blazars detected by Fermi LAT.Astrophysical jetLuminosityBlazarBlack holeFlat spectrum radio quasarAccretionLorentz factorBroad-line regionFERMI telescopeActive Galactic Nuclei...
- We derive a general criterion that defines all single-field models leading to Starobinsky-like inflation and to universal predictions for the spectral index and tensor-to-scalar ratio, which are in agreement with Planck data. Out of all the theories that satisfy this criterion, we single out a special class of models with the interesting property of retaining perturbative unitarity up to the Planck scale. These models are based on induced gravity, with the Planck mass determined by the vacuum expectation value of the inflaton.UnitarityAttractorPlanck scaleStandard ModelHiggs inflationSpectral index of power spectrumInflatonModel of inflationEinstein framePlanck mission...
- (abridged) Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic foreground (Oppermann et al., 2012), we set out to estimate the extragalactic contributions. We discuss different strategies and the problems involved. In particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We present a few possibilities for improved estimates using the existing foreground map, allowing for imperfectly described observational noise. In this context, we point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. Finally, we develop a reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome this degeneracy. We demonstrate its performance in a simulation, yielding a high quality reconstruction of the Galactic Faraday depth, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each source. We apply this reconstruction technique to a catalog of Faraday rotation observations. We vary our assumptions about the data, showing that the dispersion of extragalactic contributions to observed Faraday depths is likely lower than 7 rad/m^2, in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly constrained by the data in most cases. 
Posterior samples for the extragalactic contributions and all results of our fiducial model are provided online.FaradayPolar capsFaraday rotationCovarianceAngular power spectrumGalactic latitudeSimulationsStatisticsExpectation ValueGalactic plane...
- Using one-dimensional models, we show that a helical magnetic field with an appropriate sign of helicity can compensate the Faraday depolarization resulting from the superposition of Faraday-rotated polarization planes from a spatially extended source. For radio emission from a helical magnetic field, the polarization as a function of the square of the wavelength becomes asymmetric with respect to zero. Mathematically speaking, the resulting emission then occurs either at observable or at unobservable (imaginary) wavelengths. We demonstrate that rotation measure (RM) synthesis allows the reconstruction of the underlying Faraday dispersion function in the former case, but not in the latter. The presence of positive magnetic helicity can thus be detected by observing positive RM in highly polarized regions in the sky and negative RM in weakly polarized regions. Conversely, negative magnetic helicity can be detected by observing negative RM in highly polarized regions and positive RM in weakly polarized regions. The simultaneous presence of two magnetic constituents with opposite signs of helicity is shown to possess signatures that can be quantified through polarization peaks at specific wavelengths and the gradient of the phase of the Faraday dispersion function. We discuss the possibility of detecting magnetic fields with such properties in external galaxies using the Square Kilometre Array.FaradayHelicityMagnetic helicityHelical magnetic fieldRotation measure of the plasmaGalaxyLine of sightAmplitudeOrientationFaraday rotation...
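The RM-synthesis step described above can be illustrated with a minimal one-component sketch (synthetic data and simplified conventions chosen for illustration, not the authors' pipeline): the Faraday dispersion function is approximated by transforming the complex polarization sampled in $\lambda^2$, and its modulus peaks at the input Faraday depth.

```python
import numpy as np

# Synthetic polarized emission from a single Faraday-thin component at
# Faraday depth phi0 (rad/m^2): P(lambda^2) = exp(2i * phi0 * lambda^2).
phi0 = 10.0
lam2 = np.linspace(0.001, 1.0, 2000)        # sampled lambda^2 values (m^2)
P = np.exp(2j * phi0 * lam2)

# RM synthesis: approximate the Faraday dispersion function as
# F(phi) ~ (1/N) * sum_k P(lam2_k) * exp(-2i * phi * lam2_k),
# using only real (observable) wavelengths, i.e. the recoverable case.
phi = np.linspace(-50.0, 50.0, 1001)        # trial Faraday depths
F = np.array([np.mean(P * np.exp(-2j * q * lam2)) for q in phi])

phi_peak = phi[np.argmax(np.abs(F))]        # |F| peaks at the input depth
print(phi_peak)
```

The peak of $|F(\phi)|$ recovers the injected depth $\phi_0 = 10$ rad/m²; in the "imaginary wavelength" case discussed in the abstract, no such real-$\lambda^2$ sampling exists and the reconstruction fails.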
- The mass and composition of dark matter (DM) particles and the shape and damping scales of the power spectrum of density perturbations can be estimated from recent observations of DM dominated relaxed objects -- dwarf galaxies and clusters of galaxies. We confirm that the observed velocity dispersion of dSph galaxies agrees with the possible existence of DM particles with mass $m_w\sim 15-20$ keV. A more complex analysis utilizes the well-known semi-analytical model of DM halo formation in order to describe the basic properties of the corresponding objects and to estimate their redshifts of formation. For DM halos this redshift is determined by their masses and the initial power spectrum of density perturbations. This correlation allows us to partly reconstruct the small-scale spectrum of perturbations. We consider the available sample of suitable observed objects, which includes $\sim 40$ DM dominated galaxies and $\sim 40$ clusters of galaxies, and we show that the observed characteristics of these objects are inconsistent with the expectations of the standard $\Lambda$CDM cosmological model. However, they are consistent with a more complex DM model with a significant contribution of a hot-DM-like power spectrum with a relatively large damping scale ($\sim 10-30$ Mpc). The HDM component of DM decelerates but does not prevent the formation of low mass objects. These preliminary inferences require confirmation by more representative observational data that should include -- if possible -- DM dominated objects with intermediate masses $M\sim 10^{10}-10^{12} M_\odot$. Comparison of the observed properties of such objects with numerical simulations will provide a more detailed picture of the process of DM halo formation.Cluster of galaxiesDark matterMatter power spectrumDark matter particleDark matter haloSimulationsWarm dark matterVirial massCold dark matterGalaxy...
- Turbulence is ubiquitous in the interstellar medium and plays a major role in several processes such as the formation of dense structures and stars, the stability of molecular clouds, the amplification of magnetic fields, and the re-acceleration and diffusion of cosmic rays. Despite its importance, interstellar turbulence, like turbulence in general, is far from being fully understood. In this review we present the basics of turbulence physics, focusing on the statistics of its structure and energy cascade. We explore the physics of compressible and incompressible turbulent flows, as well as magnetized cases. The most relevant observational techniques that provide quantitative insight into interstellar turbulence are also presented. We also discuss the main difficulties in developing a three-dimensional view of interstellar turbulence from these observations. Finally, we briefly present what could be the main sources of turbulence in the interstellar medium.TurbulenceInterstellar mediumStatisticsMolecular cloudCompressibilityMagnetohydrodynamic turbulenceMaserLine of sightVelocity dispersionNumerical simulation...
- This paper presents a fast algorithm for Hankel tensor-vector products. For this purpose, we first discuss a special class of Hankel tensors that can be diagonalized by the Fourier matrix, called \emph{anti-circulant} tensors. We then obtain a fast algorithm for Hankel tensor-vector products by embedding a Hankel tensor into a larger anti-circulant tensor. The computational complexity is about $\mathcal{O}(m^2 n \log mn)$ for a square Hankel tensor of order $m$ and dimension $n$, and numerical examples show the efficiency of this scheme. Moreover, the block version for multi-level block Hankel tensors is discussed as well. Finally, we apply the fast algorithm to exponential data fitting, and the block version to 2D exponential data fitting, for higher performance.Fast Fourier transformCompressibilityVandermonde determinantSingular valueDegree of freedomCirculant matrixTotal least squaresEigenvalueTransposeExact solution...
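The embedding idea is already visible in the order-2 (matrix) case: an $n \times n$ Hankel matrix is the top-left block of a $(2n-1) \times (2n-1)$ anti-circulant matrix, whose product with a vector is a circular convolution computable with the FFT in $O(n \log n)$. A NumPy sketch of this matrix special case (our illustration, not the paper's tensor or block code):

```python
import numpy as np

def hankel_matvec_fft(h, v):
    """Multiply the n x n Hankel matrix H[i, j] = h[i + j]
    (h has length 2n - 1) by v, via anti-circulant embedding and FFT.
    Cost O(n log n) instead of the naive O(n^2)."""
    n = len(v)
    N = 2 * n - 1
    x = np.concatenate([v, np.zeros(N - n)])   # pad v into the big system
    # Anti-circulant product A x with A[i, j] = h[(i + j) mod N] is a
    # circular convolution of h with the index-reversed vector x.
    xt = np.roll(x[::-1], 1)                   # xt[k] = x[-k mod N]
    ax = np.fft.ifft(np.fft.fft(h) * np.fft.fft(xt))
    return ax[:n].real                         # H v is the top block of A x

# Check against the dense Hankel product
rng = np.random.default_rng(0)
n = 64
h = rng.standard_normal(2 * n - 1)
v = rng.standard_normal(n)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
assert np.allclose(hankel_matvec_fft(h, v), H @ v)
```

The paper's higher-order algorithm applies the same diagonalization-by-Fourier idea along each of the $m$ modes of the embedded anti-circulant tensor.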
- Topological insulators (TIs) exhibit many exotic properties. In particular, a topological magneto-electric (TME) effect, quantized in units of the fine structure constant, exists in TIs. In this Letter, we study theoretically the scattering properties of electromagnetic waves by TI circular cylinders particularly in the Rayleigh scattering limit. Compared with ordinary dielectric cylinders, the scattering by TI cylinders shows many unusual features due to the TME effect. Two proposals are suggested to determine the TME effect of TIs simply based on measuring the electric-field components of scattered waves in the far field at one or two scattering angles. Our results could also offer a way to measure the fine structure constant.Topological insulatorRayleigh scatteringElectromagnetismFine structure constantScattering matrixQuantizationAxionMagnetic monopoleAmplitudeConstitutive relation...
- Completely positive, trace preserving (CPT) maps and Lindblad master equations are both widely used to describe the dynamics of open quantum systems. The connection between these two descriptions is a classic topic in mathematical physics. One direction was solved by the now famous result due to Lindblad, Gorini, Kossakowski and Sudarshan, who gave a complete characterisation of the master equations that generate completely positive semi-groups. However, the other direction has remained open: given a CPT map, is there a Lindblad master equation that generates it (and if so, can we find its form)? This is sometimes known as the Markovianity problem. Physically, it is asking how one can deduce underlying physical processes from experimental observations. We give a complexity theoretic answer to this problem: it is NP-hard. We also give an explicit algorithm that reduces the problem to integer semi-definite programming, a well-known NP problem. Together, these results imply that resolving the question of which CPT maps can be generated by master equations is tantamount to resolving whether P=NP: any efficiently computable criterion for Markovianity would imply P=NP; whereas a proof that P=NP would imply that our algorithm already gives an efficiently computable criterion. Thus, unless P does equal NP, there cannot exist any simple criterion for determining when a CPT map has a master equation description. However, we also show that if the system dimension is fixed (relevant for current quantum process tomography experiments), then our algorithm scales efficiently in the required precision, allowing an underlying Lindblad master equation to be determined efficiently from even a single snapshot in this case. Our work also leads to similar complexity-theoretic answers to a related long-standing open problem in probability theory.NP-hard problemEigenvalueMaster equationEmbedding problemComplexity classQuantum channelMarkov chainDensity matrixPositive semi definiteMarkov process...
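The closing remark points to the classical embedding problem, where the same difficulty is already visible: deciding whether a stochastic matrix arises from a continuous-time Markov generator requires searching over branches of the matrix logarithm. A sketch of the easy part only, checking the principal branch (our illustration, not the paper's algorithm):

```python
import numpy as np
from scipy.linalg import logm

def principal_branch_generator(P, tol=1e-9):
    """Classical analogue of the Markovianity problem: does the
    stochastic matrix P have a Markov generator Q with P = expm(Q)?
    Only the principal logarithm branch is tested here; the full
    problem must search over all branches, which is where the
    hardness enters."""
    Q = logm(P.astype(complex))
    if not np.allclose(Q.imag, 0, atol=tol):
        return None                        # principal log is not even real
    Q = Q.real
    rows_ok = np.allclose(Q.sum(axis=1), 0, atol=1e-7)
    offdiag_ok = all(Q[i, j] >= -1e-7
                     for i in range(len(Q))
                     for j in range(len(Q)) if i != j)
    return Q if (rows_ok and offdiag_ok) else None

P_good = np.array([[0.9, 0.1], [0.2, 0.8]])   # has a Markov generator
P_bad = np.array([[0.1, 0.9], [0.8, 0.2]])    # negative eigenvalue: none
print(principal_branch_generator(P_good) is not None)   # True
print(principal_branch_generator(P_bad) is not None)    # False
```

A valid generator must have zero row sums and nonnegative off-diagonal rates; `P_bad` fails already because its principal logarithm is complex, while a branch-complete test (the NP-hard part) would in general have to examine infinitely many candidate logarithms.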
- Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., "noiseless", "unitarily correctible", etc.) and introduce two new and natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.Alice and BobNP-hard problemQuantum error correctionDecoherence-free subspacesClassificationIsomorphismGalaxyQuantum mechanicsAlgorithmsTransformations...
#### On quantum channels (ver. 2)

One of the most challenging open problems in quantum information theory is to clarify and quantify how entanglement behaves when part of an entangled state is sent through a quantum channel. Of central importance in the description of a quantum channel or completely positive map (CP-map) is the dual state associated to it. The present paper is a collection of well-known, less known and new results on quantum channels, presented in a unified way. We will show how this dual state induces nice characterizations of the extremal maps of the convex set of CP-maps, and how normal forms for states defined on a Hilbert space with a tensor product structure lead to interesting parameterizations of quantum channels.EntanglementQuantum channelEigenvalueTensor productRankingQubitConvex setQuantum information theoryDualityQuantum mechanics...- The general stable quantum memory unit is a hybrid consisting of a classical digit with a quantum digit (qudit) assigned to each classical state. The shape of the memory is the vector of sizes of these qudits, which may differ. We determine when N copies of a quantum memory A embed in N(1+o(1)) copies of another quantum memory B. This relationship captures the notion that B is at least as useful as A for all purposes in the bulk limit. We show that the embeddings exist if and only if for all p >= 1, the p-norm of the shape of A does not exceed the p-norm of the shape of B. The log of the p-norm of the shape of A can be interpreted as the maximum of S(\rho) + H(\rho)/p (quantum entropy plus discounted classical entropy) taken over all mixed states \rho on A. We also establish a noiseless coding theorem that justifies these entropies. The noiseless coding theorem and the bulk embedding theorem together say that either A blindly bulk-encodes into B with perfect fidelity, or A admits a state that does not visibly bulk-encode into B with high fidelity. 
In conclusion, the utility of a hybrid quantum memory is determined by its simultaneous capacity for classical and quantum entropy, which is not a finite list of numbers, but rather a convex region in the classical-quantum entropy plane.EntropyHybridizationQuantum information theoryMixed statesPositive elementVon Neumann algebraKelvin-Helmholtz timescaleVon neumann entropyQuantum channelClassical capacity...
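The p-norm criterion above is easy to check numerically. A toy sketch with hypothetical shapes (recall that the shape of a hybrid memory is its vector of qudit sizes); sampling a finite grid of p plus the p → ∞ limit is a heuristic stand-in for the "all p >= 1" condition:

```python
import numpy as np

def shape_p_norm(shape, p):
    """p-norm of a memory shape (the vector of qudit sizes)."""
    return float(np.sum(np.array(shape, dtype=float) ** p) ** (1.0 / p))

def bulk_embeds(shape_a, shape_b, p_grid):
    """Heuristic check of the criterion stated above: A bulk-embeds in B
    iff ||shape(A)||_p <= ||shape(B)||_p for all p >= 1. We sample a
    finite p grid and add the p -> infinity limit (the largest qudit)."""
    finite = all(shape_p_norm(shape_a, p) <= shape_p_norm(shape_b, p) + 1e-9
                 for p in p_grid)
    return finite and max(shape_a) <= max(shape_b)

p_grid = np.linspace(1.0, 50.0, 200)
# Hypothetical shapes: A = three classical states each carrying a qubit,
# B = two classical states carrying a 4-level and a 2-level qudit.
print(bulk_embeds([2, 2, 2], [4, 2], p_grid))   # True
print(bulk_embeds([4, 2], [2, 2, 2], p_grid))   # False
```

Both memories have total dimension 6 (equal 1-norms), yet only one direction of embedding exists: the p → ∞ comparison (largest qudit, 2 vs. 4) rules out the reverse, matching the convex-region picture in the conclusion.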
- Given a list of n complex numbers, when can it be the spectrum of a quantum channel, i.e., a completely positive trace preserving map? We provide an explicit solution for the n=4 case and show that in general the characterization of the non-zero part of the spectrum can essentially be given in terms of its classical counterpart - the non-zero spectrum of a stochastic matrix. A detailed comparison between the classical and quantum case is given. We discuss applications of our findings in the analysis of time-series and correlation functions and provide a general characterization of the peripheral spectrum, i.e., the set of eigenvalues of modulus one. We show that while the peripheral eigen-system has the same structure for all Schwarz maps, the constraints imposed on the rest of the spectrum change immediately if one departs from complete positivity.EigenvalueQuantum channelComplex numberQubitRankingTwo-point correlation functionPermutationTime SeriesSpectral setSingular value...
- This article is intended as a reference guide to various notions of monoidal categories and their associated string diagrams. It is hoped that this will be useful not just to mathematicians, but also to physicists, computer scientists, and others who use diagrammatic reasoning. We have opted for a somewhat informal treatment of topological notions, and have omitted most proofs. Nevertheless, the exposition is sufficiently detailed to make it clear what is presently known, and to serve as a starting place for more in-depth study. Where possible, we provide pointers to more rigorous treatments in the literature. Where we include results that have only been proved in special cases, we indicate this in the form of caveats.MonoidVector spaceCommutative diagramTensor productLanguageSurveysTopologyDropletVectorLinear operator...
- We study quantum information and computation from a novel point of view. Our approach is based on recasting the standard axiomatic presentation of quantum mechanics, due to von Neumann, at a more abstract level, of compact closed categories with biproducts. We show how the essential structures found in key quantum information protocols such as teleportation, logic-gate teleportation, and entanglement-swapping can be captured at this abstract level. Moreover, from the combination of the --apparently purely qualitative-- structures of compact closure and biproducts there emerge `scalars` and a `Born rule'. This abstract and structural point of view opens up new possibilities for describing and reasoning about quantum systems. It also shows the degrees of axiomatic freedom: we can show what requirements are placed on the (semi)ring of scalars C(I,I), where C is the category and I is the tensor unit, in order to perform various protocols such as teleportation. Our formalism captures both the information-flow aspect of the protocols (see quant-ph/0402014), and the branching due to quantum indeterminism. This contrasts with the standard accounts, in which the classical information flows are `outside' the usual quantum-mechanical formalism.Information flowQuantum mechanicsScalarTensorUnits...
- Quantum processes can be divided into two categories: unitary and non-unitary ones. For a given quantum process, we can define the \textit{degree of unitarity (DU)} of this process as the fidelity between it and the closest unitary process. The DU, as an intrinsic property of a given quantum process, quantifies the distance between the process and the group of unitary ones, and is closely related to the noise of the process. We derive analytical results for the DU of qubit unital channels, and obtain lower and upper bounds in general. The lower bound is tight for most quantum processes, and is particularly tight when the corresponding DU is sufficiently large. The upper bound is found to be an indicator for the tightness of the lower bound. Moreover, we study the distribution of the DU in random quantum processes with different environments. In particular, the relationship between the DU of a quantum process and its non-Markovian behavior is also addressed.Unitary operatorQuantum channelUnitarityRed starsExpectation ValueOperator spaceInterferenceSimulationsMixed statesRanking...
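As a hedged illustration of the kind of quantity being bounded (the paper's exact fidelity definition is not reproduced here), one can compute the entanglement fidelity between a qubit depolarizing channel and the identity, which by symmetry is a natural candidate for its closest unitary:

```python
import numpy as np

# Entanglement fidelity of a qubit depolarizing channel with the identity,
# F = <Phi| (E (x) id)(|Phi><Phi|) |Phi>, built from the channel's Choi
# state. This is a plausible proxy for a degree of unitarity; the paper's
# precise definition may differ.
p = 0.2
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * K for K in (X, Y, Z)]

phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)              # |Phi> = (|00> + |11>)/sqrt(2)
choi = sum(np.outer(np.kron(K, I2) @ phi, (np.kron(K, I2) @ phi).conj())
           for K in kraus)
F = float(np.real(phi.conj() @ choi @ phi))
print(F)    # 1 - 3p/4 = 0.85 for p = 0.2
```

Only the trace-ful Kraus operator (the identity component) contributes, giving $F = 1 - 3p/4$; any such fidelity-to-a-unitary degrades linearly as the depolarizing noise grows, which is the intuition behind relating the DU to the channel's noise.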
- Bitcoin is a "crypto currency", a decentralized electronic payment scheme based on cryptography. The Bitcoin economy is growing at an incredibly fast rate and is now worth some 10 billion dollars. Bitcoin mining is the activity of creating (minting) the new coins which are later put into circulation. Miners spend electricity on solving cryptographic puzzles, and they are also the gatekeepers who validate the bitcoin transactions of other people. Miners are expected to be honest and have some incentives to behave well. However, in this paper we look at miner strategies, with particular attention paid to subversive and dishonest strategies, or those which could put bitcoin and its reputation in danger. We study in detail several recent attacks in which dishonest miners obtain a higher reward than their relative contribution to the network. In particular, we revisit the concept of block withholding attacks and propose a new concrete and practical block withholding attack, which we show maximizes the advantage gained by rogue miners.CryptographySecurityP2pCompressibilityGame theoryStrangenessFragilityStatisticsEcosystemsError function...
- Bitcoin is a "crypto currency", a decentralized electronic payment scheme based on cryptography which has recently gained immense popularity. Scientific research on bitcoin remains scarce. A paper at the Financial Cryptography 2012 conference explains that it is a system which "uses no fancy cryptography", and is "by no means perfect". It depends on the well-known cryptographic standard SHA-256. In this paper we revisit the cryptographic process which allows one to make money by producing bitcoins. We reformulate this problem as a Constrained Input Small Output (CISO) hashing problem and reduce it to a pure block cipher problem. We estimate the speed of this process and show that its cost is lower than it seems, and that it depends on a certain cryptographic constant which we estimate to be at most 1.86. These optimizations enable bitcoin miners to save tens of millions of dollars per year in electricity bills. Miners who set up mining operations face many economic uncertainties, such as high volatility. In this paper we point out that there are fundamental uncertainties which depend very strongly on the bitcoin specification. The energy efficiency of bitcoin miners has already been improved by a factor of about 10,000, and we claim that further improvements are inevitable. Better technology is bound to be invented, be it quantum miners. More importantly, the specification is likely to change. A major change was proposed in May 2013 at the Bitcoin conference in San Diego by Dan Kaminsky. However, any sort of change could be flatly rejected by the community, which has heavily invested in mining with the current technology. Another question is the reward halving scheme in bitcoin. The current bitcoin specification mandates a strong 4-year cyclic property. 
We find this property totally unreasonable and harmful and explain why and how it needs to be changed.SecurityCompressibilityCryptographyAbundancePeer-to-peer networkSoftware errorsFactorisationEcosystemsP2pVolatiles...
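The hashing process analyzed in the last two abstracts can be sketched as a toy proof-of-work loop. This is a deliberate simplification: real mining constrains an 80-byte block header against a 256-bit target and varies a 32-bit nonce field, which is exactly what makes it a Constrained Input Small Output (CISO) problem.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy miner: find a nonce such that double_sha256(header || nonce)
    starts with `difficulty_bits` zero bits. Expected work ~2^difficulty_bits
    hash evaluations; the optimizations discussed above shave a constant
    factor off each evaluation."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"toy block header", 16)        # ~2^16 hashes on average
digest = double_sha256(b"toy block header" + nonce.to_bytes(4, "little"))
print(digest.hex()[:4])                      # leading 16 bits are zero
```

Because most of the input is fixed and only the nonce varies, large parts of the SHA-256 message schedule can be precomputed and shared across attempts; the paper's "cryptographic constant" quantifies how much of the double-hash computation survives such amortization.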