Recent ontology graph

Recently bookmarked papers, with concepts:
  • Deviations from Gaussian statistics of the cosmological density fluctuations, so-called primordial non-Gaussianities (NG), are one of the most informative fingerprints of the origin of structures in the universe. Indeed, they can probe physics at energy scales inaccessible to laboratory experiments, and are sensitive to the interactions of the field(s) that generated the primordial fluctuations, contrary to the Gaussian linear theory. As a result, they can discriminate between inflationary models that are otherwise almost indistinguishable. In this short review, we explain how to compute the non-Gaussian properties in any inflationary scenario. We review the theoretical predictions of several important classes of models. We then describe the ways NG can be probed observationally, and we highlight the recent constraints from the Planck mission, as well as their implications. We finally identify well motivated theoretical targets for future experiments and discuss observational prospects.
    Bispectrum, Model of inflation, Curvature perturbation, Non-Gaussianity, Trispectrum, Scalar field, Degree of freedom, Cosmology, Cosmic microwave background, Inflaton, ...
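    For reference, a standard convention behind the quoted constraints (not spelled out in the abstract; normalizations vary between papers): the bispectrum of the curvature perturbation and the local-type ansatz read

      \langle \zeta_{\mathbf{k}_1}\zeta_{\mathbf{k}_2}\zeta_{\mathbf{k}_3}\rangle
        = (2\pi)^3\,\delta^{(3)}(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\,B_\zeta(k_1,k_2,k_3),
      \qquad
      \zeta(\mathbf{x}) = \zeta_G(\mathbf{x})
        + \tfrac{3}{5}\,f_{\rm NL}^{\rm local}\left[\zeta_G^2(\mathbf{x})-\langle\zeta_G^2\rangle\right],

    with \zeta_G a Gaussian field; single-field slow-roll models predict f_NL of order the slow-roll parameters.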
  • The zero locus of a function f on a graph G is defined as the graph whose vertex set consists of all complete subgraphs of G on which f changes sign, and in which x and y are connected if one is contained in the other. For d-graphs, finite simple graphs for which every unit sphere is a d-sphere, the zero locus of (f-c) is a (d-1)-graph for all c not in the range of f. If this Sard lemma is inductively applied to an ordered list of functions f_1,...,f_k in which the functions are extended on the level surfaces, the set of critical values (c_1,...,c_k) for which F-c=0 is not a (d-k)-graph is a finite set. This discrete Sard result allows one to construct explicit graphs triangulating a given algebraic set. We also look at a second setup: for a function F from the vertex set to R^k, we give conditions under which the simultaneous discrete algebraic set { F=c }, defined as the set of simplices of dimension in {k, k+1,...,n} on which all f_i change sign, is a (d-k)-graph in the barycentric refinement of G. This maximal rank condition is adapted from the continuum, and the graph { F=c } is a (n-k)-graph. While the critical values can now have positive measure, we are closer to calculus: for k=2, for example, extrema of a function f under a constraint {g=c} occur at points where the gradients of f and g are parallel, D f = L D g, the Lagrange equations on the discrete network. As an application, we illustrate eigenfunctions of geometric graphs, and especially the second eigenvector of 3-spheres, which by Courant-Fiedler has exactly two nodal regions. The separating nodal surface of the second eigenfunction f_2, belonging to the smallest nonzero eigenvalue, always appears to be a 2-sphere in experiments if G is a 3-sphere.
    Graph, Dimensions, Eigenfunction, Rank, Critical point, Simple graph, Euler characteristic, Algebraic set, Critical value, Graph theory, ...
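    A minimal sketch of the zero-locus construction described in this item, assuming the graph is given as a networkx Graph and f as a dict on its vertices; it implements only the sign-change/containment definition, not the d-graph machinery of the paper:

      import itertools
      import networkx as nx

      def zero_locus(G, f, c=0.0):
          # Vertices: complete subgraphs (cliques) of G on which f - c changes sign.
          # Edges: two such cliques are joined when one properly contains the other.
          cliques = [frozenset(K) for K in nx.enumerate_all_cliques(G)]
          nodes = [K for K in cliques
                   if min(f[v] - c for v in K) < 0 < max(f[v] - c for v in K)]
          Z = nx.Graph()
          Z.add_nodes_from(nodes)
          for A, B in itertools.combinations(nodes, 2):
              if A < B or B < A:
                  Z.add_edge(A, B)
          return Z

      # toy check: on a 4-cycle (a discrete 1-sphere) with alternating signs,
      # the zero locus should be a 0-graph, i.e. isolated points
      G = nx.cycle_graph(4)
      f = {0: 1.0, 1: -1.0, 2: 1.0, 3: -1.0}
      Z = zero_locus(G, f)
      print(Z.number_of_nodes(), Z.number_of_edges())   # 4 0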
  • Astrophysical tests of the stability of Nature's fundamental couplings are a key probe of the standard paradigms in fundamental physics and cosmology. In this report we discuss updated constraints on the stability of the fine-structure constant $\alpha$ and the proton-to-electron mass ratio $\mu=m_p/m_e$ within the Galaxy. We revisit and improve upon the analysis by Truppe {\it et al.} by allowing for the possibility of simultaneous variations of both couplings and also by combining them with the recent measurements by Levshakov {\it et al.} By considering representative unification scenarios we find no evidence for variations of $\alpha$ at the 0.4 ppm level, and of $\mu$ at the 0.6 ppm level; if one uses the Levshakov bound on $\mu$ as a prior, the $\alpha$ bound is improved to 0.1 ppm. We also highlight how these measurements can constrain (and discriminate among) several fundamental physics paradigms.
    Mass ratio, Cosmology, Fine structure constant, Proton gyromagnetic ratio, Spectrographs, Scalar field, Dimensions, Einstein equivalence principle, Hubble time, Statistical error, ...
  • Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant $\alpha$, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, $\zeta$, to the electromagnetic sector) the $\alpha$ variation. Specifically, current data tightly constrains a combination of $\zeta$ and the present dark energy equation of state $w_0$. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the $\alpha$ dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the E\"otv\"os parameter $\eta$ that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.
    Dark energy, Equation of state of dark energy, Scalar field, Equivalence principle, Fine structure constant, Cosmology, Degree of freedom, Hubble parameter, Cosmological data, Cosmological parameters, ...
  • Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to a lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e., the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.
    Statistical error, Weak lensing, Cluster of galaxies, Photometric redshift, Systematic error, Weak lensing mass estimate, Large scale structure, Statistics, Cosmology, Spectral energy distribution, ...
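    A common first-order way to describe the member-contamination bias discussed in this item (my shorthand, not the paper's notation): misidentified cluster galaxies carry no net shear, so they dilute the stacked tangential shear profile,

      \langle g_t^{\rm obs}(R)\rangle \;\simeq\;
        \frac{\langle g_t^{\rm true}(R)\rangle}{1+f_{\rm cl}(R)},
      \qquad
      f_{\rm cl}(R)=\frac{N_{\rm cl}(R)}{N_{\rm bg}(R)},

    which is why the effect grows toward the cluster centre, where the contamination fraction f_cl(R) is largest, and why it matters most for stacked cluster samples.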
  • If the unidentified emission line at ~3.55 keV previously found in spectra of nearby galaxies and galaxy clusters is due to radiatively decaying dark matter, one should detect a signal of comparable strength from many cosmic objects of different nature. By studying existing dark matter distributions in galaxy clusters we identified the top 19 of them observed by the XMM-Newton X-ray mission, and analyzed the data for the presence of the new line. In 8 of them, we identified > 2 sigma positive line-like residuals with average position 3.52 +/- 0.08 keV in the emitter's frame. Their observed properties are unlikely to be explained by statistical fluctuations or astrophysical emission lines; the observed line position in M31 and the Galactic Center provides an additional argument against general-type systematics. Interpreted as a decaying dark matter line, the new detections correspond to a radiative decay lifetime tau_dm ~ (3.5-6) x 10^27 s, consistent with previous detections.
    Cluster of galaxies, Decaying dark matter, KeV line, Dark Matter Density Profile, Radiative decay, Statistical fluctuations, XMM-Newton, Dark matter, Dark matter decay, Extended Sources Analysis Software, ...
  • The $m$-$z$ relation for Type Ia supernovae is compatible with the cosmological concordance model if one assumes that the Universe is homogeneous, at least with respect to light propagation. This could be due to the density along each line of sight being equal to the overall cosmological density, or to `safety in numbers', with variation in the density along all lines of sight averaging out if the sample is large enough. Statistical correlations (or lack thereof) between redshifts, residuals (differences between the observed distance moduli and those calculated from the best-fitting cosmological model), and observational uncertainties suggest that the former scenario is the better description, so that one can use the traditional formula for the luminosity distance safely without worry.
    Supernova, Line of sight, Cosmology, Homogenization, Supernova Type Ia, Cosmological parameters, Statistics, Lambda-CDM model, Luminosity distance, Cosmological model, ...
  • Previous studies have shown that the supersymmetric partition function on $T^2 \times S^2$ is related to the elliptic genus of a two-dimensional supersymmetric theory. In this short note we find a four-dimensional supersymmetric theory whose partition function on $T^2 \times S^2$ is the same as the elliptic genera of $\mathcal{N}=2$ minimal models in two dimensions.
    Partition function, Minimal models, Ellipticity, Chirality, Conformal field theory, Dimensions, Magnetic charge, Flavour symmetry, Gauge theory, Gauge field, ...
  • We propose Neural Reasoner, a framework for neural network-based reasoning over natural language sentences. Given a question, Neural Reasoner can infer over multiple supporting facts and find an answer to the question in specific forms. Neural Reasoner has 1) a specific interaction-pooling mechanism, allowing it to examine multiple facts, and 2) a deep architecture, allowing it to model the complicated logical relations in reasoning tasks. Assuming no particular structure exists in the question and facts, Neural Reasoner is able to accommodate different types of reasoning and different forms of language expressions. Despite the model complexity, Neural Reasoner can still be trained effectively in an end-to-end manner. Our empirical studies show that Neural Reasoner can outperform existing neural reasoning systems with remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8]. For example, it improves the accuracy on Path Finding(10K) from 33.4% [6] to over 98%.
    Neural network-based reasoning, Recurrent neural network, Architecture, Natural language, Classification, Convolutional neural network, Deep Neural Networks, Backpropagation, Entropy, Hidden state, ...
  • We introduce Rosetta, a program allowing for the translation between different bases of effective field theory operators. We present the main functions of the program and provide an example of usage. One of the bases which Rosetta can translate into has been implemented into FeynRules, which allows Rosetta to be interfaced into various high-energy physics programs such as Monte Carlo event generators. In addition to popular basis choices, such as the Warsaw and Strongly Interacting Light Higgs bases already implemented in the program, we also detail how to add new operator bases into the Rosetta package. In this way, phenomenological studies using an effective field theory framework can be straightforwardly performed.
    Standard Model, Wilson coefficients, Effective field theory, Dimension-six operators, Minimal Flavor Violation, Higgs boson, Effective Lagrangian, Electroweak symmetry, Keyphrase, Monte Carlo method, ...
  • After more than three decades the fractional quantum Hall effect still poses challenges to contemporary physics. While many features are phenomenologically understood, a detailed understanding of its microscopic origin is still lacking. Here, we address this problem by showing that the Hamiltonian describing interacting electrons in a strong magnetic field, restricted to the lowest Landau level, can be mapped onto a one-dimensional lattice gas with repulsive interactions. The statistical mechanics of such models leads us to interpret the sequence of Hall plateaux as a fractal phase diagram. We derive an expression for the ground states, forming Wigner crystals in the reciprocal lattice of the degenerate Landau states, and predict the form of the fractional plateaux landscape.
    Fractional Quantum Hall Effect, Filling fraction, Hamiltonian, Statistical mechanics, Fractal, Phase diagram, Strong magnetic field, Lowest Landau Level, Reciprocal lattice, Wigner crystal, ...
  • Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural "thermodynamics" for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy.
    Entropy, Critical point, Neural network, Principle of maximum entropy, Expectation Value, Effective potential, Minimal models, Fluid dynamics, Protein, Effective field theory, ...
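    A minimal sketch of the counting behind this model class (my own illustration, not the paper's code): the maximum-entropy model consistent with the spike-count distribution P(K) makes all C(N, K) patterns with K spikes equally likely, so its entropy is the entropy of P(K) plus the average log number of patterns per count:

      import numpy as np
      from scipy.special import gammaln
      from scipy.stats import binom

      def maxent_entropy_from_pk(p_k):
          """Entropy (bits) of the max-entropy model consistent with P(K):
          S = H[P(K)] + sum_K P(K) * log2 C(N, K)."""
          p_k = np.asarray(p_k, dtype=float)
          N = len(p_k) - 1                      # p_k[K] for K = 0..N
          K = np.arange(N + 1)
          log2_binom = (gammaln(N + 1) - gammaln(K + 1) - gammaln(N - K + 1)) / np.log(2)
          nz = p_k > 0
          return -(p_k[nz] * np.log2(p_k[nz])).sum() + (p_k * log2_binom).sum()

      # toy P(K): 10 independent cells, each spiking with probability 0.05 per bin
      print(maxent_entropy_from_pk(binom.pmf(np.arange(11), 10, 0.05)))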
  • Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models, which are systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
    Entropy, Statistics, Metastate, Statistical mechanics, Monte Carlo method, Overfitting, Neural network, Binary star, Hamiltonian, Subgroup, ...
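    A sketch of the energy function of the "K-pairwise" family described in this item, under the usual convention P(sigma) proportional to exp(-E(sigma)); the fields h, couplings J and spike-count potential V would have to be fit to data, and signs and normalizations vary between papers:

      import numpy as np

      def k_pairwise_energy(sigma, h, J, V):
          # sigma in {0,1}^N; K = total spike count in the time bin;
          # V is a learned potential on K that controls population synchrony
          sigma = np.asarray(sigma, dtype=float)
          K = int(sigma.sum())
          return -(h @ sigma) - 0.5 * sigma @ J @ sigma - V[K]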
  • If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain (for example) not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons.
    Entropy, Expectation Value, Coupling constant, Spin glass, Ising model, Phase diagram, Magnetization, Mean field, Statistical mechanics, Binary number, ...
  • When birds come together to form a flock, the distribution of their individual velocities narrows around the mean velocity of the flock. We argue that, in a broad class of models for the joint distribution of positions and velocities, this narrowing generates an entropic force that opposes the cohesion of the flock. The strength of this force depends strongly on the nature of the interactions among birds: if birds are coupled to a fixed number of neighbors, the entropic forces are weak, while if they couple to all other birds within a fixed distance, the entropic forces are sufficient to tear a flock apart. Similar entropic forces should occur in other non-equilibrium systems. For the joint distribution of protein structures and amino-acid sequences, these forces favor the occurrence of "highly designable" structures.
    Entropy, Graph, Monte Carlo method, Adjacency matrix, Boltzmann distribution, Effective potential, Thermalisation, Hamiltonian, Zero mode, Partition function, ...
  • The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but with more spikes the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N=160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. Networks with more or less correlation among neurons would not reach this critical state. We suggest further tests of criticality, and give a brief discussion of its functional significance.
    Entropy, Critical point, Degree of freedom, Statistical mechanics, Partition function, Statistics, Statistical physics, Boltzmann distribution, Neural network, Gaussian distribution, ...
  • Life depends as much on the flow of information as on the flow of energy. Here we review the many efforts to make this intuition precise. Starting with the building blocks of information theory, we explore examples where it has been possible to measure, directly, the flow of information in biological networks, or more generally where information theoretic ideas have been used to guide the analysis of experiments. Systems of interest range from single molecules (the sequence diversity in families of proteins) to groups of organisms (the distribution of velocities in flocks of birds), and all scales in between. Many of these analyses are motivated by the idea that biological systems may have evolved to optimize the gathering and representation of information, and we review the experimental evidence for this optimization, again across a wide range of scales.
    Entropy, Gene, Transcription factor, Amino-acid, Information flow, DNA, Gene expression, Expectation Value, Intensity, Transfer function, ...
  • We believe, in the sense of supporting ideas and considering them correct while dismissing doubts about them. We take sides about ideas and theories as if that was the right thing to do. And yet, from a rational point of view, this type of support and belief is not justifiable at all. The best we can hope for when describing the real world, as far as we know, is to have probabilistic knowledge, to have probabilities associated to each statement. And even that can be very hard to achieve in a reliable way. Far worse, when we defend ideas and believe them as if they were true, Cognitive Psychology experiments show that we stop being able to analyze the questions we believe in with competence. In this paper, I gather the evidence we have about taking sides and present the obvious but unseen conclusion that these facts combined mean that we should actually never believe in anything about the real world, except in a probabilistic way. We must actually never take sides, because taking sides destroys our ability to seek the most correct description of the world. That means we need to start reformulating the way we debate ideas, from our teaching to our political debates, if we actually want to be able to arrive at the best solutions as suggested by whatever evidence we might have. I will show that this has deep consequences for a number of problems, ranging from the emergence of extremism to the reliability of whole scientific fields. Inductive reasoning requires that we allow every idea to make predictions so that we may rank which ideas are better, and that has important consequences in scientific practice. The crisis around $p$-values is also discussed and is much better understood in the light of this paper's results. Finally, I will debate possible ideas to try to minimize the problem.
    Bayesian, General relativity, Neptune, Inductive reasoning, Bayesian approach, Mercury, String theory, Decision making, P-value, Null hypothesis, ...
  • We investigate the evolution of the chiral magnetic instability in a protoneutron star and compute the resulting magnetic power and helicity spectra. The instability may act during the early cooling phase of the hot protoneutron star after supernova core collapse, where it can contribute to the buildup of magnetic fields of strength up to the order of $10^{14}$ G. The maximal field strengths generated by this instability, however, depend considerably on the temperature of the protoneutron star and on the density fluctuations and turbulence spectrum of the medium. At the end of the hot cooling phase the magnetic field tends to be concentrated around the submillimeter to cm scale, where it is subject to slow resistive damping.
    Instability, Chirality, Proto-neutron star, Neutron star, Chiral asymmetry, Neutrino, Helicity, Magnetic energy, Chiral magnetic effect, Cooling, ...
  • Increasingly stringent limits from LHC searches for new physics, coupled with the lack of convincing weakly interacting massive particle (WIMP) signals in dark matter searches, have tightly constrained many realizations of the standard paradigm of thermally produced WIMPs as cold dark matter. In this article, we review more generally both thermally and non-thermally produced dark matter (DM). One may classify DM models into two broad categories: one involving bosonic coherent motion (BCM) and the other involving WIMPs. BCM and WIMP candidates need, respectively, some approximate global symmetries and almost exact discrete symmetries. Supersymmetric axion models are highly motivated since they emerge from compelling and elegant solutions to the two fine-tuning problems of the Standard Model: the strong CP problem and the gauge hierarchy problem. We review here non-thermal relics in a general setup, but we also pay particular attention to the rich cosmological properties of various aspects of mixed SUSY/axion dark matter candidates, which can involve both WIMPs and BCM in an interwoven manner. We also review briefly a panoply of alternative thermal and non-thermal DM candidates.
    Dark matter, Axino, Thermalisation, Axion, Weakly interacting massive particle, Supersymmetry, Neutralino, Gravitino, Cold dark matter, Abundance, ...
  • We give a complete analysis of indirect determinations of the top quark mass in the Standard Model by introducing a systematic procedure to identify observables that receive quantum corrections enhanced by powers of $M_t$. We propose to use flavour physics as a tool to extract the top quark mass. Although present data give only a poor determination, we show how future theoretical and experimental progress in flavour physics can lead to an accuracy in $M_t$ well below 2 GeV. We revisit determinations of $M_t$ from electroweak data, showing how an improved measurement of the $W$ mass leads to an accuracy well below 1 GeV.
    Top quark mass, Standard Model, Flavour, Cabibbo-Kobayashi-Maskawa matrix, Flavour physics, Next-to-leading order computation, Lattice QCD, Lattice calculations, Branching ratio, Top quark, ...
  • In recent years, observational $\gamma$-ray astronomy has seen a remarkable range of exciting new results in the high-energy and very-high energy regimes. Coupled with extensive theoretical and phenomenological studies of non-thermal processes in the Universe, these observations have provided a deep insight into a number of fundamental problems of high energy astrophysics and astroparticle physics. Although the main motivations of $\gamma$-ray astronomy remain unchanged, recent observational results have contributed significantly towards our understanding of many related phenomena. This article aims to review the most important results in the young and rapidly developing field of $\gamma$-ray astrophysics.
    Cosmic ray, Supernova remnant, Dark matter, Cherenkov telescope, Gamma-ray sources, Active Galactic Nuclei, Telescopes, Diffuse emission, Milky Way, Dark matter annihilation, ...
  • The atmospheric greenhouse effect, an idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology, essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist. Nevertheless, in almost all texts of global climatology and in a widespread secondary literature it is taken for granted that such mechanism is real and stands on a firm scientific foundation. In this paper the popular conjecture is analyzed and the underlying physical principles are clarified. By showing that (a) there are no common physical laws between the warming phenomenon in glass houses and the fictitious atmospheric greenhouse effects, (b) there are no calculations to determine an average surface temperature of a planet, (c) the frequently mentioned difference of 33 degrees Celsius is a meaningless number calculated wrongly, (d) the formulas of cavity radiation are used inappropriately, (e) the assumption of a radiative balance is unphysical, (f) thermal conductivity and friction must not be set to zero, the atmospheric greenhouse conjecture is falsified.
    Glass, Greenhouse Effect, Earth, Climate, Absorbance, Intensity, Carbon dioxide, Solar radiation, Sun, Thermal conductivity, ...
  • This work presents the application of density-based topology optimisation to the design of three-dimensional heat sinks cooled by natural convection. The governing equations are the steady-state incompressible Navier-Stokes equations coupled to the thermal convection-diffusion equation through the Boussinesq approximation. The fully coupled non-linear multiphysics system is solved using stabilised trilinear equal-order finite elements in a parallel framework, allowing for the optimisation of large scale problems with on the order of 40-330 million state degrees of freedom. The flow is assumed to be laminar and several optimised designs are presented for Grashof numbers between $10^3$ and $10^6$. Interestingly, it is observed that the number of branches in the optimised design increases with increasing Grashof number, which is the opposite of the trend found for two-dimensional optimised designs.
    Grashof number, Cooling, Thermalisation, Degree of freedom, Thermal conductivity, Dimensions, Steady state, Diffusion-convection equation, Navier-Stokes equations, Buoyancy, ...
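    For orientation, the dimensionless group named in this item (a textbook definition, not specific to the paper): the Grashof number compares buoyancy and viscous forces,

      \mathrm{Gr} \;=\; \frac{g\,\beta\,\Delta T\,L^{3}}{\nu^{2}},

    with g the gravitational acceleration, beta the thermal expansion coefficient, Delta T the driving temperature difference, L a characteristic length and nu the kinematic viscosity; larger Gr means stronger natural-convection driving, the regime in which the optimised heat sinks are found to grow more branches.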
  • We investigate the analytic continuation of wave equations into the complex position plane. For the particular case of electromagnetic waves we provide a physical meaning for such an analytic continuation in terms of a family of closely related inhomogeneous media. For bounded permittivity profiles we find the phenomenon of reflection can be related to branch cuts in the wave that originate from poles of the permittivity at complex positions. Demanding that these branch cuts disappear, we derive a large family of inhomogeneous media that are reflectionless for a single angle of incidence. Extending this property to all angles of incidence leads us to a generalized form of the Pöschl-Teller potentials. We conclude by analyzing our findings within the phase integral (WKB) method.
    Permittivity, Wentzel-Kramers-Brillouin approximation, Wave equation, Angle of incidence, Wave propagation, Analytic continuation, Pöschl-Teller potential, Dissipation, Transformation optics, Complex plane, ...
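    The textbook reflectionless example behind the generalization mentioned in this item (a standard quantum-mechanics fact, quoted here as context rather than taken from the paper): the Pöschl-Teller well

      V(x) \;=\; -\,\frac{\hbar^{2}}{2m}\,\frac{\ell(\ell+1)}{a^{2}}\,
                 \operatorname{sech}^{2}\!\left(\frac{x}{a}\right)

    has zero reflection at every energy when \ell is a positive integer; the paper constructs permittivity profiles that play the analogous role for electromagnetic waves at one, and then at all, angles of incidence.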
  • We show a relation between fractional calculus and fractals, based only on physical and geometrical considerations. The link has been found in the physical origins of the power-laws ruling the evolution of many natural phenomena, whose long memory and hereditary properties are mathematically modelled by differential operators of non-integer order. Dealing with the relevant example of a viscous fluid seeping through a fractal-shaped porous medium, we show that, once a physical phenomenon or process takes place on an underlying fractal geometry, a power-law naturally comes up in ruling its evolution, whose order is related to the anomalous dimension of such geometry, as well as to the model used to describe the physics involved. By linearizing the nonlinear dependence of the response of the system at hand on a proper forcing action and exploiting the Boltzmann superposition principle, a fractional differential equation is found, describing the dynamics of the system itself. The order of such an equation is again related to the anomalous dimension of the underlying geometry.
    Fractal, Anomalous dimension, Fractional calculus, Dimensions, Superposition principle, Hereditary property, Self-similarity, Viscoelasticity, Bernoulli Equation, Sierpinski carpet, ...
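    A compact version of the argument summarized in this item, written in the Caputo convention (0 < beta < 1): a Boltzmann superposition with a power-law memory kernel is, up to a constant, a fractional derivative, with the order beta tied to the anomalous dimension of the underlying fractal geometry,

      q(t) \;=\; \int_{0}^{t} k(t-\tau)\,\dot f(\tau)\,d\tau,
      \qquad k(t)\propto t^{-\beta}
      \;\;\Longrightarrow\;\;
      q(t)\;\propto\;\bigl(D_t^{\beta} f\bigr)(t)
        \;=\;\frac{1}{\Gamma(1-\beta)}\int_{0}^{t}\frac{\dot f(\tau)}{(t-\tau)^{\beta}}\,d\tau .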
  • Research into socio-technical systems like Wikipedia has overlooked important structural patterns in the coordination of distributed work. This paper argues for a conceptual reorientation towards sequences as a fundamental unit of analysis for understanding work routines in online knowledge collaboration. We outline a research agenda for researchers in computer-supported cooperative work (CSCW) to understand the relationships, patterns, antecedents, and consequences of sequential behavior using methods already developed in fields like bio-informatics. Using a data set of 37,515 revisions from 16,616 unique editors to 96 Wikipedia articles as a case study, we analyze the prevalence and significance of different sequences of editing patterns. We illustrate the mixed method potential of sequence approaches by interpreting the frequent patterns as general classes of behavioral motifs. We conclude by discussing the methodological opportunities for using sequence analysis for expanding existing approaches to analyzing and theorizing about co-production routines in online knowledge collaboration.
    Social network, Knowledge representation, Statistical significance, Bonferroni correction, Human-computer interaction, Computational linguistics, Dimensions, Citizen science, Keyphrase, Crowdsourcing, ...
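    A minimal sketch of the kind of sequence analysis proposed in this item, assuming each revision has already been labelled with an edit type (the labels below are hypothetical): counting length-n motifs in a per-article sequence of labels.

      from collections import Counter
      from itertools import islice

      def motif_counts(edit_sequence, n=3):
          # count length-n subsequences ("motifs") of edit-type labels
          ngrams = zip(*(islice(edit_sequence, i, None) for i in range(n)))
          return Counter(ngrams)

      # toy usage: one article's revision history, encoded as edit-type labels
      history = ["add", "add", "revert", "add", "delete", "add", "add", "revert"]
      print(motif_counts(history, n=2).most_common(3))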
  • We consider the Standard Model with right-handed neutrinos to explain the masses of active neutrinos by the seesaw mechanism. Since active neutrinos as well as heavy neutral leptons are Majorana fermions in this case, lepton number violating processes can be induced. We discuss the inverse neutrinoless double beta decay $e^- e^- \to W^- W^-$ in the framework of the seesaw mechanism and its detectability at future colliders. It is shown that the cross section can be as large as 17 fb for $\sqrt{s}=3$ TeV even with the stringent constraint from neutrinoless double beta decays if three (or more) right-handed neutrinos exist. In such a case, future $e^- e^-$ colliders can test lepton number violation mediated by a right-handed neutrino lighter than about 100 TeV.
    Sterile neutrino, Seesaw mechanism, Active neutrino, Standard Model, Lepton flavour violation, Mixing angle, Neutrino mass, Compact Linear Collider, Inverse neutrinoless double beta decay, Electron-electron collider, ...
  • We test a particular theory of dark matter in which dark matter axions form ring "caustics" in the plane of the Milky Way against actual observations of Milky Way stars. According to this theory, cold, collisionless dark matter particles with angular momentum flow in and out of the Milky Way on sheets. These flows form caustic rings (at the positions of the rings, the density of the flow is formally infinite) at the locations of closest approach to the Galactic center. We show that the caustic ring dark matter theory reproduces a roughly logarithmic halo, with large perturbations near the rings. We show that the theory can reasonably match the observed rotation curve of the Milky Way. We explore the effects of the caustic rings on dwarf galaxy tidal disruption using N-body simulations. In particular, simulations of the Sagittarius dwarf galaxy tidal disruption in a caustic ring halo potential match observations of the trailing tidal tail as far as 90 kpc from the Galactic center; they do not, however, match the leading tidal tail. None of the caustic ring, NFW, or triaxial logarithmic halos fit all of the data. The source code for calculating the acceleration due to a caustic ring halo has been made publicly available in the NEMO Stellar Dynamics Toolbox and the Milkyway@home client repository.
    Phase space caustic, Dark matter, Milky Way, Navarro-Frenk-White profile, Galactic Center, Dwarf galaxy, Rotation Curve, Tidal tail, Space debris, Star, ...
  • Sterile neutrinos are $SU(2)$ singlets that mix with active neutrinos via a mass matrix, whose diagonalization leads to mass eigenstates that couple via standard model vertices. We study the cosmological production of heavy neutrinos via \emph{standard model charged and neutral current vertices} under a minimal set of assumptions: i) the mass basis contains a hierarchy of heavy neutrinos, ii) these have very small mixing angles with the active (flavor) neutrinos, iii) standard model particles, including light (active-like) neutrinos, are in thermal equilibrium. If kinematically allowed, the same weak interaction processes that produce active-like neutrinos also produce the heavier species. We introduce the quantum kinetic equations that describe their production, freeze out and decay, and discuss the various processes that lead to their production in a wide range of temperatures, assessing their feasibility as dark matter candidates. We identify processes in which finite temperature collective excitations may lead to the production of the heavy species. As a specific example, we consider the production of heavy neutrinos in the mass range $M_h \lesssim 140 \,\mathrm{MeV}$ from pion decay shortly after the QCD crossover, including finite temperature corrections to the pion form factors and mass. We consider the different decay channels that allow for the production of heavy neutrinos, showing that their frozen distribution functions exhibit effects from "kinematic entanglement", and argue for their viability as mixed dark matter candidates. We discuss abundance, phase space density and stability constraints and argue that heavy neutrinos with lifetime $\tau> 1/H_0$ freeze out of local thermal equilibrium, and \emph{conjecture} that those with lifetimes $\tau \ll 1/H_0$ may undergo cascade decay into lighter DM candidates and/or inject non-LTE neutrinos into the cosmic neutrino background.
    Sterile neutrino, Neutrino, Thermalisation, Standard Model, Kinematics, Dark matter, Quantum Boltzmann equation, Freeze-out, Cosmology, Entanglement, ...
  • The main purpose of this survey is to introduce an inexperienced reader to additive prime number theory and some related branches of analytic number theory. We state the main problems in the field, sketch their history and the basic machinery used to study them, and try to give a representative sample of the directions of current research.
    Prime number, Arithmetic progression, Almost prime, Binary number, Arithmetic, Number theory, Goldbach's Conjecture, Analytic number theory, Absolute convergence, Riemann hypothesis, ...
  • Using the sieve, we show that there are infinitely many Carmichael numbers whose prime factors all have the form $p = 1 + a^2 + b^2$ with $a,b \in{\mathbb Z}$.
    Carmichael number, Arithmetic, Prime number, Arithmetic progression, Composite number, Indicator function, Siegel zero, Dirichlet character, Divisor function, Coprime, ...
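    A small checker for the two defining conditions in this item, as I read the abstract (Korselt's criterion for Carmichael numbers, and p = 1 + a^2 + b^2, i.e. p - 1 being a sum of two squares); this is only an illustration, the paper's sieve construction is something else entirely.

      def factorize(n):
          """Trial-division factorization: returns {prime: exponent}."""
          factors, d = {}, 2
          while d * d <= n:
              while n % d == 0:
                  factors[d] = factors.get(d, 0) + 1
                  n //= d
              d += 1
          if n > 1:
              factors[n] = factors.get(n, 0) + 1
          return factors

      def is_carmichael(n):
          """Korselt's criterion: n is Carmichael iff it is composite, squarefree,
          and (p - 1) divides (n - 1) for every prime p dividing n."""
          f = factorize(n)
          if len(f) < 2 or any(e > 1 for e in f.values()):
              return False
          return all((n - 1) % (p - 1) == 0 for p in f)

      def is_one_plus_two_squares(p):
          """True if p = 1 + a^2 + b^2 for some integers a, b, i.e. p - 1 is a sum
          of two squares: every prime factor of p - 1 congruent to 3 mod 4 must
          occur to an even power."""
          m = p - 1
          if m == 0:
              return True
          return all(e % 2 == 0 for q, e in factorize(m).items() if q % 4 == 3)

      # smallest Carmichael number (561 = 3 * 11 * 17) and the prime-factor condition
      print(is_carmichael(561), [is_one_plus_two_squares(p) for p in (3, 11, 17)])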
  • As a corollary to the recent extraordinary theorem of Maynard and Tao, we re-prove, in a stronger form, a result of Shiu concerning "strings" of consecutive, congruent primes.
    Arithmetic progression, Coprime, Prime number, Composite number, Number theory, ...
  • It is shown that the set of decimal palindromes is an additive basis for the natural numbers. Specifically, we prove that every natural number can be expressed as the sum of forty-nine (possibly zero) decimal palindromes.
    Binary number, Arithmetic, Symmetry
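    An illustration of the statement in this item (not the paper's method): a palindrome test and a greedy decomposition into decimal palindromes; the theorem guarantees a representation with at most forty-nine palindromes, it does not say the greedy choice achieves it.

      def is_palindrome(n):
          """True if the decimal digits of n read the same forwards and backwards."""
          s = str(n)
          return s == s[::-1]

      def greedy_palindrome_sum(n):
          """Greedily write n as a sum of decimal palindromes by repeatedly removing
          the largest palindrome not exceeding the remainder (illustration only)."""
          parts = []
          while n > 0:
              p = n
              while not is_palindrome(p):
                  p -= 1
              parts.append(p)
              n -= p
          return parts

      print(greedy_palindrome_sum(2016))   # [2002, 11, 3]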
  • In this paper we present two families of Fibonacci-Lucas identities, with Sury's identity being the best-known representative of one of the families. While these results can be proved by means of the basic identity relating the Fibonacci and Lucas sequences, we also provide a bijective proof.
    Lucas sequence, Lucas number, Generating functional, Polynomial, ...
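    Since the abstract does not state it, here is Sury's identity as it is usually quoted, together with a quick numerical check (my own sketch):

      def fib_lucas(n):
          """Return lists [F_0..F_n] and [L_0..L_n] with F_0=0, F_1=1, L_0=2, L_1=1."""
          F, L = [0, 1], [2, 1]
          for _ in range(n - 1):
              F.append(F[-1] + F[-2])
              L.append(L[-1] + L[-2])
          return F, L

      def check_sury(n):
          """Sury's identity: sum_{k=0}^{n} 2^k * L_k == 2^(n+1) * F_{n+1}."""
          F, L = fib_lucas(n + 1)
          return sum(2**k * L[k] for k in range(n + 1)) == 2**(n + 1) * F[n + 1]

      print(all(check_sury(n) for n in range(20)))   # True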
  • MOND reduces greatly the mass discrepancy in clusters of galaxies, but does leave a consistent global discrepancy of about a factor of two. It has been proposed, within the minimalist and purist MOND, that clusters harbor some indigenous, yet-undetected, cluster baryonic (dark) matter (CBDM). Its total amount has to be comparable with that of the observed hot gas. Following an initial discovery by van Dokkum et al. (2015a), Koda et al. (2015) have recently identified more than a thousand ultra-diffuse galaxy-like objects (UDGs) in the Coma cluster. Robustness of the UDGs to tidal disruption seems to require, within Newtonian dynamics, that they are much more massive than their observed stellar component. Here, I propound that a considerable fraction of the CBDM is internal to UDGs, which endows them with robustness. The rest of the CBDM objects formed in now-disrupted kin of the UDGs, and is dispersed in the intracluster medium. While the discovery of cluster UDGs is not in itself a resolution of the MOND cluster conundrum, it lends greater qualitative plausibility to CBDM as its resolution, for reasons I discuss. Alternatively, if the UDGs are only now falling into Coma, their large size and very low surface brightness could result from the adiabatic inflation due to the MOND external-field effect, as described in Brada & Milgrom (2000). I also consider briefly solutions to the conundrum that invoke more elaborate extensions of purist MOND, e.g., that in clusters the MOND constant takes larger-than-canonical values.
    Modified Newtonian Dynamics, Ultra-diffuse galaxy-like object, Cluster baryonic dark matter, Dark matter, Mass discrepancy, Star, Hot gas, Newtonian dynamics, Cluster of galaxies, Tidal disruption, ...
  • Symmetry-protected topological (SPT) phases of matter have been interpreted in terms of anomalies, and it has been expected that a similar picture should hold for SPT phases with fermions. Here, we describe in detail what this picture means for phases of quantum matter that can be understood via band theory and free fermions. The main examples we consider are time-reversal invariant topological insulators and superconductors in 2 or 3 space dimensions. Along the way, we clarify the precise meaning of the statement that in the bulk of a 3d topological insulator, the electromagnetic $\theta$-angle is equal to $\pi$.
    Path integral, Dimensions, Topological insulator, Symmetry protected topological order, Partition function, Manifold, Free fermions, Dirac fermion, Topological order, Orientation, ...
  • The cosmological dark matter field is not completely described by its hierarchy of $N$-point functions, a non-perturbative effect with the consequence that only part of the theory can be probed with the hierarchy. We give here an exact characterization of the joint information of the full set of $N$-point correlators of the lognormal field. The lognormal field is the archetypal example of a field where this effect occurs, and, at the same time, one of the few tractable and insightful models available to specify fully the statistical properties of the evolved matter density field beyond the perturbative regime. Nonlinear growth in the Universe is set in that model by letting the log-density field probability density functional evolve, keeping its Gaussian shape, according to the diffusion equation in Euclidean space. We show that the hierarchy probes a different evolution equation, the diffusion equation defined not in Euclidean space but on the compact torus, with uniformity as the long-term solution. The extraction of the hierarchy of correlators can be recast as a nonlinear transformation applied to the field ('wrapping'), which undergoes a sharp transition towards complete disorder in the deeply nonlinear regime, where all memory of the initial conditions is lost.
    Cosmology, Diffusion equation, Torus, Random Field, Perturbation theory, Generating functional, Evolution equation, Cold dark matter, Dark matter, Covariance, ...
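    For concreteness, one standard construction of the lognormal field referred to in this item (conventions vary): with A a zero-mean Gaussian field of variance sigma_A^2 and covariance xi_A,

      \rho(\mathbf{x}) \;=\; \exp\!\Bigl[A(\mathbf{x})-\tfrac12\,\sigma_A^{2}\Bigr],
      \qquad
      \Bigl\langle \prod_{i=1}^{N}\rho(\mathbf{x}_i)\Bigr\rangle
        \;=\; \exp\!\Bigl[\sum_{i<j}\xi_A(\mathbf{x}_i,\mathbf{x}_j)\Bigr],

    so the full hierarchy of N-point correlators is fixed by the two-point function of the log-field, with in particular 1 + xi_rho = exp(xi_A). The paper's point is that this hierarchy nevertheless does not exhaust the information content of the field.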
  • In this paper, we review classical and quantum field theory of massive non-interacting spin-two fields. We derive the equations of motion and Fierz-Pauli constraints via three different methods: the eigenvalue equations for the Casimir invariants of the Poincar\'{e} group, a Lagrangian approach, and a covariant Hamilton formalism. We also present the conserved quantities, the solution of the equations of motion in terms of polarization tensors, and the tree-level propagator. We then discuss canonical quantization by postulating commutation relations for creation and annihilation operators. We express the energy, momentum, and spin operators in terms of the former. As an application, quark-antiquark currents for tensor mesons are presented. In particular, the current for tensor mesons with quantum numbers $J^{PC}=2^{-+}$ is, to our knowledge, given here for the first time.
    Covariance, Tensor field, Polarization tensor, Casimir operator, Quantization, Conserved quantities, Lorentz transformation, Quantum field theory, Creation and annihilation operators, Quantum theory, ...
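    For reference, the free massive spin-two (Fierz-Pauli) Lagrangian in one common convention (signs depend on the metric signature; the review's conventions may differ):

      \mathcal{L} \;=\; -\tfrac12\,\partial_\lambda h_{\mu\nu}\,\partial^\lambda h^{\mu\nu}
        + \partial_\mu h_{\nu\lambda}\,\partial^\nu h^{\mu\lambda}
        - \partial_\mu h^{\mu\nu}\,\partial_\nu h
        + \tfrac12\,\partial_\lambda h\,\partial^\lambda h
        - \tfrac12\,m^{2}\bigl(h_{\mu\nu}h^{\mu\nu}-h^{2}\bigr),

    whose equations of motion imply the Fierz-Pauli constraints \partial^\mu h_{\mu\nu}=0 and h \equiv h^{\mu}{}_{\mu}=0, leaving (\Box - m^{2})h_{\mu\nu}=0 and the 2s+1 = 5 physical polarizations of a massive spin-two field in four dimensions.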
  • The deepest XMM-Newton mosaic map of the central 1.5 deg of the Galaxy is presented, including a total of about 1.5 Ms of EPIC-pn cleaned exposures in the central 15" and about 200 ks outside. This compendium presents broad-band X-ray continuum maps, soft X-ray intensity maps, a decomposition into spectral components and a comparison of the X-ray maps with emission at other wavelengths. Newly-discovered extended features, such as supernova remnants (SNRs), superbubbles and X-ray filaments are reported. We provide an atlas of extended features within +-1 degree of Sgr A*. We discover the presence of a coherent X-ray emitting region peaking around G0.1-0.1 and surrounded by the ring of cold, mid-IR-emitting material known from previous work as the "Radio Arc Bubble" and with the addition of the X-ray data now appears to be a candidate superbubble. Sgr A's bipolar lobes show sharp edges, suggesting that they could be the remnant, collimated by the circumnuclear disc, of a SN explosion that created the recently discovered magnetar, SGR J1745-2900. Soft X-ray features, most probably from SNRs, are observed to fill holes in the dust distribution, and to indicate a direct interaction between SN explosions and Galactic center (GC) molecular clouds. We also discover warm plasma at high Galactic latitude, showing a sharp edge to its distribution that correlates with the location of known radio/mid-IR features such as the "GC Lobe". These features might be associated with an inhomogeneous hot "atmosphere" over the GC, perhaps fed by continuous or episodic outflows of mass and energy from the GC region.
    Galactic Center, Sagittarius A*, Soft X-ray, Supernova remnant, Thermalisation, Superbubble, X-ray spectrum, Galaxy filament, Neutron star, Intensity, ...
  • Astro-H will be the first X-ray observatory to employ a high-resolution microcalorimeter, capable of measuring the shift and width of individual spectral lines to the precision necessary for estimating the velocity of the diffuse plasma in galaxy clusters. This new capability is expected to bring significant progress in understanding the dynamics, and therefore the physics, of the intracluster medium. However, because this plasma is optically thin, projection effects will be an important complicating factor in interpreting future Astro-H measurements. To study these effects in detail, we performed an analysis of the velocity field from simulations of a galaxy cluster experiencing gas sloshing, and generated synthetic X-ray spectra, convolved with model Astro-H Soft X-ray Spectrometer (SXS) responses. We find that the sloshing motions produce velocity signatures that will be observable by Astro-H in nearby clusters: the shifting of the line centroid produced by the fast-moving cold gas underneath the front surface, and line broadening produced by the smooth variation of this motion along the line of sight. The line shapes arising from inviscid or strongly viscous simulations are very similar, indicating that placing constraints on the gas viscosity from these measurements will be difficult. Our spectroscopic analysis demonstrates that, for adequate exposures, Astro-H will be able to recover the first two moments of the velocity distribution of these motions accurately, and in some cases multiple velocity components may be discerned. The simulations also confirm the importance of accurate treatment of PSF scattering in the interpretation of Astro-H/SXS spectra of cluster plasmas.
    Line of sight, Turbulence, Spectral line, Astro-H, Intra-cluster medium, Cluster of galaxies, Line of sight velocity, Viscosity, Cluster core, Cool core galaxy cluster, ...
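    A rough sketch of the projection effect discussed in this item (my own illustration, not the paper's pipeline): for an optically thin line with emissivity proportional to n_e^2, the observable centroid shift and broadening along a sight line are emission-weighted velocity moments (thermal broadening and the SXS response are ignored here).

      import numpy as np

      def projected_line_moments(n_e, v_los, dl):
          # weight each path element by its emission measure n_e^2 * dl
          w = n_e**2 * dl
          mean_v = np.sum(w * v_los) / np.sum(w)                          # centroid shift
          sigma_v = np.sqrt(np.sum(w * (v_los - mean_v)**2) / np.sum(w))  # line broadening
          return mean_v, sigma_v

      # toy sight line: a fast, dense, cold blob embedded in slowly moving gas
      dl = np.full(100, 10.0)                                  # path elements [kpc]
      n_e = np.where(np.arange(100) < 20, 3e-3, 1e-3)          # electron density [cm^-3]
      v_los = np.where(np.arange(100) < 20, -300.0, 50.0)      # velocities [km/s]
      print(projected_line_moments(n_e, v_los, dl))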