- Corbino disk, by Kyrylo Snizhko, 25 Jul 2014 16:42
- Quantum-size electrostatic potential, by Prof. Michael V. Moskalets, 06 May 2014 17:44
- Dwyer-Fried invariant, by Prof. Alex Suciu, 15 Dec 2013 03:22
- General relativity, by Wilson Yu, 13 Dec 2013 04:40
- High angular resolution, by Prof. Hontas Farmer, 22 Nov 2013 11:14
- Andreev reflection, by Prof. Carlo Beenakker, 08 Dec 2010 13:33
- Full counting statistics, by Dr. Dmitri Ivanov, 28 Nov 2011 09:44
- Minimal Dark Matter, by Dr. Marco Cirelli, 05 Dec 2010 22:13
- Neutrino Minimal Standard Model, by Prof. Mikhail Shaposhnikov, 10 Jan 2011 23:58
- Dzyaloshinskii-Moriya interaction, by Dr. George Jackeli, 28 Aug 2009 09:41

- We search for viable models of direct gauge mediation, where the SUSY-breaking sector is (generalized) SQCD, which has cosmologically favorable uplifted vacua even when the reheating temperature is well above the messenger scale. This requires a relatively large tadpole term in the scalar potential for the spurion field X and, consequently, we argue that pure (deformed) SQCD is not a viable model. On the other hand, in SQCD with an adjoint, which is natural e.g. in string theory, assuming an appropriate sign in the Kahler potential for X, such metastable vacua are possible.
  Keywords: Metastate, Supersymmetry breaking, Tadpole, Cosmology, Thermalisation, Kahler potential, Minimal supersymmetric Standard Model, Cooling, Spurion, Quark, ...
- The BICEP2 results, when interpreted as a gravitational wave signal and combined with other CMB data, suggest a roll-off in power towards small scales in the primordial matter power spectrum. Among the simplest possibilities is a running of the spectral index. Here we show that the preferred level of running alleviates small-scale issues within the $\Lambda$CDM model, more so even than viable WDM models. We use cosmological zoom-in simulations of a Milky Way-size halo along with full-box simulations to compare predictions among four separate cosmologies: a BICEP2-inspired running index model ($\alpha_s$ = -0.024), two fixed-tilt $\Lambda$CDM models motivated by Planck, and a 2.6 keV thermal WDM model. We find that the running BICEP2 model reduces the central densities of large dwarf-size halos ($V_\mathrm{max}$ ~ 30 - 80 km s$^{-1}$) and alleviates the too-big-to-fail problem significantly compared to our adopted Planck and WDM cases. Further, the BICEP2 model suppresses the count of small subhalos by ~50% relative to Planck models, and yields a significantly lower "boost" factor for dark matter annihilation signals. Our findings highlight the need to understand the shape of the primordial power spectrum in order to correctly interpret small-scale data.
  Keywords: Simulations, Cosmology, Dark matter subhalo, Power spectrum of primordial density perturbations, Cold dark matter, Cosmological model, Galaxy, A dwarfs, Too big to fail problem, Warm dark matter, ...
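For reference, the running-index parameterisation used in comparisons like the one above is standard; in the usual convention the primordial power spectrum reads

```latex
P(k) \;=\; A_s \left(\frac{k}{k_*}\right)^{\,n_s - 1 + \frac{1}{2}\,\alpha_s \ln(k/k_*)},
\qquad \alpha_s \equiv \frac{\mathrm{d}n_s}{\mathrm{d}\ln k},
```

so a negative running such as $\alpha_s = -0.024$ steepens the roll-off at $k \gg k_*$, which is what suppresses the small-scale power in the simulations described above.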
- After recombination the cosmic gas was left in a cold and neutral state. However, as the first stars and black holes formed within early galactic systems, their UV and X-ray radiation induced a gradual phase transition of the intergalactic gas into the warm and ionized state we currently observe. This process is known as cosmic reionization. Understanding how the energy deposition connected with galaxy and star formation shaped the properties of the intergalactic gas is one of the primary goals of present-day cosmology. In addition, reionization back-reacts on galaxy evolution, determining many of the properties of the high-redshift galaxy population that represent the current frontier of our discovery of the cosmos. In these two Lectures we provide a pedagogical overview of cosmic reionization and of the intergalactic medium, and of some of the open questions in these fields.
  Keywords: Intergalactic medium, Ionization, Reionization, Quasar, Cooling, Galaxy, Absorptivity, Absorbance, Intensity, Ultraviolet background, ...
- We present in this paper a new application of the geodesic light-cone (GLC) gauge for weak lensing calculations. Using interesting properties of this gauge, we derive an exact expression of the amplification matrix - involving convergence, magnification and shear - and of the deformation matrix - involving the optical scalars. These expressions are simple and non-perturbative as long as no caustics are created on the past light-cone and are, by construction, free from the thin lens approximation. We apply these general expressions to the example of an LTB model with an off-center observer and obtain explicit forms for the lensing quantities as a direct consequence of the non-perturbative transformation between GLC and LTB coordinates. We show their evolution in redshift for underdense and overdense LTB models and interpret their respective variations.
  Keywords: Lemaitre-Tolman-Bondi metric, Geodesic, Past light cones, Phase space caustic, Light cones, Homogenization, Optical scalars, Angular distance, Right ascension, Lambda-CDM model, ...
- Standard inflationary hot big bang cosmology predicts small fluctuations in the Cosmic Microwave Background (CMB) with isotropic Gaussian statistics. All measurements support the standard theory, except for a few anomalies discovered in the Wilkinson Microwave Anisotropy Probe maps and confirmed recently by the Planck satellite. The Cold Spot is one of the most significant of such anomalies, and the leading explanation of it posits a large void that imprints this extremely cold area via the linear Integrated Sachs-Wolfe (ISW) effect due to the decay of gravitational potentials over cosmic time, or via the Rees-Sciama (RS) effect due to late-time non-linear evolution. Despite several observational campaigns targeting the Cold Spot region, to date no suitably large void was found at higher redshifts $z > 0.3$. Here we report the detection of a supervoid of size $R = (192 \pm 15)\,h^{-1}$ Mpc and depth $\delta = -0.13 \pm 0.03$, centred at redshift $z = 0.22$. This supervoid, possibly the largest ever found, is large enough to significantly affect the CMB via the non-linear RS effect, as shown in our Lemaitre-Tolman-Bondi framework. This discovery presents the first plausible explanation for any of the physical CMB anomalies, and raises the possibility that local large-scale structure could be responsible for other anomalies as well.
  Keywords: Cosmic microwave background, CMB cold spot, Integrated Sachs-Wolfe, Void, Statistics, Cosmology, Rees-Sciama effect, Large scale structure, Wavelet, Planck mission, ...
- X-ray astronomy is an important tool in the astrophysicist's toolkit to investigate high-energy astrophysical phenomena. Theoretical numerical simulations of astrophysical sources are fully three-dimensional representations of physical quantities such as density, temperature, and pressure, whereas astronomical observations are two-dimensional projections of the emission generated via mechanisms dependent on these quantities. To bridge the gap between simulations and observations, algorithms for generating synthetic observations of simulated data have been developed. We present an implementation of such an algorithm in the yt analysis software package. We describe the underlying model for generating the X-ray photons, the important role that yt and other Python packages play in its implementation, and present a detailed workable example of the creation of simulated X-ray observations.
  Keywords: Simulations, Python, Line of sight, Monte Carlo method, Absorptivity, Cluster of galaxies, X-ray astronomy, Flexible Image Transport System, Telescopes, Cosmological redshift, ...
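The photon-generation step described above can be caricatured without any of the yt machinery. The sketch below uses hypothetical names and a toy exp(-E/kT) spectrum standing in for a real thermal emission model; it only illustrates the core idea of turning per-cell emissivities into a photon list.

```python
import math
import random

def sample_photons(cells, area_cm2, exposure_s, seed=42):
    """Illustrative Monte Carlo step: draw photon energies (keV) for each
    simulation cell from a bremsstrahlung-like exp(-E/kT) spectrum.
    `em` is a pre-computed emissivity proxy (photons / cm^2 / s)."""
    rng = random.Random(seed)
    photons = []
    for cell in cells:
        # expected photon counts for this cell; a full treatment
        # would draw a Poisson deviate around this expectation
        n = int(round(cell["em"] * area_cm2 * exposure_s))
        for _ in range(n):
            # inverse-CDF sampling of p(E) ~ exp(-E/kT):  E = -kT ln(u)
            photons.append(-cell["kT"] * math.log(rng.random()))
    return photons
```

A real pipeline would then project the photons along the line of sight, redshift them, and fold them through an instrument response; none of that is attempted here.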
- The recent paper by Jeltema & Profumo (2014) claims that contributions from K XVIII and Cl XVII lines can explain the unidentified emission line found by Bulbul et al. (2014) and also by Boyarsky et al. (2014a, 2014b). We show that their analysis relies upon incorrect atomic data and inconsistent spectroscopic modeling. We address these points and summarize in the appendix the correct values for the relevant atomic data from AtomDB.
  Keywords: Perseus, Cluster of galaxies, AtomDB, Abundance, Cooling, XMM-Newton MOS camera, Chemical abundance, Galactic Center, KeV line, Cool core galaxy cluster, ...
- We speculatively examine some issues related to the information content of primordial patches that collapse to form galaxies like the Milky Way. If the dark matter is warm, or if some other process dramatically suppressed small-scale density fluctuations, then the patch that formed the Milky Way would have had low primordial information content. Depending on assumptions about the accuracy with which the initial conditions are specified, the patch would have contained only several billion independent information-carrying 'pixels' if the warm-dark-matter (WDM) particle had a mass of 1 keV. This number of 'pixels' is much less than even the number of stars in the Milky Way. Like other recent observational tests, this would provide an argument disfavoring such a low mass, under two strong assumptions: (1) a high degree of structure in the Milky Way cannot arise from very smooth initial conditions, and (2) non-primordial information/randomness sources are negligible. An example of a non-primordial information source is a central black hole with an accretion disk and jets, which in principle could broadcast small-scale quantum fluctuations throughout the galaxy. This brings up a question, and even, in principle, a test (if the dark matter is warm) about the scale at which structure in the Galaxy is entirely deterministic from the initial conditions.
  Keywords: Milky Way, Galaxy, Warm dark matter, Dark matter, Accretion disk, Star, Black hole, Quantum fluctuation, Mass, Particles, ...
- Many successful approaches to semantic parsing build on top of the syntactic analysis of text, and make use of distributional representations or statistical models to match parses to ontology-specific queries. This paper presents a novel deep learning architecture which provides a semantic parsing system through the union of two neural models of language semantics. It allows for the generation of ontology-specific queries from natural language statements and questions without the need for parsing, which makes it especially suitable to grammatically malformed or syntactically atypical text, such as tweets, as well as permitting the development of semantic parsers for resource-poor languages.
  Keywords: Deep learning, Architecture, Neural network, Statistics, Google+, Computational linguistics, Conjunction, Robotics, Recurrent neural network, Network model, ...
- This paper presents foundational theoretical results on distributed parameter estimation for undirected probabilistic graphical models. It introduces a general condition on composite likelihood decompositions of these models which guarantees the global consistency of distributed estimators, provided the local estimators are consistent.
  Keywords: Graph, Factorisation, Statistics, Consistent estimator, Pseudolikelihood, Maximum likelihood, Graphical model, Undirected graph, Mass to light ratio, Boltzmann machine, ...
- Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval. We introduce a model that is able to represent the meaning of documents by embedding them in a low dimensional vector space, while preserving distinctions of word and sentence order crucial for capturing nuanced semantics. Our model is based on an extended Dynamic Convolutional Neural Network, which learns convolution filters at both the sentence and document level, hierarchically learning to capture and compose low level lexical features into high level semantic concepts. We demonstrate the effectiveness of this model on a range of document modelling tasks, achieving strong results with no feature engineering and with a more compact model. Inspired by recent advances in visualising deep convolutional networks for computer vision, we present a novel visualisation technique for our document networks which not only provides insight into their learning process, but also can be interpreted to produce a compelling automatic summarisation system for texts.
  Keywords: Neural network, Image Processing, Classification, Training set, Vector space, Computational linguistics, Ranking, Architecture, Information retrieval, Sentiment analysis, ...
- Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models. Somewhat ironically, setting up the hyper-parameters of Bayesian optimisation methods is notoriously hard. While reasonable practical solutions have been advanced, they can often fail to find the best optima. Surprisingly, there is little theoretical analysis of this crucial problem in the literature. To address this, we derive a cumulative regret bound for Bayesian optimisation with Gaussian processes and unknown kernel hyper-parameters in the stochastic setting. The bound, which applies to the expected improvement acquisition function and sub-Gaussian observation noise, provides us with guidelines on how to design hyper-parameter estimation methods. A simple simulation demonstrates the importance of following these guidelines.
  Keywords: Hyperparameter, Bayesian, Gaussian process, Eigenvalue, Machine learning, Simulations, Orthonormality, Cauchy-Schwarz inequality, Covariance, Maximum likelihood, ...
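Since the bound above applies to the expected improvement acquisition function, it may help to recall what that function is. A minimal pure-Python implementation of the standard EI formula for maximisation (function names are ours, not from the paper):

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    """EI at a point with GP posterior mean `mu` and std `sigma`,
    relative to the incumbent value `best` (maximisation)."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    return (mu - best) * norm_cdf(z) + sigma * norm_pdf(z)
```

Larger posterior uncertainty raises EI at equal mean, which is precisely the exploration term that misestimated kernel hyper-parameters distort.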
- Portfolio methods provide an effective, principled way of combining a collection of acquisition functions in the context of Bayesian optimization. We introduce a novel approach to this problem motivated by an information theoretic consideration. Our construction additionally provides an extension of Thompson sampling to continuous domains with GP priors. We show that our method outperforms a range of other portfolio methods on several synthetic problems, automated machine learning tasks, and a simulated control task. Finally, the effectiveness of even the random portfolio strategy suggests that portfolios in general should play a more pivotal role in Bayesian optimization.
  Keywords: Hyperparameter, Bayesian, Entropy, Simulations, Summarization, Latent Dirichlet allocation, Machine learning, Gaussian process, Monte Carlo method, Marginal likelihood, ...
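Thompson sampling itself is easy to state for a finite candidate set: draw one sample from each arm's posterior and pick the argmax. The sketch below (hypothetical names) does exactly that with independent Gaussian posteriors; the paper's contribution is the continuous-domain analogue, where the per-arm Gaussians are replaced by a joint draw from a GP posterior, which this toy deliberately omits.

```python
import random

def thompson_select(posteriors, rng=random):
    """Thompson sampling over a finite candidate set.

    `posteriors` is a list of (mean, std) pairs, one per arm; we draw one
    sample from each arm's Gaussian posterior and return the index of the
    largest sample, so arms are chosen with probability equal to their
    posterior probability of being best."""
    samples = [rng.gauss(mu, sigma) for mu, sigma in posteriors]
    return max(range(len(samples)), key=samples.__getitem__)
```

Used inside a portfolio, each acquisition function nominates a candidate and a meta-criterion (here it would be the sampled value) picks among them.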
- In this paper we analyze a simple scenario in which Dark Matter (DM) consists of free fermions with mass $m_f$. We assume that on galactic scales these fermions are capable of forming a degenerate Fermi gas, in which stability against gravitational collapse is ensured by the Pauli exclusion principle. The mass density of the resulting configuration is governed by a non-relativistic Lane-Emden equation, thus leading to a universal cored profile that depends only on one free parameter in addition to $m_f$. After reviewing the basic formalism, we test this scenario against experimental data describing the dispersion velocity of the eight classical dwarf spheroidal galaxies of the Milky Way. We find that, despite its extreme simplicity, the model exhibits a good fit of the data and realistic predictions for the size of DM halos provided that $m_f \simeq 200$ eV. We propose a concrete realization of this model in which DM is produced non-thermally via inflaton decay. We show that imposing the correct relic abundance and the bound on the free-streaming length constrains the inflation model in terms of inflaton mass, its branching ratio into DM and the reheating temperature.
  Keywords: Dark matter, Galaxy, Dark matter particle, Degenerate Fermi gas, Dwarf galaxy, Velocity dispersion, Elliptical galaxy, Classical dwarf spheroidal galaxy, Bosonization, Fermi gas, ...
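The cored profile above follows from the $n = 3/2$ Lane-Emden equation, $\theta'' + (2/\xi)\theta' = -\theta^{3/2}$, whose first zero $\xi_1 \approx 3.654$ sets the halo radius in units of the scale length. A minimal pure-Python RK4 integrator (illustrative, with a series expansion to start off the regular singular point at $\xi = 0$) recovers it:

```python
def lane_emden_first_zero(n=1.5, h=1e-3):
    """Integrate theta'' + (2/xi) theta' = -theta^n from the centre
    outwards and return the first zero xi_1 of theta (RK4, fixed step)."""
    # series start near xi = 0: theta ~ 1 - xi^2/6, theta' ~ -xi/3
    xi = 1e-6
    theta = 1.0 - xi * xi / 6.0
    dtheta = -xi / 3.0

    def rhs(x, th, dth):
        # clamp theta at 0 so theta**n stays real near the surface
        return dth, -max(th, 0.0) ** n - 2.0 * dth / x

    while True:
        k1 = rhs(xi, theta, dtheta)
        k2 = rhs(xi + h / 2, theta + h / 2 * k1[0], dtheta + h / 2 * k1[1])
        k3 = rhs(xi + h / 2, theta + h / 2 * k2[0], dtheta + h / 2 * k2[1])
        k4 = rhs(xi + h, theta + h * k3[0], dtheta + h * k3[1])
        new_theta = theta + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        new_dtheta = dtheta + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if new_theta <= 0.0:
            # linear interpolation to the zero crossing theta(xi_1) = 0
            return xi + h * theta / (theta - new_theta)
        xi, theta, dtheta = xi + h, new_theta, new_dtheta
```

The known value for the $n = 3/2$ polytrope is $\xi_1 \simeq 3.6538$.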
- The EoR 21-cm signal is expected to become increasingly non-Gaussian as reionization proceeds. We have used semi-numerical simulations to study how this affects the error predictions for the EoR 21-cm power spectrum. We expect $SNR=\sqrt{N_k}$ for a Gaussian random field where $N_k$ is the number of Fourier modes in each $k$ bin. We find that the effect of non-Gaussianity on the $SNR$ does not depend on $k$. Non-Gaussianity is important at high $SNR$ where it imposes an upper limit $[SNR]_l$. It is not possible to achieve $SNR > [SNR]_l$ even if $N_k$ is increased. The value of $[SNR]_l$ falls as reionization proceeds, dropping from $\sim 500$ at $\bar{x}_{{\rm HI}} = 0.8-0.9$ to $\sim 10$ at $\bar{x}_{{\rm HI}} = 0.15$. For $SNR \ll [SNR]_l$ we find $SNR = \sqrt{N_k}/A$ with $A \sim 1.5 - 2.5$, roughly consistent with the Gaussian prediction. We present a fitting formula for the $SNR$ as a function of $N_k$, with two parameters $A$ and $[SNR]_l$ that have to be determined using simulations. Our results are relevant for predicting the sensitivity of different instruments to measure the EoR 21-cm power spectrum, which to date have been largely based on the Gaussian assumption.
  Keywords: Non-Gaussianity, Epoch of reionization, Reionization, Hydrogen 21 cm line, Signal to noise ratio, 21-cm power spectrum, Simulations, Ionization, Random Field, Dark matter, ...
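The two regimes quoted above pin down the limiting behaviour of any such fitting formula. One simple interpolation with the correct limits (an illustrative guess on our part; the paper's own two-parameter fit should be used in practice) is

```latex
\frac{1}{\mathrm{SNR}^2} \;=\; \frac{A^2}{N_k} \;+\; \frac{1}{[\mathrm{SNR}]_l^{\,2}},
```

which reduces to $\mathrm{SNR}=\sqrt{N_k}/A$ for $N_k \ll (A\,[\mathrm{SNR}]_l)^2$ and saturates at $[\mathrm{SNR}]_l$ as $N_k\to\infty$.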
- This is the draft version of a textbook on "real-world" applications of the AdS/CFT duality for beginning graduate students in particle physics and for researchers in other fields. The aim of this book is to provide background materials such as string theory, general relativity, nuclear physics, nonequilibrium physics, and condensed-matter physics as well as some key applications of the AdS/CFT duality in a single textbook. Contents: (1) Introduction, (2) General relativity and black holes, (3) Black holes and thermodynamics, (4) Strong interaction and gauge theories, (5) The road to AdS/CFT, (6) The AdS spacetime, (7) AdS/CFT - equilibrium, (8) AdS/CFT - adding probes, (9) Basics of nonequilibrium physics, (10) AdS/CFT - nonequilibrium, (11) Other AdS spacetimes, (12) Applications to quark-gluon plasma, (13) Basics of phase transition, (14) AdS/CFT - phase transition.
  Keywords: Black hole, AdS/CFT correspondence, Gauge theory, Horizon, Super Yang-Mills theory, Entropy, String theory, Fluid dynamics, Quantum chromodynamics, Quark-gluon plasma, ...
- In this letter we show that magnetic fields generated at the electroweak phase transition must have helicity in order to explain the void magnetic fields apparently observed today. In the most optimistic case, the helicity fraction must be at least of order $10^{-11}$. We show that the helicity naturally produced in conjunction with the baryon asymmetry is too small to explain observations, and therefore new mechanisms to generate primordial helicity are required.
  Keywords: Helicity, Void, Electroweak phase transition, Magnetic field generation, Magnetic energy, Magnetic helicity, Coherence length, Magnetogenesis, Recombination, Horizon, ...
- Magnetic fields large enough to be observable are ubiquitous in astrophysics, even at extremely large length scales. This has led to the suggestion that such fields are seeded at very early (inflationary) times, and subsequently amplified by various processes involving, for example, dynamo effects. Many such mechanisms give rise to extremely large magnetic fields at the end of inflationary reheating, and therefore also during the quark-gluon plasma epoch of the early universe. Such plasmas have a well-known holographic description. We show that holography imposes an upper bound on the intensity of magnetic fields (scaled by the squared temperature) in these circumstances, and that the values expected in some models of cosmic magnetism come close to attaining that bound.
  Keywords: Black hole, Holographic principle, Cosmic magnetic field, Event horizon, Foliation, Horizon, Curvature, Holographic bound, Cosmology, AdS black hole, ...
- With the rapid development of wide area networks and the availability of low-cost, powerful computational resources, grid computing has gained popularity. With the advent of grid computing, the spatial limitations of conventional distributed systems can be overcome, and underutilized computing resources at different locations around the world can be put to work on distributed jobs. Workload and resource management are the key grid services at the service level of grid infrastructures, of which load balancing is the main concern for grid developers. Load is the major problem a server faces, especially as the number of users increases, and a lot of research is being done in the area of load management. This paper presents the various mechanisms of load balancing in grid computing so that readers will get an idea of which algorithm would be suitable in different situations. Keywords: wide area network, distributed computing, load balancing.
  Keywords: Keyphrase, Networks, Algorithms
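As a concrete illustration of one family of strategies such surveys cover, here is a minimal least-loaded (least-connections) dispatcher in Python. The class name is hypothetical and job completion is ignored for brevity; real grid schedulers also weight servers by capacity and decay loads over time.

```python
import heapq

class LeastLoadedBalancer:
    """Dispatch each incoming job to the server with the fewest
    currently assigned jobs, using a min-heap keyed on load."""

    def __init__(self, servers):
        # (current_load, server_name) pairs; all loads start at zero
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def assign(self):
        """Pick the least-loaded server, record the new job, return it."""
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, server))
        return server
```

With equal-capacity servers and no completions this degenerates to round-robin; the two strategies diverge once job durations differ.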
- As a contribution to the current efforts to understand supersymmetry-breaking by meta-stable vacua, we study general properties of supersymmetry-breaking vacua in Wess-Zumino models: we show that tree-level degeneracy is generic, explore some constraints on the couplings, and present a simple model with a long-lived meta-stable vacuum, ending with some generalizations to non-renormalizable models.
  Keywords: Supersymmetry breaking, Superpotential, Kahler potential, Eigenvalue, Wess-Zumino model, Bosonization, Goldstino, Dynamical supersymmetry breaking, Supersymmetric vacua, Nilpotent, ...
- In direct gauge mediation, gaugino masses often vanish at the leading order of supersymmetry breaking. Recently, this phenomenon has been understood in connection with the global structure of vacua in O'Raifeartaigh-type models. In this note, we further explore a connection between gaugino masses and the landscape of vacua in more general situations, focusing on a few examples which demonstrate our idea. In particular, we present a calculable model with non-vanishing leading order gaugino masses on the lowest energy vacuum.
  Keywords: Gaugino, Supersymmetry breaking, Kahler potential, Standard Model, Tachyon, Fermion mass, Eigenvalue, Superpotential, Gauge symmetry, Bosonization, ...
- We give the first algorithm for Matrix Completion whose running time and sample complexity is polynomial in the rank of the unknown target matrix, linear in the dimension of the matrix, and logarithmic in the condition number of the matrix. To the best of our knowledge, all previous algorithms either incurred a quadratic dependence on the condition number of the unknown matrix or a quadratic dependence on the dimension of the matrix in the running time. Our algorithm is based on a novel extension of Alternating Minimization which we show has theoretical guarantees under standard assumptions even in the presence of noise.
  Keywords: Singular value, Ranking, Orthonormality, Least squares, Eigenvalue, Spectral decomposition, Dominant singularity, Recommendation system, Gram-Schmidt process, Operator norm, ...
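Alternating Minimization is easiest to see in the rank-1 special case: fix $v$ and solve a one-dimensional least-squares problem for each entry of $u$ over the observed entries, then swap roles. A pure-Python sketch (names are ours; the paper's algorithm adds the machinery needed for general rank, initialization, and noise):

```python
def als_rank1(observed, n_rows, n_cols, iters=200):
    """Alternating minimisation specialised to rank 1: fit M_ij ~ u_i v_j
    using only the observed entries, given as a dict {(i, j): value}."""
    u = [1.0] * n_rows
    v = [1.0] * n_cols
    for _ in range(iters):
        # fix v, solve least squares for each u_i over row i's observations
        for i in range(n_rows):
            num = sum(val * v[c] for (r, c), val in observed.items() if r == i)
            den = sum(v[c] ** 2 for (r, c) in observed if r == i)
            if den:
                u[i] = num / den
        # fix u, solve least squares for each v_j over column j's observations
        for j in range(n_cols):
            num = sum(val * u[r] for (r, c), val in observed.items() if c == j)
            den = sum(u[r] ** 2 for (r, c) in observed if c == j)
            if den:
                v[j] = num / den
    return u, v
```

On noiseless rank-1 data with enough observed entries the iterates converge to the true factorization up to an overall rescaling of $u$ and $v$.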
- We report on a precision measurement of the parity-violating asymmetry in fixed target electron-electron (Moller) scattering: A_PV = -131 +/- 14 (stat.) +/- 10 (syst.) parts per billion, leading to the determination of the weak mixing angle \sin^2\theta_W^eff = 0.2397 +/- 0.0010 (stat.) +/- 0.0008 (syst.), evaluated at Q^2 = 0.026 GeV^2. Combining this result with the measurements of \sin^2\theta_W^eff at the Z^0 pole, the running of the weak mixing angle is observed with over 6 sigma significance. The measurement sets constraints on new physics effects at the TeV scale.
  Keywords: Mixing angle, Azimuth, Statistics, Momentum transfer, Standard Model, Parity violation, Radiative correction, Linear regression, Helicity, Coupling constant, ...
- We study the generation of strong magnetic fields in magnetars and in the early universe. For this purpose we calculate the antisymmetric contribution to the photon polarization tensor in a medium consisting of an electron-positron plasma and a gas of neutrinos and antineutrinos, interacting within the Standard Model. Such a contribution exactly takes into account the temperature and the chemical potential of the plasma as well as the photon dispersion law in this background matter. It is shown that a nonvanishing Chern-Simons parameter, which appears if there is a nonzero asymmetry between neutrinos and antineutrinos, leads to an instability of the magnetic field, resulting in its growth. We apply our result to the description of the magnetic field amplification in the first second of a supernova explosion. It is suggested that this mechanism can explain the strong magnetic fields of magnetars. Then we use our approach to study the cosmological magnetic field evolution. We find a lower bound on the neutrino asymmetries consistent with the well-known Big Bang nucleosynthesis bound in a hot universe plasma. Finally we examine the issue of whether a magnetic field can be amplified in a background matter consisting of self-interacting electrons and positrons.
  Keywords: Neutrino, Magnetar, Chern-Simons term, The early Universe, Instability, Antineutrino, Strong magnetic field, Positron, Big bang nucleosynthesis, Photon polarization tensor, ...
- Starting from the collisional Boltzmann equation, we derive for the first time and from first principles a Boltzmann hierarchy for neutrinos including neutrino-neutrino interactions mediated by a scalar particle. Such interactions appear, for example, in majoron-like models of neutrino mass generation. In contrast to, e.g., the first-order Boltzmann hierarchy for Thomson-scattering photons, our interacting neutrino Boltzmann hierarchy contains additional momentum-dependent collision terms arising from a non-negligible energy transfer in the neutrino-neutrino scattering process. This necessitates that we track each momentum mode of the neutrino phase space distribution individually, even in the case of massless neutrinos. Comparing our hierarchy with the commonly used $(c_{\rm eff}^2,c_{\rm vis}^2)$-parameterisation, we find no formal correspondence between the two approaches, which raises the question of whether the latter parameterisation even has an interpretation in terms of particle scattering. Lastly, although we have invoked majoron-like models as a motivation for our study, our treatment is in fact generally applicable to all scenarios in which the neutrino and/or other pre-thermalised relativistic fermions interact with scalar particles.
  Keywords: Neutrino, Neutrino mass, Cosmic microwave background, Boltzmann transport equation, Phase space, Neutrino interactions, CMB temperature anisotropy, Free streaming, Phase space density, Collision integral, ...
- Results for the leading two-loop corrections of $\mathcal{O}\left(\alpha_t^2\right)$ from the Yukawa sector to the Higgs-boson mass spectrum of the MSSM with complex parameters are presented, with details of the analytical calculation performed in the Feynman-diagrammatic approach using a mixed on-shell/$\overline{\text{DR}}$ scheme that can be directly matched onto the higher-order terms in the code ${\tt FeynHiggs}$. Numerical results are shown for the masses and mixing effects in the neutral Higgs-boson sector and their variation with the phases of the complex parameters. Furthermore, the analytical expressions of the two-loop self-energies and the required renormalization constants are recorded. The new results can consistently be implemented in ${\tt FeynHiggs}$.
  Keywords: Self-energy, Renormalization, Higgs boson, Minimal supersymmetric Standard Model, Tadpole, Higgs boson mass, Goldstone boson, Yukawa coupling, Feynman diagrams, Bosonization, ...
- We investigate the weak lensing effect by line-of-sight structures with a surface mass density of <~ 10^8 solar mass/arcsec^2 in QSO-galaxy quadruple lens systems. Using high-resolution N-body simulations in warm dark matter (WDM) models and four observed quadruple lenses that show anomalies in the flux ratios, we obtain constraints on the mass of thermal WDM, m_WDM >= 1.3 keV (95% CL), which is consistent with those from Lyman-$\alpha$ forests and the number counts of high-redshift galaxies at z > 4. Our results show that WDM with a free-streaming comoving wavenumber k_{fs} <= 27 h/Mpc is disfavored as the major component of the cosmological density at redshifts 0.5 <~ z <~ 4.
  Keywords: WDM particles, Simulations, Line of sight, Warm dark matter, Gravitational lens galaxy, Free streaming, Weak lensing, Lambda-CDM model, Thermalisation, Cold dark matter, ...
- The objective of this paper is to present a new Riemannian obstruction to minimal isometric immersions into Euclidean spaces, as another answer to a question raised by S.S. Chern concerning the existence of minimal isometric immersions into Euclidean spaces.
  Keywords: Curvature, Real space, Second fundamental form, Sectional curvature, Scalar curvature, Geodesic, Curvature tensor, Two-form, Manifold, Critical point, ...
- The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.
  Keywords: Classification, Image Processing, Objective, Field, ...
- With the help of the recently developed SIESTA-PEXSI method [J. Phys.: Condens. Matter 2014, 26, 305503], we perform Kohn-Sham density functional theory (DFT) calculations to study the stability and electronic structure of hexagonal graphene nanoflakes (GNFs) with up to 11,700 atoms. We calculate several quantities such as the cohesive energy and the HOMO-LUMO energy gap for armchair (AC) and zigzag (ZZ) edged GNFs, respectively. We observe that the cohesive energies of ACGNFs and ZZGNFs behave qualitatively differently for systems of large size. We find that, for ZZGNFs, the HOMO state exhibits features that are dominated by the electron density around the edges of the nanoflake. The contribution from the edge becomes more significant as the system size increases. We observe the opposite for ACGNFs. For this type of nanoflakes, the structure of the HOMO state allows us to identify two aromatic structures with different stability characteristics. These aromatic structures appear to depend only on whether the system has 4N or 4N+2 electrons, where N is an integer.
  Keywords: Graphene, Local density of states, Density of states, Edge excitations, Cohesive energy, Density functional theory, Electronic density, Atomic orbital, Band gap, Hydrogen atom, ...
- We study the acceleration of CR protons and nuclei at GRB internal shocks. Physical quantities and their time evolution are estimated using the internal shock modeling implemented by Daigne & Mochkovitch (1998). We consider different hypotheses about the way the energy dissipated at internal shocks is shared between accelerated CR, e- and B field. We model CR acceleration at mildly relativistic shocks, including all the significant energy loss processes. We calculate CR and neutrino release from single GRBs, assuming that nuclei heavier than protons are present in the relativistic wind. Protons can only reach maximum energies of ~ 10^19.5 eV, while intermediate and heavy nuclei are able to reach values of ~ 10^20 eV and above. The spectra of nuclei escaping from the acceleration site are found to be very hard, while the combined spectrum of protons and neutrons is much softer. We calculate the diffuse UHECR flux expected on Earth using the GRB luminosity function from Wanderman & Piran (2010). Only the models assuming that the prompt emission represents a very small fraction of the energy dissipated at internal shocks, and that most of this dissipated energy is communicated to accelerated CR, are able to reproduce the magnitude of the observed UHECR flux. For these models, the observed shape of the UHECR spectrum can be well reproduced and the evolution of the composition is compatible with the trend suggested by Auger. We discuss implications of the softer proton component for the GCR to EGCR transition in the light of the recent composition analyses (KASCADE-Grande experiment). The associated secondary particle diffuse fluxes do not upset any current observational limit. Diffuse neutrino flux from GRB sources should however be detected within the lifetime of neutrino observatories.
  Keywords: Gamma ray burst, Luminosity, Cosmic ray, Ultra-high-energy cosmic ray, Lorentz factor, Turbulence, Prompt emission, Neutrino, Dissipation, Photodisintegration, ...
- We study the effect of a possible helicity component of a primordial magnetic field on the tensor part of the cosmic microwave background temperature anisotropies and polarization. We give analytical approximations for the tensor contributions induced by helicity, discussing their amplitude and spectral index as functions of the power spectrum of the primordial magnetic field. We find that a helical magnetic field creates a parity-odd component of gravity waves, inducing parity-odd polarization signals. However, only if the magnetic field is close to scale invariant and its helical part is close to maximal is the effect sufficiently large to be observable. We also discuss the implications of causality for the magnetic field spectrum.
- Several models of physics beyond the Standard Model predict neutral particles that decay into final states consisting of collimated jets of light leptons and hadrons (so-called "lepton jets"). These particles can also be long-lived, with a decay length comparable to, or even larger than, the linear dimensions of the LHC detectors. This paper presents the results of a search for lepton jets in proton--proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 8 TeV in a sample of 20.3 fb$^{-1}$ collected during 2012 with the ATLAS detector at the LHC. Limits on models predicting Higgs boson decays to neutral long-lived lepton jets are derived as a function of the particle's proper decay length.
- In the presence of cosmic chiral asymmetry, chiral-vorticity and chiral-magnetic effects can play an important role in the generation and evolution of magnetic fields in the early universe. We include these chiral effects in the magnetic field equations and find solutions under simplifying assumptions. Our numerical and analytical results show the presence of an attractor solution in which chiral effects produce a strong, narrow, Gaussian peak in the magnetic spectrum and the magnetic field becomes maximally helical. The peak in the spectrum shifts to longer length scales and becomes sharper with evolution. We also find that the dynamics may become non-linear for certain parameters, pointing to the necessity of a more complete analysis.
- The Euclid space telescope will observe ~10^5 strong galaxy-galaxy gravitational lens events in its wide-field imaging survey over around half the sky. Identifying the gravitational lenses from their observed morphologies, however, requires solving the difficult problem of reliably separating the lensed sources from contaminant populations such as tidal tails, and the resulting sample presents challenges for spectroscopic follow-up redshift campaigns. Here I present alternative selection techniques for strong gravitational lenses in both Euclid and the Square Kilometre Array, exploiting the strong magnification bias present in the steep end of the Hα luminosity function and HI mass function. Around 10^3 strong lensing events are detectable with this method in the Euclid wide survey. While only ~1% of the total haul of Euclid lenses, this sample has ~100% reliability, known source redshifts, high signal-to-noise and a magnification-based selection independent of assumptions about lens morphology. With the proposed Square Kilometre Array dark energy survey, the number of reliable strong gravitational lenses with source redshifts can reach 10^5.
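The magnification-bias selection described in this abstract can be illustrated with a short numerical sketch. Behind a lens, magnification μ lowers the effective flux limit to L_min/μ while diluting the observed solid angle by 1/μ; on the steep bright end of a luminosity function the first effect wins and lensed sources outnumber unlensed ones. The Schechter-function parameters below are illustrative placeholders, not the paper's actual Hα luminosity function:

```python
import numpy as np

def schechter_counts(L_min, L_star=1.0, alpha=-1.35):
    """Number of sources above L_min for a Schechter luminosity function
    (arbitrary normalization), integrated with the trapezoid rule."""
    L = np.logspace(np.log10(L_min), 2, 4000)
    phi = (L / L_star) ** alpha * np.exp(-L / L_star)
    return np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(L))

def magnification_bias(L_min, mu, alpha=-1.35):
    """Ratio of lensed to unlensed source counts above a flux limit:
    magnification mu lowers the limit to L_min/mu but dilutes the
    solid angle by 1/mu."""
    return schechter_counts(L_min / mu, alpha=alpha) / mu / schechter_counts(L_min, alpha=alpha)
```

On the exponential tail (L_min well above L_star) the ratio is much greater than one, which is why a magnification-selected sample can be so reliable; near the faint end the 1/μ dilution dominates and the bias drops below one.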
- We show that novel paths to dark matter generation and baryogenesis are open when the Standard Model is extended with three sterile neutrinos $N_i$ and a charged scalar $\delta^+$. Specifically, we propose a new production mechanism for the dark matter particle --a multi-keV sterile neutrino, $N_1$-- that does not depend on the active-sterile mixing angle and does not rely on a large primordial lepton asymmetry. Instead, $N_1$ is produced, via freeze-in, by the decays of $\delta^+$ while it is in equilibrium in the early Universe. In addition, we demonstrate that, thanks to the couplings between the heavier sterile neutrinos $N_{2,3}$ and $\delta^+$, baryogenesis via leptogenesis can be realized close to the electroweak scale. The lepton asymmetry is generated either by $N_{2,3}$-decays for masses $M_{2,3}\gtrsim$ TeV, or by $N_{2,3}$-oscillations for $M_{2,3}\sim$ GeV. Experimental signatures of this scenario include an X-ray line from dark matter decays, and the direct production of $\delta^+$ at the LHC. This model thus describes a minimal, testable scenario for neutrino masses, the baryon asymmetry, and dark matter.
- We present the first measurement of the cross-correlation of weak gravitational lensing and the extragalactic gamma-ray background emission, using data from the Canada-France-Hawaii Telescope Lensing Survey and the Fermi Large Area Telescope. The cross-correlation is a powerful probe of dark matter annihilation, because both cosmic shear and gamma-ray emission originate directly from the same DM distribution in the universe, and it can be used to constrain the dark matter annihilation cross-section. We show that the measured lensing-gamma correlation is consistent with a null signal. Comparing the result to theoretical predictions, we exclude dark matter annihilation cross-sections of <sigma v> = 10^{-24}-10^{-25} cm^3 s^{-1} for a 100 GeV dark matter particle. If dark matter halos exist down to a mass scale of 10^-6 M_sun, we are able to place constraints at the thermal cross-section <sigma v> ~ 3 x 10^{-26} cm^3 s^{-1} for 10 GeV dark matter annihilating into tau^{+} tau^{-}. Future gravitational lensing surveys will increase the sensitivity to probe annihilation cross-sections of <sigma v> ~ 3 x 10^{-26} cm^3 s^{-1} even for a 100 GeV dark matter particle. Detailed modeling of the contributions from astrophysical sources to the cross-correlation signal could further improve the constraints by ~ 40-70%.
- We examine the claimed excess X-ray line emission near 3.5 keV with a new analysis of XMM-Newton observations of the Milky Way center and with a re-analysis of the data on M31 and clusters. In no case do we find conclusive evidence for an excess. We show that known plasma lines, including in particular the K XVIII lines at 3.48 and 3.52 keV, provide a satisfactory fit to the XMM data from the Galactic center. We assess the expected flux for the K XVIII lines and find that the measured line flux falls squarely within the predicted range based on the brightness of other well-measured lines in the energy range of interest. We then re-evaluate the evidence for excess emission from clusters of galaxies, including a previously unaccounted-for Cl XVII line at 3.51 keV, and allowing for systematic uncertainty in the expected flux from known plasma lines and for additional uncertainty due to potential variation in the abundances of different elements. We find that no conclusive excess line emission is present within the systematic uncertainties in Perseus or in other clusters. Finally, we re-analyze XMM data for M31 and find no statistically significant line emission near 3.5 keV to a level greater than one sigma.
- Kurie-plot experiments allow for neutrino-mass measurements based on kinematics in an almost model-independent manner. A future tritium-based KATRIN-like experiment can be sensitive to light sterile neutrinos with masses below 18 keV, which are among the prime candidates for warm dark matter. Here we consider such keV neutrinos in left--right symmetric extensions, i.e. coupled to right-handed currents, which allow for an enhanced contribution to beta decay even for small active--sterile mixing, without violating astrophysical X-ray constraints. The modified spectral shape is in principle distinguishable from the standard contribution---especially for sterile neutrino masses below 9 keV, which can lead to a distinct peak. We compare the sensitivity to constraints from the LHC and neutrinoless double beta decay.
- We conduct a comprehensive search for X-ray emission lines from sterile neutrino dark matter, motivated by recent claims of unidentified emission lines in the stacked X-ray spectra of galaxy clusters and the centers of the Milky Way and M31. Since the claimed emission lines lie around 3.5 keV, we focus on galaxies and galaxy groups (masking the central regions), since these objects emit very little radiation above ~2 keV and offer a clean background against which to detect emission lines. We develop a formalism for maximizing the signal-to-noise ratio of sterile neutrino emission lines by weighting each X-ray event according to the expected dark matter profile. In total, we examine 81 and 89 galaxies with Chandra and XMM-Newton respectively, totaling 15.0 and 14.6 Ms of integration time. We find no significant evidence of any emission lines, placing strong constraints on the mixing angle of sterile neutrinos with masses between 4.8 and 12.4 keV. In particular, if the 3.57 keV feature from Bulbul et al. (2014) were due to 7.1 keV sterile neutrino emission, we would have detected it at 4.4 sigma and 11.8 sigma in our two samples. Unlike previous constraints, our measurements do not depend on the model of the X-ray background or on the assumed logarithmic slope of the center of the dark matter profile.
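The event-weighting idea in this abstract (each X-ray event weighted by the expected dark-matter signal relative to background) amounts to a matched filter for a counting experiment. A minimal generic sketch, not the paper's actual formalism, is:

```python
import numpy as np

def matched_weights(signal_rate, background_rate):
    """Per-bin weights w proportional to S/B: for a faint signal on a
    Poisson background, this choice maximizes the significance of the
    weighted event sum (Cauchy-Schwarz)."""
    return signal_rate / background_rate

def weighted_significance(signal_rate, background_rate, weights):
    """Significance of the weighted sum, background-dominated case:
    S/N = sum(w * S) / sqrt(sum(w^2 * B))."""
    return np.sum(weights * signal_rate) / np.sqrt(np.sum(weights**2 * background_rate))
```

Note that the significance is invariant under rescaling all weights, so only the shape of the expected dark matter profile matters, not its normalization.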
- Several recent works have reported the detection of an unidentified X-ray line at 3.55 keV, which could possibly be attributed to the decay of dark matter (DM) particles in the halos of galaxy clusters and in the M31 galaxy. We analyze all publicly available XMM-Newton data on dwarf spheroidal galaxies to test the possible DM origin of the line. Dwarf spheroidal galaxies have high mass-to-light ratios, and their interstellar medium is not a source of diffuse X-ray emission, so they are expected to provide the cleanest DM decay line signal. Our analysis shows no evidence for the presence of the line in the stacked spectra of the dwarf galaxies. It excludes the sterile neutrino DM decay origin of the 3.5 keV line reported by Bulbul et al. (2014) at the level of 4.6 sigma under standard assumptions about the Galactic DM column density in the direction of the selected dwarf galaxies, and at the level of 3.3 sigma assuming minimal Galactic DM column density. As a by-product of our analysis, we provide updated upper limits on the mixing angle of sterile neutrino DM in the mass range between 2 and 20 keV.
- We revisit the X-ray spectrum of the central 14' of the Andromeda galaxy, discussed in our previous work [1402.4119]. Recently in [1408.1699] it was claimed that if one limits the analysis of the data to the interval 3-4 keV, the significance of the detection of the line at 3.53 keV drops below 2 sigma. In this note we show that such a restriction is not justified, as the continuum is well-modeled as a power law up to 8 keV, and parameters of the background model are well constrained over this larger interval of energies. This allows for a detection of the line at 3.53 keV with a statistical significance greater than ~3 sigma and for the identification of several known atomic lines in the energy range 3-4 keV. Limiting the analysis to the 3-4 keV interval results in increased uncertainty, thus decreasing the significance of the detection. We also argue that, with the M31 data included, a consistent interpretation of the 3.53 keV line as an atomic line of K XVIII in all studied objects is problematic.
- We detect a line at $3.539\pm 0.011$ keV in the deep-exposure dataset of the Galactic Center region observed with XMM-Newton. Although it is hard to completely exclude an astrophysical origin of this line in the Galactic Center data alone, the dark matter interpretation of the signal observed in the Perseus galaxy cluster and the Andromeda galaxy [1402.4119] and in the stacked spectra of galaxy clusters [1402.2301] is fully consistent with these data. Moreover, the Galactic Center data support this interpretation, as the line is observed at the same energy and has a flux consistent with expectations for the Galactic dark matter distribution for a class of Milky Way mass models.
- [Abridged] We present an application of statistical tools to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological $N$-body simulations. We use statistical emulators to efficiently explore the input parameter space of our model, ChemTreeN. We show how a sensitivity analysis can be performed on these model emulators to characterize and quantify the relationship between model input parameters and predicted observable properties. The result of this analysis tells the user which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs of interest. Conversely, it allows us to identify which model parameters can be most efficiently constrained by a given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of its satellite galaxies. A statistical comparison of model outputs and real observables is used to obtain a "best-fitting" parameter set. We consider different Milky Way-like dark matter halos to account for the dependence of the best-fitting parameter selection on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with the best-fitting parameters produced luminosity functions that tightly fit their observed counterparts. However, only one model was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their corresponding merger histories, as good representations of the underlying merger history of the Milky Way.
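A toy version of this emulator-based sensitivity analysis can be sketched as follows. A quadratic least-squares surrogate stands in for the statistical emulator (a Gaussian process would typically be used), and a cheap analytic function stands in for an expensive simulation like ChemTreeN; all names and parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive simulation: the output depends
# strongly on x0, weakly on x1, and not at all on x2.
def expensive_model(x):
    return 5.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

# "Emulator": a quadratic surrogate fit to a small design of model runs.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = expensive_model(X)
design = np.column_stack([np.ones(len(X)), X, X ** 2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

def emulator(x):
    return np.column_stack([np.ones(len(x)), x, x ** 2]) @ coef

def main_effect(i, n=4000):
    """One-at-a-time sensitivity: variance of the emulator output when
    only input i varies and the others are held at their midpoint."""
    x = np.full((n, 3), 0.5)
    x[:, i] = rng.uniform(0.0, 1.0, n)
    return float(np.var(emulator(x)))

sens = [main_effect(i) for i in range(3)]
# x0 dominates, x1 matters weakly, and x2 is inert -- exactly the kind
# of parameter the analysis would flag as removable from the model.
```

The cheap emulator makes the many evaluations required by the sensitivity analysis affordable, which is the point of the approach described in the abstract.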
- We show that a scalar and a fermion charged under a global $U(1)$ symmetry can not only explain the existence and abundance of dark matter (DM) and dark radiation (DR), but also imbue DM with improved scattering properties at galactic scales, while remaining consistent with all other observations. Delayed DM-DR kinetic decoupling eases the missing satellites problem, while DR mediated self-interactions of DM ease the cusp vs. core and too big to fail problems. In this scenario, DM is expected to be pseudo-Dirac and have a mass $100\,{\rm keV}\lesssim m_\chi\lesssim 10\,{\rm GeV}$. The predicted DR may be measurable using the primordial elemental abundances from big bang nucleosynthesis (BBN), and using the cosmic microwave background (CMB).
- We investigate how accurately we can constrain the lepton number asymmetry xi_nu = mu_nu/T_nu in the Universe by using future observations of 21 cm line fluctuations and the cosmic microwave background (CMB). We find that combinations of the 21 cm line and CMB observations can constrain the lepton asymmetry better than big-bang nucleosynthesis (BBN). Additionally, we discuss constraints on xi_nu in the presence of some extra radiation, and show that 21 cm line observations can substantially improve the constraints obtained from the CMB alone, allowing us to distinguish the effects of the lepton asymmetry from those of extra radiation.
- Content-Based Image Retrieval (CBIR) is an important subfield of Information Retrieval. The goal of a CBIR algorithm is to retrieve semantically similar images in response to a query image submitted by the end user. CBIR is a hard problem because of the phenomenon known as the $\textit{semantic gap}$. In this thesis, we analyze the performance of a CBIR system built using local feature vectors and the Intermediate Matching Kernel. We also propose a Two-Step Matching process for reducing the response time of CBIR systems. Further, we develop a Meta-Learning framework for improving the retrieval performance of these systems. Our results show that the Two-Step Matching process significantly reduces response time and that the Meta-Learning framework improves retrieval performance by more than twofold. We also analyze the performance of various image classification systems that use different image representations constructed from the local feature vectors.
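The Two-Step Matching idea — a cheap global comparison to shortlist candidates, followed by an expensive local-feature kernel evaluated only on the shortlist — can be sketched as below. A simple max-similarity kernel is used as a stand-in for the Intermediate Matching Kernel, and all names are illustrative:

```python
import numpy as np

def coarse_score(q_hist, db_hists):
    """Step 1: cheap global-histogram distance, used only to shortlist."""
    return np.linalg.norm(db_hists - q_hist, axis=1)

def matching_kernel(q_desc, d_desc):
    """Step 2: expensive kernel over local feature vectors -- each query
    descriptor is matched to its most similar database descriptor
    (descriptors assumed unit-normalized, so similarities are cosines)."""
    sims = q_desc @ d_desc.T
    return float(np.mean(sims.max(axis=1)))

def two_step_retrieve(q_hist, q_desc, db, k_short=10):
    """Rank only the k_short coarse candidates with the expensive kernel;
    db is a list of (global_histogram, local_descriptor_matrix) pairs."""
    dists = coarse_score(q_hist, np.array([hist for hist, _ in db]))
    shortlist = np.argsort(dists)[:k_short]
    scores = [(i, matching_kernel(q_desc, db[i][1])) for i in shortlist]
    return max(scores, key=lambda t: t[1])[0]
```

The response-time saving comes from evaluating the kernel on k_short images instead of the whole database, at the cost of possibly dropping a relevant image whose global histogram is atypical.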
- Distributed Hash Tables (DHTs) have been used in several applications, but most DHTs have opted to solve lookups with multiple hops, minimizing bandwidth costs while sacrificing lookup latency. This paper presents D1HT, an original DHT with a peer-to-peer, self-organizing architecture that maximizes lookup performance with reasonable maintenance traffic, and with a Quarantine mechanism to reduce overheads caused by volatile peers. We implemented both D1HT and a prominent single-hop DHT, and we performed an extensive and highly representative DHT experimental comparison, followed by complementary analytical studies. In comparison with current single-hop DHTs, our results showed that D1HT consistently had the lowest bandwidth requirements, with typical reductions of up to one order of magnitude, and that D1HT could be used even in popular Internet applications with millions of users. In addition, we ran the first latency experiments comparing DHTs to directory servers, which revealed that D1HT can achieve latencies equivalent to or better than a directory server, and confirmed its greater scalability properties. Overall, our extensive set of results allowed us to conclude that D1HT can provide a very effective solution for a broad range of environments, from large-scale corporate datacenters to widely deployed Internet applications.
- We construct a natural inflation model in supergravity where the inflaton is identified with a modulus field possessing a shift symmetry. The superpotential for the inflaton is generated by meson condensation due to strong dynamics with deformed moduli constraints. In contrast to models based on gaugino condensation, the inflaton potential is generated without $R$-symmetry breaking and hence does not depend on the gravitino mass. Thus, our model is compatible with low scale supersymmetry.
- We derive the Boltzmann equation in the synchronous gauge for massive neutrinos with a deformed dispersion relation. Combining the 7-year WMAP data with lower-redshift measurements of the expansion rate, we constrain the deformation parameter and find that it is strongly degenerate with the physical dark matter density rather than with the neutrino mass. Our results show no evidence for Lorentz invariance violation in the neutrino sector. The ongoing Planck experiment could provide improved constraints on the deformation parameter.