Recent ontology graph

Recently bookmarked papers

with concepts:
  • We study the ratios R_{e/mu}^{(P)} = Gamma(P -> e nu [gamma])/Gamma(P -> mu nu [gamma]) (P=pi,K) in Chiral Perturbation Theory to order e^2 p^4. We complement the two-loop effective theory results with a matching calculation of the counterterm, finding R_{e/mu}^{(pi)} = (1.2352 \pm 0.0001)*10^(-4) and R_{e/mu}^{(K)} = (2.477 \pm 0.001)*10^(-5).
    Standard Model, Chirality, Quantum chromodynamics, Hadronization, Next-to-leading order computation, Graph, Form factor, Renormalization, Effective field theory, Effective theory...
  • We describe the current status of the DarkLight experiment at Jefferson Laboratory. DarkLight is motivated by the possibility that a dark photon in the mass range 10 to 100 MeV/c$^2$ could couple the dark sector to the Standard Model. DarkLight will precisely measure electron-proton scattering using the 100 MeV electron beam of intensity 5 mA at the Jefferson Laboratory energy-recovery linac, incident on a windowless gas target of molecular hydrogen. The complete final state, including the scattered electron, recoil proton, and e+e- pair, will be detected. A phase-I experiment has been funded and is expected to take data in the next eighteen months. The complete phase-II experiment is under final design and could run within two years after phase-I is completed. The DarkLight experiment drives development of new technology for beam, target, and detector, and provides a new means to carry out electron scattering experiments at low momentum transfer.
    DarkLight experiment, Standard Model, Hidden photon, Thomas Jefferson National Accelerator Facility, Electron scattering, Intensity, Dark sector, Momentum transfer, Solenoid, Beam dump...
  • Hidden sectors with light extra U(1) gauge bosons, so-called hidden photons, have recently received much interest as a natural feature of beyond-Standard-Model scenarios like string theory and SUSY, and because of their possible connection to dark matter. This paper presents limits on hidden photons from past electron beam dump experiments, including two new limits from experiments at KEK and Orsay. Additionally, various hidden sector models containing both a hidden photon and a dark matter candidate are discussed with respect to their viability and potential signatures in direct detection.
    Hidden photon, Dark matter, Beam dump, Standard Model, Relic abundance, Laboratory dark matter search, Dirac fermion, KEK, Gauge symmetry, Majorana fermion...
  • The author reviews the results of a series of proton beam-dump experiments, carried out in 1979 at BNL with 28 GeV/c protons, at FNAL with 350 GeV/c protons, and at CERN with 400 GeV/c protons. The aims of these experiments were: (i) to check the equality of the prompt nu_e, anti-nu_e, nu_mu, and anti-nu_mu fluxes expected from the D-Dbar production model, and in particular to establish the prompt nu_mu and anti-nu_mu fluxes by use of the extrapolation method, (ii) to clear up some discrepancy in the prompt electron-neutrino flux between the CERN-Dortmund-Heidelberg-Saclay (CDHS) counter experiment and the BEBC and Gargamelle bubble-chamber experiments, (iii) to explore the energy dependence and the differential cross section for charmed-particle production, and, of course, (iv) to look for new and unexpected phenomena beyond charm production.
    Beam dump, Differential cross section, Proton beam-dump experiment, Charm production, Neutrino flux, Charm quark, CERN, Gargamelle, Bubble chamber, Electron neutrino...
  • In a broad class of consistent models, MeV to few-GeV dark matter interacts with ordinary matter through weakly coupled GeV-scale mediators. We show that a suitable meter-scale (or smaller) detector situated downstream of an electron beam-dump can sensitively probe dark matter interacting via sub-GeV mediators, while B-factory searches cover the 1-5 GeV range. Combined, such experiments explore a well-motivated and otherwise inaccessible region of dark matter parameter space with sensitivity several orders of magnitude beyond existing direct detection constraints. These experiments would also probe invisibly decaying new gauge bosons ("dark photons") down to kinetic mixing of \epsilon ~ 10^{-4}, including the range of parameters relevant for explaining the (g-2)_{\mu} discrepancy. Sensitivity to other long-lived dark sector states and to new milli-charge particles would also be improved.
    Dark matter, Beam dump, Neutrino, B-factory, Kinematics, Standard Model, Muon, Pion, Cosmic microwave background, Weak neutral current interaction...
  • Interest in new physics models including so-called hidden sectors has increased in recent years as a result of anomalies from astrophysical observations. The Heavy Photon Search (HPS) experiment proposed at Jefferson Lab will look for a mediator of a new force, a GeV-scale massive U(1) vector boson, the Heavy Photon, which acquires a weak coupling to electrically charged matter through kinetic mixing. The HPS detector, a large acceptance forward spectrometer based on a dipole magnet, consists of a silicon tracker-vertexer, a lead-tungstate electromagnetic calorimeter, and a muon detector. HPS will search for the e+e- or mu+mu- decay of the Heavy Photon produced in the interaction of high energy electrons with a high Z target, possibly with a displaced decay vertex. In this article, the description of the detector and its sensitivity are presented.
    Heavy Photon Search experiment, Hidden sector, Muon, Vector boson, Dipole magnet, Spectrometers, Dark matter, Cooling, Beam dump, Tungstate...
  • Charm quark, Semileptonic decay, Muon, Beam dump, Pair production, Neutrino, Interference, Tau lepton, Particles, Leptons...
  • The radial velocities of the galaxies in the vicinity of a cluster show deviations from the pure Hubble flow due to their gravitational interaction with the cluster. According to a recent study by Falco et al. (2014) based on a high-resolution N-body simulation, the radial velocity profile of the galaxies located at distances larger than three times the virial radius of a neighbor cluster can be well approximated by a universal formula and can be reconstructed from direct observables, provided that the galaxies are distributed along a one-dimensional filament. They suggested an algorithm for estimating the dynamical mass of a cluster $M_{\rm v}$ by fitting the universal formula from the simulation to the radial velocity profile of the filament galaxies around the cluster reconstructed from observations. We apply the algorithm to two narrow straight filaments (called Filaments A and B) that were identified recently by Kim et al. (2015) in the vicinity of the Virgo cluster from the NASA-Sloan-Atlas catalog. The dynamical mass of the Virgo cluster is estimated to be $M_{\rm v}=(0.84^{+2.75}_{-0.51})\times10^{15}\,h^{-1}M_{\odot}$ and $M_{\rm v}= (3.24^{+4.99}_{-1.31})\times 10^{15}\,h^{-1}M_{\odot}$ for Filaments A and B, respectively. We discuss observational and theoretical systematics intrinsic to the method of Falco et al. (2014), as well as the physical implications of the final results.
    Galaxy filament, Galaxy, Virgo Cluster, Radial velocity profile, Weak lensing mass estimate, General relativity, Line of sight, Virial radius, Cold dark matter, Peculiar velocity...
  • We investigate the internal structure and density profiles of halos of mass $10^{10}-10^{14}~M_\odot$ in the Evolution and Assembly of Galaxies and their Environment (EAGLE) simulations. These follow the formation of galaxies in a $\Lambda$CDM Universe and include a treatment of the baryon physics thought to be relevant. The EAGLE simulations reproduce the observed present-day galaxy stellar mass function, as well as many other properties of the galaxy population as a function of time. We find significant differences between the masses of halos in the EAGLE simulations and in simulations that follow only the dark matter component. Nevertheless, halos are well described by the Navarro-Frenk-White (NFW) density profile at radii larger than ~5% of the virial radius but, closer to the centre, the presence of stars can produce cuspier profiles. Central enhancements in the total mass profile are most important in halos of mass $10^{12}-10^{13}M_\odot$, where the stellar fraction peaks. Over the radial range where they are well resolved, the resulting galaxy rotation curves are in very good agreement with observational data for galaxies with stellar mass $M_*<5\times10^{10}M_\odot$. We present an empirical fitting function that describes the total mass profiles and show that its parameters are strongly correlated with halo mass.
    Navarro-Frenk-White profile, Galaxy, Dark matter, Virial mass, Stellar mass, Star, Star formation, AGN feedback, EAGLE simulation project, Rotation Curve...
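    For reference (not stated in the abstract itself), the NFW form mentioned above is the standard two-parameter profile, in the usual notation with scale density rho_s and scale radius r_s:
      \rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^{2}}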
  • Predictions of the microwave thermal emission from the interplanetary dust cloud are made using several contemporary meteoroid models to construct the distributions of cross-section area of dust in space, and applying the Mie light-scattering theory to estimate the temperatures and emissivities of dust particles in broad size and heliocentric distance ranges. In particular, the model of the interplanetary dust cloud by Kelsall et al. (1998, ApJ 508: 44-73), the five populations of interplanetary meteoroids of Divine (1993, JGR 98(E9): 17,029-17,048) and the Interplanetary Meteoroid Engineering Model (IMEM) by Dikarev et al. (2004, EMP 95: 109-122) are used in combination with the optical properties of olivine, carbonaceous and iron spherical particles. The Kelsall model has been widely accepted by the Cosmic Microwave Background (CMB) community. We show, however, that its predictions for the microwave emission from interplanetary dust differ remarkably from those of the meteoroid engineering models. We make maps and spectra of the microwave emission predicted by the three models assuming different compositions of dust particles. Predictions can be used to look for the emission from interplanetary dust in CMB experiments as well as to plan new observations.
    Sun, Astronomical Unit, Asteroids, Zodiacal cloud, Absorptivity, Inclination, Earth, Eccentricity, Solar system, Ecliptic plane...
  • This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
    Antineutron, Neutron-antineutron oscillation, Cosmology, Baryon number, Neutron sources, Instability, Neutrino, Baryogenesis, Phase space, Baryon number violation...
  • Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
    Neural network, Backpropagation, Feature vector, Architecture, Regularization, Convolutional neural network, Image Processing, Ranking, Recurrent neural network, Learning rule...
  • The expressive power of a machine learning model is closely related to the number of sequential computational steps it can learn. For example, Deep Neural Networks have been more successful than shallow networks because they can perform a greater number of sequential computational steps (each highly parallel). The Neural Turing Machine (NTM) is a model that can compactly express an even greater number of sequential computational steps, so it is even more powerful than a DNN. Its memory addressing operations are designed to be differentiable; thus the NTM can be trained with backpropagation. While differentiable memory is relatively easy to implement and train, it necessitates accessing the entire memory content at each computational step. This makes it difficult to implement a fast NTM. In this work, we use the Reinforce algorithm to learn where to access the memory, while using backpropagation to learn what to write to the memory. We call this model the RL-NTM. Reinforce allows our model to access a constant number of memory cells at each computational step, so its implementation can be faster. The RL-NTM is the first model that can, in principle, learn programs of unbounded running time. We successfully trained the RL-NTM to solve a number of algorithmic tasks that are simpler than the ones solvable by the fully differentiable NTM. As the RL-NTM is a fairly intricate model, we needed a method for verifying the correctness of our implementation. To do so, we developed a simple technique for numerically checking arbitrary implementations of models that use Reinforce, which may be of independent interest.
    Reinforcement learning, Backpropagation, Hyperparameter, Deep Neural Networks, Architecture, Machine learning, Binary star, Refractory, Feature vector, Neural network...
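    As a rough illustration of the Reinforce-style training mentioned above, here is a minimal score-function (REINFORCE) gradient sketch in Python/NumPy; the categorical policy, episode reward, and baseline are generic placeholders, not the RL-NTM's actual components:

      import numpy as np

      def reinforce_gradient(logits, actions, episode_return, baseline=0.0):
          # logits: (T, K) unnormalized scores over K discrete choices at T steps
          # actions: (T,) indices of the choices that were actually sampled
          # episode_return: scalar reward obtained for the whole episode
          # Returns an estimate of d(expected return)/d(logits), same shape as logits.
          p = np.exp(logits - logits.max(axis=1, keepdims=True))
          p /= p.sum(axis=1, keepdims=True)                    # softmax policy probabilities
          grad_logp = -p                                       # d log pi(a_t)/d logits ...
          grad_logp[np.arange(len(actions)), actions] += 1.0   # ... = one_hot(a_t) - p_t
          # Scale by (return - baseline); the baseline only reduces variance.
          return (episode_return - baseline) * grad_logp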
  • Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks, algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory.
    Regularization, Machine learning, Stochastic gradient descent, Overfitting, Architecture, Deep learning, Artificial intelligence, Entropy, Training set, Backpropagation...
  • In this paper we introduce a variant of Memory Networks that needs significantly less supervision to perform question answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies.
    Networks, Potential
  • Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
    Architecture, Statistics, Recurrent neural network, Feedforward neural network, Neural network, Multigraph, Conjunction, Google+, Long short term memory, Strong focusing...
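    A minimal Python/NumPy sketch of the (soft-)search step described above; the paper's additive scoring network is collapsed to a dot product here, so this is illustrative rather than the exact alignment model:

      import numpy as np

      def attend(decoder_state, encoder_states):
          # decoder_state: (d,) current decoder hidden state
          # encoder_states: (n, d) one annotation vector per source position
          scores = encoder_states @ decoder_state        # relevance of each source word
          weights = np.exp(scores - scores.max())
          weights /= weights.sum()                       # softmax over source positions
          context = weights @ encoder_states             # expected annotation (soft alignment)
          return context, weights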
  • During the last decades, increasingly precise astronomical observations of the Galactic Centre (GC) region at radio, infrared, and X-ray wavelengths have laid the foundations for a detailed understanding of the high-energy astroparticle physics of this most remarkable location in the Galaxy. Recently, observations of this region in high energy (HE, 10 MeV - 100 GeV) and very high energy (VHE, > 100 GeV) gamma rays added important insights to the emerging picture of the Galactic nucleus as a most violent and active region where the acceleration of particles to very high energies -- possibly up to a PeV -- and their transport can be studied in great detail. Moreover, the inner Galaxy is believed to host large concentrations of dark matter (DM), and is therefore one of the prime targets for the indirect search for gamma rays from annihilating or decaying dark matter particles. In this article, the current understanding of the gamma-ray emission emanating from the GC is summarised and the results of recent DM searches in HE and VHE gamma rays are reviewed.
    Globular cluster, Dark matter, HESS telescope, Black hole, Supernova remnant, FERMI telescope, Diffuse emission, Galaxy, Dark matter annihilation, Cherenkov Telescope Array...
  • Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
    Classification, Neural network, Recurrent neural network, Translational invariance, Architecture, Image Processing, Reinforcement learning, Partially observable Markov decision process, Backpropagation, Hyperparameter...
  • This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use recurrent neural networks and visual semantic embeddings without intermediate stages such as object detection and image segmentation. Our model performs 1.8 times better than the recently published results on the same dataset. Another main contribution is an automatic question generation algorithm that converts the currently available image description dataset into QA form, resulting in a 10 times bigger dataset with more evenly distributed answers.
    Embedding, Dimensions, Image Processing, Recurrent neural network, Convolutional neural network, Computational linguistics, Error propagation, Long short term memory, Bayesian, Classification...
  • Real-world videos often have complex dynamics; methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames with a sequence of words in order to generate a description of the event in the video clip. Our model is naturally able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).
    Classification, Architecture, Dimensions, Convolutional neural network, Recurrent neural network, Long short term memory, Graph, Complex dynamics, Intensity, Conjunction...
  • Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.
    Ranking, Classification, Training set, Image Processing, Overfitting, Recurrent neural network, Convolutional neural network, Statistics, Stochastic gradient descent, Architecture...
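    In the usual notation (theta = model parameters, I = image, S = caption with words S_1..S_T), the training objective described above ("maximize the likelihood of the target description sentence given the training image") reads:
      \theta^{*} = \arg\max_{\theta} \sum_{(I,S)} \log p(S \mid I;\theta), \qquad \log p(S \mid I;\theta) = \sum_{t=1}^{T} \log p(S_t \mid I, S_1, \ldots, S_{t-1};\theta)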
  • Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
    Architecture, Activity recognition, Classification, Flow network, Backpropagation, Image Processing, Statistics, Hybridization, Ranking, Recurrent neural network...
  • Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
    Neural network, Architecture, Backpropagation, Deep Neural Networks, Recurrent neural network, Long short term memory, Feedforward neural network, Training set, Convolutional neural network, Topic model...
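    A minimal sketch of the source-reversal trick reported above (reverse the source sentence, keep the target in order); the token lists are placeholders:

      def make_training_pair(source_tokens, target_tokens):
          # Reversing the source (but not the target) introduces many short-range
          # dependencies between source and target words, which the abstract reports
          # makes the optimization problem easier.
          return list(reversed(source_tokens)), list(target_tokens)

      src, tgt = make_training_pair(["a", "b", "c"], ["x", "y", "z"])
      # src == ["c", "b", "a"], tgt == ["x", "y", "z"]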
  • This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.
    Recurrent neural network, Architecture, Training set, Regularization, Compressibility, Long short term memory, Neural network, Backpropagation, Graph, Overfitting...
  • Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables. This enables us to explore a new synthesis of variational inference and Monte Carlo methods where we incorporate one or more steps of MCMC into our variational approximation. By doing so we obtain a rich class of inference algorithms bridging the gap between variational methods and MCMC, and offering the best of both worlds: fast posterior approximation through the maximization of an explicit objective, with the option of trading off additional computation for additional accuracy. We describe the theoretical foundations that make this possible and show some promising first results.
    Monte Carlo Markov chain, Hamiltonian, Markov chain, Monte Carlo method, Marginal likelihood, Covariance, Bayesian approach, Latent variable, Neural network, Gibbs sampling...
  • Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes, i.e., diagonal preconditioners. We show that the optimal preconditioner is based on taking the absolute value of the Hessian's eigenvalues, which is not what Newton and classical preconditioners like Jacobi's do. In this paper, we propose a novel adaptive learning rate scheme based on the equilibration preconditioner and show that RMSProp approximates it, which may explain some of its success in the presence of saddle points. Whereas RMSProp is a biased estimator of the equilibration preconditioner, the proposed stochastic estimator, ESGD, is unbiased and only adds a small percentage to computing time. We find that both schemes yield very similar step directions but that ESGD sometimes surpasses RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
    Curvature, Neural network, Non-convexity, Deep Neural Networks, Eigenvalue, Saddle point, Autoencoder, Hyperparameter, Graph, Decay rate...
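    A rough sketch of the equilibration preconditioner described above, assuming a user-supplied callable hess_vec(v) that returns the Hessian-vector product (how that product is computed is left out here):

      import numpy as np

      def esgd_step(theta, grad, hess_vec, D_accum, t, lr=1e-2, eps=1e-4):
          # Estimate D_i^2 = E[(H v)_i^2] with a Gaussian probe vector v and keep a
          # running sum; dividing the gradient by the square root of this estimate
          # implements the (unbiased) equilibration preconditioner the abstract refers to.
          v = np.random.randn(*theta.shape)
          Hv = hess_vec(v)                      # Hessian-vector product (assumed given)
          D_accum = D_accum + Hv ** 2
          precond = np.sqrt(D_accum / t) + eps
          return theta - lr * grad / precond, D_accum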
  • The Multi-scale Entanglement Renormalization Ansatz (MERA) is a tensor network that provides an efficient way of variationally estimating the ground state of a critical quantum system. The network geometry resembles a discretization of spatial slices of an AdS spacetime and "geodesics" in the MERA reproduce the Ryu-Takayanagi formula for the entanglement entropy of a boundary region in terms of bulk properties. It has therefore been suggested that there could be an AdS/MERA correspondence, relating states in the Hilbert space of the boundary quantum system to ones defined on the bulk lattice. Here we investigate this proposal and derive necessary conditions for it to apply, using geometric features and entropy inequalities that we expect to hold in the bulk. We show that, perhaps unsurprisingly, the MERA lattice can only describe physics on length scales larger than the AdS radius. Further, using the covariant entropy bound in the bulk, we show that there are no conventional MERA parameters that completely reproduce bulk physics even on super-AdS scales. We suggest modifications or generalizations of this kind of tensor network that may be able to provide a more robust correspondence.
    Causality, Conformal field theory, Dimensions, Isometry, Graph, Horizon, Ancilla, Coarse graining, Density matrix, Central charge...
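    The Ryu-Takayanagi formula referred to above relates the entanglement entropy of a boundary region A to the area of the minimal bulk surface gamma_A anchored on it:
      S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}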
  • This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
    Intensity, Architecture, Recurrent neural network, Autoencoder, Generative model, Classification, Compressibility, Bernoulli distribution, Neural network, Filter bank...
  • We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.
    Classification, Learning rule, Feature vector, Indicator function, Architecture, Image Processing, Hyperparameter, Google+, Recurrent neural network, Reinforcement learning...
  • We present 20 WISE-selected galaxies with bolometric luminosities L_bol > 10^14 L_sun, including five with infrared luminosities L_IR = L(rest 8-1000 micron) > 10^14 L_sun. These "extremely luminous infrared galaxies," or ELIRGs, were discovered using the "W1W2-dropout" selection criteria, which require marginal or non-detections at 3.4 and 4.6 micron (W1 and W2, respectively) but strong detections at 12 and 22 micron in the WISE survey. Their spectral energy distributions are dominated by emission at rest-frame 4-10 micron, suggesting that hot dust with T_d ~ 450K is responsible for the high luminosities. These galaxies are likely powered by highly obscured AGNs, and there is no evidence suggesting these systems are beamed or lensed. We compare this WISE-selected sample with 116 optically selected quasars that reach the same L_bol level, corresponding to the most luminous unobscured quasars in the literature. We find that the rest-frame 5.8 and 7.8 micron luminosities of the WISE-selected ELIRGs can be 30-80% higher than those of the unobscured quasars. The existence of AGNs with L_bol > 10^14 L_sun at z > 3 suggests that these supermassive black holes are born with large mass, or have very rapid mass assembly. For black hole seed masses ~ 10^3 M_sun, either sustained super-Eddington accretion is needed, or the radiative efficiency must be <15%, implying a black hole with slow spin, possibly due to chaotic accretion.
    Supermassive black hole, Spectral energy distribution, Photometry, Sloan Digital Sky Survey, Black hole, Quasar, Black hole mass, Gravitational lensing, Dust emission, Eddington limit...
  • The rapidity dependence of the initial energy density in heavy-ion collisions is calculated from a three-dimensional McLerran-Venugopalan model (3dMVn) introduced by Lam and Mahlon. This model is infrared safe since global color neutrality is enforced. In this non-boost-invariant framework, the nuclei have non-zero thickness in the longitudinal direction. This results in Bjorken-x dependent unintegrated gluon distribution functions which lead to a rapidity-dependent initial energy density after the collision. The initial energy density and its rapidity dependence are important initial conditions for the quark gluon plasma and its hydrodynamic evolution.
    Heavy ion collision, Quark-gluon plasma, Parton, Relativistic Heavy Ion Collider, Two-point correlation function, Fluid dynamics, Large Hadron Collider, Rapidity, Light cones, Color charge...
  • We consider a version of the McLerran-Venugopalan model by Lam and Mahlon where confinement is implemented via colored noise in the infrared. This model does not assume an infinite momentum frame nor that the boosted nuclei are infinitely thin; rather, nuclei have a finite extension in the longitudinal direction and therefore depend on the longitudinal coordinate. In this fully three dimensional framework an x dependence of the gluon distribution function emerges naturally. In order to fix the parameters of the model, we calculate the gluon distribution function and compare it with the JR09 parametrization of the data. We explore the parameter space of the model to attain a working framework that can be used to calculate the initial conditions in heavy ion collisions.
    Color charge, Heavy ion collision, Parton, Two-point correlation function, Color glass condensate, Quark, Glass, Condensation, White noise, Propagator...
  • The rate of vacuum changing topological solutions of the gluon field, sphalerons, is estimated to be large at the typical temperatures of heavy-ion collisions, particularly at the Relativistic Heavy Ion Collider. Such windings in the gluon field are expected to produce parity-odd bubbles, which cause separation of positively and negatively charged quarks along the axis of the external magnetic field. This chiral magnetic effect can be mimicked by Chern-Simons modified electromagnetism. Here we present a model of relativistic hydrodynamics including the effects of axial anomalies via the Chern-Simons term.
    Entropy, Chiral magnetic effect, Winding number, Fluid dynamics, Charge separation, Chirality, Chern-Simons term, Relativistic Heavy Ion Collider, Quark-gluon plasma, Topological quantum number...
  • We calculate the triple- and quadruple-gluon inclusive distributions with arbitrary rapidity and azimuthal angle dependences in the gluon saturation regime by using glasma diagrams. Also, we predict higher-dimensional ridges in triple- and quadruple-hadron correlations for p-p and p-Pb collisions at LHC, which have yet to be measured. In p-p and p-Pb collisions at the top LHC energies, gluon saturation is expected to occur since smaller Bjorken-$x$ values are being probed. Glasma diagrams, which are enhanced at small-$x$, include the gluon saturation effects, and they are used for calculating the long-range rapidity correlations ("ridges") and $v_n$ moments of the azimuthal distribution of detected hadrons. The glasma description reproduces the systematics of the data on both p-p and p-Pb ridges. As an alternative, relativistic hydrodynamics has also been applied to these small systems quite successfully. With the triple- and quadruple-gluon azimuthal correlations, this work aims to set the stage by going beyond the double-gluon azimuthal correlations in order to settle unambiguously the origin of "collectivity" in p-p and p-Pb collisions. We derive the triple- and quadruple-gluon azimuthal correlation functions in terms of unintegrated gluon distributions at arbitrary rapidities and azimuthal angles of the produced gluons. Then, unintegrated gluon distributions from the running coupling Balitsky-Kovchegov evolution equation are used to calculate the triple- and quadruple-gluon correlations for various parameters of gluon momenta, initial scale for small-$x$ evolution and beam energy.
    Glasma, Azimuthal correlation, Azimuth, Two-point correlation function, Hadronization, Color charge, Fluid dynamics, Quark-gluon plasma, Evolution equation, Relativistic hydrodynamics...
  • The initial conditions for $N$-body simulations are usually generated by applying the Zel'dovich approximation to the initial displacements of the particles using an initial power spectrum of density fluctuations generated by an Einstein-Boltzmann solver. We show that the initial displacements generated in this way generally receive a first-order relativistic correction. We define a new gauge, the $N$-body gauge, in which this relativistic correction is absent and show that a conventional Newtonian $N$-body simulation includes all first-order relativistic contributions if we identify the coordinates in Newtonian simulations with those in the $N$-body gauge.
    General relativity, Continuity equation, Geodesic, Zeldovich approximation, Cold dark matter, N-body simulation, Matter power spectrum, Evolution equation, Cosmology, Large scale structure...
  • Ultra-light axions (ULAs) with masses in the range 10^{-33} eV <m <10^{-20} eV are motivated by string theory and might contribute to either the dark-matter or dark-energy density of the Universe. ULAs could suppress the growth of structure on small scales, or lead to an enhanced integrated Sachs-Wolfe effect on large-scale cosmic microwave-background (CMB) anisotropies. In this work, cosmological observables over the full ULA mass range are computed, and then used to search for evidence of ULAs using CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP), Planck satellite, Atacama Cosmology Telescope, and South Pole Telescope, as well as galaxy clustering data from the WiggleZ galaxy-redshift survey. In the mass range 10^{-32} eV < m <10^{-25.5} eV, the axion relic-density \Omega_{a} (relative to the total dark-matter relic density \Omega_{d}) must obey the constraints \Omega_{a}/\Omega_{d} < 0.05 and \Omega_{a}h^{2} < 0.006 at 95%-confidence. For m> 10^{-24} eV, ULAs are indistinguishable from standard cold dark matter on the length scales probed, and are thus allowed by these data. For m < 10^{-32} eV, ULAs are allowed to compose a significant fraction of the dark energy.
    Axion, Cold dark matter, Dark matter, Dark energy, Cosmic microwave background, Matter power spectrum, Neutrino, Integrated Sachs-Wolfe effect, Scalar field, Cosmology...
  • We provide a prescription for setting initial conditions for cosmological N-body simulations, which simultaneously employ Lagrangian meshes (`particles') and Eulerian grids (`fields'). Our description is based on coordinate systems in arbitrary geometry, and can therefore be used in any metric theory of gravity. We apply our prescription to a choice of Effective Field Theory of Modified Gravity, and show how already in the linear regime, particle trajectories are curved. For some viable models of modified gravity, the Dark Matter trajectories are affected at the level of 5% at Mpc scales. Moreover, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description.
    Cold dark matter, Modified gravity, N-body simulation, Primordial density perturbation, Effective field theory, General relativity, Regularization, Dark matter, Einstein field equations, Theories of gravity...
  • We consider ways of conceptualizing, rendering and perceiving quantum music, and quantum art in general. Thereby we give particular emphasis to its non-classical aspects, such as coherent superposition and entanglement.
    Quantization, Entanglement, Unitary transformation, Sublimation, Condensation, Fermionic field, Quantum mechanics, Complementarity, Bosonization, Orthonormal basis...
  • The Standard-Model interpretation of the ratios of charged and neutral B -> pi K rates, R_c and R_n, respectively, points towards a puzzling picture. Since these observables are affected significantly by colour-allowed electroweak (EW) penguins, this "B -> pi K puzzle" could be a manifestation of new physics in the EW penguin sector. Performing the analysis in the R_n-R_c plane, which is very suitable for monitoring various effects, we demonstrate that we may, in fact, move straightforwardly to the experimental region in this plane through an enhancement of the relevant EW penguin parameter q. We derive analytical bounds for q in terms of a quantity L that measures the violation of the Lipkin sum rule, and point out that strong phases around 90 deg are favoured by the data, in contrast to QCD factorisation. The B -> pi K modes imply a correlation between q and the angle gamma that, in the limit of negligible rescattering effects and colour-suppressed EW penguins, depends only on the value of L. Concentrating on a minimal flavour-violating new-physics scenario with enhanced Z^0 penguins, we find that the current experimental values for B -> X_s mu^+ mu^- require roughly L <= 1.8. As the B -> pi K data give L = 5.7 +- 2.4, L either has to move to smaller values once the B -> pi K data improve, or new sources of flavour and CP violation are needed. In turn, the enhanced values of L seen in the B -> pi K data could be accompanied by enhanced branching ratios for rare decays. Most interesting turns out to be the correlation between the B -> pi K modes and BR(K^+ -> pi^+ nu nu), with the latter depending approximately on a single "scaling" variable \bar L = L (|V_{ub}/V_{cb}|/0.086)^2.3.
    Standard Model, Branching ratio, Rare decay, Factorisation, Quantum chromodynamics, CP violation, Flavour, Next-to-leading order computation, Flavour symmetry, Charm quark...
  • Neutrinoless double beta decay makes it possible to constrain lepton number violating extensions of the standard model. If neutrinos are Majorana particles, the mass mechanism will always contribute to the decay rate; however, it is not a priori guaranteed to be the dominant contribution in all models. Here, we discuss whether the mass mechanism dominates or not from the theory point of view. We classify all possible (scalar-mediated) short-range contributions to the decay rate according to the loop level at which the corresponding models will generate Majorana neutrino masses, and discuss the expected relative size of the different contributions to the decay rate in each class. Our discussion is general for models based on the SM gauge group but does not cover models with an extended gauge group. We also work out in some detail the phenomenology of one concrete 2-loop model in which both the mass mechanism and the short-range diagram might lead to competitive contributions.
    Neutrino mass, Standard Model, Yukawa coupling, Quark, Lepton flavour violation, Yukawa interaction, Flavour, Double-beta decay, Leptoquark, Vacuum expectation value...
  • We present a holographic perspective on magnetic oscillations in strongly correlated electron systems via a fluid of charged spin 1/2 particles outside a black brane in an asymptotically anti-de-Sitter spacetime. The resulting back-reaction on the spacetime geometry and bulk gauge field gives rise to magnetic oscillations in the dual field theory, which can be directly studied without introducing probe fermions, and which differ from those predicted by Fermi liquid theory.
    Black brane, Landau level, Horizon, Phase transitions, Field theory, Star, Fermi surface, Thomas-Fermi model, Gauge field, Density of states...
  • If dark matter (DM) is composed of particles which are non-gravitationally coupled to ordinary matter, their annihilations or decays in cosmic structures can result in detectable radiation. We show that the most powerful technique to detect a particle DM signal outside the Local Group is to study the angular cross-correlation of non-gravitational signals with low-redshift gravitational probes. This method makes it possible to enhance the signal-to-noise ratio from the regions of the Universe where the DM-induced emission is preferentially generated. We demonstrate the power of this approach by focusing on GeV-TeV DM and on the recent cross-correlation analysis between the 2MASS galaxy catalogue and the Fermi-LAT gamma-ray maps. We show that this technique is more sensitive than other extragalactic gamma-ray probes, such as the energy spectrum and angular autocorrelation of the extragalactic background, and emission from clusters of galaxies. Intriguingly, we find that the measured cross-correlation can be well fitted by a DM component, with thermal annihilation cross section and mass between 10 and 100 GeV, depending on the small-scale DM properties and gamma-ray production mechanism. This motivates further data collection and dedicated analyses.
    Dark matter, Cross-correlation function, Local group, Cross-correlation, Dark matter annihilation, Weakly interacting massive particle, Dark matter decay, Dark matter halo, Decaying dark matter, Dark matter particle mass...
  • In this study we explore the LHC's Run II potential to discover heavy Majorana neutrinos with luminosities between $30$ and $3000$ fb$^{-1}$ in the $l^{\pm}l^{\prime\pm}j~j$ final state. Given that there are many models for neutrino mass generation, even within the Type I seesaw framework, we use a simplified model approach and consider a single heavy Majorana neutrino extension of the SM as well as a limiting case of the left-right symmetric model. We then extend the analysis to a future hadron collider running at $100$ TeV center-of-mass energies. This extrapolation in energy allows us to study the relative importance of the resonant production versus gauge boson fusion processes in the study of Majorana neutrinos at hadron colliders. We analyze and propose different search strategies designed to maximize the discovery potential in either the resonant production or the gauge boson fusion modes.
    Standard Model, Invariant mass, Majorana neutrino, Majorana mass, Muon, Large Hadron Collider, Unitarity, Active neutrino, Sterile neutrino, Charged current...
  • A new fast algorithm for clustering and classification of large collections of text documents is introduced. The new algorithm employs the bipartite graph that realizes the word-document matrix of the collection. Namely, the modularity of the bipartite graph is used as the optimization functional. Experiments performed with the new algorithm on a number of text collections showed competitive clustering (classification) quality and record-breaking speed.
    Modularity, Cluster analysis, Graph, Classification, Bipartite network, Training set, Generative model, Completeness, Algorithms, Action...
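    A small Python/NumPy sketch of the kind of objective involved: Barber-style bipartite modularity for a binary word-document matrix, given word and document cluster labels (the paper's exact functional and optimizer may differ):

      import numpy as np

      def bipartite_modularity(B, word_labels, doc_labels):
          # B: (n_words, n_docs) word-document incidence matrix
          # Q = (1/m) * sum_ij (B_ij - k_i d_j / m) * [word i and doc j in same module]
          B = np.asarray(B, dtype=float)
          word_labels = np.asarray(word_labels)
          doc_labels = np.asarray(doc_labels)
          k = B.sum(axis=1)                     # word degrees
          d = B.sum(axis=0)                     # document degrees
          m = B.sum()                           # total number of edges
          same = (word_labels[:, None] == doc_labels[None, :]).astype(float)
          return float(((B - np.outer(k, d) / m) * same).sum() / m)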
  • Merging galaxy clusters such as the Bullet Cluster provide a powerful testing ground for indirect detection of dark matter. The spatial distribution of the dark matter is both directly measurable through gravitational lensing and substantially different from the distribution of potential astrophysical backgrounds. We propose to use this spatial information to identify the origin of indirect detection signals, and we show that even statistical excesses of a few sigma can be robustly tested for consistency--or inconsistency--with a dark matter source. For example, our methods, combined with already-existing observations of the Coma Cluster, would allow the 3.55 keV line to be tested for compatibility with a dark matter origin. We also discuss the optimal spatial reweighting of photons for indirect detection searches. The current discovery rate of merging galaxy clusters and associated lensing maps strongly motivates deep exposures in these dark matter targets for both current and upcoming indirect detection experiments in the X-ray and gamma-ray bands.
    Dark matter, Intra-cluster medium, Weak lensing, Cluster of galaxies, Dark matter decay, Merging galaxy cluster, Statistics, Globular cluster, Dark matter annihilation, Galactic Center...
  • The origin of large magnetic fields in the Universe remains unknown. We investigate here a mechanism operating before recombination and based on known physics. The vorticity is sourced by changes in the photon distribution function caused by fluctuations in the background photons. We show that the magnetic field generated in the MHD limit, due to Coulomb scattering, is of the order of $10^{-49}$ G. We explicitly show that the magnetic fields generated by this process are sustainable and are not erased by resistive diffusion. We compare the results with current observations and discuss the implications.
    Cosmological magnetic field, Coherence length, Thomson scattering, Magnetogenesis, Intergalactic magnetic field, Recombination, Magnetohydrodynamics, Magnetic field generation, Dynamo theory, Void...
  • We take advantage of the wealth of rotation measures data contained in the NVSS catalogue to derive new, statistically robust, upper limits on the strength of extragalactic magnetic fields. We simulate the extragalactic contribution to the rotation measures for a given field strength and correlation length, by assuming that the electron density follows the distribution of Lyman-$\alpha$ clouds. Based on the observation that rotation measures from low-luminosity distant radio sources do not exhibit any trend with redshift, while the extragalactic contribution instead grows with distance, we constrain fields with Mpc coherence length to be below 1.2 nG at the $2\sigma$ level, and fields coherent across the entire observable Universe below 0.5 nG. These limits do not depend on the particular origin of these cosmological fields.
    Rotation measure of the plasma, Luminosity, Line of sight, Rotation measure, Extragalactic magnetic field, Ultra-high-energy cosmic ray, National Radio Astronomy Observatory VLA Sky Survey, Redshift bins, P-value, Radio sources...
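    For reference, the rotation measure entering such analyses is the usual line-of-sight integral (electron density in cm^-3, parallel magnetic field in microgauss, path length in parsecs):
      \mathrm{RM} \simeq 0.81 \int \frac{n_e}{\mathrm{cm}^{-3}}\, \frac{B_{\parallel}}{\mu\mathrm{G}}\, \frac{\mathrm{d}l}{\mathrm{pc}}\ \ \mathrm{rad\,m^{-2}}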
  • When a local potential changes abruptly in time, an electron gas shifts to a new state which at long times is orthogonal to the one in the absence of the local potential. This is known as Anderson's orthogonality catastrophe and it is relevant for the so-called X-ray edge or Fermi edge singularity, and for tunneling into an interacting one dimensional system of fermions. It often happens that the finite frequency response of the photon absorption or the tunneling density of states exhibits a singular behavior as a function of frequency: $(\frac{\omega_{\rm th}}{\omega-\omega_{\rm th}})^\alpha\Theta(\omega-\omega_{\rm th})$ where $\omega_{\rm th}$ is a threshold frequency and $\alpha$ is an exponent characterizing the singular response. In this paper singular responses of spin-incoherent Luttinger liquids are reviewed. Such responses most often do not fall into the familiar form above, but instead typically exhibit logarithmic corrections and display a much higher universality in terms of the microscopic interactions in the theory. Specific predictions are made, the current experimental situation is summarized, and key outstanding theoretical issues related to spin-incoherent Luttinger liquids are highlighted.
    Luttinger liquid, Tunneling density of states, Absorptivity, Backscattering, Holon, One-dimensional system, Bosonization, Green's function, Dimensions, Hamiltonian...
  • 1310.4340  ,  ,  et al.,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  show less
    This document represents the response of the Intensity Frontier Neutrino Working Group to the Snowmass charge. We summarize the current status of neutrino physics and identify many exciting future opportunities for studying the properties of neutrinos and for addressing important physics and astrophysics questions with neutrinos.
    Neutrino, Intensity frontier experiment, Neutrino physics, Charge...
  • LCDM is remarkably successful in predicting the cosmic microwave background and large-scale structure, and LCDM parameters have been determined with only mild tensions between different types of observations. Hydrodynamical simulations starting from cosmological initial conditions are increasingly able to capture the complex interactions between dark matter and baryonic matter in galaxy formation. Simulations with relatively low resolution now succeed in describing the overall galaxy population. For example, the EAGLE simulation in volumes up to 100 cubic Mpc reproduces the observed local galaxy mass function nearly as well as semi-analytic models. It once seemed that galaxies are pretty smooth, that they generally grow in size as they evolve, and that they are a combination of disks and spheroids. But recent HST observations combined with high-resolution hydrodynamic simulations are showing that most star-forming galaxies are very clumpy; that galaxies often undergo compaction which reduces their radius and increases their central density; and that most lower-mass star-forming galaxies are not spheroids or disks but are instead elongated when their centers are dominated by dark matter. We also review LCDM challenges on smaller scales: cusp-core, "too big to fail," and substructure issues. Although starbursts can rapidly drive gas out of galaxy centers and thereby reduce the dark matter density, it remains to be seen whether this or other baryonic physics can explain the observed rotation curves of the entire population of dwarf and low surface brightness galaxies. If not, perhaps more complicated physics such as self-interacting dark matter may be needed. But standard LCDM appears to be successful in predicting the dark matter halo substructure that is now observed via gravitational lensing and breaks in cold stellar streams, and any alternative theory must do at least as well.
    Galaxy, Cold dark matter, Dark matter halo, Cosmology, Dark matter subhalo, Dark matter, Star, Structure formation, Warm dark matter, Satellite galaxy...