Recently bookmarked papers, with concepts:
  • We describe methods for evaluating one-loop integrals in $4-2\epsilon$ dimensions. We give a recursion relation that expresses the scalar $n$-point integral as a cyclically symmetric combination of $(n-1)$-point integrals. The computation of such integrals thus reduces to the calculation of box diagrams ($n=4$). The tensor integrals required in gauge theory may be obtained by differentiating the scalar integral with respect to certain combinations of the kinematic variables. Such relations also lead to differential equations for scalar integrals. For box integrals with massless internal lines these differential equations are easy to solve.
    Kinematics, Loop momentum, Feynman parametrization, Infrared divergence, Amplitude, Radiative correction, Ranking, Levi-Civita symbol, Homogenization, Ultraviolet divergence, ...
  • In stark contrast to their laboratory and terrestrial counterparts, the cosmic jets appear to be very stable. They are able to penetrate vast spaces, which exceed by up to a billion times the size of their central engines. We propose that the reason behind this remarkable property is the loss of causal connectivity across these jets, caused by their rapid expansion in response to the fast decline of external pressure with distance from the "jet engine". In atmospheres with a power-law pressure distribution, the total loss of causal connectivity occurs when the power index k>2, a steepness which is expected to be quite common for many astrophysical environments. This conclusion does not seem to depend on the physical nature of jets: it applies to both relativistic and non-relativistic flows, and to both magnetically-dominated and unmagnetized jets. In order to verify it, we have carried out numerical simulations of moderately magnetized and moderately relativistic jets. Their results give strong support to our hypothesis and provide valuable insights. In particular, we find that the z-pinched inner cores of magnetic jets expand slower than their envelopes and become susceptible to instabilities even when the whole jet is stable. This may result in local dissipation and emission without global disintegration of the flow. Cosmic jets may become globally unstable when they enter flat sections of external atmospheres. We propose that the Fanaroff-Riley morphological division of extragalactic radio sources into two classes is related to this issue. In particular, we argue that the low power FR-I jets become re-confined, causally connected and globally unstable on the scale of galactic X-ray coronas, whereas more powerful FR-II jets re-confine much further out, already on the scale of radio lobes, and remain largely intact until they terminate at hot spots.
    Astrophysical jet, Instability, Simulations, Dissipation, Causality, Active Galactic Nuclei, Azimuth, Radio lobes, Steady state, Thermalisation, ...
  • The physical processes responsible for sweeping up the surrounding gas in the host galaxy of an AGN, and able in some circumstances to expel it from the galaxy, are not yet well known. The various mechanisms are briefly reviewed: quasar or radio modes, either momentum-conserving outflows, energy-conserving outflows, or intermediate. They are confronted with observations to determine whether they can explain the M-sigma relation, whether they can quench star formation or also provide some positive feedback, and how the black hole accretion history is related to that of star formation.
    Active Galactic Nuclei, Galaxy, Cooling, Simulations, Accretion, Star formation, AGN feedback, Molecular outflow, Black hole, Cool core galaxy cluster, ...
  • Accurate quantum Monte Carlo calculations of ground and low-lying excited states of light p-shell nuclei are now possible for realistic nuclear Hamiltonians that fit nucleon-nucleon scattering data. At present, results for more than 30 different (J^pi;T) states, plus isobaric analogs, in A \leq 8 nuclei have been obtained with an excellent reproduction of the experimental energy spectrum. These microscopic calculations show that nuclear structure, including both single-particle and clustering aspects, can be explained starting from elementary two- and three-nucleon interactions. Various density and momentum distributions, electromagnetic form factors, and spectroscopic factors have also been computed, as well as electroweak capture reactions of astrophysical interest.
    Hamiltonian, Expectation Value, Excited state, Isospin, Monte Carlo method, Statistical error, Quantum Monte Carlo, Spin orbit, Bound state, P-wave, ...
  • This short review aims at giving a brief overview of the various states of matter that have been suggested to exist in the ultra-dense centers of neutron stars. Particular emphasis is put on the role of quark deconfinement in neutron stars and on the possible existence of compact stars made of absolutely stable strange quark matter (strange stars). Astrophysical phenomena, which distinguish neutron stars from quark stars, are discussed and the question of whether or not quark deconfinement may occur in neutron stars is investigated. Combined with observed astrophysical data, such studies are invaluable to delineate the complex structure of compressed baryonic matter and to put firm constraints on the largely unknown equation of state of such matter.
    Neutron star, Quark star, Quark, Quark matter, Star, Deconfinement, Strange quark, Compact star, Phase transitions, Compressibility, ...
  • We report on a mean-field study of spontaneous breaking of chiral symmetry for Dirac fermions with contact interactions in the presence of chiral imbalance, which is modelled by nonzero chiral chemical potential. We point out that chiral imbalance lowers the vacuum energy of Dirac fermions, which leads to the increase of the renormalized chiral chemical potential upon chiral symmetry breaking. The critical coupling strength for the transition to the broken phase is slightly lowered as the chiral chemical potential is increased, and the transition itself becomes milder. Furthermore, we study the chiral magnetic conductivity in different phases and find that it grows both in the perturbative weak-coupling regime and in the strongly coupled phase with broken chiral symmetry. In the strong coupling regime the chiral magnetic effect is saturated by vector-like bound states (vector mesons) with mixed transverse polarizations. General pattern of meson mixing in the presence of chiral imbalance is also considered. We discuss the relevance of our study for Weyl semimetals and strongly interacting QCD matter. Finally, we comment on the ambiguity of the regularization of the vacuum energy of Dirac fermions in the presence of chirality imbalance.
    Chiral chemical potential, Chirality, Regularization, Hamiltonian, Mean field, Chiral magnetic effect, Chiral symmetry, Chiral magnetic conductivity, Vacuum energy, Renormalization, ...
  • The origin of large-scale magnetic fields is an unsolved problem in cosmology. A possible way to overcome it comes from the idea that these fields emerged from a small primordial magnetic field (PMF) produced in the early universe. This field could lead to the observed large-scale magnetic fields but would also have left an imprint on the cosmic microwave background (CMB). In this work we summarize some statistical properties of these PMFs on the FLRW background. We then show the resulting PMF power spectrum using cosmological perturbation theory and some effects of PMFs on the CMB anisotropies.
    Cosmological magnetic field, Cosmic microwave background, Cross-correlation, CMB temperature anisotropy, Amplitude, Angular power spectrum, Infrared cutoff, Statistics, Summarization, Magnetic energy, ...
  • Using the recently developed approach to quantum Hall physics based on Newton-Cartan geometry, we consider the hydrodynamics of an interacting system on the lowest Landau level. We rephrase the non-relativistic fluid equations of motion in a manner that manifests the spacetime diffeomorphism invariance of the underlying theory. In the massless (or lowest Landau level) limit, the fluid obeys a force-free constraint which fixes the charge current. An entropy current analysis further constrains the energy response, determining four transverse response functions in terms of only two: an energy magnetization and a thermal Hall conductivity. Kubo formulas are presented for all transport coefficients and constraints from Weyl invariance derived. We also present a number of Streda-type formulas for the equilibrium response to external electric, magnetic and gravitational fields.
    Lowest Landau Level, Covariance, Constitutive relation, Kubo formula, Fluid dynamics, Thermalisation, Entropy current, Massless limit, Charged current, Newton-Cartan theory, ...
  • Magnetic reconnection is a process that changes magnetic field topology in highly conducting fluids. Traditionally, magnetic reconnection was associated mostly with solar flares. In reality, the process must be ubiquitous as astrophysical fluids are magnetized and motions of fluid elements necessarily entail crossing of magnetic frozen in field lines and magnetic reconnection. We consider magnetic reconnection in realistic 3D geometry in the presence of turbulence. This turbulence in most astrophysical settings is of pre-existing nature, but it also can be induced by magnetic reconnection itself. In this situation turbulent magnetic field wandering opens up reconnection outflow regions, making reconnection fast. We discuss Lazarian \& Vishniac (1999) model of turbulent reconnection, its numerical and observational testings, as well as its connection to the modern understanding of the Lagrangian properties of turbulent fluids. We show that the predicted dependences of the reconnection rates on the level of MHD turbulence make the generally accepted Goldreich \& Sridhar (1995) model of turbulence self-consistent. Similarly, we argue that the well-known Alfv\'en theorem on flux freezing is not valid for the turbulent fluids and therefore magnetic fields diffuse within turbulent volumes. This is an element of magnetic field dynamics that was not accounted by earlier theories. For instance, the theory of star formation that was developing assuming that it is only the drift of neutrals that can violate the otherwise perfect flux freezing, is affected and we discuss the consequences of the turbulent diffusion of magnetic fields mediated by reconnection.
    Turbulence, Magnetic reconnection, Simulations, Magnetohydrodynamics, Magnetohydrodynamic turbulence, Eddy, Dissipation, Lundquist number, Compressibility, Instability, ...
  • We present a new determination of the primordial helium mass fraction Yp, based on 93 spectra of 86 low-metallicity extragalactic HII regions, and taking into account the latest developments concerning systematic effects. These include collisional and fluorescent enhancements of HeI recombination lines, underlying HeI stellar absorption lines, collisional and fluorescent excitation of hydrogen lines and temperature and ionization structure of the HII region. Using Monte Carlo methods to solve simultaneously for the above systematic effects, we find the best value to be Yp=0.2565+/-0.0010(stat.)+/-0.0050(syst.). This value is higher at the 2sigma level than the value given by Standard Big Bang Nucleosynthesis (SBBN), implying deviations from it. The effective number of light neutrino species Nnu is equal to 3.68^+0.80_-0.70 (2sigma) and 3.80^+0.80_-0.70 (2sigma) for a neutron lifetime tau(n) equal to 885.4+/-0.9 s and 878.5+/-0.8 s, respectively, i.e. it is larger than the experimental value of 2.993+/-0.011.
    Fluorescence, Absorptivity, Ionization, Wilkinson Microwave Anisotropy Probe, Linear regression, Neutrino, Collisional excitation, Recombination, Equivalent width, Reddening, ...
  • We present near-infrared spectroscopic observations of the high-intensity HeI 10830 emission line in 45 low-metallicity HII regions. We combined these NIR data with spectroscopic data in the optical range to derive the primordial He abundance. The use of the HeI 10830A line, the intensity of which is very sensitive to the density of the HII region, greatly improves the determination of the physical conditions in the He^+ zone. This results in a considerably tighter Y - O/H linear regression compared to all previous studies. We extracted a final sample of 28 HII regions with Hbeta equivalent width EW(Hbeta)>150A, excitation parameter O^2+/O>0.8, and with helium mass fraction Y derived with an accuracy better than 3%. With this final sample we derived a primordial He mass fraction Yp = 0.2551+/-0.0022. The derived value of Yp is higher than the one predicted by the standard big bang nucleosynthesis (SBBN) model. Using our derived Yp together with D/H = (2.53+/-0.04)x10^-5, and the chi^2 technique, we found that the best agreement between these light element abundances is achieved in a cosmological model with a baryon mass density Omega_b h^2 = 0.0240+/-0.0017 (68% CL), +/-0.0028 (95.4% CL), +/-0.0034 (99% CL) and an effective number of neutrino species Neff = 3.58+/-0.25 (68% CL), +/-0.40 (95.4% CL), +/-0.50 (99% CL). A non-standard value of Neff is preferred at the 99% CL, implying the possible existence of additional types of neutrino species.
    Near-infrared, Abundance, Galaxy, Intensity, Primordial helium abundance, Electron temperature, Systematic error, Linear regression, Equivalent width, Statistical error, ...
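
Both helium-abundance entries above ultimately obtain Yp as the intercept of a Y versus O/H linear regression extrapolated to zero metallicity. A minimal sketch of that final step, with made-up measurement arrays standing in for the processed HII-region sample (the real analyses model many systematic effects before this point):

```python
import numpy as np

# Hypothetical measurements for a handful of HII regions:
# oxygen abundance O/H and helium mass fraction Y with errors.
oh = np.array([4.2, 6.1, 7.8, 9.5, 11.3]) * 1e-5
y = np.array([0.2562, 0.2570, 0.2578, 0.2585, 0.2591])
y_err = np.full_like(y, 0.002)

# Weighted linear regression Y = Yp + slope * (O/H);
# the intercept at O/H = 0 is the primordial value Yp.
w = 1.0 / y_err**2
A = np.vstack([np.ones_like(oh), oh]).T          # design matrix [1, O/H]
cov = np.linalg.inv(A.T @ (w[:, None] * A))      # parameter covariance
yp, slope = cov @ (A.T @ (w * y))                # weighted least squares
yp_err = np.sqrt(cov[0, 0])

print(f"Yp = {yp:.4f} +/- {yp_err:.4f} (statistical only)")
```
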
  • We present the first measurement of the cross-correlation of weak gravitational lensing and the extragalactic gamma-ray background emission using data from the Canada-France-Hawaii Lensing Survey and the Fermi Large Area Telescope. The cross-correlation is a powerful probe of signatures of dark matter annihilation, because both cosmic shear and gamma-ray emission originate directly from the same DM distribution in the universe, and it can be used to derive constraints on dark matter annihilation cross-section. We show that the measured lensing-gamma correlation is consistent with a null signal. Comparing the result to theoretical predictions, we exclude dark matter annihilation cross sections of <sigma v> =10^{-24}-10^{-25} cm^3 s^-1 for a 100 GeV dark matter. If dark matter halos exist down to the mass scale of 10^-6 M_sun, we are able to place constraints on the thermal cross sections <sigma v> ~ 3 x 10^{-26} cm^3 s^-1 for a 10 GeV dark matter annihilation into tau^{+} tau^{-}. Future gravitational lensing surveys will increase sensitivity to probe annihilation cross sections of <sigma v> ~ 3 x 10^{-26} cm^3 s^-1 even for a 100 GeV dark matter. Detailed modeling of the contributions from astrophysical sources to the cross correlation signal could further improve the constraints by ~ 40-70 %.
    Cross-correlation, Dark matter, Dark matter annihilation, Galaxy, Cosmic shear, Point spread function, CFHTLenS survey, Dark matter halo, Blazar, Halo model, ...
  • Image mosaicing is a method of combining multiple images of the same scene into a larger image. The output of the image mosaic is the union of the two input images. Image-mosaicing algorithms are used to obtain the mosaiced image. The image mosaicing process is basically divided into five phases: feature point extraction, image registration, homography computation, warping, and blending of the images. Various corner detection algorithms are used for feature extraction; these corners produce an efficient and informative output mosaiced image. Image mosaicing is widely used in creating 3D images, medical imaging, computer vision, data from satellites, and military automatic target recognition.
    Mosaic image, Image Processing, Feature extraction, Algorithms, ...
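
The five-phase pipeline in the entry above maps onto standard OpenCV calls. A minimal sketch, assuming two overlapping input files img1.jpg and img2.jpg (placeholder names) and using ORB features as a stand-in for a dedicated corner detector:

```python
import cv2
import numpy as np

img1 = cv2.imread("img1.jpg")   # placeholder filenames
img2 = cv2.imread("img2.jpg")

# 1. Feature point extraction
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# 2. Image registration: match descriptors between the two images
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# 3. Homography computation (RANSAC rejects bad matches)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Warping img2 into img1's frame; 5. crude blending by overwriting
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
mosaic[0:h, 0:w] = img1
cv2.imwrite("mosaic.jpg", mosaic)
```
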
  • The Internet is fast becoming critically important to commerce, industry and individuals. The Search Engine (SE) is the most vital component of the communication network and is used by people to discover information. Search engine optimization (SEO) is the process mostly used to increase traffic from free, organic or natural listings on search engines, and it also helps to increase website ranking. It includes techniques like link building, directory submission, classified submission etc. SMO, on the other hand, is the process of promoting a website on social media platforms. It includes techniques like RSS feeds, social news and bookmarking sites, video and blogging sites, as well as social networking sites such as Facebook, Twitter, Google+, Tumblr, Pinterest, Instagram etc. Social media optimization is becoming increasingly important for search engine optimization, as search engines increasingly utilize the recommendations of users of social networks to rank pages in the search engine result pages, and it is more difficult to tip the influence of the search engines in this way. Social Media Optimization (SMO) may also be used to generate traffic on a website, promote a business at the center of the social marketing place and increase ranking.
    Ranking, Social network, Market, Facebook, Google+, Twitter, Communication, Networks, ...
  • We present a novel approach for learning nonlinear dynamic models, which leads to a new set of tools capable of solving problems that are otherwise difficult. We provide theory showing this new approach is consistent for models with long range structure, and apply the approach to motion capture and high-dimensional video data, yielding results superior to standard alternatives.
    Hidden Markov model, Statistics, Supervised learning, Projection operator, Kalman filter, Time Series, Bayesian approach, Stochastic gradient descent, Neural network, Non-linear dynamical system, ...
  • We study the prevalent problem when a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might seek to learn that "dogs are similar to dogs" even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on using T-statistics. Our experiments validate this theory showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.
    Overfitting, Training set, Statistics, Feature selection, Greedy algorithm, Rapidity, Covariance, Receiver-operator characteristic, Generalization error, Machine learning, ...
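
A minimal sketch of the T-statistic-based feature ranking mentioned in the entry above, for a binary labeling; the data and the top-k selection rule are illustrative stand-ins, and the paper's actual greedy procedure and data-dependent variance corrections are more involved:

```python
import numpy as np

def t_statistics(X, y):
    """Two-sample t-statistic of each feature for binary labels y in {0, 1}."""
    a, b = X[y == 1], X[y == 0]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / (se + 1e-12)

def greedy_select(X, y, k=20):
    """Greedily keep the k features with the largest |t|."""
    t = np.abs(t_statistics(X, y))
    return np.argsort(t)[::-1][:k]

# Toy usage: 200 samples, 500 features, labels correlated with feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
print(greedy_select(X, y, k=5))
```
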
  • An efficient way to learn deep density models that have many layers of latent variables is to learn one layer at a time using a model that has only one layer of latent variables. After learning each layer, samples from the posterior distributions for that layer are used as training data for learning the next layer. This approach is commonly used with Restricted Boltzmann Machines, which are undirected graphical models with a single hidden layer, but it can also be used with Mixtures of Factor Analysers (MFAs) which are directed graphical models. In this paper, we present a greedy layer-wise learning algorithm for Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted to an equivalent shallow MFA by multiplying together the factor loading matrices at different levels, learning and inference are much more efficient in a DMFA and the sharing of each lower-level factor loading matrix by many different higher level MFAs prevents overfitting. We demonstrate empirically that DMFAs learn better density models than both MFAs and two types of Restricted Boltzmann Machine on a wide variety of datasets.
    Graphical model, Latent variable, Overfitting, Algorithms, ...
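
A rough sketch of the greedy layer-wise recipe described in the entry above, using scikit-learn's FactorAnalysis as a stand-in for a single layer and posterior means (rather than posterior samples, and without the mixture components) as training data for the next layer; this only illustrates the layer-wise idea, not the paper's DMFA algorithm:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def train_layerwise(X, layer_dims=(32, 8)):
    """Fit one factor-analysis layer at a time; each layer is trained on
    the latent representation produced by the previous one."""
    layers, data = [], X
    for dim in layer_dims:
        fa = FactorAnalysis(n_components=dim, random_state=0).fit(data)
        layers.append(fa)
        data = fa.transform(data)   # posterior means feed the next layer
    return layers

# Toy usage on random data standing in for a real dataset.
X = np.random.default_rng(0).normal(size=(500, 100))
model = train_layerwise(X)
print([fa.components_.shape for fa in model])
```
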
  • Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.
    Albedo, Latent variable, Generative model, Objective, Networks, ...
  • When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.
    Neural network, Backpropagation, Hyperparameter, Hidden Markov model, Filter bank, Overfitting, Training set, Bayesian, Classification, Stochastic gradient descent, ...
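
A minimal NumPy sketch of the dropout idea in the entry above: each hidden unit is dropped with probability 0.5 during training, and the surviving activations are rescaled ("inverted dropout") so that the expected input to the next layer matches what is used at test time; the layer sizes and retention probability here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, train=True):
    """Randomly zero units during training; identity at test time.
    Inverted-dropout scaling avoids any rescaling at test time."""
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

# Toy forward pass of one hidden layer with dropout applied.
x = rng.normal(size=(4, 10))          # batch of 4 inputs
W = rng.normal(size=(10, 32)) * 0.1   # weights of a hidden layer
h = np.maximum(0.0, x @ W)            # ReLU activations
h_train = dropout(h, train=True)      # half the units zeroed, rest scaled up
h_test = dropout(h, train=False)      # unchanged
```
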
  • The recent proliferation of richly structured probabilistic models raises the question of how to automatically determine an appropriate model for a dataset. We investigate this question for a space of matrix decomposition models which can express a variety of widely used models from unsupervised learning. To enable model selection, we organize these models into a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 2500 structures using a small toolbox of reusable algorithms. Using a greedy search over our grammar, we automatically choose the decomposition structure from raw data by evaluating only a small fraction of all models. The proposed method typically finds the correct structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns sensible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
    Model structure, Unsupervised learning, Algorithms, Likelihood, ...
  • We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the large-scale Netflix and MovieLens ratings data, and find improved accuracy relative to the existing matrix norms. We also provide theoretical results showing learning guarantees for some of the new norms.
    Regularization, Rademacher complexity, Simulations, Convex hull, Duality, Collaborative filtering, Grothendieck inequality, Overfitting, Segmentation, Ranking, ...
  • We present a framework for high-redshift ($z \geq 7$) galaxy formation that traces their dark matter (DM) and baryonic assembly in four cosmologies: Cold Dark Matter (CDM) and Warm Dark Matter (WDM) with particle masses of $m_x =$ 1.5, 3 and 5 ${\rm keV}$. We use the same astrophysical parameters regulating star formation and feedback, chosen to match current observations of the evolving ultraviolet luminosity function (UV LF). We find that the assembly of observable (with current and upcoming instruments) galaxies in CDM and $m_x \geq 3 {\rm keV}$ WDM results in similar halo mass to light ratios (M/L), stellar mass densities (SMDs) and UV LFs. However, the suppression of small-scale structure leads to a notably delayed and subsequently more rapid stellar assembly in the $1.5 {\rm keV}$ WDM model. Thus galaxy assembly in $m_x \leq 2 {\rm keV}$ WDM cosmologies is characterized by: (i) a dearth of small-mass halos hosting faint galaxies; and (ii) a younger, more UV bright stellar population, for a given stellar mass. The higher M/L ratio (effect ii) partially compensates for the dearth of small-mass halos (effect i), making the resulting UV LFs closer to CDM than expected from simple estimates of halo abundances. We find that the redshift evolution of the SMD is a powerful probe of the nature of DM. Integrating down to a limit of $M_{UV} =-16.5$ for the James Webb Space Telescope (JWST), the SMD evolves as $\log$(SMD)$\propto -0.63 (1+z)$ in $m_x = 1.5 {\rm keV}$ WDM, as compared to $\log$(SMD)$\propto -0.44 (1+z)$ in CDM. Thus high-redshift stellar assembly provides a powerful testbed for WDM models, accessible with the upcoming JWST.
    Galaxy, Cold dark matter, Warm dark matter, Luminosity function, Stellar mass, WDM particles, Star formation, Dark matter, Star, Dark matter model, ...
  • Many practitioners who use the EM algorithm complain that it is sometimes slow. When does this happen, and what can be done about it? In this paper, we study the general class of bound optimization algorithms - including Expectation-Maximization, Iterative Scaling and CCCP - and their relationship to direct optimization algorithms such as gradient-based methods for parameter learning. We derive a general relationship between the updates performed by bound optimization methods and those of gradient and second-order methods and identify analytic conditions under which bound optimization algorithms exhibit quasi-Newton behavior, and conditions under which they possess poor, first-order convergence. Based on this analysis, we consider several specific algorithms, interpret and analyze their convergence properties and provide some recipes for preprocessing input to these algorithms to yield faster convergence behavior. We report empirical results supporting our analysis and showing that simple data preprocessing can result in dramatically improved performance of bound optimizers in practice.
    Expectation maximization, Algorithms, Newton
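
A small worked example of the bound-optimization behaviour discussed in the entry above: EM for a two-component 1-D Gaussian mixture, where each iteration maximizes a lower bound on the log-likelihood. The data and initialization are made up; with strongly overlapping components the means creep toward their targets, illustrating the slow first-order regime:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two overlapping Gaussians (slow-EM regime).
x = np.concatenate([rng.normal(-0.5, 1.0, 500), rng.normal(0.5, 1.0, 500)])

def em_gmm(x, iters=100):
    """EM for a 2-component GMM with unit variances and equal weights."""
    mu = np.array([-2.0, 2.0])                    # arbitrary initialization
    for _ in range(iters):
        # E-step: responsibilities (posterior over components).
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximize the bound -> responsibility-weighted means.
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu

print(em_gmm(x))   # estimated means, one bound step at a time
```
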
  • In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. It is shown that the proposed method is a generalization of recently successful methods of dropout (Hinton et al., 2012), explicit noise injection (Vincent et al., 2010; Bishop, 1995) and semantic hashing (Salakhutdinov & Hinton, 2009).
    Perceptron, Edge connectivity, Backpropagation, Probability, Algorithms, ...
  • Deep learning methods have recently enjoyed a number of successes in the tasks of classification and representation learning. These tasks are very important for brain imaging and neuroscience discovery, making the methods attractive candidates for porting to a neuroimager's toolbox. Successes are, in part, explained by a great flexibility of deep learning models. This flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
    Deep learning, Classification, Segmentation, Infomax, Cross-correlation, Simulations, Constraint Satisfaction Problem, Bipartite network, Time Series, Backpropagation, ...
  • Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by visual attention models in computational neuroscience and by the need for deep generative models to learn on object-centric data, we describe a framework for generative learning using attentional mechanisms. Attentional mechanisms propagate signals from regions of interest in a scene to higher-layer areas of canonical representation, where generative modeling takes place. By ignoring background clutter, the generative model can concentrate its resources on modeling objects of interest. Our model is a proper graphical model where the 2D similarity transformation from computer vision is part of the top-down process. A ConvNet is used to initialize good guesses during posterior inference, which is based on Hamiltonian Monte Carlo. Upon learning on face images, we demonstrate that our model can robustly attend to face regions of novel test subjects. Most importantly, our model can learn generative models of new faces from a novel dataset of large images where the location of the face is not known.
    Generative model, Boltzmann machine, Graphical model, Hamiltonian, Manifold, Monte Carlo method, Image Processing, Inference, Latent variable, Simulations, ...
  • Matrix factorization (MF) has become a common approach to collaborative filtering, due to ease of implementation and scalability to large data sets. Two existing drawbacks of the basic model are that it does not incorporate side information on either users or items, and that it assumes a common variance for all users. We extend the work of constrained probabilistic matrix factorization by deriving the Gibbs updates for the side feature vectors for items (Salakhutdinov and Mnih, 2008). We show that this Bayesian treatment of the constrained PMF model outperforms simple MAP estimation. We also consider extensions to the heteroskedastic precision introduced in the literature (Lakshminarayanan, Bouchard, and Archambeau, 2011). We show that this tends to result in overfitting for deterministic approximation algorithms (e.g. variational inference) when the observed entries in the user/item matrix are distributed in a non-uniform manner. In light of this, we propose a truncated precision model. Our experimental results suggest that this model tends to delay overfitting.
    Overfitting, Hyperparameter, Feature vector, Gibbs sampling, Bayesian, Training set, Mean-field approximation, Gaussian distribution, Summarization, Collaborative filtering, ...
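
For context on the matrix-factorization setup being extended in the entry above, here is a minimal MAP-style (penalized least-squares) sketch with alternating updates on a toy ratings matrix; the Gibbs sampling, side features and truncated-precision model of the paper are not shown, and all sizes and hyperparameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam = 50, 40, 5, 0.1

# Toy "ratings": a low-rank matrix with most entries unobserved.
U_true, V_true = rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k))
R = U_true @ V_true.T
mask = rng.random(R.shape) < 0.2           # ~20% of entries observed

U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))
for _ in range(20):                        # alternating ridge regressions
    for i in range(n_users):
        obs = mask[i]
        A = V[obs].T @ V[obs] + lam * np.eye(k)
        U[i] = np.linalg.solve(A, V[obs].T @ R[i, obs])
    for j in range(n_items):
        obs = mask[:, j]
        A = U[obs].T @ U[obs] + lam * np.eye(k)
        V[j] = np.linalg.solve(A, U[obs].T @ R[obs, j])

rmse = np.sqrt(np.mean((U @ V.T - R)[~mask] ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```
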
  • We present the preliminary results of a systematic search for GRB and other transients in the publicly available data for the IBIS/PICsIT (0.2-10 MeV) detector on board INTEGRAL. Lightcurves in 2-8 energy bands with time resolution from 1 to 62.5 ms have been collected and an analysis of spectral and temporal characteristics has been performed. This is the nucleus of a forthcoming first catalog of GRB observed by PICsIT.
    Gamma ray burst, Light curve, Spectral imaging, Cosmic ray, Imager on Board the INTEGRAL Satellite, Background subtraction, Solar activity, Van Allen belts, INTEGRAL satellite, Energy, ...
  • We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements.
    Binary star, Mixture model, Boltzmann machine, Hamming distance, Ranking, Convex combination, Crab Nebula, Inference, Statistics, Graphical model, ...
  • We study the complexity of functions computable by deep feedforward neural networks with piece-wise linear activations in terms of the number of regions of linearity that they have. Deep networks are able to sequentially map portions of each layer's input space to the same output. In this way, deep models compute functions with a compositional structure that is able to re-use pieces of computation exponentially often in terms of their depth. This note investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piece-wise linear activation functions.
    Neural network, Linear functional, Activation function, Voronoi tessellation, Ranking, Feedforward neural network, Multidimensional Array, Quadrants, Orientation, Visualization technique, ...
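
A small experiment illustrating the notion of linear regions from the entry above: for a random ReLU network on a 1-D input, count how many distinct activation patterns (hence how many pieces of the piecewise-linear input-output map) appear along a dense grid. The architecture and grid are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_linear_regions(widths=(8, 8), n_grid=100000):
    """Count distinct ReLU activation patterns of a random net on a 1-D grid."""
    x = np.linspace(-3, 3, n_grid).reshape(-1, 1)
    h, patterns, in_dim = x, [], 1
    for w in widths:
        W = rng.normal(size=(in_dim, w))
        b = rng.normal(size=w)
        pre = h @ W + b
        patterns.append(pre > 0)          # which units are active
        h = np.maximum(pre, 0.0)
        in_dim = w
    codes = np.concatenate(patterns, axis=1)
    # Each distinct activation pattern corresponds to one linear piece.
    return len(np.unique(codes, axis=0))

print(count_linear_regions())   # typically far more pieces than one layer gives
```
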
  • Effective Lagrangian, Chirality, Numerical methods, Elastic scattering, Electroweak interaction, Monte Carlo method, Nuclear force, Form factor, Bound state, Effective potential, ...
  • Muon capture on the deuteron is studied in heavy baryon chiral perturbation theory (HBChPT). It is found that by far the dominant contribution to $\mu d$ capture comes from a region of the final three-body phase-space in which the energy of the two neutrons is sufficiently small for HBChPT to be applicable. The single unknown low-energy constant having been fixed from the tritium beta decay rate, our calculation contains no free parameter. Our estimate of the $\mu d$ capture rate is consistent with the existing data. The relation between $\mu d$ capture and the $\nu d$ reactions, which are important for the SNO experiments, is briefly discussed.
    Effective field theory, Deuteron, SNO+, Amplitude, Muon, Interference, Heavy Baryon Chiral Perturbation Theory, Chirality, Momentum transfer, Pion, ...
  • Gravitational field is the manifestation of space-time translational ($T_4$) gauge symmetry, which enables gravitational interaction to be unified with the strong and the electroweak interactions. Such a total-unified model is based on a generalized Yang-Mills framework in flat space-time. Following the idea of Glashow-Salam-Ward-Weinberg, we gauge the groups $T_4 \times (SU_3)_{color} \times SU_2 \times U_1\times U_{1b}$ on equal footing, so that we have the total-unified gauge covariant derivative ${\bf d}_{\mu} = \partial_{\mu} - ig\phi_{\mu}^{\nu} p_{\nu} + ig_{s}G_{\mu}^{a}(\lambda^a/2) + if W_{\mu}^{k}t^k + if' U_{\mu}t_{o} + ig_{b}B_{\mu}$. The generators of the external $T_4$ group have the representation $p_{\mu}=i\partial_{\mu}$, which differs from the other generators of all internal groups, which have constant matrix representations. Consequently, the total-unified model leads to the following new results: (a) All internal $(SU_3)_{color}, SU_2, U_1$ and baryonic $U_{1b}$ gauge symmetries have extremely small violations due to the gravitational interaction. (b) The $T_4$ gauge symmetry remains exact and dictates the universal coupling of gravitons. (c) Such a gravitational violation of internal gauge symmetries leads to modified eikonal and Hamilton-Jacobi type equations, which are obtained in the geometric-optics limit and involve effective Riemann metric tensors. (d) The rules for Feynman diagrams involving new couplings of photon-graviton, gluon-graviton and quark-graviton are obtained.
    Gauge field, Gauge symmetry, Quark, Curvature, Graviton, Gauge invariance, Gauge transformation, General relativity, Gravitational interaction, Coupling constant, ...
  • Gamma-ray searches for dark matter annihilation and decay in dwarf galaxies rely on an understanding of the dark matter density profiles of these systems. Conversely, uncertainties in these density profiles propagate into the derived particle physics limits as systematic errors. In this paper we quantify the expected dark matter signal from 20 Milky Way dwarfs using a uniform analysis of the most recent stellar-kinematic data available. Assuming that the observed stellar populations are equilibrium tracers of spherically-symmetric gravitational potentials that are dominated by dark matter, we find that current stellar-kinematic data can predict the amplitudes of annihilation signals to within a factor of a few for the ultra-faint dwarfs of greatest interest. On the other hand, the expected signal from several classical dwarfs (with high-quality observations of large numbers of member stars) can be localized to the ~20% level. These results are important for designing maximally sensitive searches in current and future experiments using space and ground-based instruments.
    A dwarfs, Star, Dwarf galaxy, Dark matter, Kinematics, Stellar kinematics, Dark matter annihilation, Line of sight, Ultra-faint dwarf spheroidal galaxy, Milky Way, ...
  • We propose an experiment to search for QCD axion and axion-like-particle (ALP) dark matter. Nuclei that are interacting with the background axion dark matter acquire time-varying CP-odd nuclear moments such as an electric dipole moment. In analogy with nuclear magnetic resonance, these moments cause precession of nuclear spins in a material sample in the presence of a background electric field. This precession can be detected through high-precision magnetometry. With current techniques, this experiment has sensitivity to axion masses m_a <~ 10^(-9) eV, corresponding to theoretically well-motivated axion decay constants f_a >~ 10^16 GeV. With improved magnetometry, this experiment could ultimately cover the entire range of masses m_a <~ 10^(-6) eV, just beyond the region accessible to current axion searches.
    Electric dipole moment, Axion-like particle, Axion, QCD axion, Dark matter, Magnetometer, Axionic dark matter, Weakly interacting massive particle, Larmor frequency, Cosmology, ...
  • The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading order contribution. In the present paper we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for $\pi\pi$ elastic scattering, as input. The formalism also includes experimental measurements on the modulus in the region 0.65-0.70 GeV, taken from the most recent $e^+e^-\to \pi^+\pi^-$ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR and KLOE, we make the predictions $a_\mu^{\pi\pi, \mathrm{LO}}\,[2 m_\pi,\, 0.30\,{\rm GeV}]=(0.553 \pm 0.004) \times 10^{-10}$ and $a_\mu^{\pi\pi, \mathrm{LO}}\,[0.30\,{\rm GeV},\, 0.63\,{\rm GeV}]=(133.083 \pm 0.837)\times 10^{-10}$. These are consistent with the other recent determinations, and have slightly smaller errors.
    Form factor, Muon, Pion, Unitarity, Charge radius, Isospin, Hadronization, Elasticity, Statistics, Vacuum polarization, ...
  • High quality CdS nanowires suspended in air were optically pumped both below and above the lasing threshold. The polarization of the pump laser was varied while emission out of the end facet of the nanowire was monitored in a 'head-on' measurement geometry. The highest pump efficiency and the most efficient absorption of the pump radiation are demonstrated for an incident electric field polarized parallel to the nanowire axis. This polarization dependence, which was observed both above the lasing threshold and in the regime of amplified spontaneous emission, is caused by an enhanced absorption for parallel polarized optical pumping. Measured Stokes parameters of the nanowire emission reveal that, due to the onset of lasing, the degree of polarization rapidly increases from approximately 15% to 85%. Both the Stokes parameters and the degree of polarization of the nanowire lasing emission are independent of the excitation polarization. The transversal lasing mode is therefore not notably affected by the polarization of the pumping beam, although the supply of optical gain is significantly enhanced for an excitation polarization parallel to the nanowire axis.
    Nanowire, Optical pumping, Absorptivity, Stokes parameters, Lasers, Spontaneous emission, Polarization, Measurement, Geometry, Electric field, ...
  • The aim of near-field scanning optical microscopy (NSOM) is to reveal the distribution of the electromagnetic field around nanoscale objects. The full vectorial nature of this field is more difficult to measure than just its amplitude. It can only be fully reconstructed with exact knowledge of the optical properties of the probe. Here, we report and numerically explain that NSOM tips with a well-defined apex diameter selectively support azimuthally polarized light ($|E_{\text{azi}}|^2/|E_{\text{tot}}|^2 \approx 55\,\% \pm 5\,\%$ for $1.4\,\mu$m tip aperture diameter and $\lambda_0 = 1550\,$nm). We attribute the generation of azimuthal polarization in the metal-coated fiber tip to symmetry breaking in the bend and subsequent plasmonic mode filtering in the truncated conical taper.
    Azimuth, Amplitude, Apex, Plasmon, Polarization, Optics, Symmetry, Measurement, Objective, Field, ...
  • Plasmonic metasurfaces are investigated that consist of a subwavelength line pattern in an ultrathin (~ 10 nm) silver film, designed for extraordinarily suppressed transmission (EOST) in the visible spectral range. Measurements with a near-field scanning optical microscope (NSOM) demonstrate that far-field irradiation creates resonant excitations of antenna-like (bright) modes that are localized on the metal ridges. In contrast, bound (dark) surface plasmon polaritons (SPPs) launched from an NSOM tip propagate well across the metasurface, preferentially perpendicular to the grating lines.
    Metasurface, NSOM, Surface Plasmon Polariton, Plasmon, Irradiance, Measurement, Metals, Wavelength, Resonance, Field, ...
  • We experimentally demonstrate plasmonic nano-circuits operating as sub-diffraction directional couplers optically excited with high efficiency from free space using optical Yagi-Uda style antennas at $\lambda = 1550$ nm. The optical Yagi-Uda style antennas are designed to feed channel plasmon waveguides with high efficiency (45% in coupling, 60% total emission), narrow angular directivity ($< 40^\circ$) and low insertion loss. SPP gap waveguides exhibit propagation lengths as large as 34 $\mu$m with adiabatically tuned confinement, and are integrated with ultra-compact (5 $\mu$m x 10 $\mu$m), highly dispersive directional couplers, which enable 30 dB discrimination over $\Delta\lambda = 200$ nm with only 0.3 dB device loss.
    Wave guide, Plasmon, Surface Plasmon Polariton, Directional Coupler, Galaxy filament, Finite-difference time-domain method, Index of refraction, Wavelength division multiplexer, Yagi-Uda antenna, Near-infrared, ...
  • This paper is based on our previous work on neural coding. It is a self-organized model supported by existing evidence. Firstly, we briefly introduce this model in this paper, and then we explain the neural mechanism of language and reasoning with it. Moreover, we find that the position of an area determines its importance. Specifically, language-relevant areas are in the capital position of the cortical kingdom; therefore they are closely related to autonomous consciousness and working memory. In essence, language is a miniature of the real world. Briefly, this paper would like to bridge the gap between the molecular mechanisms of neurons and advanced functions such as language and reasoning.
    Self-organization, Neural coding, Language
  • We study on the lattice the 3d SU(2)+Higgs model, which is an effective theory of a large class of 4d high temperature gauge theories. Using the exact constant physics curve, continuum ($V\to\infty, a\to 0$) results for the properties of the phase transition (critical temperature, latent heat, interface tension) are given. The 3-loop correction to the effective potential of the scalar field is determined. The masses of scalar and vector excitations are determined and found to be larger in the symmetric than in the broken phase. The vector mass is considerably larger than the scalar one, which suggests a further simplification to a scalar effective theory at large Higgs masses. The use of consistent 1-loop relations between 3d parameters and 4d physics permits one to convert the 3d simulation results to quantitatively accurate numbers for different physical theories, such as the Standard Model -- excluding possible nonperturbative effects of the U(1) subgroup -- for Higgs masses up to about 70 GeV. The applications of our results to cosmology are discussed.
    Weight function, Higgs phase, Higgs field, Monte Carlo method, Condensation, Perturbation theory, Cosmology, Gauge field, Effective potential, Phase transitions, ...
  • The flux of papers from electron-positron colliders containing data on the photon structure function ended naturally around 2005. It is thus timely to review the theoretical basis and confront the predictions with a summary of the experimental results. The discussion will focus on the increase of the structure function with x (for x away from the boundaries) and its rise with log Q**2, both characteristics being dramatically different from hadronic structure functions. Comparing the data with a specific QCD prediction, a new determination of the QCD coupling constant is presented. The agreement of the experimental observations with the theoretical calculations of the real and virtual photon structure is a striking success of QCD.
    Quark, Quantum chromodynamics, Hadronization, Next-to-leading order computation, Positron, Light quark, Parton, Vector meson, Interference, Strong coupling constant, ...
  • We have analyzed a large sample of clean blazars detected by the Fermi Large Area Telescope (LAT). Using the literature and our own calculations, we obtained the intrinsic $\gamma$-ray luminosity excluding the beaming effect, black hole mass, broad-line luminosity (used as a proxy for disk luminosity), jet kinetic power from "cavity" power and bulk Lorentz factor for parsec-scale radio emission, and studied the distributions of these parameters and the relations between them. Our main results are as follows. (i) After excluding the beaming effect and the redshift effect, the intrinsic $\gamma$-ray luminosity has significant correlations with broad-line luminosity, black hole mass and Eddington ratio. Our results confirm the physical distinction between BL Lacs and FSRQs. (ii) The correlation between broad-line luminosity and jet power is significant, which supports that jet power has a close link with accretion. Jet power depends on both the Eddington ratio and black hole mass. We also obtain $\log L_{\rm BLR}\sim(0.98\pm0.07)\log P_{\rm jet}$ for all blazars, which is consistent with the theoretically predicted coefficient. These results support that jets are powered by energy extraction from both accretion and black hole spin (i.e., not by accretion only). (iii) For almost all BL Lacs, $P_{\rm jet}>L_{\rm disk}$; for most FSRQs, $P_{\rm jet}<L_{\rm disk}$. The "jet-dominance" (parameterized as $\frac{P_{\rm jet}}{L_{\rm disk}}$) is mainly controlled by the bolometric luminosity. Finally, the radiative efficiency of $\gamma$-rays and the properties of TeV blazars detected by Fermi LAT are discussed.
    Astrophysical jet, Luminosity, Blazar, Black hole, Flat spectrum radio quasar, Accretion, Lorentz factor, Broad-line region, FERMI telescope, Active Galactic Nuclei, ...
  • (abridged) Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic foreground (Oppermann et al., 2012), we set out to estimate the extragalactic contributions. We discuss different strategies and the problems involved. In particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We present a few possibilities for improved estimates using the existing foreground map, allowing for imperfectly described observational noise. In this context, we point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. Finally, we develop a reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome this degeneracy. We demonstrate its performance in a simulation, yielding a high quality reconstruction of the Galactic Faraday depth, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each source. We apply this reconstruction technique to a catalog of Faraday rotation observations. We vary our assumptions about the data, showing that the dispersion of extragalactic contributions to observed Faraday depths is likely lower than 7 rad/m^2, in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly constrained by the data in most cases. Posterior samples for the extragalactic contributions and all results of our fiducial model are provided online.
    Faraday, Polar caps, Faraday rotation, Covariance, Angular power spectrum, Galactic latitude, Simulations, Statistics, Expectation Value, Galactic plane, ...
  • The Local Group galaxy M33 exhibits a regular spiral structure and is close enough to permit high resolution analysis of its kinematics, making it an ideal candidate for rotation curve studies of its inner regions. Previous studies have claimed the galaxy has a dark matter halo with an NFW profile, based on statistical comparisons with a small number of other profiles. We apply a Bayesian method from our previous paper to place the dark matter density profile in the context of a continuous, and more general, parameter space, and find that the relation between the baryonic mass-to-light ratio and the log slope of the dark matter density profile in the inner region of the galaxy is inconsistent with observations from similar mass disk galaxies and with theoretical expectations of the impact of feedback on halo profiles. For a wide range of initial assumptions we find the region of parameter space with the inner log slope $\gamma_{\rm in}<0.9$ and a baryonic mass-to-light ratio $\Upsilon_{3.6}<2$ to be strongly excluded by the dynamics of the galaxy. This points to a galaxy whose formation is dominated by contraction rather than feedback, which may be due to interaction with M31.
    Rotation Curve, Galaxy, Triangulum Galaxy, Dark matter halo, Mass to light ratio, Navarro-Frenk-White profile, Dark Matter Density Profile, Monte Carlo Markov chain, Stellar disk, Andromeda galaxy, ...
  • Can we efficiently learn the parameters of directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and in case of large datasets? We introduce a novel learning and approximate inference method that works efficiently, under some mild conditions, even in the on-line and intractable case. The method involves optimization of a stochastic objective function that can be straightforwardly optimized w.r.t. all parameters, using standard gradient-based optimization methods. The method does not require the typically expensive sampling loops per datapoint required for Monte Carlo EM, and all parameter updates correspond to optimization of the variational lower bound of the marginal likelihood, unlike the wake-sleep algorithm. These theoretical advantages are reflected in experimental results.
    Latent variable, Marginal likelihood, Autoencoder, Monte Carlo method, Neural network, Generative model, Regularization, Covariance, Stochastic optimization, Mean field, ...
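
The core of the method described in the entry above is reparameterizing the latent variable so that the variational lower bound can be optimized with ordinary gradients. A minimal NumPy sketch of that single step for a Gaussian encoder and a standard-normal prior (the encoder/decoder networks and their gradients are omitted; the names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, sigma^2) as a deterministic function of (mu, log_var)
    and independent noise, so gradients can flow through mu and log_var."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)

# Toy encoder outputs for a batch of 4 data points, 2 latent dimensions.
mu = rng.normal(size=(4, 2))
log_var = rng.normal(size=(4, 2)) * 0.1
z = reparameterize(mu, log_var)
# ELBO = E[log p(x|z)] - KL; only the analytic KL term is shown here.
print(kl_to_standard_normal(mu, log_var))
```
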
  • We report the discovery of a low-mass planet orbiting Gl 15 A based on radial velocities from the Eta-Earth Survey using HIRES at Keck Observatory. Gl 15 Ab is a planet with minimum mass Msini = 5.35 $\pm$ 0.75 M$_\oplus$, orbital period P = 11.4433 $\pm$ 0.0016 days, and an orbit that is consistent with circular. We characterize the host star using a variety of techniques. Photometric observations at Fairborn Observatory show no evidence for rotational modulation of spots at the orbital period to a limit of ~0.1 mmag, thus supporting the existence of the planet. We detect a second RV signal with a period of 44 days that we attribute to rotational modulation of stellar surface features, as confirmed by optical photometry and the Ca II H & K activity indicator. Using infrared spectroscopy from Palomar-TripleSpec, we measure an M2 V spectral type and a sub-solar metallicity ([M/H] = -0.22, [Fe/H] = -0.32). We measure a stellar radius of 0.3863 $\pm$ 0.0021 R$_\odot$ based on interferometry from CHARA.
    Planet, Star, Photometry, High resolution échelle spectrometer, Amplitude, Eccentricity, Time Series, Spectral energy distribution, Interferometry, M dwarfs, ...
  • We use three-dimensional simulations to study the atmospheric circulation on the first Earth-sized exoplanet discovered in the habitable zone of an M star. We treat Gliese 581g as a scaled-up version of Earth by considering increased values for the exoplanetary radius and surface gravity, while retaining terrestrial values for parameters which are unconstrained by current observations. We examine the long-term, global temperature and wind maps near the surface of the exoplanet --- the climate. The specific locations for habitability on Gliese 581g depend on whether the exoplanet is tidally-locked and how fast radiative cooling occurs on a global scale. Independent of whether the existence of Gliese 581g is confirmed, our study highlights the use of general circulation models to quantify the atmospheric circulation on potentially habitable, Earth-sized exoplanets, which will be the prime targets of exoplanet discovery and characterization campaigns in the next decade.
    Extrasolar planet, Earth, Tidal locking, Cooling, Gliese 581 g, Atmospheric circulation, Climate, Star, Habitable zone, Surface gravity, ...
  • We discuss an extension of the standard model by fields not charged under standard model gauge symmetry in which the electroweak symmetry breaking is driven by the Higgs quartic coupling itself without the need for a negative mass term in the potential. This is achieved by a scalar field S with a large coupling to the Higgs field at the electroweak scale which is driven to very small values at high energies by the gauge coupling of a hidden symmetry under which S is charged. This model remains perturbative all the way to the Planck scale. The Higgs boson is fully SM-like in its couplings to fermions and gauge bosons. The only modified couplings are the effective cubic and quartic self-couplings of the Higgs boson, which are enhanced by 67% and 267% respectively.
    Higgs boson, Standard Model, Electroweak symmetry breaking, Higgs boson mass, Gauge coupling constant, Gauge symmetry, Planck scale, Scalar field, Higgs quartic coupling, Vacuum expectation value, ...