- Time dilation (by Manoj Agravat, 19 Jul 2016 19:41)
- Baryon number (by Manoj Agravat, 10 Jul 2016 08:30)
- Causal inference (by Manoj Agravat, 20 Jul 2016 13:19)
- Full-duplex (by Muhammad R. A. Khandaker, 08 Mar 2016 12:48)
- Effective field theory (by Manoj Agravat, 15 Jul 2016 21:12)
- Neutrino Minimal Standard Model (by Prof. Mikhail Shaposhnikov, 10 Jan 2011 23:58)
- Geometric flattening (by Dr. Ganna Ivashchenko, 05 Dec 2010 22:14)
- Fingers of God (by Dr. Ganna Ivashchenko, 18 May 2011 22:42)
- Andreev reflection (by Prof. Carlo Beenakker, 08 Dec 2010 13:33)
- Kaluza-Klein dark matter (by Dr. Geraldine Servant, 05 Dec 2010 22:13)

- In this paper, the consequences of introducing a deformed Snyder-Kepler potential in the Schwarzschild metric are investigated. After this modification, a dynamically dependent horizon is obtained, with different penetration radii for massive particles and light rays in radial orbits. In the case of circular orbits, all orbits remain untouched. Keywords: Black hole, Horizon, Circular orbit, Radial velocity, Phase space, Planck mission, Effective potential, Quantum gravity, Poisson bracket, Conserved quantities, ...
- We consider the main physical notions and phenomena described by the author in his mathematical theory of thermodynamics. The new mathematical model yields the equation of state for a wide class of classical gases consisting of non-polar molecules, provided that the spinodal, the critical isochore and the second virial coefficient are given. As an example, the spinodal, the critical isochore and the second virial coefficient are taken from the van der Waals model. For this specific example, the isotherms constructed on the basis of the author's model are compared to the van der Waals isotherms, obtained from completely different considerations. Keywords: Liquids, Degree of freedom, Earth, Viscosity, Second virial coefficient, Critical value, Critical point, Critical temperature, Phase transitions, Turbulence, ...
- In 1922, Kottler put forward the program to remove the gravitational potential, the metric of spacetime, from the fundamental equations in physics as far as possible. He successfully applied this idea to Newton's gravitostatics and to Maxwell's electrodynamics, where Kottler recast the field equations in premetric form and specified a metric-dependent constitutive law. We will discuss the basics of the premetric approach and some of its beautiful consequences, like the division of universal constants into two classes. We show that classical electrodynamics can be developed without a metric quite straightforwardly: the Maxwell equations, together with a local and linear response law for electromagnetic media, admit a consistent premetric formulation. Kottler's program succeeds here without provisos. In Kottler's approach to gravity, making the theory relativistic, two premetric quasi-Maxwellian field equations arise, but their field variables, if interpreted in terms of general relativity, do depend on the metric. However, one can hope to bring the Kottler idea to work by using the teleparallelism equivalent of general relativity, where the gravitational potential, the coframe, can be chosen in a premetric way. Keywords: Electrodynamics, General relativity, Speed of light, Two-form, Differential form of degree three, Axion, Constitutive relation, Gravitational fields, Physical dimensions, Birefringence, ...
- In our work we give examples of using Fermat's Last Theorem to solve some problems from algebra, geometry and number theory. Keywords: Fermat's Last Theorem, Number theory, Square-free, Algebraic number, Elliptic curve, Fundamental theorem of arithmetic, Complex number, Algebra, Geometry, Polynomial, ...
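A classic example of this kind of application (a standard illustration, not necessarily one from the paper itself) is the irrationality of $\sqrt[n]{2}$ for $n \ge 3$:

```latex
% Suppose, for integers p, q > 0 and some n >= 3, that
\sqrt[n]{2} = \frac{p}{q}
\;\Longrightarrow\; 2q^n = p^n
\;\Longrightarrow\; q^n + q^n = p^n ,
% a nontrivial solution of x^n + y^n = z^n with n >= 3,
% contradicting Fermat's Last Theorem.
% Hence 2^{1/n} is irrational for every n >= 3.
```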
- In this paper, we present a new approach to the convolved Fibonacci numbers arising from their generating function, and give some new and explicit identities for them. Keywords: Polynomial, Differential equations
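The objects in question can be sketched numerically: under one common convention (which may differ from the paper's normalization), the convolved Fibonacci numbers $F_n^{(r)}$ are the coefficients of $(1-x-x^2)^{-r}$, so they are obtained by convolving the shifted Fibonacci sequence with itself $r$ times:

```python
def gf_coeffs(n):
    """Coefficients a_0..a_{n-1} of 1/(1 - x - x^2):
    a_k = a_{k-1} + a_{k-2}, the shifted Fibonacci numbers."""
    a = [1, 1]
    while len(a) < n:
        a.append(a[-1] + a[-2])
    return a[:n]

def convolved(r, n):
    """Coefficients of (1/(1 - x - x^2))^r via repeated series products."""
    base = gf_coeffs(n)
    c = [1] + [0] * (n - 1)          # the constant series "1"
    for _ in range(r):
        c = [sum(c[k] * base[m - k] for k in range(m + 1)) for m in range(n)]
    return c

print(gf_coeffs(6))      # [1, 1, 2, 3, 5, 8]   -- Fibonacci, shifted
print(convolved(2, 6))   # [1, 2, 5, 10, 20, 38] -- second convolved Fibonacci
```

With $r=1$ the construction reproduces the shifted Fibonacci numbers themselves, which is a quick sanity check on the convolution.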
- In the present paper we study, in a mathematically non-formal way, the validity of Fermat's Last Theorem (FLT) by generalizing the usual procedure of extracting the square root of non-convenient objects, initially introduced by P. A. M. Dirac in the study of the linear relativistic wave equation. Keywords: Fermat's Last Theorem, Relativistic wave equations, Fourth power, Hamiltonian, Permutation, Object, Dirac equation, Algebra, Language, ...
- The internal structure of gas giant planets may be more complex than the commonly assumed core-envelope structure with an adiabatic temperature profile. Different primordial internal structures as well as various physical processes can lead to non-homogeneous compositional distributions. A non-homogeneous internal structure has a significant impact on the thermal evolution and final structure of the planets. In this paper, we present alternative structure and evolution models for Jupiter and Saturn allowing for non-adiabatic primordial structures and the mixing of heavy elements by convection as these planets evolve. We present the evolution of the planets accounting for various initial composition gradients, and in the case of Saturn, include the formation of a helium-rich region as a result of helium rain. We investigate the stability of regions with composition gradients against convection, and find that the helium shell in Saturn remains stable and does not mix with the rest of the envelope. In other cases, convection mixes the planetary interior despite the existence of compositional gradients, leading to the enrichment of the envelope with heavy elements. We show that non-adiabatic structures (and cooling histories) for both Jupiter and Saturn are feasible. The interior temperatures in that case are much higher than for standard adiabatic models. We conclude that the internal structure is directly linked to the formation and evolution history of the planet. These alternative internal structures of Jupiter and Saturn should be considered when interpreting the upcoming Juno and Cassini data. Keywords: Saturn, Planet, Jupiter, Thermalisation, Temperature profile, Homogenization, Cooling, Opacity, Luminosity, Albedo, ...
- Since its emergence, GNI has become the new paradigm in numerical solution of ODEs, while making significant inroads into numerical PDEs. As often, yesterday's revolutionaries became the new establishment. This is an excellent moment to pause and take stock. Have all the major challenges been achieved, all peaks scaled, leaving just a tidying-up operation? Is there still any point to GNI as a separate activity or should it be considered as a victim of its own success and its practitioners depart to fields anew - including new areas of activity that have been fostered or enabled by GNI? Keywords: Hamiltonian, Symplectization, Geometric integrator, Runge-Kutta methods, Manifold, Generating functional, Classification, Discretization, Tangent bundle, Cotangent bundle, ...
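The flavour of geometric numerical integration can be conveyed by a standard textbook comparison (our illustration, not drawn from the paper): for the harmonic oscillator, symplectic Euler keeps the energy bounded for all time, while classical explicit Euler spirals outward:

```python
def explicit_euler(q, p, h, steps):
    # classical Euler for q' = p, p' = -q: energy grows without bound,
    # since each step multiplies q^2 + p^2 by exactly (1 + h^2)
    for _ in range(steps):
        q, p = q + h * p, p - h * q
    return q, p

def symplectic_euler(q, p, h, steps):
    # update p first, then advance q with the NEW p:
    # a symplectic (geometric) integrator with bounded energy error
    for _ in range(steps):
        p = p - h * q
        q = q + h * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

h, steps = 0.01, 10_000             # integrate to t = 100
e_exp = energy(*explicit_euler(1.0, 0.0, h, steps))
e_sym = energy(*symplectic_euler(1.0, 0.0, h, steps))
print(e_exp, e_sym)   # explicit Euler energy has grown; symplectic stays near 0.5
```

The symplectic variant costs the same per step; it differs only in the order of the updates, which is exactly the kind of structure-preservation GNI studies.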
- Constrained mechanical multibody systems arise in many important applications like robotics, vehicle and machinery dynamics and the biomechanics of human locomotion. These systems are described by the Euler-Lagrange equations, which are index-three differential-algebraic equations (DAEs) and hence difficult to treat numerically. The purpose of this paper is to propose a novel technique to solve the Euler-Lagrange equations efficiently. This technique applies the Adomian decomposition method (ADM) directly to these equations. The great advantage of our technique is that it neither applies complex transformations to the equations nor uses index reductions to obtain the solution. Furthermore, it requires solving only linear algebraic systems with a constant nonsingular coefficient matrix at each iteration. The technique developed leads to a simple general algorithm that can be programmed in Maple or Mathematica to simulate real application problems. To illustrate the effectiveness of the proposed technique and its advantages, we apply it to solve an example of the Euler-Lagrange equations that describes a two-link planar robotic system. Keywords: Adomian decomposition method, Euler-Lagrange equation, Robotics, Wolfram Mathematica, Ordinary differential equations, Rank, Exact solution, Orientation, Error propagation, Initial value problem, ...
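The decomposition idea can be illustrated on a toy scalar ODE (not the authors' DAE algorithm): ADM writes the solution as a series $u = \sum_n u_n$ whose terms are generated by repeated integration. For $u' = u$, $u(0) = 1$, the terms come out as $u_n = t^n/n!$ and the partial sums converge to $e^t$:

```python
def adm_terms(n_terms):
    """Adomian terms for u' = u, u(0) = 1. Each term is a polynomial in t
    stored as a coefficient list; the recursion is u_{n+1}(t) = ∫_0^t u_n ds."""
    terms = [[1.0]]                   # u_0 = the initial condition
    for _ in range(n_terms - 1):
        prev = terms[-1]
        # integrate the polynomial term-by-term from 0 to t
        terms.append([0.0] + [c / (k + 1) for k, c in enumerate(prev)])
    return terms

def partial_sum(terms, t):
    return sum(sum(c * t**k for k, c in enumerate(term)) for term in terms)

approx = partial_sum(adm_terms(15), 1.0)
print(approx)   # the 15-term ADM partial sum reproduces e at t = 1
```

Each iteration only integrates the previous term, which mirrors the paper's selling point: no transformation of the equations, just a fixed linear operation applied repeatedly.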
- A ridge function is a function of several variables that is constant along certain directions in its domain. Using classical dimensional analysis, we show that many physical laws are ridge functions; this fact yields insight into the structure of physical laws and motivates further study into ridge functions and their properties. We also connect dimensional analysis to modern subspace-based techniques for dimension reduction, including active subspaces in deterministic approximation and sufficient dimension reduction in statistical regression. Keywords: Statistics, Dimension reduction, Regression, Turbulence, Turbulent flow, Sufficient dimension reduction, Reynolds number, Finite difference, Rank, Laminar flow, ...
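The observation can be made concrete with a small example of our own choosing (not necessarily one from the paper): the pendulum period $T = 2\pi\sqrt{L/g}$, viewed in log variables, depends only on the combination $\log L - \log g$, so it is constant along the direction $(1, 1)$, i.e. a ridge function:

```python
import math

def log_period(logL, logg):
    """log of the pendulum period T = 2*pi*sqrt(L/g), in log inputs."""
    return math.log(2 * math.pi) + 0.5 * (logL - logg)

# moving along the direction (1, 1) in (log L, log g) leaves the law unchanged
base = log_period(0.3, 2.28)
for t in (-1.0, 0.5, 3.0):
    assert abs(log_period(0.3 + t, 2.28 + t) - base) < 1e-12

print("log T is a ridge function: constant along (1, 1)")
```

Dimensional analysis guarantees this structure in advance: only the dimensionless combination $L/g \cdot T^{-2}$ can enter the law, which is exactly the ridge direction.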
- The topic of these notes could easily be expanded into a full one-semester course. Nevertheless, we shall try to give some flavour, along with the theoretical bases, of spectral and pseudo-spectral methods. The main focus is on Fourier-type discretizations, even if some indications are also given on how to handle non-periodic problems via Chebyshev and Legendre approaches. The applications presented here are diffusion-type problems, in accordance with the topics of the PhD school. Keywords: Ordinary differential equations, Spectral method, Brownian motion, Partial differential equation, Heat equation, Optimization, Exact solution, Flavour, Monte Carlo method, Diffusion coefficient, ...
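The Fourier approach can be sketched in a few lines (a minimal single-mode example, not taken from the notes): for the periodic heat equation $u_t = u_{xx}$, the DFT diagonalizes $\partial_{xx}$, so each mode simply decays as $e^{-k^2 t}$:

```python
import cmath, math

N = 32
x = [2 * math.pi * j / N for j in range(N)]
u0 = [math.sin(3 * xj) for xj in x]            # initial condition: a single Fourier mode

# forward DFT: u_hat[k] = sum_j u0[j] e^{-i k x_j}
u_hat = [sum(u0[j] * cmath.exp(-1j * k * x[j]) for j in range(N)) for k in range(N)]

# evolve each mode exactly: multiply by e^{-k^2 t}, mapping indices to wavenumbers in [-N/2, N/2)
t = 0.1
wav = [k if k <= N // 2 else k - N for k in range(N)]
u_hat_t = [c * math.exp(-wav[k] ** 2 * t) for k, c in enumerate(u_hat)]

# inverse DFT back to physical space
u = [sum(u_hat_t[k] * cmath.exp(1j * k * x[j]) for k in range(N)).real / N
     for j in range(N)]

exact = [math.exp(-9 * t) * math.sin(3 * xj) for xj in x]
err = max(abs(a - b) for a, b in zip(u, exact))
print("max error:", err)    # spectral accuracy: error at round-off level
```

In practice one would use an FFT rather than the direct $O(N^2)$ sums above, but the structure (transform, multiply by a symbol, transform back) is the same.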
- This paper provides an overview on tools from potential theory on the sphere and some applications in geoscience. Keywords: Green's function, Fundamental solution, Boundary value problem, Earth, Completeness, Vorticity, Dirichlet problem, Stream function, Regularization, Magnetization, ...
- Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its ability to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically to satisfy the second law), and whether or not they display Landauer Erasure, has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer erasure, which are provably consistent with the second law of thermodynamics. Keywords: Erasure, Entropy, Second law of thermodynamics, Compressibility, Computer Language, Formal system, Programming Language, Kelvin, Computational science, Statistics, ...
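The Landauer cost mentioned, kT ln 2 per erased bit, is minuscule at laboratory temperatures; a one-line check using the (exact, SI 2019) Boltzmann constant:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact by SI definition)
T = 300.0               # room temperature, K

landauer = k_B * T * math.log(2)    # minimum heat dissipated per erased bit
print(f"{landauer:.3e} J per bit")  # ~2.87e-21 J
```

For comparison, modern CMOS logic dissipates many orders of magnitude more energy per bit operation, which is why Landauer's bound is of foundational rather than engineering significance today.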
- "The von Neumann vicious circle" means that non-von Neumann computer architectures cannot be developed because of the lack of widely available and effective non-von Neumann languages. New languages cannot be created because of lack of conceptual foundations for non-von Neumann architectures. The reason is that programming languages are high-level abstract isomorphic copies of von Neumann computer architectures. This constitutes the current paradigm in Computer Science. The paradigm is equivalent to the predominant view that computations on higher order objects (functionals) can be done only symbolically, i.e. by term rewriting. The paper is a short introduction to the papers arXiv:1501.03043 and arXiv:1510.02787 trying to break the paradigm by introducing a framework that may be seen as a higher order functional HDL (Hardware Description Language).ArchitectureComputational scienceProgramming LanguageIsomorphismFunctional programmingRepresentative functionLanguageObjectCompilersSurveys...
- Running distributed applications in the cloud involves deployment. That is, distribution and configuration of application services and middleware infrastructure. The considerable complexity of these tasks resulted in the emergence of declarative JSON-based domain-specific deployment languages to develop deployment programs. However, existing deployment programs unsafely compose artifacts written in different languages, leading to bugs that are hard to detect before run time. Furthermore, deployment languages do not provide extension points for custom implementations of existing cloud services such as application-specific load balancing policies. To address these shortcomings, we propose CPL (Cloud Platform Language), a statically-typed core language for programming both distributed applications as well as their deployment on a cloud platform. In CPL, application services and deployment programs interact through statically typed, extensible interfaces, and an application can trigger further deployment at run time. We provide a formal semantics of CPL and demonstrate that it enables type-safe, composable and extensible libraries of service combinators, such as load balancing and fault tolerance. Keywords: Concurrence, Scala, Elasticity, Optimization, Market, Software errors, Information flow, Big data, Modularity, Programming Language, ...
- To support growing massive parallelism, the functional components and capabilities of current processors are changing and continue to do so. Today's computers are built upon multiple processing cores and run applications consisting of a large number of threads, making runtime thread management a complex process. Further, each core can support multiple concurrent thread executions. Hence, hardware and software support for threads is increasingly needed to improve peak-performance capacity and overall system throughput, and has therefore been the subject of much research. This paper surveys many of the proposed or currently available solutions for executing, distributing and managing threads both in hardware and software. The nature of current applications is diverse, and not all programming models may be suitable to harness the built-in massive parallelism of multicore processors. Due to the heterogeneity in hardware, the hybrid programming model (which combines the features of the shared and distributed models) has currently become very promising. In this paper, we first give an overview of threads, threading mechanisms and their management issues during execution. Next, we discuss different parallel programming models with regard to their explicit thread support. We also review the programming models with respect to their support for shared memory, distributed memory and heterogeneity. Hardware support at execution time is crucial to the performance of the system, so different types of hardware support for threads also exist or have been proposed, primarily based on widely used programming models. We further discuss software support for threads, mainly to increase deterministic behaviour during runtime. Finally, we conclude the paper by discussing some common issues related to thread management. Keywords: Concurrence, Hybridization, Surveys
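The shared-memory model discussed above can be illustrated with a minimal Python sketch (our example, not a system from the survey): worker threads partition an array, and a lock protects the shared accumulator. In CPython the GIL serializes bytecode execution, so this shows the programming model rather than true parallel speedup:

```python
import threading

data = list(range(1_000_000))
total = 0
lock = threading.Lock()

def worker(chunk):
    # compute a private partial sum, then update shared state under the lock
    partial = sum(chunk)
    global total
    with lock:
        total += partial

n_threads = 4
size = len(data) // n_threads
threads = [threading.Thread(target=worker,
                            args=(data[i * size:(i + 1) * size],))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total == sum(data))   # True: same result as the sequential sum
```

The private-partial-sum-then-lock pattern is the standard way to keep lock contention low; locking inside the inner loop instead would serialize the whole computation.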
- To concisely and effectively demonstrate the capabilities of our program transformation system Loo.py, we examine a transformation path from two real-world Fortran subroutines as found in a weather model to a single high-performance computational kernel suitable for execution on modern GPU hardware. Along the transformation path, we encounter kernel fusion, vectorization, prefetching, parallelization, and algorithmic changes achieved by mechanized conversion between imperative and functional/substitution-based code, among a number of others. We conclude with performance results that demonstrate the effects and support the effectiveness of the applied transformations. Keywords: Multidimensional Array, Degree of freedom, Optimization, High Performance Computing, Python, Array programming, Scheduling, Architecture, Spectral element method, A dwarfs, ...
- Dataflow matrix machines are a powerful generalization of recurrent neural networks. They work with multiple types of linear streams and multiple types of neurons, including higher-order neurons which dynamically update the matrix describing weights and topology of the network in question while the network is running. It seems that the power of dataflow matrix machines is sufficient for them to be a convenient general purpose programming platform. This paper explores a number of useful programming idioms and constructions arising in this context. Keywords: Recurrent neural network, Architecture, Orientation, Vector space, Optimization, Bipartite network, Programming Language, Binary number, LabVIEW, Keyphrase, ...
- Jython is a Java-based Python implementation and the most seamless way to integrate Python and Java. It achieves high efficiency by compiling Python code to Java bytecode and thus letting Java's JIT optimize it - an approach that enables Python code to call Java functions or to subclass Java classes. It enables Python code to leverage Java's multithreading features and utilizes Java's built-in garbage collection (GC). However, it currently does not support CPython's C-API and thus does not support native extensions like NumPy and SciPy. Since most scientific code depends on such extensions, it is not runnable with Jython. Jython Native Interface (JyNI) is a compatibility layer that aims to provide CPython's native C extension API on top of Jython. JyNI is implemented using the Java Native Interface (JNI) and its native part is designed to be binary compatible with existing extension builds [...]. Keywords: Python, Application programming interface, NumPy, Counting, SciPy, Ecosystems, Architecture, Embedding, Regression, Interference, ...
- High-level scripting languages are in many ways polar opposites to GPUs. GPUs are highly parallel, subject to hardware subtleties, and designed for maximum throughput, and they offer a tremendous advance in the performance achievable for a significant number of computational problems. On the other hand, scripting languages such as Python favor ease of use over computational speed and do not generally emphasize parallelism. PyCUDA is a package that attempts to join the two together. This chapter argues that in doing so, a programming environment is created that is greater than just the sum of its two parts. We would like to note that nearly all of this chapter applies in unmodified form to PyOpenCL, a sister project of PyCUDA, whose goal it is to realize the same concepts as PyCUDA for OpenCL. Keywords: Multidimensional Array, Keyphrase, Arithmetic, Hybridization, Python, MATLAB, South ecliptic pole, Complex number, SciPy, NumPy, ...
- The inability to predict lasting languages and architectures led us to develop OCCA, a C++ library focused on host-device interaction. Using run-time compilation and macro expansions, the result is a novel single kernel language that expands to multiple threading languages. Currently, OCCA supports device kernel expansions for the OpenMP, OpenCL, and CUDA platforms. Computational results using finite difference, spectral element and discontinuous Galerkin methods show OCCA delivers portable high performance in different architectures and platforms. Keywords: Keyphrase, Finite difference, Multidimensional Array, Stencil, Void, Saturnian satellites, Discontinuous Galerkin method, Numerical methods, Quadrature, Loop group, ...
- The HTML5 WebSocket protocol brings real-time communication in web browsers to a new level. Daily, new products are designed to stay permanently connected to the web; WebSocket is the technology enabling this revolution. WebSockets are supported by all current browsers, but the technology is still new and in constant evolution. WebSockets are slowly replacing older client-server communication technologies. As opposed to Comet-like technologies, WebSockets' remarkable performance is a result of the protocol's full-duplex nature and of the fact that it does not rely on HTTP communication. This paper first studies the WebSocket protocol and different WebSocket server implementations; this first, theoretical part focuses more deeply on heterogeneous implementations and OpenCL. The second part is a benchmark of a new, promising library. The real-time engine used for testing purposes is SocketCluster, which provides a highly scalable WebSocket server that makes use of all available CPU cores on an instance. The scope of this work is reduced to vertical scaling of SocketCluster. Keywords: Communication
- In this paper we introduce IPMACC, a framework for translating OpenACC applications to CUDA or OpenCL. IPMACC is composed of a set of translators that translate OpenACC for C applications to CUDA or OpenCL. The framework uses the system compiler (e.g., nvcc) for generating the final accelerator binary. The framework can be used for extending the OpenACC API, executing OpenACC applications, or obtaining CUDA or OpenCL code which is equivalent to OpenACC code. We verify the correctness of our framework on several benchmarks from the Rodinia Benchmark Suite and the CUDA SDK. We also compare the performance of the CUDA versions of the benchmarks to the OpenACC versions compiled by our framework. By comparing the CUDA and OpenACC versions, we discuss the limitations of OpenACC in achieving performance near that of highly-optimized CUDA code. Keywords: Multidimensional Array, Concurrence, Binary star, Image Processing, Operator system, Partial differential equation, Euclidean distance, Backpropagation, Machine learning, Neural network, ...
- Designing hardware is a time-consuming and complex process. The realization of both embedded and high-performance applications can benefit from a design process at a higher level of abstraction. This helps to reduce development time and allows the hardware design to be iteratively tested and optimized during development, as is common in software development. We present our tool, OCLAcc, which allows the generation of entire FPGA-based hardware accelerators from OpenCL, and discuss the major novelties of OpenCL 2.0 and how they can be realized in hardware using OCLAcc. Keywords: Optimization
- Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software using these hardware accelerators introduces additional challenges for the developer, such as exposing additional parallelism, dealing with different hardware designs, and using multiple development frameworks in order to use devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated in OPAL (Object-oriented Parallel Accelerator Library) to speed up a parallel FFT-based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modeling. DKS was also used together with Minuit2 for parameter fitting, where $\chi^2$ and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS and first results, together with plans for the future, will be shown in this paper. Keywords: Fast Fourier transform, Opacity, Scheduling, Monte Carlo method, Architecture, Positron, Muon, Rutherford scattering, South ecliptic pole, High Performance Computing, ...
- Commodity video-gaming hardware (consoles, graphics cards, tablets, etc.) performance has been advancing at a rapid pace owing to strong consumer demand and stiff market competition. Gaming hardware devices are currently amongst the most powerful and cost-effective computational technologies available in quantity. In this article, we evaluate a sample of current generation video-gaming hardware devices for scientific computing and compare their performance with specialized supercomputing general purpose graphics processing units (GPGPUs). We use the OpenCL SHOC benchmark suite, which is a measure of the performance of compute hardware on various different scientific application kernels, and also a popular public distributed computing application, Einstein@Home in the field of gravitational physics for the purposes of this evaluation. Keywords: High Performance Computing, Market, Field, Units, ...
- OpenCL, along with CUDA, is one of the main tools used to program GPGPUs. However, it allows running the same code on multi-core CPUs too, making it a rival for the long-established OpenMP. In this paper we compare OpenCL and OpenMP when developing and running compute-heavy code on a CPU. Both ease of programming and performance aspects are considered. Since, unlike a GPU, no memory copy operation is involved, our comparisons measure the code generation quality, as well as thread management efficiency of OpenCL and OpenMP. We evaluate the performance of these development tools under two conditions: a large number of short-running compute-heavy parallel code executions, when more thread management is performed, and a small number of long-running parallel code executions, when less thread management is required. The results show that OpenCL and OpenMP each win in one of the two conditions. We argue that while using OpenMP requires less setup, OpenCL can be a viable substitute for OpenMP from a performance point of view, especially when a high number of thread invocations is required. We also provide a number of potential pitfalls to watch for when moving from OpenMP to OpenCL. Keywords: Measurement, Potential
- We introduce SparkCL, an open source unified programming framework based on Java, OpenCL and the Apache Spark framework. The motivation behind this work is to bring unconventional compute cores such as FPGAs/GPUs/APUs/DSPs and future core types into mainstream programming use. The framework allows equal treatment of different computing devices under the Spark framework and introduces the ability to offload computations to acceleration devices. The new framework is seamlessly integrated into the standard Spark framework via a Java-OpenCL device programming layer which is based on Aparapi and a Spark programming layer that includes new kernel function types and modified Spark transformations and actions. The framework allows a single code base to target any type of compute core that supports OpenCL and easy integration of new core types into a Spark cluster. Keywords: Acceleron, Transformations, Action
- Designing programming environments for physical simulation is challenging because simulations rely on diverse algorithms and geometric domains. These challenges are compounded when we try to run efficiently on heterogeneous parallel architectures. We present Ebb, a domain-specific language (DSL) for simulation, that runs efficiently on both CPUs and GPUs. Unlike previous DSLs, Ebb uses a three-layer architecture to separate (1) simulation code, (2) definition of data structures for geometric domains, and (3) runtimes supporting parallel architectures. Different geometric domains are implemented as libraries that use a common, unified, relational data model. By structuring the simulation framework in this way, programmers implementing simulations can focus on the physics and algorithms for each simulation without worrying about their implementation on parallel computers. Because the geometric domain libraries are all implemented using a common runtime based on relations, new geometric domains can be added as needed, without specifying the details of memory management, mapping to different parallel architectures, or having to expand the runtime's interface. We evaluate Ebb by comparing it to several widely used simulations, demonstrating comparable performance to hand-written GPU code where available, and surpassing existing CPU performance optimizations by up to 9$\times$ when no GPU code exists. Keywords: Architecture, Vega, Optimization, Regularization, Data structures, Elasticity, Multidimensional Array, Arithmetic, Partial differential equation, Relational model, ...
- In this essay we study various notions of projective space (and other schemes) over $\mathbb{F}_{1^\ell}$, with $\mathbb{F}_1$ denoting the field with one element. Our leading motivation is the "Hidden Points Principle," which shows a huge deviation between the set of rational points as closed points defined over $\mathbb{F}_{1^\ell}$, and the set of rational points defined as morphisms $\texttt{Spec}(\mathbb{F}_{1^\ell}) \mapsto \mathcal{X}$. We also introduce, in the same vein as Kurokawa [13], schemes of $\mathbb{F}_{1^\ell}$-type, and consider their zeta functions. Keywords: Morphism, Monoid, Counting, Graph, Automorphism, Zeta function, Field-with-one-element, Polynomial ring, Vector space, Arithmetic, ...
- We provide a compendium of results at the level of matrix elements for a systematic study of dark matter scattering and annihilation. We identify interactions that yield spin-dependent and spin-independent scattering and specify whether the interactions are velocity- and/or momentum-suppressed. We identify the interactions that lead to s-wave or p-wave annihilation, and those that are chirality-suppressed. We also list the interaction structures that can interfere in scattering and annihilation processes. Using these results, we point out situations in which deviations from the standard lore are obtained. Keywords: Dark matter, Interference, Standard Model, Spin independent, Scattering matrix, Dark matter interaction, Kinematics, Pseudovector, Form factor, Dark matter annihilation, ...
- We present deep near-infrared (NIR) images of a sample of 19 intermediate-redshift ($0.3<z<1.0$) radio-loud active galactic nuclei (AGN) with powerful relativistic jets ($L_{1.4GHz} >10^{27}$ WHz$^{-1}$), previously classified as flat-spectrum radio quasars. We also compile host galaxy and nuclear magnitudes for blazars from the literature. The combined sample (this work and compilation) contains 100 radio-loud AGN with host galaxy detections and a broad range of radio luminosities $L_{1.4GHz} \sim 10^{23.7} - 10^{28.3}$~WHz$^{-1}$, allowing us to divide our sample into high-luminosity blazars (HLBs) and low-luminosity blazars (LLBs). The host galaxies of our sample are bright and seem to follow the $\mu_{e}$-$R_{eff}$ relation for ellipticals and bulges. The two populations of blazars show different behaviours in the $M_{\rm nuc}$-$M_{\rm bulge}$ plane, where a statistically significant correlation is observed for HLBs. Although it may be affected by selection effects, this correlation suggests a close coupling between the accretion mode of the central supermassive black hole and its host galaxy, which could be interpreted in terms of AGN feedback. Our findings are consistent with semi-analytical models where low-luminosity AGN emit the bulk of their energy in the form of radio jets, producing a strong feedback mechanism, and high-luminosity AGN are affected by galaxy mergers and interactions, which provide a common supply of cold gas to feed both nuclear activity and star formation episodes. Keywords: Host galaxy, Active Galactic Nuclei, Luminosity, Blazar, BL Lacertae, Flat spectrum radio quasar, Point spread function, Relativistic jet, Black hole, Milky Way, ...
- We study how outflows of gas launched from a central galaxy undergoing repeated starbursts propagate through the circumgalactic medium (CGM), using the simulation code RAMSES. We assume that the outflow from the disk can be modelled as a rapidly moving bubble of hot gas at $\mathrm{\sim1\;kpc}$ above the disk, then ask what happens as it moves out further into the halo around the galaxy on $\mathrm{\sim 100\;kpc}$ scales. To do this we run 60 two-dimensional simulations scanning over parameters of the outflow. Each of these is repeated with and without radiative cooling, assuming a primordial gas composition to give a lower bound on the importance of cooling. In a large fraction of the radiative-cooling cases we are able to form rapidly outflowing cool gas from in situ cooling of the flow. We show that the amount of cool gas formed depends strongly on the 'burstiness' of energy injection; sharper, stronger bursts typically lead to a larger fraction of cool gas forming in the outflow. The abundance ratio of ions in the CGM may therefore change in response to the detailed historical pattern of star formation. For instance, outflows generated by star formation with short, intense bursts contain up to 60 per cent of their gas mass at temperatures $<5 \times 10^4\,\mathrm{K}$; for near-continuous star formation the figure is $\lesssim$ 5 per cent. Further study of cosmological simulations, and of idealised simulations with, e.g., metal cooling, magnetic fields and/or thermal conduction, will help to understand the precise signature of bursty outflows on observed ion abundances. Keywords: Cooling, Star formation, Circumgalactic medium, Milky Way, Of stars, Hot gas, Radiative cooling, Abundance, Cooling timescale, Virial radius, ...
- Electron heat conduction is explored with particle-in-cell simulations and analytic modeling in a high $\beta$ system relevant to the intracluster medium of galaxy clusters. Linear wave theory reveals that whistler waves are driven unstable by electron heat flux even when the heat flux is weak. The resonant interaction of electrons with these waves plays a critical role in controlling the impact of the waves on the heat flux. In a 1D model only electrons moving opposite in direction to the heat flux resonate with the waves and electron heat flux is only modestly reduced. In a 2D system transverse whistlers also resonate with electrons propagating in the direction of the heat flux and resonant overlap leads to strong suppression of electron heat flux. The results suggest that electron heat conduction might be strongly suppressed in galaxy clusters. Keywords: Intra-cluster medium, Instability, Cluster of galaxies, Particle-in-cell, Phase space, Thermalisation, Cyclotron, Anisotropy, Two-dimensional system, Active Galactic Nuclei, ...
- We have studied the behaviour of stellar streams in the Aquarius fully cosmological N-body simulations of the formation of Milky Way halos. In particular, we have characterised the streams in angle/frequency spaces derived using an approximate but generally well-fitting spherical potential. We have also run several test-particle simulations to understand and guide our interpretation of the different features we see in the Aquarius streams. Our goal is both to establish which deviations from the expected action-angle behaviour of streams arise from the approximations made to the potential, and to determine to what degree we can use these coordinates to model streams reliably. We have found that many of the Aquarius streams wrap along relatively straight lines in angle space, and also in frequency space. On the other hand, from our controlled simulations we have been able to establish that deviations from spherical symmetry, the use of incorrect potentials and the inclusion of self-gravity cause streams in angle space to still lie along relatively straight lines, but also to show wiggly behaviour whose amplitude increases as the approximation to the true potential becomes worse. In frequency space streams typically become thicker and somewhat distorted. Therefore, our analysis explains most of the features seen in the approximate angle and frequency spaces for the Aquarius streams, with the exception of their somewhat `noisy' and `patchy' morphologies. These are likely due to interactions with the large number of dark matter subhalos present in the cosmological simulations. 
Since the measured angle-frequency misalignments of the Aquarius streams can largely be attributed to using the wrong (spherical) potential, determining the mass growth history of these halos will only be feasible once the true potential has been determined robustly. Keywords: Aquarius simulation, Navarro-Frenk-White profile, Aquarius Stream, Dark matter particle, Mass distribution, N-body simulation, Milky Way, Phase space, Phase mixing, Stellar halo, ...
- We propose a non-perturbative regularization of four dimensional chiral gauge theories. In our formulation, we consider a Dirac fermion in six dimensions with two different mass terms having domain-wall profiles in the fifth and the sixth directions, respectively. A Weyl fermion appears as a localized mode at the junction of the two different domain-walls. One domain-wall naturally exhibits the Stora-Zumino chain of the anomaly descent equations, starting from the axial U(1) anomaly in six dimensions to the gauge anomaly in four dimensions. The other domain-wall mediates a similar inflow of the global anomalies. The anomaly free condition is equivalent to requiring that the axial U(1) anomaly and the parity anomaly are canceled among the six-dimensional Dirac fermions. Since our formulation is a massive vector-like theory, a non-perturbative regularization is possible on a lattice. Putting the gauge field at the four-dimensional junction and extending it to the bulk using the Yang-Mills gradient flow, as recently proposed by Grabowska and Kaplan, we define the four-dimensional path integral of the target chiral gauge theory, which is free from the sign problem. Keywords: Domain wall, Chiral gauge theory, Anomaly descent equation, Dirac fermion, Regularization, Dimensional regularization, Abelian anomaly, Path integral, Weyl fermion, Gauge field, ...
- The study of high energy collisions between heavy nuclei is a field unto itself, distinct from nuclear and particle physics. A defining aspect of heavy ion physics is the importance of a bulk, self-interacting system with a rich space-time substructure. I focus on the issue of timescales in heavy ion collisions, starting with proof from low-energy collisions that femtoscopy can, indeed, measure very long timescales. I then discuss the relativistic case, where detailed measurements over three orders of magnitude in energy reveal a timescale increase that might be due to a first-order phase transition. I discuss also consistency in evolution timescales as determined from traditional longitudinal sizes and a novel analysis using shape information. Keywords: Two-point correlation function, Heavy ion collision, Relativistic Heavy Ion Collider, Anisotropy, Quantum chromodynamics, Intensity, Interferometry, Super Proton Synchrotron, Pion, Large Hadron Collider, ...
- Recently, Bollob\'as, Janson and Riordan introduced a family of random graph models producing inhomogeneous graphs with $n$ vertices and $\Theta(n)$ edges whose distribution is characterized by a kernel, i.e., a symmetric measurable function $\kappa:[0,1]^2 \to [0,\infty)$. To understand these models, we should like to know when different kernels $\kappa$ give rise to `similar' graphs, and, given a real-world network, how `similar' is it to a typical graph $G(n,\kappa)$ derived from a given kernel $\kappa$. The analogous questions for dense graphs, with $\Theta(n^2)$ edges, are answered by recent results of Borgs, Chayes, Lov\'asz, S\'os, Szegedy and Vesztergombi, who showed that several natural metrics on graphs are equivalent, and moreover that any sequence of graphs converges in each metric to a graphon, i.e., a kernel taking values in $[0,1]$. Possible generalizations of these results to graphs with $o(n^2)$ but $\omega(n)$ edges are discussed in a companion paper [arXiv:0708.1919]; here we focus only on graphs with $\Theta(n)$ edges, which turn out to be much harder to handle. Many new phenomena occur, and there are a host of plausible metrics to consider; many of these metrics suggest new random graph models, and vice versa. Keywords: Graph, Random graph, Isomorphism, Branching process, Involute, Poisson distribution, Giant component, Poisson process, Degree distribution, Bipartite network, ...
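To make the sparse model concrete, here is a minimal sketch of sampling a graph $G(n,\kappa)$ from a bounded kernel. The function names are mine; the edge probability $\min(\kappa(x_i,x_j)/n,\,1)$ with i.i.d. uniform vertex types is one standard convention for this family, giving $\Theta(n)$ expected edges.

```python
import random

def sample_sparse_graph(n, kappa, seed=0):
    """Sample an inhomogeneous sparse random graph G(n, kappa).

    Each vertex i receives a type x_i uniform on [0, 1]; the edge {i, j}
    is present independently with probability min(kappa(x_i, x_j)/n, 1),
    so a bounded kernel yields Theta(n) expected edges.
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    edges = [(i, j)
             for i in range(n)
             for j in range(i + 1, n)
             if rng.random() < min(kappa(x[i], x[j]) / n, 1.0)]
    return x, edges

# Constant kernel kappa = 4 gives an Erdos-Renyi-like graph with
# expected edge count 4 * (n - 1) / 2.
types, edges = sample_sparse_graph(500, lambda u, v: 4.0, seed=1)
```

Non-constant kernels (e.g. $\kappa(u,v) = c/\sqrt{uv}$, suitably truncated) produce the heavy-tailed degree distributions mentioned among the keywords.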
- There is still much debate surrounding how the most massive, central galaxies in the local universe have assembled their stellar mass, especially the relative roles of in-situ growth versus later accretion via mergers. In this paper, we set firmer constraints on the evolutionary pathways of the most massive central galaxies by making use of empirical estimates on their abundances and stellar ages. The most recent abundance matching and direct measurements strongly favour that a substantial fraction of massive galaxies with Mstar>3x10^11 Msun reside at the centre of clusters with mass Mhalo>3x10^13 Msun. Spectral analysis supports ages >10 Gyrs, corresponding to a formation redshift z_form >2. We combine these two pieces of observationally-based evidence with the mass accretion history of their host dark matter haloes. We find that in these massive haloes, the stellar mass locked up in the central galaxy is comparable to, if not greater than, the total baryonic mass at z_form. These findings indicate that either only a relatively minor fraction of their present-day stellar mass was formed in-situ at z_form, or that these massive, central galaxies form in the extreme scenario where almost all of the baryons in the progenitor halo are converted into stars. Interestingly, the latter scenario would not allow for any substantial size growth since the galaxy's formation epoch either via mergers or expansion. We show our results hold irrespective of systematic uncertainties in stellar mass, abundances, galaxy merger rates, stellar initial mass function, star formation rate and dark matter accretion histories. Keywords: Stellar mass, Massive galaxies, Star formation, Milky Way, Early-type galaxy, Dark matter halo, Star formation rate, Abundance, Halo abundance matching, Star, ...
- Gravitational lensing has long been considered a valuable tool to determine the total mass of galaxy clusters. The shear profile as inferred from the statistics of the ellipticity of background galaxies makes it possible to probe the cluster's intermediate and outer regions, thus determining the virial mass estimate. However, the mass sheet degeneracy and the need for a large number of background galaxies motivate the search for alternative tracers which can break the degeneracy among model parameters and hence improve the accuracy of the mass estimate. Lensing flexion, i.e. the third derivative of the lensing potential, has been suggested as a good answer to the above quest since it probes the details of the mass profile. We investigate here whether this is indeed the case by jointly using weak lensing, magnification and flexion. We use a Fisher matrix analysis to forecast the relative improvement in the mass accuracy for different assumptions on the shear and flexion signal-to-noise (S/N) ratio, also varying the cluster mass, redshift, and ellipticity. It turns out that the error on the cluster mass may be reduced by up to a factor of 2 for reasonable values of the flexion S/N ratio. As a general result, we find that the improvement in mass accuracy is larger for more flattened haloes, but extracting general trends is difficult because of the many parameters at play. We nevertheless find that flexion is as efficient as magnification at increasing the accuracy of both the mass and concentration determination. Keywords: Signal to noise ratio, Fisher information matrix, Virial cluster mass, Ellipticity, Reduced shear, Background galaxy, Weak lensing, Mass profile, Virial mass, Likelihood function, ...
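The mechanics of the joint forecast can be sketched with a toy two-parameter Fisher analysis: for independent probes the Fisher matrices add, and the marginalised 1-sigma errors are the square roots of the diagonal of the inverse. The matrices below are invented for illustration and are not taken from the paper.

```python
import math

# Toy two-parameter Fisher forecast (e.g. cluster mass and concentration).
# These Fisher matrices are made up for illustration only.
F_SHEAR = [[40.0, 8.0], [8.0, 10.0]]      # shear-only information
F_FLEXION = [[25.0, -3.0], [-3.0, 30.0]]  # hypothetical flexion information

def add(A, B):
    """Element-wise sum of two 2x2 matrices."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def marginalised_errors(F):
    """1-sigma marginalised errors: sqrt of the diagonal of F^{-1} (2x2)."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det))

err_shear = marginalised_errors(F_SHEAR)
err_joint = marginalised_errors(add(F_SHEAR, F_FLEXION))  # independent probes add
improvement = tuple(a / b for a, b in zip(err_shear, err_joint))  # > 1: tighter
```

Because the off-diagonal terms of the two matrices differ in sign, the joint inverse shrinks faster than either probe alone, which is the degeneracy-breaking effect the abstract describes.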
- Directional detection of Galactic Dark Matter is a promising search strategy for discriminating genuine WIMP events from background ones. Technical progress on gaseous detectors and read-outs has permitted the design and construction of competitive experiments. However, to take full advantage of this powerful detection method, one needs to be able to extract information from an observed recoil map to identify a WIMP signal. We present a comprehensive formalism, using a map-based likelihood method that recovers the main incoming direction of the signal and its significance, thus establishing its galactic origin. This is a blind analysis intended to be used on any directional data. Constraints are deduced in the ($\sigma_n, m_\chi$) plane and systematic studies are presented in order to show that, using this analysis tool, unambiguous dark matter detection can be achieved over a large range of exposures and background levels. Keywords: Weakly interacting massive particle, Dark matter, Solar motion, Directional detection of galactic dark matter, Form factor, Sun, Constellations, Statistics, Galactic Center, Exclusion limit, ...
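A stripped-down version of a map-based likelihood can be sketched as follows: model the recoil sky as a mixture of an isotropic background and a signal peaked around a candidate direction (here a von Mises-Fisher bump stands in for the expected WIMP recoil map), then maximise the likelihood over candidate directions. All names and parameter values are illustrative, not those of the paper's formalism.

```python
import math
import random

def vmf_pdf(cos_a, kappa):
    # von Mises-Fisher density on the sphere, a stand-in for the
    # expected WIMP recoil map; cos_a is recoil-vs-axis cosine
    return kappa * math.exp(kappa * cos_a) / (4 * math.pi * math.sinh(kappa))

def log_likelihood(events, direction, signal_frac, kappa=2.0):
    iso = 1.0 / (4 * math.pi)  # isotropic background density
    ll = 0.0
    for e in events:
        cos_a = sum(a * b for a, b in zip(e, direction))
        ll += math.log(signal_frac * vmf_pdf(cos_a, kappa)
                       + (1 - signal_frac) * iso)
    return ll

def sample_events(n, signal_frac, kappa=2.0, seed=3):
    """Simulate unit recoil vectors: signal peaked at +z, rest isotropic."""
    rng = random.Random(seed)
    events = []
    for _ in range(n):
        if rng.random() < signal_frac:
            u = rng.random()  # inverse-CDF sample of cos(angle) for vMF
            w = math.log(u * math.exp(kappa) + (1 - u) * math.exp(-kappa)) / kappa
        else:
            w = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2 * math.pi)
        s = math.sqrt(max(0.0, 1.0 - w * w))
        events.append((s * math.cos(phi), s * math.sin(phi), w))
    return events

# Recover the main incoming direction from a coarse grid of candidates.
candidates = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
events = sample_events(200, signal_frac=0.7)
best = max(candidates, key=lambda d: log_likelihood(events, d, signal_frac=0.7))
```

In the real analysis the candidate grid covers the whole sphere and the signal fraction is fitted as well; the significance quoted in the abstract comes from comparing the maximised likelihood against the isotropic-only hypothesis.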
- Directional detection of galactic Dark Matter offers a unique opportunity to identify Weakly Interacting Massive Particle (WIMP) events as such. Depending on the unknown WIMP-nucleon cross section, directional detection may be used to: exclude Dark Matter, discover galactic Dark Matter with a high significance, or constrain WIMP and halo properties. We review the discovery reach of Dark Matter directional detection. Keywords: Dark matter, Weakly interacting massive particle, Velocity dispersion, Directional detection of galactic dark matter, Dark matter halo, Dark Matter Density Profile, Anisotropy, Spin independent, Halo model, Neutralino, ...
- We present the first detailed simulations of the head-tail effect relevant to directional Dark Matter searches. Investigations of the location of the majority of the ionization charge as being either at the beginning half (tail) or at the end half (head) of the nuclear recoil track were performed for carbon and sulphur recoils in 40 Torr negative ion carbon disulfide and for fluorine recoils in 100 Torr carbon tetrafluoride. The SRIM simulation program was used, together with a purpose-written Monte Carlo generator, to model production of ionizing pairs, diffusion and basic readout geometries relevant to potential real detector scenarios, such as under development for the DRIFT experiment. The results clearly indicate the existence of a head-tail track asymmetry but with a magnitude critically influenced by two competing factors: the nature of the stopping power and details of the range straggling. The former tends to result in the tail being greater than the head and the latter the reverse. Keywords: Ionization, Dark matter, Ion track, Time projection chamber, Monte Carlo method, Weakly interacting massive particle, Ionization energy, Alpha particle, Disulfide, Binary star, ...
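The competition between stopping power and straggling can be illustrated with a toy Monte Carlo (not the SRIM-based simulation of the paper): deposit ionization pairs along a track with a falling profile, smear their positions, and measure which half of the track holds more charge. The profile and parameter values below are invented for illustration.

```python
import random

def mean_tail_fraction(n_tracks=500, n_pairs=200, straggle=0.1, seed=0):
    """Toy Monte Carlo for the head-tail effect (illustrative only).

    Ionization pairs are placed along a unit-length recoil track with a
    linearly falling deposition profile (more charge near the start, the
    'tail'), then Gaussian-smeared to mimic straggling and diffusion.
    Returns the mean fraction of charge in the tail half of the track;
    values above 0.5 indicate a measurable head-tail asymmetry.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tracks):
        in_tail = 0
        for _ in range(n_pairs):
            # density proportional to (1 - s) on [0, 1]: inverse-CDF sample
            s = 1.0 - (1.0 - rng.random()) ** 0.5
            s += rng.gauss(0.0, straggle)  # straggling smears the profile
            if s < 0.5:
                in_tail += 1
        total += in_tail / n_pairs
    return total / n_tracks

sharp = mean_tail_fraction(straggle=0.02)   # little straggling
smeared = mean_tail_fraction(straggle=0.4)  # heavy straggling
```

Increasing the straggle parameter pushes the tail fraction back toward 0.5, mirroring the abstract's point that stopping power builds the asymmetry while range straggling erodes it.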
- A low pressure time projection chamber for the detection of WIMPs is discussed. Discrimination against Compton electron background in such a device should be very good, and directional information about the recoil atoms would be obtainable. If a full 3-D reconstruction of the recoil tracks can be achieved, Monte Carlo studies indicate that a WIMP signal could be identified with high confidence from as few as 30 detected WIMP-nucleus scattering events. Keywords: Weakly interacting massive particle, Ionization, Time projection chamber, Monte Carlo method, Statistics, Annual modulation of dark matter signal, Magnet, Calibration, Galactic rotation, Earth orbit, ...
- The direction dependence of the event rate in WIMP direct detection experiments provides a powerful tool for distinguishing WIMP events from potential backgrounds. We use a variety of (non-parametric) statistical tests to examine the number of events required to distinguish a WIMP signal from an isotropic background when the uncertainty in the reconstruction of the nuclear recoil direction is included in the calculation of the expected signal. We consider a range of models for the Milky Way halo, and also study rotational symmetry tests aimed at detecting non-sphericity/anisotropy of the Milky Way halo. Finally we examine ways of detecting tidal streams of WIMPs. We find that if the senses of the recoils are known then of order ten events will be sufficient to distinguish a WIMP signal from an isotropic background for all of the halo models considered, with the uncertainties in reconstructing the recoil direction only mildly increasing the required number of events. If the senses of the recoils are not known the number of events required is an order of magnitude larger, with a large variation between halo models, and the recoil resolution is now an important factor. The rotational symmetry tests require of order a thousand events to distinguish between spherical and significantly triaxial halos; however, a deviation of the peak recoil direction from the direction of the solar motion due to a tidal stream could be detected with of order a hundred events, regardless of whether the sense of the recoils is known. Keywords: Weakly interacting massive particle, Halo model, Statistics, Anisotropy, Tidal stream, Sun, Milky Way halo, Null hypothesis, Earth, Solar neighborhood, ...
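The logic of such a test can be sketched with one simple (not one of the paper's) statistic: the mean cosine of the recoil angles about a fixed reference axis, with a Monte Carlo estimate of its distribution under the isotropic null hypothesis. The signal model below (density proportional to exp(k cos)) and all parameter values are illustrative assumptions.

```python
import math
import random

def mean_cosine(cosines):
    """Test statistic: mean of cos(theta) w.r.t. a fixed reference axis."""
    return sum(cosines) / len(cosines)

def p_value_isotropic(stat, n, n_null=2000, seed=0):
    """Monte Carlo p-value under isotropy: for an isotropic 3D
    distribution, cos(theta) about any fixed axis is uniform on [-1, 1]."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_null):
        null_stat = sum(rng.uniform(-1.0, 1.0) for _ in range(n)) / n
        if null_stat >= stat:
            exceed += 1
    return exceed / n_null

# Toy 'WIMP-like' sample: cos(theta) drawn from a distribution peaked
# toward the reference axis, density proportional to exp(k * cos).
rng = random.Random(1)
k, n = 1.0, 100
cosines = []
for _ in range(n):
    u = rng.random()  # inverse-CDF sample
    cosines.append(math.log(u * math.exp(k) + (1 - u) * math.exp(-k)) / k)

stat = mean_cosine(cosines)
p = p_value_isotropic(stat, n)  # small p rejects the isotropic null
```

Repeating this over many simulated experiments at varying n is how one estimates the "number of events required" quoted in the abstract; non-parametric alternatives (e.g. rank-based or Kuiper-type tests) follow the same simulate-the-null pattern.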
- Directional detection of non-baryonic Dark Matter is a promising search strategy for discriminating WIMP events from neutrons, the ultimate background for dark matter direct detection. This strategy requires both a precise measurement of the energy down to a few keV and 3D reconstruction of tracks down to a few mm. The MIMAC (MIcro-tpc MAtrix of Chambers) collaboration has developed in recent years an original prototype detector based on the direct coupling of a large pixelized Micromegas with specially developed fast self-triggered electronics, showing the feasibility of a new generation of directional detectors. The first bi-chamber prototype was installed at the Modane underground laboratory in June 2012. The first underground background events, the gain stability and the calibration are shown. The first spectrum of nuclear recoils, showing 3D tracks coming from the radon progeny, is presented. Keywords: Dark matter, Ionization, Weakly interacting massive particle, Calibration, Ionization energy, Non-baryonic dark matter, Quenching, Galactic halo, Solar system, Directional detection of galactic dark matter, ...
- Directional detection of galactic Dark Matter is a promising search strategy for discriminating genuine WIMP events from background ones. However, to take full advantage of this powerful detection method, one needs to be able to extract information from an observed recoil map to identify a WIMP signal. We present a comprehensive formalism, using a map-based likelihood method that recovers the main incoming direction of the signal, thus proving its galactic origin, and the corresponding significance. Constraints are then deduced in the ($\sigma_n, m_\chi$) plane. Keywords: Weakly interacting massive particle, Directional detection of galactic dark matter, Constellations, Statistics, Solar motion, Dark matter halo, Anisotropy, Galactic halo, Solar system, Halo model, ...
- Cosmological observations indicate that most of the matter in the Universe is Dark Matter. Dark Matter in the form of Weakly Interacting Massive Particles (WIMPs) can be detected directly, via its elastic scattering off target nuclei. Most current direct detection experiments only measure the energy of the recoiling nuclei. However, directional detection experiments are sensitive to the direction of the nuclear recoil as well. Due to the Sun's motion with respect to the Galactic rest frame, the directional recoil rate has a dipole feature, peaking around the direction of the Solar motion. This provides a powerful tool for demonstrating the Galactic origin of nuclear recoils and hence unambiguously detecting Dark Matter. Furthermore, the directional recoil distribution depends on the WIMP mass, scattering cross section and local velocity distribution. Therefore, with a large number of recoil events it will be possible to study the physics of Dark Matter in terms of particle and astrophysical properties. We review the potential of directional detectors for detecting and characterizing WIMPs. Keywords: Weakly interacting massive particle, Dark matter, Sun, Earth, Neutrino, Low mass WIMP, Time projection chamber, Laboratory dark matter search, Statistics, Annual modulation of dark matter signal, ...
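The dipole feature can be made quantitative with the standard Radon-transform result for a Maxwellian (Standard Halo Model-like) velocity distribution; the sketch below uses round illustrative velocity values, not fitted parameters.

```python
import math

def directional_rate(cos_gamma, v_min=150.0, v_lag=230.0, sigma_v=156.0):
    """Unnormalised directional recoil rate dR/dOmega for a Maxwellian
    halo, via its Radon transform:

        dR/dOmega ~ exp(-(v_min - v_lag * cos_gamma)^2 / (2 sigma_v^2))

    cos_gamma is the cosine of the angle between the recoil direction
    and the mean WIMP arrival direction (anti-parallel to the solar
    motion); velocities are in km/s and the defaults are round
    illustrative numbers.
    """
    return math.exp(-(v_min - v_lag * cos_gamma) ** 2 / (2.0 * sigma_v ** 2))

forward = directional_rate(1.0)    # recoil along the WIMP wind
backward = directional_rate(-1.0)  # recoil back toward the wind's origin
dipole_ratio = forward / backward  # the dipole contrast described above
```

With these illustrative numbers the forward/backward contrast is roughly an order of magnitude, which is why comparatively few well-reconstructed recoils suffice to establish the Galactic origin of a signal.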
- The MIMAC experiment is a $\mu$-TPC matrix project for directional dark matter search. Directional detection is a strategy based on the measurement of the WIMP flux anisotropy due to the solar system motion with respect to the dark matter halo. The main purpose of the MIMAC project is the measurement of the energy and the 3D direction of nuclear recoils produced by elastic scattering of WIMPs. Since June 2012 a bi-chamber prototype has been operating at the Modane underground laboratory. In this paper, we report the first ionization energy and 3D track observations of nuclear recoils produced by the radon progeny. This measurement shows the capability of the MIMAC detector and opens the possibility to explore the low energy recoil directionality signature. Keywords: Ionization energy, Weakly interacting massive particle, Ionization, Dark matter, Calibration, Solar system, Dark matter halo, Elastic scattering, Anisotropy, Isotope, ...
- Recent N-body simulations favor the presence of a co-rotating Dark Disk that might contribute significantly (10%-50%) to the local Dark Matter density. Such substructure could have a dramatic effect on directional detection. Indeed, in the case of a null lag velocity, one expects an isotropic WIMP velocity distribution arising from the Dark Disk contribution, which might weaken the strong angular signature expected in directional detection. For a wide range of Dark Disk parameters, we evaluate in this Letter the effect of such a dark component on the discovery potential of upcoming directional detectors. As a conclusion of our study, using only the angular distribution of nuclear recoils, we show that Dark Disk models as suggested by recent N-body simulations will not significantly affect the Dark Matter reach of directional detection, even in extreme configurations. Keywords: Dark matter disk, Weakly interacting massive particle, Dark matter, Velocity dispersion, N-body simulation, Statistics, Milky Way, Anisotropy, Local dark matter density, Standard Halo model, ...