Recently bookmarked papers

with concepts:
  • In this paper, we propose Proq, a runtime assertion scheme for testing and debugging quantum programs on a quantum computer. The predicates in Proq are represented by projections (or equivalently, closed subspaces of the state space), following Birkhoff-von Neumann quantum logic. The satisfaction of a projection by a quantum state can be directly checked upon a small number of projective measurements rather than a large number of repeated executions. On the theory side, we rigorously prove that checking projection-based assertions can help locate bugs or statistically assure that the semantic function of the tested program is close to what we expect, for both exact and approximate quantum programs. On the practice side, we consider hardware constraints and introduce several techniques to transform the assertions, making them directly executable on the measurement-restricted quantum computers. We also propose to achieve simplified assertion implementation using local projection technique with soundness guaranteed. We compare Proq with existing quantum program assertions and demonstrate the effectiveness and efficiency of Proq by its applications to assert two ingenious quantum algorithms, the Harrow-Hassidim-Lloyd algorithm and Shor's algorithm.
    Programming, Qubit, Quantum programming, Unitary transformation, Projection operator, Confidence interval, Quantum algorithms, Shor's algorithm, Quantum logic, Quantum computation...
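A minimal numerical sketch of the projection check described in the Proq paper above (our own illustration, not Proq's implementation): a state satisfies a projection P exactly when P|psi> = |psi>, i.e. when a projective measurement of P accepts with probability 1.

```python
import numpy as np

def assertion_pass_prob(P: np.ndarray, psi: np.ndarray) -> float:
    """Probability that a projective measurement of P accepts |psi>.

    The assertion "psi satisfies P" holds exactly when this equals 1,
    i.e. when P|psi> = |psi>.
    """
    return float(np.real(np.vdot(psi, P @ psi)))

# Toy example (ours): assert that a 2-qubit state lies in span{|00>, |11>}.
P = np.zeros((4, 4))
P[0, 0] = P[3, 3] = 1.0                      # projection onto span{|00>, |11>}
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>) / sqrt(2)
print(assertion_pass_prob(P, bell))          # 1.0 -> assertion satisfied
```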
  • We review some of the recent efforts in devising and engineering bosonic qubits for superconducting devices, with emphasis on the Gottesman-Kitaev-Preskill (GKP) qubit. We present some new results on decoding repeated GKP error correction using finitely-squeezed GKP ancilla qubits, exhibiting differences with previously studied stochastic error models. We discuss circuit-QED ways to realize CZ gates between GKP qubits and we discuss different scenarios for using GKP and regular qubits as building blocks in a scalable superconducting surface code architecture.
    Qubit, Ancilla, Hamiltonian, Architecture, Quantum error correction, Engineering, Wavefunction, Phase space, Rotating wave approximation, Quantum electrodynamics...
  • The polar decomposition for a matrix $A$ is $A=UB$, where $B$ is a positive Hermitian matrix and $U$ is unitary (or, if $A$ is not square, an isometry). This paper shows that the ability to apply a Hamiltonian $\pmatrix{ 0 & A^\dagger \cr A & 0 \cr} $ translates into the ability to perform the transformations $e^{-iBt}$ and $U$ in a deterministic fashion. We show how to use the quantum polar decomposition algorithm to solve the quantum Procrustes problem, to perform pretty good measurements, to find the positive Hamiltonian closest to any Hamiltonian, and to perform a Hamiltonian version of the quantum singular value transformation.
    Polar decomposition, Hamiltonian, Singular value, Isometry, Algorithms, Transformations, Measurement...
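A quick classical sanity check of the decomposition the paper exploits (a numpy illustration on our own toy matrix, not the quantum algorithm itself): A = UB with U unitary and B positive Hermitian, and the block Hamiltonian built from A is Hermitian, so $e^{-iHt}$ is a legitimate evolution.

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, B = polar(A)                    # polar decomposition: A = U @ B
print(np.allclose(A, U @ B))       # True

n = A.shape[0]
H = np.block([[np.zeros((n, n)), A.conj().T],
              [A, np.zeros((n, n))]])
print(np.allclose(H, H.conj().T))  # True: H is Hermitian, so e^{-iHt} is unitary
```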
  • We propose a runtime architecture that can be used in the development of a quantum programming language and its programming environment. The proposed runtime architecture enables dynamic interaction between classical and quantum data following the restriction that a quantum computer is available in the cloud as a batch computer, with no interaction with the classical computer during its execution. It does so by leaving the quantum code generation to the runtime and introducing the concept of futures for quantum measurements. When implemented in a quantum programming language, these strategies aim to facilitate the development of quantum applications, especially for beginning programmers and students. Being suitable for the current Noisy Intermediate-Scale Quantum (NISQ) computers, the runtime architecture is also appropriate for simulation and future Fault-Tolerant Quantum Computers.
    Qubit, Programming Language, Architecture, Quantum programming, Quantum circuit, Programming, Quantum measurement, Quantum computation, Superposition, Quantum gates...
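A minimal sketch of the "futures for quantum measurements" idea from the paper above (a hypothetical API of our own, not the paper's implementation): measuring returns a placeholder, the quantum code is assembled at runtime, and values are resolved only when the batch job returns from the cloud backend.

```python
class Future:
    """Placeholder for a measurement result that is not yet available."""
    def __init__(self, runtime, index):
        self.runtime, self.index = runtime, index
    def get(self):                         # blocks: triggers the batch run
        return self.runtime.run()[self.index]

class Runtime:
    """Accumulates quantum instructions; executes them as one batch job."""
    def __init__(self):
        self.circuit, self.results = [], None
    def measure(self, qubit):
        self.circuit.append(("measure", qubit))
        return Future(self, len(self.circuit) - 1)
    def run(self):
        if self.results is None:           # single submission, no mid-run interaction
            self.results = submit_to_cloud(self.circuit)   # hypothetical backend call
        return self.results

def submit_to_cloud(circuit):              # stand-in for the cloud batch executor
    return {i: 0 for i in range(len(circuit))}

rt = Runtime()
m = rt.measure(qubit=0)    # classical code may pass `m` around freely...
print(m.get())             # ...the value only materializes here
```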
  • Recent simulations showed that the whistler heat flux instability, which presumably produces most of the quasi-parallel coherent whistler waves in the solar wind, is not efficient in regulating the electron heat conduction. In addition, recent spacecraft measurements indicated that some fraction of coherent whistler waves in the solar wind may propagate anti-parallel to the electron heat flux, being produced by a perpendicular temperature anisotropy of suprathermal electrons. We present an analysis of the properties of parallel and anti-parallel whistler waves unstable at electron heat fluxes and temperature anisotropies of suprathermal electrons typical of the pristine solar wind. Assuming an electron population consisting of counter-streaming dense thermal core and tenuous suprathermal halo populations, we perform a linear stability analysis to demonstrate that anti-parallel whistler waves are expected to have smaller frequencies, wave numbers, and growth rates compared to parallel whistler waves. The stability analysis is performed over a wide range of parameters of the core and halo electron populations. Using the quasi-linear scaling relation we show that anti-parallel whistler waves saturate at amplitudes one order of magnitude smaller than parallel whistler waves, at about $10^{-3}\;B_0$ in the pristine solar wind. The analysis shows that the presence of anti-parallel whistler waves in the pristine solar wind is more likely to be obscured by turbulent magnetic field fluctuations, because of their lower frequencies and smaller amplitudes compared to parallel whistler waves. The presented results will also be valuable for numerical simulations of the electron heat flux regulation in the solar wind.
    Solar wind, Temperature anisotropy, Velocity distribution function, Halo population, Instability, Astronomical Unit, Cyclotron, Numerical simulation, Scaling law, Electron temperature...
  • We present Karl G. Jansky Very Large Array (VLA) and Atacama Large Millimetre Array (ALMA) observations of SDSS J0924+0219, a z = 1.524 radio-quiet lensed quasar with an intrinsic radio flux density of about 3 micro-Jy. The four lensed images are clearly detected in the radio continuum and the CO(5-4) line, whose centroid is at z = 1.5254 +/- 0.0001, with a marginal detection in the submillimetre continuum. The molecular gas displays ordered motion, in a structure approximately 1-2.5 kpc in physical extent, with typical velocities of 50-100 km/s. Our results are consistent with the radio emission being emitted from the same region, but not with a point source of radio emission. SDSS J0924+0219 shows an extreme anomaly in the flux ratios of the two merging images in the optical continuum and broad emission lines, suggesting the influence of microlensing by stars in the lensing galaxy. We find the flux ratio in the radio, submillimetre continuum and CO lines to be slightly greater than 1 but much less than that in the optical, which can be reproduced with a smooth galaxy mass model and an extended source. Our results, supported by a microlensing simulation, suggest that the most likely explanation for the optical flux anomaly is indeed microlensing.
    Atacama Large Millimeter Array, Gravitational microlensing, Experimental anomaly, Very Large Array, Sloan Digital Sky Survey, CO line, Quasar, Gravitational lens galaxy, Star formation, Point source...
  • As an efficient and scalable graph neural network, GraphSAGE has enabled an inductive capability for inferring unseen nodes or graphs by aggregating subsampled local neighborhoods and by learning in a mini-batch gradient descent fashion. The neighborhood sampling used in GraphSAGE is effective in improving computing and memory efficiency when inferring a batch of target nodes with diverse degrees in parallel. Despite this advantage, the default uniform sampling suffers from high variance in training and inference, leading to sub-optimal accuracy. We propose a new data-driven sampling approach to reason about the real-valued importance of a neighborhood by a non-linear regressor, and to use the value as a criterion for subsampling neighborhoods. The regressor is learned using value-based reinforcement learning. The implied importance of each combination of vertex and neighborhood is inductively extracted from the negative classification loss output of GraphSAGE. As a result, in an inductive node classification benchmark using three datasets, our method enhanced the baseline using uniform sampling, outperforming recent variants of a graph neural network in accuracy.
    Graph, Classification, Reinforcement learning, Neural network, Embedding, Inference, Uniform distribution, Entropy, Protein, Data science...
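A small sketch of the sampling change described above, under our own simplified interface: the default uniform neighbor subsampling is replaced by sampling proportional to an importance score, where `scorer` stands in for the paper's RL-trained non-linear regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(neighbors, k, scorer=None):
    """Subsample k neighbors for GraphSAGE-style aggregation.

    scorer=None reproduces the default uniform sampling; otherwise
    neighbors are drawn with probability proportional to a learned,
    non-negative importance score.
    """
    neighbors = np.asarray(neighbors)
    if len(neighbors) <= k:
        return neighbors
    if scorer is None:
        return rng.choice(neighbors, size=k, replace=False)
    w = np.maximum(scorer(neighbors), 1e-9)
    return rng.choice(neighbors, size=k, replace=False, p=w / w.sum())

# Toy usage: a hypothetical importance signal favoring high-degree neighbors.
degree = {0: 1, 1: 5, 2: 2, 3: 9}
print(sample_neighbors([0, 1, 2, 3], 2,
                       scorer=lambda ns: np.array([degree[n] for n in ns])))
```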
  • We systematically study models with light scalar and pseudoscalar dark matter candidates and their potential signals at the LHC. First, we derive cosmological bounds on models with the Standard Model Higgs mediator and with a new weak-scale mediator. Next, we study two processes inspired by the indirect and direct detection process topologies, now happening inside the LHC detectors. We find that LHC can observe very light dark matter over a huge mass range if it is produced in mediator decays and then scatters with the detector material to generate jets in the nuclear recoil.
    Dark matter, Large Hadron Collider, Higgs portal, Higgs boson, Pseudoscalar, Deep inelastic scattering, Light dark matter, QCD jet, Dark matter particle mass, Higgs boson decay...
  • Tremendous ongoing theory efforts are dedicated to developing new methods for QCD calculations. Qualitative rather than incremental advances are needed to fully exploit data still to be collected at the LHC. The maximally supersymmetric Yang-Mills theory (${\mathcal N}=4$ sYM) shares with QCD the gluon sector, which contains the most complicated Feynman graphs, but at the same time has many special properties, and is believed to be solvable exactly. It is natural to ask what we can learn from advances in ${\mathcal N}=4$ sYM for addressing difficult problems in QCD. With this in mind, we review here several remarkable developments and highlights of recent results in ${\mathcal N}=4$ sYM. This includes all-order results for certain scattering amplitudes, novel symmetries, surprising geometrical structures of loop integrands, novel tools for the calculation of Feynman integrals, and bootstrap methods. While several insights and tools have already been carried over to QCD and have contributed to state-of-the-art calculations for LHC physics, we argue that there is a host of further fascinating ideas waiting to be explored.
    Scattering amplitude, Anomalous dimension, Path integral, Wilson loop, Conformal symmetry, Feynman diagrams, Unitarity, Quantum field theory, Kinematics, Large Hadron Collider...
  • We present new measurements of the UV spectral slope $\beta$ for galaxies at $z=6-9$ in the Frontier Field cluster MACSJ0416.1-2403 and its parallel field, to an unprecedented level of low stellar mass. To calculate $\beta$, we fit synthetic stellar population models to the observed spectral energy distribution and calculate its value by fitting a power law to the best-fit spectrum. This is the first derivation of rest-frame UV colours of galaxies extending out to $z=9$ for the Frontier Fields program that probes magnitudes as faint as $M\mathrm{_{UV}=-13.5}$. We find no correlation between $\beta$ and rest-frame UV magnitude $M_{1500}$ at any redshift, but we find a strong correlation between $\beta$ and stellar mass, with lower mass galaxies exhibiting bluer UV slopes. At $z=7$ we find that the bluest value of our sample is $\beta=-2.32\pm0.31$, which is redder than previously reported values at similar redshifts in the literature, whereas at $z\sim9$ we find that our bluest data point has a value of $\beta=-2.63\pm0.12$. Thus, we find no evidence for extreme stellar populations or for Pop III stars in low-luminosity galaxies at $z>6$. Additionally, we find a strong correlation between $\beta$ and SFR such that galaxies with low SFRs exhibit bluer slopes, which appear to get bluer with increasing redshift at a given SFR. We also find a star formation main sequence up to $z = 9$, with SFR rising with increasing stellar mass. All of these relations show that $\beta$ correlates with a process that drives both the overall star formation rate and stellar mass assembly. Furthermore, as we also observe no trend between $\beta$ and specific star formation rate (sSFR), this suggests that whatever sets $\beta$ is not a local process but a global one driven by the scale of the galaxy.
    Galaxy, Stellar mass, Stellar populations, Spectral energy distribution, Luminosity, Hubble Frontier Fields, Photometry, Population III, Main sequence star, Star formation...
  • Horizon Run 5 (HR5) is a cosmological hydrodynamics simulation which captures the properties of the Universe on a Gpc scale while achieving a resolution of 1 kpc. This enormous dynamic range allows us to simultaneously capture the physics of the cosmic web on very large scales and account for the formation and evolution of dwarf galaxies on much smaller scales. Inside the simulation box we zoom in on a high-resolution cuboid region with a volume of 1049x114x114 Mpc^3. The sub-grid physics chosen to model galaxy formation includes radiative heating/cooling, reionization, star formation, supernova feedback, chemical evolution tracking the enrichment of oxygen and iron, the growth of supermassive black holes, and active galactic nuclei (AGN) feedback in the form of a dual jet-heating mode. For this simulation we implemented a hybrid MPI-OpenMP version of the RAMSES code, specifically targeted at modern many-core, many-thread parallel architectures. For the post-processing, we extended the Friends-of-Friends (FoF) algorithm and developed a new galaxy finder to analyse the large outputs of HR5. The simulation successfully reproduces many observations, such as the cosmic star formation history, connectivity of the galaxy distribution, and stellar mass functions. The simulation also indicates that hydrodynamical effects on small scales impact galaxy clustering up to very large scales, near and beyond the baryonic acoustic oscillation (BAO) scale. Hence, caution should be taken when using that scale as a cosmic standard ruler: one should carefully understand the corresponding biases. The simulation is expected to be an invaluable asset for the interpretation of upcoming deep surveys of the Universe.
    Galaxy, Black hole, Star, Friends of friends algorithm, Dark matter, Metallicity, Galaxy Formation, Star formation, Milky Way, Horizon Run simulation...
  • Model-based reinforcement learning strategies are believed to exhibit better sample complexity than model-free strategies for controlling dynamical systems such as quadcopters. The belief that model-based strategies, which involve well-trained neural networks making such high-level decisions, always give better performance can be dispelled by using model-free policy search methods. This paper proposes the use of a model-free random search strategy, called Augmented Random Search (ARS), which is a better and faster approach to linear policy training for continuous control tasks like controlling a quadcopter's flight. The method achieves state-of-the-art accuracy while eliminating the large amounts of training data required by the neural networks used in previous approaches to quadcopter control. The paper also reports the performance of the search strategy on this task in a strategically designed simulated environment. Reward collection over 1000 episodes and the agent's in-flight behavior under augmented random search are compared with those of a state-of-the-art reinforcement learning algorithm, Deep Deterministic Policy Gradient (DDPG). Our simulations and results show that commonly used strategies exhibit high variability in sample efficiency on such tasks, whereas the trained ARS-Quad policy network reacts relatively accurately to a step response, providing a better-performing alternative to reinforcement learning strategies.
    Reinforcement learning, Neural network, Deterministic policy, Simulations, Algorithms, Networks, Dynamical systems...
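For reference, a minimal numpy sketch of the basic ARS update (following the ARS V1-t template of Mania et al., which the paper applies to quadcopter control; `env_rollout` is an assumed stand-in that returns the episode reward of the linear policy a = M @ state).

```python
import numpy as np

def ars_step(policy, env_rollout, n_dirs=8, noise=0.03, lr=0.02, top=4):
    """One Augmented Random Search update of a linear policy matrix."""
    deltas = [np.random.randn(*policy.shape) for _ in range(n_dirs)]
    r_plus = np.array([env_rollout(policy + noise * d) for d in deltas])
    r_minus = np.array([env_rollout(policy - noise * d) for d in deltas])
    # keep only the best-performing directions (the "V1-t" refinement)
    best = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top]
    sigma = np.concatenate([r_plus[best], r_minus[best]]).std() + 1e-8
    step = sum((r_plus[i] - r_minus[i]) * deltas[i] for i in best)
    return policy + lr / (top * sigma) * step

# Toy usage: "reward" is higher the closer the policy is to a target matrix.
target = np.ones((2, 3))
M = np.zeros((2, 3))
for _ in range(200):
    M = ars_step(M, lambda P: -np.sum((P - target) ** 2))
print(np.round(M, 1))   # moves toward the target
```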
  • We study a Dark Matter (DM) model in which the dominant coupling to the standard model occurs through a neutrino-DM-scalar coupling. The new singlet scalar will generically have couplings to nuclei/electrons arising from renormalizable Higgs portal interactions. As a result the DM particle $X$ can convert into a neutrino via scattering on a target nucleus $\mathcal{N}$: $ X + \mathcal{N} \rightarrow \nu + \mathcal{N}$, leading to striking signatures at direct detection experiments. Similarly, DM can be produced in neutrino scattering events at neutrino experiments: $ \nu + \mathcal{N} \rightarrow X + \mathcal{N}$, predicting spectral distortions at experiments such as COHERENT. Furthermore, the model allows for late kinetic decoupling of dark matter with implications for small-scale structure. At low masses, we find that COHERENT and late kinetic decoupling produce the strongest constraints on the model, while at high masses the leading constraints come from DM down-scattering at XENON1T and Borexino. Future improvement will come from CE$\nu$NS data, ultra-low threshold direct detection, and rare kaon decays.
    Dark matter, Neutrino, Kinetic decoupling, Borexino, Late kinetic decoupling, Dark matter particle mass, XENON1T, Small scale structure, Dark matter particle, Earth...
  • Decoherence describes the tendency of quantum sub-systems to dynamically lose their quantum character. This happens when the quantum sub-system of interest interacts and becomes entangled with an environment that is traced out. For ordinary macroscopic systems, electromagnetic and other interactions cause rapid decoherence. However, dark matter (DM) may have the unique possibility of exhibiting naturally prolonged macroscopic quantum properties due to its weak coupling to its environment, particularly if it only interacts gravitationally. In this work, we compute the rate of decoherence for light DM in the galaxy, where a local density has its mass, size, and location in a quantum superposition. The decoherence is via the gravitational interaction of the DM overdensity with its environment, provided by ordinary matter. We focus on relatively robust configurations: DM perturbations that involve an overdensity followed by an underdensity, with no monopole, such that it is only observable at relatively close distances. We use non-relativistic scattering theory with a Newtonian potential generated by the overdensity to determine how a probe particle scatters off of it and thereby becomes entangled. As an application, we consider light scalar DM, including axions. In the galactic halo, we use diffuse hydrogen as the environment, while near the earth, we use air as the environment. For an overdensity whose size is the typical DM de Broglie wavelength, we find that the decoherence rate in the halo is higher than the present Hubble rate for DM masses $m_a \lesssim 5 \times 10^{-7}$eV and in earth-based experiments it is higher than the classical field coherence rate for $m_a \lesssim 10^{-6}$eV. When spreading of the states occurs, the rates can become much faster, as we quantify. Also, we establish that DM BECs decohere very rapidly and so are very well described by classical field theory.
    Dark matter, Superposition, Axion, Wave packet, Earth, Milky Way, Axion mass, Impact parameter, De Broglie wavelength, Magnetic monopole...
  • DEAP-3600 is a single-phase liquid argon detector aiming to directly detect Weakly Interacting Massive Particles (WIMPs), located at SNOLAB (Sudbury, Canada). After analyzing data taken during the first year of operation, a null result was used to place an upper bound on the WIMP-nucleon spin-independent, isoscalar cross section. This study reinterprets this result within a Non-Relativistic Effective Field Theory framework, and further examines how various possible substructures in the local dark matter halo may affect these constraints. Such substructures are hinted at by kinematic structures in the local stellar distribution observed by the Gaia satellite and other recent astronomical surveys. These include the Gaia Sausage (or Enceladus), as well as a number of distinct streams identified in recent studies. Limits are presented for the coupling strength of the effective contact interaction operators $\mathcal{O}_1$, $\mathcal{O}_3$, $\mathcal{O}_5$, $\mathcal{O}_8$, and $\mathcal{O}_{11}$, considering isoscalar, isovector, and xenonphobic scenarios, as well as the specific operators corresponding to millicharge, magnetic dipole, electric dipole, and anapole interactions. The effects of halo substructures on each of these operators are explored as well, showing that the $\mathcal{O}_5$ and $\mathcal{O}_8$ operators are particularly sensitive to the velocity distribution, even at dark matter masses above 100 GeV/$c^2$.
    Dark matter, Weakly interacting massive particle, DEAP, Spin independent, Kinematics, Laboratory dark matter search, Dark matter particle mass, Stellar distribution, Saturnian satellites, Dark matter halo...
  • We report measurements of annual and diurnal modulations of the cosmic-ray muon rate in the Yangyang underground laboratory (Y2L) using 952 days of COSINE-100 data acquired between September 2016 and July 2019. A correlation of the muon rate with the atmospheric temperature is observed and its amplitude is determined. The effective atmospheric temperature and muon rate variations are positively correlated, with a measured effective temperature coefficient of $\alpha_{T}$ = 0.80 $\pm$ 0.11. This result is consistent with a model of meson production in the atmosphere. We also searched for a diurnal modulation in the underground muon rate by comparing one-hour intervals. No significant diurnal modulation of the muon rate was observed.
    Muon, Muon rate, Ocean tides, Effective temperature, Cosmic ray muon, Annual modulation of dark matter signal, Weakly interacting massive particle, Multidimensional Array, Kaon, Pion...
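For reference, the standard effective-temperature relation behind the quoted coefficient (our restatement of the convention commonly used in underground-muon analyses, not a formula quoted from the paper): $\Delta R_\mu / \langle R_\mu \rangle = \alpha_T \, \Delta T_{\mathrm{eff}} / \langle T_{\mathrm{eff}} \rangle$, where $R_\mu$ is the muon rate, $T_{\mathrm{eff}}$ the effective atmospheric temperature, and $\alpha_T$ the coefficient measured above.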
  • These are the notes for a series of lectures given at the Clay Mathematical Institute Summer School in Buzios, July 11 - August 7, 2010. We review some of the recent aspects of scaling limits of random trees and planar maps, in particular via their relations with bijective enumeration and Gromov-Hausdorff convergence.
    Scaling limit, Graph, Metric space, Random walk, Brownian motion, Gromov-Hausdorff convergence, Quotient space, Galton-Watson process, Quantum gravity, Time-reversal symmetry...
  • We study simple random walk on the class of random planar maps which can be encoded by a two-dimensional random walk with i.i.d. increments or a two-dimensional Brownian motion via a "mating-of-trees" type bijection. This class includes the uniform infinite planar triangulation (UIPT), the infinite-volume limits of random planar maps weighted by the number of spanning trees, bipolar orientations, or Schnyder woods they admit, and the $\gamma$-mated-CRT map for $\gamma \in (0,2)$. For each of these maps, we obtain an upper bound for the Green's function on the diagonal, an upper bound for the effective resistance to the boundary of a metric ball, an upper bound for the return probability of the random walk to its starting point after $n$ steps, and a lower bound for the graph-distance displacement of the random walk, all of which are sharp up to polylogarithmic factors. When combined with work of Lee (2017), our bound for the return probability shows that the spectral dimension of each of these random planar maps is a.s. equal to 2, i.e., the (quenched) probability that the simple random walk returns to its starting point after $2n$ steps is $n^{-1+o_n(1)}$. Our results also show that the amount of time that it takes a random walk to exit a metric ball is at least its volume (up to a polylogarithmic factor). In the special case of the UIPT, this implies that random walk typically travels at least $n^{1/4 - o_n(1)}$ units of graph distance in $n$ units of time. The matching upper bound for the displacement is proven by Gwynne and Hutchcroft (2018). These two works together resolve a conjecture of Benjamini and Curien (2013) in the UIPT case. Our proofs are based on estimates for the mated-CRT map (which come from its relationship to SLE-decorated Liouville quantum gravity) and a strong coupling of the mated-CRT map with the other random planar map models.
    Random walk, Graph, Brownian motion, Dirichlet's energy, Embedding, Green's function, Gaussian free field, Quantum gravity, Spanning tree, Harmonic function...
  • This text is a survey (Bourbaki seminar) of the paper "Liouville quantum gravity and KPZ" by B. Duplantier and S. Sheffield. The study of statistical physics models in two dimensions (d=2) at their critical point is in general a significantly hard problem (not to mention the d=3 case). In the eighties, three physicists, Knizhnik, Polyakov and Zamolodchikov (KPZ), came up with a novel and far-reaching approach to understanding the critical behavior of these models, among which one finds for example random walks, percolation, as well as the Ising model. The main underlying idea of their approach is to study these models along a two-step procedure as follows: a/ First of all, instead of considering the model on some regular lattice of the plane (such as $\mathbb{Z}^2$ for example), one defines it instead on a well-chosen "random planar lattice". Doing so corresponds to studying the model in its quantum gravity form. In the case of percolation, the appropriate choice of random lattice matches with the so-called planar maps. b/ Then it remains to get back to the actual Euclidean setup. This is done thanks to the celebrated KPZ formula, which gives a very precise correspondence between the geometric properties of models in their quantum gravity formulation and their analogs in the Euclidean case. The nature and the origin of such a powerful correspondence remained rather mysterious for a long time. In fact, the KPZ formula is still not rigorously established and remains a conjectural correspondence. The purpose of this survey is to explain how the recent work of Duplantier and Sheffield explains some of the mystery hidden behind this KPZ formula. To summarize their contribution in one sentence, their work implies a beautiful interpretation of the KPZ correspondence through a uniformization of the random lattice, seen as a Riemann surface.
    Ising model, Graph, Universality class, Partition function, Metric space, Random measure, Gaussian free field, Scaling limit, Quantum gravity, Uniform distribution...
  • We prove that the Tutte embeddings (a.k.a. harmonic/barycentric embeddings) of certain random planar maps converge to $\gamma$-Liouville quantum gravity ($\gamma$-LQG). Specifically, we treat mated-CRT maps, which are discretized matings of correlated continuum random trees, and $\gamma$ ranges from $0$ to $2$ as one varies the correlation parameter. We also show that the associated space-filling path on the embedded map converges to space-filling SLE$_{\kappa}$ for $\kappa =16/\gamma^2$ (in the annealed sense) and that simple random walk on the embedded map converges to Brownian motion (in the quenched sense). Our arguments also yield analogous statements for the Smith (square tiling) embedding of the mated-CRT map. This work constitutes the first proof that a discrete conformal embedding of a random planar map converges to LQG. Many more such statements have been conjectured. Since the mated-CRT map can be viewed as a coarse-grained approximation to other random planar maps (the UIPT, tree-weighted maps, bipolar-oriented maps, etc.), our results indicate a potential approach for proving that embeddings of these maps converge to LQG as well. To prove the main result, we establish several (independently interesting) theorems about LQG surfaces decorated by space-filling SLE. There is a natural way to use the SLE curve to divide the plane into "cells" corresponding to vertices of the mated-CRT map. We study the law of the shape of the origin-containing cell, in particular proving moments for the ratio of its squared diameter to its area. We also give bounds on the degree of the origin-containing cell and establish a form of ergodicity for the entire configuration. Ultimately, we use these properties to show (using a general theorem proved in a separate paper) that random walk on these cells converges to a time change of Brownian motion, which in turn leads to the Tutte embedding result.
    Embedding, Schramm-Loewner evolution, Brownian motion, Random walk, Graph, Quantum gravity, Covariance matrix, Quenching, Harmonic function, Scaling limit...
  • We prove that large random triangulations of types I, II, and III with a simple boundary under the critical Boltzmann weight converge to the Brownian disk.
    Orientation, Scaling limit, Graph, Euler's formula, Random walk, Embedding, Metric space, Andromeda III, Andromeda II, Percolation...
  • We survey the theory and applications of mating-of-trees bijections for random planar maps and their continuum analog: the mating-of-trees theorem of Duplantier, Miller, and Sheffield (2014). The latter theorem gives an encoding of a Liouville quantum gravity (LQG) surface decorated by a Schramm-Loewner evolution (SLE) curve in terms of a pair of correlated linear Brownian motions. We assume minimal familiarity with the theory of SLE and LQG. Mating-of-trees theory enables one to reduce problems about SLE and LQG to problems about Brownian motion and leads to deep rigorous connections between random planar maps and LQG. Applications discussed in this article include scaling limit results for various functionals of decorated random planar maps, estimates for graph distances and random walk on (not necessarily uniform) random planar maps, computations of the Hausdorff dimensions of sets associated with SLE, scaling limit results for random planar maps conformally embedded in the plane, and special symmetries for $\sqrt{8/3}$-LQG which allow one to prove its equivalence with the Brownian map.
    Schramm-Loewner evolution, Gaussian free field, Embedding, Brownian motion, Graph, Scaling limit, Random walk, Spanning tree, Statistical mechanics, Percolation...
  • We set the foundation for a series of works aimed at proving strong relations between uniform random planar maps and Liouville quantum gravity (LQG). Our method relies on a bijective encoding of site-percolated planar triangulations by certain 2D lattice paths. Our bijection parallels in the discrete setting the mating-of-trees framework of LQG and Schramm-Loewner evolutions (SLE) introduced by Duplantier, Miller, and Sheffield. Combining these two correspondences allows us to relate uniform site-percolated triangulations to $\sqrt{8/3}$-LQG and SLE$_6$. In particular, we establish the convergence of several functionals of the percolation model to continuous random objects defined in terms of $\sqrt{8/3}$-LQG and SLE$_6$. For instance, we show that the exploration tree of the percolation converges to a branching SLE$_6$, and that the collection of percolation cycles converges to the conformal loop ensemble CLE$_6$. We also prove convergence of counting measure on the pivotal points of the percolation. Our results play an essential role in several other works, including a program for showing convergence of the conformal structure of uniform triangulations and works which study the behavior of random walk on the uniform infinite planar triangulation.
    Percolation, Schramm-Loewner evolution, Spanning tree, Embedding, Scaling limit, Counting, Brownian motion, Random walk, Graph, Lévy process...
  • We present an analysis of Murchison Widefield Array radio telescope data from $\omega$ Cen, possibly a stripped dwarf spheroidal galaxy core captured by our Galaxy. Recent interpretations of Fermi-LAT $\gamma$-ray data by Brown et al. (2019) and Reynoso-Cordova et al. (2019) suggest that $\omega$ Cen may contain significant Dark Matter. We utilise their best-fit Dark Matter annihilation models, and an estimate of the magnetic field strength in $\omega$ Cen, to calculate the expected radio synchrotron signal from annihilation, and show that one can usefully rule out significant parts of the magnetic field-diffusion coefficient plane using the current observational limits. Improvement by a factor of 10-100 on these limits could constrain the models even more tightly.
    Dark matter annihilation, Murchison Widefield Array, Synchrotron, Dark matter, Galaxy, Magnetic field strength, Milky Way, Stripe phases, Diffusion coefficient, Surface brightness...
  • An analytical model for fully developed three-dimensional incompressible turbulence was recently proposed in the hydrodynamics community, based on the concept of multiplicative chaos. It consists of a random field represented by means of a stochastic integral, which, with only a few parameters, shares many properties with experimental and numerical turbulence, including in particular energy transfer through scales (the cascade) and intermittency (non-Gaussianity), which is most conveniently controlled with a single parameter. Here, we propose three models extending this approach to MHD turbulence. Our formulae provide physically motivated 3D models of a turbulent velocity field and magnetic field coupled together. Besides its theoretical value, this work is meant to provide a tool for observers: a dozen physically meaningful free parameters enter the description, which is useful for characterizing astrophysical data.
    Turbulence, Scalar field, Chaos, Vorticity, Magnetohydrodynamic turbulence, Degree of freedom, Dissipation, Random Field, Numerical simulation, Current density...
  • When magnetohydrodynamic turbulence evolves in the presence of a large-scale mean magnetic field, an anisotropy develops relative to that preferred direction. The well-known tendency is to develop stronger gradients perpendicular to the magnetic field, relative to the direction along the field. This anisotropy of the spectrum is deeply connected with anisotropy of estimated timescales for dynamical processes, and requires reconsideration of basic issues such as scale locality and spectral transfer. Here analysis of high-resolution three-dimensional simulations of unforced magnetohydrodynamic turbulence permits quantitative assessment of the behavior of theoretically relevant timescales in Fourier wavevector space. We discuss the distribution of nonlinear times, Alfvén times, and estimated spectral transfer rates. Attention is called to the potential significance of special regions of the spectrum, such as the two-dimensional limit and the "critical balance" region. A formulation of estimated spectral transfer in terms of a suppression factor supports a conclusion that the quasi two-dimensional fluctuations (characterized by strong nonlinearities) are not a singular limit, but may be in general expected to make important contributions.
    Anisotropy, Turbulence, Magnetohydrodynamic turbulence, Magnetohydrodynamics, Dissipation, Triple correlation, Attention, Helicity, Solar wind, Long-range magnetic fields...
  • We revisit techniques for performing cosmological simulations with both baryons and cold dark matter when each fluid has different initial conditions, as is the case at the end of the radiation era. Most simulations do not reproduce the linear prediction for the difference between the cold dark matter and baryon perturbations. We show that this is due to the common use of offset regular grids when setting up the particle initial conditions. The correct behaviour can be obtained without any loss of simulation resolution by using a Lagrangian glass for the baryon particles. We further show that the difference between cold dark matter and baryons may affect predictions for the Lyman-alpha forest flux power spectrum at the 5% level, potentially impacting current cosmological constraints.
    Cold dark matter, Glass, Transfer function, Lagrangian glass, Flux power spectrum, Dark matter, Cosmological hydrodynamical simulations, Softening length, Gravitational force, Adaptive gravitational softening...
  • This paper considers a game-theoretic formulation of the covert communications problem with finite blocklength, where the transmitter (Alice) can randomly vary her transmit power in different blocks, while the warden (Willie) can randomly vary his detection threshold in different blocks. In this two player game, the payoff for Alice is a combination of the coding rate to the receiver (Bob) and the detection error probability at Willie, while the payoff for Willie is the negative of his detection error probability. Nash equilibrium solutions to the game are obtained, and shown to be efficiently computable using linear programming. For less covert requirements, our game-theoretic approach can achieve significantly higher coding rates than uniformly distributed transmit powers. We then consider the situation with an additional jammer, where Alice and the jammer can both vary their powers. We pose a two player game where Alice and the jammer jointly comprise one player, with Willie the other player. The use of a jammer is shown in numerical simulations to lead to further significant performance improvements.
    Linear optimization, Nash equilibrium, Jamming, Numerical simulation, Signal to noise ratio, Alice and Bob, Discretization, Likelihood function, Game theory, Likelihood-ratio test...
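The LP connection can be made concrete with the classical zero-sum template (a sketch of ours; the Alice/Willie game above has a richer payoff structure, but its equilibria are likewise computed with linear programs of this flavor).

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(R):
    """Row player's equilibrium mixed strategy for payoff matrix R.

    Standard template: maximize v subject to x^T R[:, j] >= v for every
    column j, with x a probability vector (the row player maximizes).
    """
    m, n = R.shape
    c = np.r_[np.zeros(m), -1.0]                   # variables (x, v); minimize -v
    A_ub = np.c_[-R.T, np.ones(n)]                 # v - x^T R[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], -res.fun                     # strategy, game value

x, v = zero_sum_equilibrium(np.array([[1.0, -1.0], [-1.0, 1.0]]))  # matching pennies
print(x, v)   # ~[0.5 0.5], value ~0
```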
  • In this paper we are interested in the quantitative analysis of the compaction ratio for two classical families of trees: recursive trees and plane binary increasing trees. These families are typical representatives of tree models with small depth. More formally, asymptotically, for a random tree with $n$ nodes, the depth is of order $\log n$. Once a tree of size $n$ is compacted by keeping only one occurrence of each fringe subtree appearing in the tree, the resulting graph contains only $O(n / \log n)$ nodes. This result should be compared to classical results on compaction in the families of simply generated trees, where the analogous result states that the compacted structure has size of order $n / \sqrt{\log n}$. We end the paper with an experimental quantitative study, based on a prototype implementation of compacted binary search trees, which are modeled by plane binary increasing trees.
    Random tree, Graph
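A minimal illustration (ours) of the compaction being quantified above: rebuilding a binary tree through a hash-consing pool stores each distinct fringe subtree exactly once.

```python
# Hash-consing pool: each distinct (value, left, right) shape is stored once.
_pool = {}

def node(value, left=None, right=None):
    """Return the unique shared node for this (value, left, right) shape."""
    key = (value, id(left), id(right))
    if key not in _pool:
        _pool[key] = (value, left, right)
    return _pool[key]

def compact(t):
    """Rebuild tree t bottom-up through `node`, sharing repeated fringe subtrees."""
    if t is None:
        return None
    value, left, right = t
    return node(value, compact(left), compact(right))

# Two equal fringe subtrees collapse to a single shared object:
t = ("r", ("a", None, None), ("a", None, None))
c = compact(t)
print(c[1] is c[2])   # True: only one copy of ("a", None, None) is stored
```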
  • SC-LDPC codes with sub-block locality can be decoded locally at the level of sub-blocks that are much smaller than the full code block, thus providing fast access to the coded information. The same code can also be decoded globally using the entire code block, for increased data reliability. In this paper, we pursue the analysis and design of such codes through both finite-length and asymptotic lenses. This mixed approach has rarely been applied in designing SC codes, but it is beneficial for optimizing code graphs for local and global performance simultaneously. Our proposed framework consists of two steps: 1) designing the local code for both threshold and cycle counts, and 2) designing the coupling of local codes for the best cycle count in the global design.
    Graph, Optimization, Aharonov-Bohm effect, Binary Erasure Channel, Sparsity, Calibration Index File, Mutual information, Diamond, Bipartite network, Balanced matrix...
  • We present a cosmological hydrodynamical simulation of a 10^13 Msun galaxy group and its environment (out to 10 times the virial radius) carried out using the EAGLE model of galaxy formation. Exploiting a novel technique to increase the resolution of the dark matter calculation independently of that of the gas, the simulation resolves dark matter haloes and subhaloes of mass 5x10^6 Msun. It is therefore useful for studying the abundance and properties of the haloes and subhaloes targeted in strong lensing tests of the cold dark matter model. We estimate the halo and subhalo mass functions and discuss how they are affected both by the inclusion of baryons in the simulation and by the environment. We find that the halo and subhalo mass functions have lower amplitude in the hydrodynamical simulation than in its dark matter only counterpart. This reflects the reduced growth of haloes in the hydrodynamical simulation due to the early loss of gas by reionisation and galactic winds and, additionally, in the case of subhaloes, disruption by enhanced tidal effects within the host halo due to the presence of a massive central galaxy. The distribution of haloes is highly anisotropic reflecting the filamentary character of mass accretion onto the cluster. As a result, there is significant variation in the number of structures with viewing direction. The median number of structures near the centre of the halo, when viewed in projection, is reduced by a factor of two when baryons are included.
    Dark matter subhalo, Simulations of structure formation, Gravitational lensing, Hydrodynamical simulations, Subhalo mass function, Dark matter, Cosmological hydrodynamical simulations, Tidal effects, EAGLE simulation project, Accretion...
  • Dark matter density is formally infinite at the location of caustic surfaces, where the dark matter sheet folds in phase space. The caustics separate multi-stream regions with different numbers of streams. Volume elements change parity by turning inside out when passing through the caustic stage. Being measure-zero structures, caustics are usually identified via matter density fields only in fine-grained simulations. Instead, a generic, purely geometric algorithm can be employed to identify caustics directly by using a triangulation of the Lagrangian sub-manifold x(q, t), where x and q are the Eulerian and Lagrangian coordinates obtained in N-body simulations. The caustic surfaces are approximated by a set of triangles whose vertices are the particles in the simulation. It is demonstrated that finding a dark matter halo is quite feasible by building its outermost convex caustic. Neither more specific assumptions about the geometry of the boundary nor ad hoc parameters are needed. The halo boundary in our idealized but undoubtedly generic simulation is neither spherical nor ellipsoidal but rather remarkably asymmetrical. The analysis of the kinetic and potential energies of individual particles and of the halo as a whole, along with an examination of the two-dimensional phase space, has shown that the halo is gravitationally bound.
    Phase space caustic, Flip-flop, Dark matter, N-body simulation, Zeldovich approximation, Phase space, Dark matter halo, Coarse graining, Radial velocity, Cutoff scale...
  • CMB lensing is a promising, novel way to measure galaxy cluster masses that can be used, e.g., for mass calibration in galaxy cluster counts analyses. Understanding the statistics of the galaxy cluster mass observable obtained with such measurements is essential if their use in subsequent analyses is not to lead to biased results. We study the statistics of a CMB lensing galaxy cluster mass observable for a Planck-like experiment with mock observations obtained from an N-body simulation. We quantify the bias, intrinsic scatter, and deviations from log-normality associated with this observable following two different approaches, one in which the signal due to the cluster and nearby correlated large-scale structure is isolated, and another one in which the variation due to uncorrelated large-scale structure is also taken into account. We briefly discuss how some of our results change for experiments with higher angular resolution and lower noise levels, such as the current generation of surveys obtained with ground-based, large-aperture telescopes.
    Large scale structure, CMB lensing, Intrinsic scatter, Virial cluster mass, Statistical estimator, Matched filter, Navarro-Frenk-White profile, Weak lensing mass estimate, Cluster of galaxies, Statistics...
  • In this paper, we propose a novel machine learning architecture for facial reenactment. In particular, contrary to the model-based approaches or recent frame-based methods that use Deep Convolutional Neural Networks (DCNNs) to generate individual frames, we propose a novel method that (a) exploits the special structure of facial motion (paying particular attention to mouth motion) and (b) enforces temporal consistency. We demonstrate that the proposed method can transfer facial expressions, pose and gaze of a source actor to a target video in a photo-realistic fashion more accurately than state-of-the-art methods.
    Attention, Ground truth, Deep convolutional neural networks, Generative Adversarial Net, Architecture, Neural network, Machine learning, Convolution Neural Network, Structure-from-Motion, Principal component...
  • Supporting recommendations with personalized and relevant explanations increases trust and perceived quality, and helps users make better decisions. Prior work attempted to generate a synthetic review or review segment as an explanation, but these were not judged convincing in evaluations by human users. We propose T-RECS, a multi-task learning Transformer-based model that jointly performs recommendation with textual explanations using a novel multi-aspect masking technique. We show that human users significantly prefer the justifications generated by T-RECS over those generated by state-of-the-art techniques. At the same time, experiments on two datasets show that T-RECS slightly improves on the recommendation performance of strong state-of-the-art baselines. Another feature of T-RECS is that it allows users to react to a recommendation by critiquing the textual explanation. The system updates its user model and the resulting recommendations according to the critique. This is based on a novel unsupervised critiquing method for single- and multi-step critiquing with textual explanations. Experiments on two real-world datasets show that T-RECS is the first to obtain good performance in adapting to the preferences expressed in multi-step critiquing.
    Keyphrase, Ranking, Latent space, Natural language, Attention, Collaborative filtering, Rank, Mean squared error, Autoencoder, Multi-task learning...
  • Modern speech enhancement algorithms achieve remarkable noise suppression by means of large recurrent neural networks (RNNs). However, large RNNs limit practical deployment in hearing aid hardware (HW) form factors, which are battery powered and run on resource-constrained microcontroller units (MCUs) with limited memory capacity and compute capability. In this work, we use model compression techniques to bridge this gap. We define the constraints imposed on the RNN by the HW and describe a method to satisfy them. Although model compression techniques are an active area of research, we are the first to demonstrate their efficacy for RNN speech enhancement, using pruning and integer quantization of weights and activations. We also demonstrate state update skipping, which reduces the computational load. Finally, we conduct a perceptual evaluation of the compressed models to verify audio quality with human raters. Results show reductions in model size and operations of 11.9$\times$ and 2.9$\times$, respectively, over the baseline for compressed models, with no statistical difference in listening preference and only a 0.55 dB loss in SDR. Our model achieves a computational latency of 2.39 ms, well within the 10 ms target and 351$\times$ better than previous work.
    Recurrent neural network, Quantization, Long short term memory, Main sequence star, Optimization, Inference, Form factor, Hyperparameter, Architecture, Arithmetic...
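A generic sketch of the two compression steps named above, magnitude pruning followed by symmetric int8 quantization (our simplified recipe, not the paper's exact pipeline):

```python
import numpy as np

def prune_and_quantize(W, sparsity=0.8):
    """Zero the smallest weights, then map survivors to int8 with one scale."""
    thresh = np.quantile(np.abs(W), sparsity)
    Wp = np.where(np.abs(W) >= thresh, W, 0.0)            # magnitude pruning
    scale = max(np.abs(Wp).max() / 127.0, 1e-12)          # symmetric per-tensor scale
    Wq = np.clip(np.round(Wp / scale), -127, 127).astype(np.int8)
    return Wq, scale                                      # dequantize as Wq * scale

W = np.random.randn(64, 64).astype(np.float32)
Wq, s = prune_and_quantize(W)
print((Wq == 0).mean())   # ~0.8: the requested sparsity
```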
  • Named entity recognition (NER) from text is a widely studied problem that usually extracts semantic information from text. Until now, NER from speech has mostly been studied as a two-step pipeline that first applies an automatic speech recognition (ASR) system to an audio sample and then passes the predicted transcript to a NER tagger. In such cases, the error does not propagate from one step to the other, as the two tasks are not optimized in an end-to-end (E2E) fashion. Recent studies confirm that integrated approaches (e.g., E2E ASR) outperform sequential ones (e.g., phoneme-based ASR). In this paper, we introduce the first publicly available NER-annotated dataset for English speech and present an E2E approach that jointly optimizes the ASR and NER tagger components. Experimental results show that the proposed E2E approach outperforms the classical two-step approach. We also discuss how NER from speech can be used to handle out-of-vocabulary (OOV) words in an ASR system.
    F1 score, Architecture, Speech recognition, Recurrent neural network, Convolution Neural Network, Ground truth, Lab-on-a-chip, Deep learning, Training set, Fully connected layer...
  • We combine neural networks with genetic algorithms to find parsimonious models that describe the time evolution of a point particle subjected to an external potential. The genetic algorithm is designed to find the simplest, most interpretable network compatible with the training data. The parsimonious neural network (PNN) can numerically integrate classical equations of motion with negligible energy drifts and good time reversibility, significantly outperforming a generic feed-forward neural network. Our PNN is immediately interpretable as the position Verlet algorithm, a non-trivial integrator whose justification originates from Trotter's theorem.
    Neural network, Genetic algorithm, Time-reversal symmetry, Energy drift, Training set, Classical mechanics, Algorithms, Particles, Potential, Networks...
  • Designing DNA and protein sequences with improved or novel function has the potential to greatly accelerate synthetic biology. Machine learning models that accurately predict biological fitness from sequence are becoming a powerful tool for molecular design. Activation maximization offers a simple design strategy for differentiable models: one-hot coded sequences are first approximated by a continuous representation which is then iteratively optimized with respect to the predictor oracle by gradient ascent. While elegant, this method is limited by technical challenges, as it suffers from vanishing gradients and may cause predictor pathologies leading to poor convergence. Here, we build on a previously proposed straight-through approximation method to optimize through discrete sequence samples. By normalizing nucleotide logits across positions and introducing an adaptive entropy variable, we remove bottlenecks arising from overly large or skewed sampling parameters. This results in a markedly improved algorithm with up to 100-fold faster convergence. Moreover, our method finds improved fitness optima compared to existing methods, including the original algorithm without normalization and global optimization heuristics such as Simulated Annealing. We demonstrate our improved method by designing DNA and enzyme sequences for six deep learning predictors, including a protein structure predictor (trRosetta).
    Protein, Nucleotides, Interaction neural networks, Optimization, DNA, Photon Wave Mechanics, Entropy, Deep learning, Simulated annealing, Statistical estimator...
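A minimal PyTorch sketch of the straight-through sampling trick the method above builds on (the paper's per-position logit normalization and adaptive entropy variable are omitted; the "fitness oracle" here is a random stand-in):

```python
import torch
import torch.nn.functional as F

def straight_through_sample(logits, tau=1.0):
    """Hard one-hot samples forward, soft softmax gradients backward."""
    probs = F.softmax(logits / tau, dim=-1)
    idx = torch.multinomial(probs, 1).squeeze(-1)
    hard = F.one_hot(idx, logits.shape[-1]).float()
    return hard + probs - probs.detach()

# Toy usage: 10-nt sequence over {A,C,G,T}, random linear "fitness oracle".
logits = torch.zeros(10, 4, requires_grad=True)
oracle = torch.randn(10, 4)                 # stand-in for a trained predictor
fitness = (straight_through_sample(logits) * oracle).sum()
fitness.backward()                          # a gradient-ascent step would follow
print(logits.grad.shape)                    # torch.Size([10, 4])
```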
  • Deep speaker embedding models have been commonly used as a building block for speaker diarization systems; however, the speaker embedding model is usually trained according to a global loss defined on the training data, which could be sub-optimal for distinguishing speakers locally in a specific meeting session. In this work we present the first use of graph neural networks (GNNs) for the speaker diarization problem, utilizing a GNN to refine speaker embeddings locally using the structural information between speech segments inside each session. The speaker embeddings extracted by a pre-trained model are remapped into a new embedding space, in which the different speakers within a single session are better separated. The model is trained for linkage prediction in a supervised manner by minimizing the difference between the affinity matrix constructed by the refined embeddings and the ground-truth adjacency matrix. Spectral clustering is then applied on top of the refined embeddings. We show that the clustering performance of the refined speaker embeddings outperforms the original embeddings significantly on both simulated and real meeting data, and our system achieves the state-of-the-art result on the NIST SRE 2000 CALLHOME database.
    Embedding, Graph Neural Network, Graph, Ground truth, Spectral clustering, Graph Convolutional Network, Short-range entanglement, Adjacency matrix, Training set, Fully connected layer...
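The linkage-prediction objective described above can be sketched in a few lines (our own simplified version; the GNN itself is omitted, and binary cross-entropy is our choice of distance between affinity and adjacency):

```python
import torch
import torch.nn.functional as F

def linkage_loss(refined, adjacency):
    """Push the affinity matrix of refined embeddings toward the
    ground-truth same-speaker adjacency matrix."""
    emb = F.normalize(refined, dim=-1)
    affinity = torch.sigmoid(emb @ emb.T)       # pairwise scores in (0, 1)
    return F.binary_cross_entropy(affinity, adjacency)

# Toy usage: 4 segments; segments {0,1} and {2,3} share a speaker.
emb = torch.randn(4, 16, requires_grad=True)    # stand-in for GNN outputs
adj = torch.tensor([[1., 1, 0, 0], [1, 1, 0, 0],
                    [0, 0, 1, 1], [0, 0, 1, 1]])
linkage_loss(emb, adj).backward()               # trainable end-to-end
```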
  • Recent research has made great progress in realizing neural style transfer of images, which denotes transforming an image to a desired style. Many users start to use their mobile phones to record their daily life, and then edit and share the captured images and videos with other users. However, directly applying existing style transfer approaches on videos, i.e., transferring the style of a video frame by frame, requires an extremely large amount of computation resources. It is still technically unaffordable to perform style transfer of videos on mobile phones. To address this challenge, we propose MVStylizer, an efficient edge-assisted photorealistic video style transfer system for mobile phones. Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the rest of stylized intermediate frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones. A meta-smoothing module is also proposed to simultaneously upscale a stylized frame to arbitrary resolution and remove style transfer related distortions in these upscaled frames. In addition, for the sake of continuously enhancing the performance of the DNN model on the edge server, we adopt a federated learning scheme to keep retraining each DNN model on the edge server with collected data from mobile clients and syncing with a global DNN model on the cloud server. Such a scheme effectively leverages the diversity of collected data from various mobile clients and efficiently improves the system performance. Our experiments demonstrate that MVStylizer can generate stylized videos with an even better visual quality compared to the state-of-the-art method while achieving 75.5$\times$ speedup for 1920$\times$1080 videos.
    Mobile phone, Deep Neural Networks, Federated learning, Algorithms, Resolution...
  • Answer-agnostic question generation is a significant and challenging task, which aims to automatically generate questions for a given sentence but without an answer. In this paper, we propose two new strategies to deal with this task: question type prediction and a copy loss mechanism. The question type module predicts the types of questions that should be asked, which allows our model to generate multiple types of questions for the same source sentence. The new copy loss enhances the original copy mechanism to make sure that every important word in the source sentence has been copied when generating questions. Our integrated model outperforms the state-of-the-art approach in answer-agnostic question generation, achieving a BLEU-4 score of 13.9 on SQuAD. Human evaluation further validates the high quality of our generated questions. We will make our code publicly available for further research.
    Attention, Keyphrase, Hidden state, Embedding, Long short term memory, Multi-layer Perceptron, Word embedding, Statistics, Stochastic gradient descent, Natural language generation...
  • Medical health records and clinical summaries contain a vast amount of important information in textual form that can help advance research on treatments, drugs, and public health. However, most of this information is not shared because it contains private information about patients, their families, or the medical staff treating them. Regulations such as HIPAA in the US, PHIPA in Canada, and the GDPR regulate the protection, processing, and distribution of this information. If this information is de-identified, with personal details replaced or redacted, it could be distributed to the research community. In this paper, we present MASK, a software package designed to perform the de-identification task. The software is able to perform named entity recognition using some of the state-of-the-art techniques and then mask or redact the recognized entities. The user is able to select a named entity recognition algorithm (currently implemented are two versions of CRF-based techniques and a BiLSTM-based neural network with pre-trained GloVe and ELMo embeddings) and a masking algorithm (e.g., shift dates, replace names/locations, totally redact entity).
    Architecture, Machine learning, Long short term memory, Conditional random field, Neural network, Software, Recurrent neural network, Embedding, Python, Computational linguistics...
  • We apply the Adversarial NLI dataset to train an NLI model and show that the model has the potential to enhance factual correctness in abstractive summarization. We follow the work of Falke et al. (2019), which ranks multiple generated summaries based on the entailment probabilities between a source document and the summaries, and selects the summary with the highest entailment probability. That earlier study concluded that current NLI models are not sufficiently accurate for the ranking task. We show that Transformer models fine-tuned on the new dataset achieve significantly higher accuracy and have the potential to select a coherent summary.
    Natural language inference, Attention, Ranking, Rank, Neural network, Architecture, Semantic similarity, Training set, Multi-task learning, Entropy...
  • Numerous noise adaptation techniques have been proposed to address the mismatch problem in speech enhancement (SE) by fine-tuning deep-learning (DL)-based models. However, adaptation to a target domain can lead to catastrophic forgetting of the previously learnt noise environments. Because SE models are commonly used in embedded devices, re-visiting previous noise environments is a common situation in speech enhancement. In this paper, we propose a novel regularization-based incremental learning SE (SERIL) strategy, which can complement these noise adaptation strategies without having to access previous training data. The experimental results show that, when faced with a new noise domain, the SERIL model outperforms the unadapted SE model in various metrics: PESQ, STOI, eSTOI, and short-time spectral amplitude SDR. Meanwhile, compared with the traditional fine-tuning adaptive SE model, the SERIL model can significantly reduce the forgetting of previous noise environments by 52%. The promising results indicate that the SERIL model can effectively overcome the catastrophic forgetting problem and can be suitably deployed in real-world applications, where the noise environment changes frequently.
    Regularization, Training set, Curvature, Long short term memory, Optimization, Deep learning, Hyperparameter, Convolution Neural Network, Recurrent neural network, Fully connected layer...
  • Text summarization is an NLP task which aims to convert a textual document into a shorter one while keeping as much meaning as possible. This pedagogical article reviews a number of recent Deep Learning architectures that have helped to advance research in this field. We will discuss in particular applications of pointer networks, hierarchical Transformers and Reinforcement Learning. We assume basic knowledge of Seq2Seq architecture and Transformer networks within NLP.
    Architecture, Attention, Deep learning, Computational linguistics, Recurrent neural network, Reinforcement learning, Inference, Embedding, Training set, Completeness...
  • We investigate a growing body of work that seeks to improve recommender systems through the use of review text. Generally, these papers argue that since reviews 'explain' users' opinions, they ought to be useful to infer the underlying dimensions that predict ratings or purchases. Schemes to incorporate reviews range from simple regularizers to neural network approaches. Our initial findings reveal several discrepancies in reported results, partly due to (e.g.) copying results across papers despite changes in experimental settings or data pre-processing. First, we attempt a comprehensive analysis to resolve these ambiguities. Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation. Through a wide range of experiments, we observe several cases where state-of-the-art methods fail to outperform existing baselines, especially as we deviate from a few narrowly-defined settings where reviews are useful. We conclude by providing hypotheses for our observations, that seek to characterize under what conditions reviews are likely to be helpful. Through this work, we aim to evaluate the direction in which the field is progressing and encourage robust empirical evaluation.
    Data Pre-processing, Neural network, Potential, Dimensions, Field...
  • Short-term traffic forecasting based on deep learning methods, especially recurrent neural networks (RNN), has received much attention in recent years. However, the potential of RNN-based models in traffic forecasting has not yet been fully exploited in terms of the predictive power of spatial-temporal data and the capability of handling missing data. In this paper, we focus on RNN-based models and attempt to reformulate the way to incorporate RNN and its variants into traffic prediction models. A stacked bidirectional and unidirectional LSTM network architecture (SBU-LSTM) is proposed to assist the design of neural network structures for traffic state forecasting. As a key component of the architecture, the bidirectional LSTM (BDLSTM) is exploited to capture the forward and backward temporal dependencies in spatiotemporal data. To deal with missing values in spatial-temporal data, we also propose a data imputation mechanism in the LSTM structure (LSTM-I) by designing an imputation unit to infer missing values and assist traffic prediction. The bidirectional version of LSTM-I is incorporated in the SBU-LSTM architecture. Two real-world network-wide traffic state datasets are used to conduct experiments and published to facilitate further traffic prediction research. The prediction performance of multiple types of multi-layer LSTM or BDLSTM models is evaluated. Experimental results indicate that the proposed SBU-LSTM architecture, especially the two-layer BDLSTM network, can achieve superior performance for the network-wide traffic prediction in both accuracy and robustness. Further, comprehensive comparison results show that the proposed data imputation mechanism in the RNN-based models can achieve outstanding prediction performance when the model's input data contains different patterns of missing values.
    Long short term memory, Imputation, Recurrent neural network, Architecture, Time Series, Deep learning, Neural network, Hidden layer, Mean squared error, Traffic network...
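A minimal PyTorch sketch of the stacked bidirectional-plus-unidirectional idea above (layer sizes are our own and the LSTM-I imputation unit is omitted):

```python
import torch
import torch.nn as nn

class SBULSTM(nn.Module):
    """Bidirectional LSTM for forward/backward temporal dependencies,
    topped by a unidirectional LSTM that produces the forecast."""
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.bd = nn.LSTM(n_sensors, hidden, batch_first=True, bidirectional=True)
        self.uni = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_sensors)

    def forward(self, x):                    # x: (batch, time, n_sensors)
        h, _ = self.bd(x)
        h, _ = self.uni(h)
        return self.out(h[:, -1])            # next-step values for all sensors

x = torch.randn(8, 12, 100)                  # 8 samples, 12 steps, 100 sensors
print(SBULSTM(100)(x).shape)                 # torch.Size([8, 100])
```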
  • In this paper, we propose a novel generative network (SegAttnGAN) that utilizes additional segmentation information for the text-to-image synthesis task. As the segmentation data introduced to the model provides useful guidance for generator training, the proposed model can generate images with better realism and higher quantitative measures compared with previous state-of-the-art methods. We achieved an Inception Score of 4.84 on the CUB dataset and 3.52 on the Oxford-102 dataset. We also tested the self-attention SegAttnGAN, which uses generated segmentation data instead of dataset masks for attention, and achieved similarly high-quality results, suggesting that our model can be adapted for the text-to-image synthesis task.
    Attention, Generative Adversarial Net, Long short term memory, Embedding, Self-attention model, Word embedding, Natural language, Keyphrase, Inference, Gaussian noise...