Recently bookmarked papers

with concepts:
  • We study a class of topological materials which in their momentum-space band structure exhibit three-fold degeneracies known as triple points. Specifically, we investigate and classify triple points occurring along high-symmetry lines of $\mathcal{P}\mathcal{T}$-symmetric crystalline solids with negligible spin-orbit coupling. By employing the recently discovered non-Abelian band topology, we argue that a rotation-symmetry-breaking strain transforms a certain class of triple points into multi-band nodal links. Although multi-band nodal-line compositions were previously theoretically conceived, a practical condensed-matter platform for their manipulation and inspection has hitherto been missing. By reviewing the known triple-point materials in the considered symmetry class, and by performing first-principles calculations to predict new ones, we identify suitable candidates for the realization of multi-band nodal links. In particular, we find that Li$_2$NaN is an ideal compound to study this phenomenon, where the band nodes facilitate largely tunable density of states and optical conductivity with doping and strain, respectively. The multi-band linking is expected to equip the nodal rings with monopole charges, making such triple-point materials a versatile platform to probe the non-Abelian band topology.
    Triple point, Quaternions, Hamiltonian, Fermi level, Brillouin zone, Density of states, Symmetry breaking, Berry phase, Termination, Spin-orbit interaction, ...
  • Adaptive algorithms based on in-network processing over networks are useful for online parameter estimation of historical data (e.g., noise covariance) in predictive control and machine learning. This paper focuses on the distributed noise covariance matrix estimation problem for multi-sensor linear time-invariant (LTI) systems. Conventional noise covariance estimation approaches, e.g., the auto-covariance least squares (ALS) method, suffer from the lack of each sensor's historical measurements and thus produce high-variance ALS estimates. To solve this problem, we propose the distributed auto-covariance least squares (D-ALS) algorithm based on the batch covariance intersection (BCI) method, enlarging the innovations with those of the neighbors. An accuracy analysis of the D-ALS algorithm is given to show the decrease in the variance of the D-ALS estimate. Numerical results of cooperative target tracking tasks in static and mobile sensor networks demonstrate the feasibility and superiority of the proposed D-ALS algorithm.
    Covariance, Sensor network, Least squares, Covariance matrix, Mean squared error, Statistical estimator, Machine learning, Steady state, Statistics, Bayesian, ...
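The covariance intersection step at the heart of the BCI method fuses estimates whose cross-correlation is unknown. A minimal two-estimate sketch for a 2-state system with a fixed weight w (pure Python; illustrative only — the paper's D-ALS combines this rule with distributed ALS, which is not shown):

```python
def inv2(m):
    """Invert a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    """Fuse two estimates (x1, P1), (x2, P2) with unknown cross-correlation.

    CI rule: P^-1 = w*P1^-1 + (1-w)*P2^-1,
             x    = P (w*P1^-1 x1 + (1-w)*P2^-1 x2).
    """
    I1, I2 = inv2(P1), inv2(P2)
    Pinv = [[w * I1[i][j] + (1 - w) * I2[i][j] for j in range(2)]
            for i in range(2)]
    P = inv2(Pinv)
    y = [w * a + (1 - w) * b for a, b in zip(matvec(I1, x1), matvec(I2, x2))]
    return matvec(P, y), P

x, P = covariance_intersection([1.0, 0.0], [[2.0, 0.0], [0.0, 1.0]],
                               [0.0, 1.0], [[1.0, 0.0], [0.0, 2.0]])
# With w = 0.5 the fused covariance is diag(4/3, 4/3) and x = (1/3, 1/3).
```

In practice w is chosen by minimizing a scalar cost such as the trace of P; the fixed w = 0.5 here keeps the sketch short.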
  • Ultrasound modulated bioluminescence tomography (UMBLT) is an imaging method which can be formulated as a hybrid inverse source problem. In the regime where light propagation is modeled by a radiative transfer equation, previous approaches to this problem require large numbers of optical measurements [10]. Here we propose an alternative solution which requires only a single optical measurement to reconstruct the isotropic source. Specifically, we derive two inversion formulae, based on a Neumann series and on Fredholm theory respectively, and prove their convergence under sufficient conditions. The resulting numerical algorithms are implemented and tested numerically, reconstructing both continuous and discontinuous sources in the presence of noise.
    Neumann series, Inverse problems, Boundary value problem, Anisotropy, Discretization, Trapezoidal rule, Compact operator, Bounded operator, Radiative transfer equation, Ds meson, ...
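The Neumann-series route rests on a standard fact: if a linear operator K has norm below one, then (I - K)^{-1} = sum_n K^n, so the solution of (I - K) f = g is the limit of the iteration f <- g + K f. A toy finite-dimensional sketch, with a small matrix standing in for the discretized operator (the paper's K comes from the radiative transfer equation, not shown here):

```python
def neumann_solve(K, g, iters=60):
    """Solve (I - K) f = g via the Neumann series f = sum_n K^n g.

    Converges when some operator norm of K is < 1; we iterate
    f <- g + K f, whose fixed point is the series limit.
    """
    f = list(g)
    for _ in range(iters):
        f = [g[i] + sum(K[i][j] * f[j] for j in range(len(f)))
             for i in range(len(f))]
    return f

K = [[0.2, 0.1],
     [0.0, 0.3]]   # infinity-norm 0.3 < 1, so the series converges
f = neumann_solve(K, [1.0, 1.0])
# Exact solution of (I - K) f = (1, 1) is f = (10/7, 10/7).
```

Each iteration contracts the error by at least the norm of K, so 60 iterations leave a negligible residual here.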
  • It has recently been claimed that KOIs-268.01, 303.01, 1888.01, 1925.01, 2728.01 & 3220.01 are exomoon candidates, based on an analysis of their transit timing. Here, we perform an independent investigation, framed in terms of three questions: 1) Are there significant excess TTVs? 2) Is there a significant periodic TTV? 3) Is there evidence for a non-zero moon mass? We applied rigorous statistical methods to these questions alongside a re-analysis of the Kepler photometry and find that none of the KOIs satisfy all three tests. Specifically, KOIs-268.01 & 3220.01 pass none of the tests and KOIs-303.01, 1888.01 & 1925.01 pass a single test each. Only KOI-2728.01 satisfies two, but fails the cross-validation test for predictions. Further, detailed photodynamical modeling reveals that KOI-2728.01 favours a negative-radius moon (as does KOI-268.01). We also find a significant photoeccentric effect for KOI-1925.01, indicating an eccentric orbit of e > (0.62+/-0.06). For comparison, we applied the same tests to Kepler-1625b, which reveals that 1) and 3) are passed, but 2) cannot be checked with the cross-validation method used here, due to the limited number of available epochs. In conclusion, we find no compelling evidence for exomoons amongst the six KOIs. Despite this, we are able to derive exomoon mass upper limits versus semi-major axis, with KOI-3220.01 leading to particularly impressive constraints of Ms/Mp < 0.4% [2 sigma] at a relative semi-major axis similar to that of the Earth-Moon system.
    Kepler Objects of Interest, Light curve, Planet, Photometry, Statistics, Elliptical orbit, Eccentricity, Bayesian information criterion, Time Series, Semimajor axis, ...
  • Searches for young gas giant planets at wide separations have so far focused on techniques appropriate for compact (Jupiter-sized) planets. Here we point out that protoplanets born through Gravitational Instability (GI) may remain in an initial pre-collapse phase for as long as the first $10^5-10^7$ years after formation. These objects are hundreds of times larger than Jupiter and their atmospheres are too cold ($T\sim$ tens of K) to emit in the NIR or H$\alpha$ via accretion shocks. However, it is possible that their dust emission can be detected with ALMA, even around Class I and II protoplanetary discs. In this paper we produce synthetic observations of these protoplanets. We find that making a detection in a disc at 140 parsecs would require a few hundred minutes of ALMA band 6 observation time. Protoplanets with masses of 3-5 $M_J$ have the highest chance of being detected; less massive objects require unreasonably long observation times (1000 minutes), while more massive ones collapse into giant planets before $10^5$ years. We propose that high-resolution surveys of young ($10^5-10^6$ years), massive and face-on discs offer the best chance of observing protoplanets. Such a detection would help to place constraints on the protoplanet mass spectrum, explain the turnover in the occurrence frequency of gas giants with system metallicity, and constrain the prevalence of GI as a planet formation mechanism. A consistent lack of detections would be evidence against GI as a common planet formation mechanism.
    Protoplanet, Atacama Large Millimeter Array, Astronomical Unit, Planet, Signal to noise ratio, Gas giant, Planet formation, Giant planet, Metallicity, Accretion, ...
  • The goal of this work is to study the formation of rocky planets by dry pebble accretion from self-consistent dust-growth models. In particular, we aim to compute the maximum core mass of a rocky planet that can sustain a thin H-He atmosphere, to account for the second peak of the Kepler size distribution. We simulate planetary growth by pebble accretion inside the iceline. The pebble flux is computed self-consistently from dust growth by solving the advection-diffusion equation for a representative dust size. Dust coagulation, drift, fragmentation and sublimation at the water iceline are included. The disc evolution is computed for $\alpha$-discs with photoevaporation from the central star. The planets grow from a moon-mass embryo by silicate pebble accretion and gas accretion. We analyse the effect of different initial disc masses, $\alpha$-viscosities, disc metallicities and embryo locations. Finally, we compute atmospheric mass loss due to evaporation. We find that inside the iceline, the fragmentation barrier determines the size of pebbles, which leads to different planetary growth patterns for different disc viscosities. Within the iceline the pebble isolation mass typically decays to values below 5 M$_{\oplus}$ within the first million years of disc evolution, limiting the core masses to that value. After computing atmospheric mass loss, we find that planets with cores below $\sim$4 M$_{\oplus}$ get their atmospheres completely stripped, and a few 4-5 M$_{\oplus}$ cores retain a thin atmosphere that places them in the gap/second peak of the Kepler size distribution. Overall, we find that rocky planets form only in low-viscosity discs ($\alpha \lesssim 10^{-4}$). When $\alpha \geq 10^{-3}$, rocky objects do not grow beyond Mars mass. The most typical outcome of dry pebble accretion is terrestrial planets with masses spanning from Mars to $\sim$4 M$_{\oplus}$.
    Planet, Accretion, Viscosity, Super-earth, Rocky planets, Mass accretion rate, Silicate, Fragmentation, Opacity, Photoevaporation, ...
  • The existence of a Radius Valley in the Kepler size distribution stands as one of the most important observational constraints to understand the origin and composition of exoplanets with radii between that of Earth and Neptune. The goal of this work is to provide insights into the existence of the Radius Valley from, first, a pure formation point of view, and second, a combined formation-evolution model. We run global planet formation simulations including the evolution of dust by coagulation, drift and fragmentation; and the evolution of the gaseous disc by viscous accretion and photoevaporation. A planet grows from a moon-mass embryo by either silicate or icy pebble accretion, depending on its position with respect to the water ice-line. We account for gas accretion and type-I/II migration. We perform an extensive parameter study evaluating a wide range of disc properties and embryo initial locations. We account for photoevaporation-driven mass loss after formation. We find that due to the change in dust properties at the water ice-line, rocky cores form typically with $\sim$3 M$_{\oplus}$ and have a maximum mass of $\sim$5 M$_{\oplus}$, while icy cores peak at $\sim$10 $M_{\oplus}$, with masses lower than 5 M$_{\oplus}$ being scarce. When neglecting the gaseous envelope, rocky and icy cores account naturally for the two peaks of the Kepler size distribution. The presence of massive envelopes for cores more massive than $\sim$10 M$_{\oplus}$ inflates the radii of those planets above 4 R$_{\oplus}$. While the first peak of the Kepler size distribution is undoubtedly populated by bare rocky cores, the second peak can host water-rich planets with thin H-He atmospheres. Some envelope-loss mechanism should operate efficiently at short orbital periods to explain the presence of $\sim$10-40 M$_{\oplus}$ planets falling in the second peak of the size distribution.
    Planet, Accretion, Photoevaporation, Extrasolar planet, Evaporation, Planetesimal, Neptune, Rocky planets, Planet formation, Fragmentation, ...
  • Recent observations have indicated a strong connection between compact ($a \lesssim 0.5$ au) super-Earth and mini-Neptune systems and their outer ($a \gtrsim$ a few au) giant planet companions. We study the dynamical evolution of such inner systems subject to the gravitational effect of an unstable system of outer giant planets, focussing on systems whose end configurations feature only a single remaining outer giant. In contrast to similar studies which relied on N-body simulations with specific (and limited) parameters or scenarios, we implement a novel hybrid algorithm which combines N-body simulations with secular dynamics, with the aim of obtaining analytical understanding and scaling relations. We find that the dynamical evolution of the inner planet system depends crucially on $N_{\mathrm{ej}}$, the number of mutual close encounters between the outer planets prior to eventual ejection/merger. When $N_{\mathrm{ej}}$ is small, the eventual evolution of the inner planets can be well described by secular dynamics. For larger values of $N_{\mathrm{ej}}$, the inner planets gain orbital inclination and eccentricity in a stochastic fashion analogous to Brownian motion. We develop a theoretical model, and compute scaling laws for the final orbital parameters of the inner system. We show that our model can account for the observed eccentric super-Earths/mini-Neptunes with inclined cold Jupiter companions, such as HAT-P-11, Gliese 777 and $\pi$ Men.
    Planet, Terrestrial planet, Eccentricity, Outer planets, Giant planet, Super-earth, Boost factor, Jupiter, Companion, N-body simulation, ...
  • Beyond the snow line of protoplanetary discs and inside the dense core of molecular clouds, the temperature of gas is low enough for water vapour to condense into amorphous ices on the surface of preexisting refractory dust particles. Recent numerical simulations and laboratory experiments suggest that condensation of the vapour promotes dust coagulation in such a cold region. However, in the numerical simulations, cohesion of refractory materials is often underestimated, while in the laboratory experiments, water vapour collides with surfaces at more frequent intervals compared to the real conditions. Therefore, to re-examine the role of water ice in dust coagulation, we carry out a systematic investigation of available data on coagulation of water ice particles by making full use of appropriate theories in contact mechanics and tribology. We find that the majority of experimental data are reasonably well explained by lubrication theories, owing to the presence of a quasi-liquid layer (QLL). The only exceptions are the results of dynamic collisions between particles at low temperatures, which are instead consistent with the JKR theory, because QLLs are too thin to dissipate their kinetic energies. Considering the vacuum conditions in protoplanetary discs and molecular clouds, the formation of amorphous water ice on the surface of refractory particles does not necessarily aid their collisional growth as currently expected. While crystallisation of water ice around but outside the snow line eases coagulation of ice-coated particles, sublimation of water ice inside the snow line is deemed to facilitate coagulation of bare refractory particles.
    Molecular cloud, Water vapor, Refractory, Molecular dynamics, Numerical simulation, Cohesion, Surface energy, Nanoparticle, Melting point, Silicate, ...
  • Radial velocity (RV) searches for Earth-mass exoplanets in the habitable zone around Sun-like stars are limited by the effects of stellar variability on the host star. In particular, suppression of convective blueshift and brightness inhomogeneities due to photospheric faculae/plage and starspots are the dominant contribution to the variability of such stellar RVs. Gaussian process (GP) regression is a powerful tool for modeling these quasi-periodic variations. We investigate the limits of this technique using 800 days of RVs from the solar telescope on the HARPS-N spectrograph. These data provide a well-sampled time series of stellar RV variations. Into this data set, we inject Keplerian signals with periods between 100 and 500 days and amplitudes between 0.6 and 2.4 m s$^{-1}$. We use GP regression to fit the resulting RVs and determine the statistical significance of recovered periods and amplitudes. We then generate synthetic RVs with the same covariance properties as the solar data to determine a lower bound on the observational baseline necessary to detect low-mass planets in Venus-like orbits around a Sun-like star. Our simulations show that discovering such planets with current-generation spectrographs using GP regression will require more than 12 years of densely sampled RV observations. Furthermore, even with a perfect model of stellar variability, discovering a true exo-Venus with current instruments would take over 15 years. Therefore, next-generation spectrographs and better models of stellar variability are required for detection of such planets.
    Planet, Regression, High accuracy radial velocity planetary search, Sun, Stellar variability, Spectrographs, Venus, Extrasolar planet, Star, Statistical significance, ...
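The quasi-periodic covariance function commonly used for activity-driven RV variations multiplies a squared-exponential decay by a periodic term. A minimal sketch of that kernel (hyperparameter values here are illustrative, not those fitted to the HARPS-N solar data):

```python
import math

def quasi_periodic_kernel(tau, amp=1.0, length=50.0, period=10.0, gamma=1.0):
    """k(tau) = amp^2 * exp(-tau^2/(2*length^2) - gamma*sin^2(pi*tau/period)).

    'length' models the evolution timescale of active regions, 'period'
    the stellar rotation period, 'gamma' the roughness of the periodic part.
    """
    return amp ** 2 * math.exp(-tau ** 2 / (2.0 * length ** 2)
                               - gamma * math.sin(math.pi * tau / period) ** 2)

# Correlation is high one full rotation apart, low half a rotation apart:
k0 = quasi_periodic_kernel(0.0)       # = amp^2 = 1
k_full = quasi_periodic_kernel(10.0)  # ~ exp(-0.02), close to 1
k_half = quasi_periodic_kernel(5.0)   # ~ exp(-1.005), much smaller
```

Building the full covariance matrix k(t_i - t_j) over the observation times and conditioning on the data is what the GP regression in the paper does on top of this.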
  • The notable claim of quantum supremacy presented by Google's team in 2019 consists of demonstrating the ability of a quantum circuit to generate, albeit with considerable noise, bitstrings from a distribution that is considered hard to simulate on classical computers. Verifying that the generated data is indeed from the claimed distribution and assessing the circuit's noise level and its fidelity is a purely statistical undertaking. The objective of this paper is to explain the relations between quantum computing and some of the statistical aspects involved in demonstrating quantum supremacy in terms that are accessible to statisticians, computer scientists, and mathematicians. Starting with the statistical analysis in Google's demonstration, which we explain, we offer improvements on their estimator of the fidelity, and different approaches to testing the distributions generated by the quantum computer. We propose different noise models, and discuss their implications. A preliminary study of the Google data, focusing mostly on circuits of 12 and 14 qubits, is discussed throughout the paper.
    Google.com, Qubit, Statistical estimator, Quantum computation, Quantum error correction, Quantum circuit, Statistics, Confidence interval, Entanglement, Engineering, ...
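The baseline fidelity estimator in Google's analysis (the one this paper proposes to improve) is the linear cross-entropy estimator F = 2^n * mean(p(x)) - 1, averaging the ideal circuit probabilities p(x) over observed bitstrings x. A sketch using a synthetic Porter-Thomas-like ideal distribution in place of a real circuit, and computing the estimator in expectation rather than from finite samples:

```python
import random

def linear_xeb(ideal_probs, sampler_probs):
    """Expected linear cross-entropy fidelity F = 2^n <p(x)> - 1,
    where x is drawn from sampler_probs and p is the ideal distribution."""
    N = len(ideal_probs)
    return N * sum(q * p for q, p in zip(sampler_probs, ideal_probs)) - 1.0

random.seed(0)
n = 10
N = 2 ** n
# Porter-Thomas-like ideal distribution: exponentially distributed weights.
w = [random.expovariate(1.0) for _ in range(N)]
total = sum(w)
p = [x / total for x in w]
uniform = [1.0 / N] * N

f_ideal = linear_xeb(p, p)        # perfect sampler: close to 1
f_noise = linear_xeb(p, uniform)  # fully depolarized sampler: exactly 0
```

For a uniform sampler the estimator is exactly zero, while a sampler matching a Porter-Thomas distribution gives a value near one, which is why the statistic separates a working circuit from pure noise.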
  • My 2018 lecture at the ICA workshop in Singapore dealt with quantum computation as a meeting point of the laws of computation and the laws of quantum mechanics. We described a computational complexity argument against the feasibility of quantum computers: we identified a very low-level complexity class of probability distributions described by noisy intermediate-scale quantum computers, and explained why it would allow neither good-quality quantum error-correction nor a demonstration of "quantum supremacy," namely, the ability of quantum computers to make computations that are impossible or extremely hard for classical computers. We went on to describe general predictions arising from the argument and proposed general laws that manifest the failure of quantum computers. In October 2019, "Nature" published a paper describing an experimental work that took place at Google. The paper claims to demonstrate quantum (computational) supremacy on a 53-qubit quantum computer, thus clearly challenging my theory. In this paper, I will explain and discuss my work in the perspective of Google's supremacy claims.
    Qubit, Google.com, Quantum computation, Quantum error correction, Entanglement, Statistical estimator, Complexity class, Quantum circuit, Engineering, Polynomial time, ...
  • We optimize fault-tolerant quantum error correction to reduce the number of syndrome bit measurements. Speeding up error correction will also speed up an encoded quantum computation, and should reduce its effective error rate. We give both code-specific and general methods, using a variety of techniques and in a variety of settings. We design new quantum error-correcting codes specifically for efficient error correction, e.g., allowing single-shot error correction. For codes with multiple logical qubits, we give methods for combining error correction with partial logical measurements. There are tradeoffs in choosing a code and error-correction technique. While to date most work has concentrated on optimizing the syndrome-extraction procedure, we show that there are also substantial benefits to optimizing how the measured syndromes are chosen and used. As an example, we design single-shot measurement sequences for fault-tolerant quantum error correction with the 16-qubit extended Hamming code. Our scheme uses 10 syndrome bit measurements, compared to 40 measurements with the Shor scheme. We design single-shot logical measurements as well: any logical Z measurement can be made together with fault-tolerant error correction using only 11 measurements. For comparison, using the Shor scheme a basic implementation of such a non-destructive logical measurement uses 63 measurements. We also offer ten open problems, the solutions of which could lead to substantial improvements of fault-tolerant error correction.
    Quantum error correction, Qubit, Quantum computation, Measurement, ...
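The classical [16,11,4] extended Hamming code underlying the 16-qubit construction corrects one error from a 4-bit syndrome plus an overall parity bit. A classical sketch of that syndrome decoding (the fault-tolerant quantum measurement schedules in the paper are a different, far subtler object):

```python
from functools import reduce
from operator import xor

def encode(data11):
    """Encode 11 data bits into a [16,11,4] extended Hamming codeword.
    Positions 1..15 form a Hamming(15,11) codeword with parity bits at
    1, 2, 4, 8; position 0 holds the overall parity."""
    cw = [0] * 16
    data_pos = [i for i in range(1, 16) if i not in (1, 2, 4, 8)]
    for pos, bit in zip(data_pos, data11):
        cw[pos] = bit
    for k in range(4):
        p = 1 << k
        cw[p] = reduce(xor, (cw[i] for i in range(1, 16)
                             if (i & p) and i != p))
    cw[0] = reduce(xor, cw[1:])
    return cw

def decode(word):
    """Correct a single bit flip. The syndrome s is the XOR of the positions
    of set bits; the overall parity distinguishes one error from two."""
    s = reduce(xor, (i for i in range(1, 16) if word[i]), 0)
    parity = reduce(xor, word)
    if parity:                 # odd weight change: a single error occurred
        word = list(word)
        word[s] ^= 1           # s == 0 means the overall parity bit itself
    elif s:
        raise ValueError("two errors detected, not correctable")
    return word

cw = encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
corrupted = list(cw)
corrupted[9] ^= 1
assert decode(corrupted) == cw
```

The point of comparison in the paper is how many of these syndrome-bit measurements a fault-tolerant quantum implementation needs, and in what order, not the classical decoding itself.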
  • Using a correspondence between the spectrum of the damped wave equation and non-self-adjoint Schroedinger operators, we derive various bounds on complex eigenvalues of the former. In particular, we establish a sharp result that the one-dimensional damped wave operator is similar to the undamped one provided that the L^1 norm of the (possibly complex-valued) damping is less than 2. It follows that these small dampings are spectrally undetectable.
    Wave equation, Spectral analysis, Dissipation, Resolvent set, Operator norm, Dissipative operator, Bounded operator, Instability, Sobolev space, Principal branch, ...
  • At cosmic dawn, the 21-centimeter signal from intergalactic hydrogen was driven by Lyman-$\alpha$ photons from some of the earliest stars, producing a spatial pattern that reflected the distribution of galaxies at that time. Due to the large foreground, it is thought that around redshift 20 it is only observationally feasible to detect 21-cm fluctuations statistically, yielding a limited, indirect probe of early galaxies. Here we show that 21-cm images at cosmic dawn should actually be dominated by large (tens of comoving megaparsecs), high contrast bubbles surrounding individual galaxies. We demonstrate this using a substantially upgraded semi-numerical simulation code that realistically captures the formation and 21-cm effects of the small galaxies expected during this era. Small number statistics associated with the rarity of early galaxies, combined with the multiple scattering of photons in the blue wing of the Lyman-$\alpha$ line, create the large bubbles and also enhance the 21-cm power spectrum by a factor of 2--7 and add to it a feature that measures the typical brightness of galaxies. These various signatures of discrete early galaxies are potentially detectable with planned experiments such as the Square Kilometer Array or the Hydrogen Epoch of Reionization Array, even if the early stars formed in dark matter halos with masses as low as $10^8\, M_\odot$, ten thousand times smaller than the Milky Way halo.
    Hydrogen 21 cm line, Galaxy, Cosmic Dawn, Square Kilometre Array, Star formation, Numerical simulation, Poisson fluctuations, 21-cm power spectrum, Statistics, Intensity, ...
  • We revisit the confinement/deconfinement transition in $\mathcal{N}=4$ super Yang-Mills (SYM) theory and its relation to the Hawking-Page transition in gravity. Recently there has been substantial progress on counting the microstates of 1/16-BPS extremal black holes. However, there is presently a mismatch between the Hawking-Page transition and its avatar in $\mathcal{N}=4$ SYM. This led to speculations about the existence of new gravitational saddles that would resolve the mismatch. Here we exhibit a phenomenon in complex matrix models which we call "delayed deconfinement". It turns out that when the action is complex, due to destructive interference, tachyonic modes do not necessarily condense. We demonstrate this phenomenon in ordinary integrals, a simple unitary matrix model, and finally in the context of $\mathcal{N}=4$ SYM. Delayed deconfinement implies a first-order transition, in contrast to the more familiar cases of higher-order transitions in unitary matrix models. We determine the deconfinement line and find remarkable agreement with the prediction of gravity. On the way, we derive some results about the Gross-Witten-Wadia model with complex couplings. Our techniques apply to a wide variety of (SUSY and non-SUSY) gauge theories though in this paper we only discuss the case of $\mathcal{N}=4$ SYM.
    Deconfinement, Saddle point, Random matrix theory, Super Yang-Mills theory, Phase transitions, Instanton, Partition function, Counting, Black hole, Supersymmetric, ...
  • We give a unified treatment of dispersive sum rules for four-point correlators in conformal field theory. We call a sum rule dispersive if it has double zeros at all double-twist operators above a fixed twist gap. Dispersive sum rules have their conceptual origin in Lorentzian kinematics and absorptive physics (the notion of double discontinuity). They have been discussed using three seemingly different methods: analytic functionals dual to double-twist operators, dispersion relations in position space, and dispersion relations in Mellin space. We show that these three approaches can be mapped into one another and lead to completely equivalent sum rules. A central idea of our discussion is a fully nonperturbative expansion of the correlator as a sum over Polyakov-Regge blocks. Unlike the usual OPE sum, the Polyakov-Regge expansion utilizes the data of two separate channels, while having (term by term) good Regge behavior in the third channel. We construct sum rules which are non-negative above the double-twist gap; they have the physical interpretation of a subtracted version of superconvergence sum rules. We expect dispersive sum rules to be a very useful tool to study expansions around mean-field theory, and to constrain the low-energy description of holographic CFTs with a large gap. We give examples of the first kind of applications, notably, we exhibit a candidate extremal functional for the spin-two gap problem.
    Conformal field theory, Kinematics, Mean field theory, Dispersion relation, Spin, Energy, ...
  • The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or alternatively, complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
    Graph, Wavelet, Image Processing, Sparsity, Wavelet transform, Machine learning, Region of interest, Optimization, Word Mover Distance, Ground truth, ...
  • This paper proposes a novel Extended Particle Swarm Optimization model (EPSO) that potentially enhances the search process of PSO for optimization problems. Gene expression profiles are an important measurement in molecular biology, used in the medical diagnosis of cancer types. The challenge that gene expression profiles pose to classification methodologies lies in the thousands of features recorded for each sample. A modified wrapper feature selection model is applied to address the gene classification challenge, replacing its randomness approach with EPSO and PSO respectively. EPSO initializes the population with a random size and divides it into two groups in order to promote exploration and reduce the probability of falling into stagnation. Experimentally, EPSO required less processing time to select the optimal features (62.14 sec on average) than PSO (95.72 sec on average). Furthermore, EPSO provided better classification accuracy (ranging from 54% to 100%) than PSO (from 52% to 96%).
    Optimization, Feature selection, Gene expression, Gene, Molecular biology, Particles, Measurement, Probability, ...
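For reference, the standard PSO loop that EPSO extends keeps, per particle, a velocity, a personal best, and a global best. A minimal sketch minimizing a 2-D sphere function with textbook parameter values (this is plain PSO, not the paper's EPSO variant or its gene-selection wrapper):

```python
import random

def pso(objective, dim=2, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` with basic particle swarm optimization."""
    random.seed(42)
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
# Converges to (near) the global minimum at the origin.
```

EPSO's stated modifications — a randomized population size split into two groups — would slot into the initialization step of this loop.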
  • The mass spectra of singly charmed and bottom baryons, $\Lambda_{c/b}(1/2^\pm,3/2^-)$ and $\Xi_{c/b}(1/2^\pm,3/2^-)$, are investigated using a nonrelativistic potential model with a heavy quark and a light diquark. The masses of the scalar and pseudoscalar diquarks are taken from a chiral effective theory. The $U_A(1)$ anomaly induces an inverse hierarchy between the masses of strange and non-strange pseudoscalar diquarks, which leads to a similar inverse mass ordering in $\rho$-mode excitations of singly heavy baryons.
    Diquark, Pseudoscalar, Effective theory, Heavy quark, Charmed baryons, Lattice QCD, Neutrino mass hierarchy, Excited state, Chiral symmetry, Light quark, ...
  • Much of our understanding of critical phenomena is based on the notion of the Renormalization Group (RG), but the actual determination of its fixed points is usually based on approximations and truncations, and predictions of physical quantities are often of limited accuracy. The RG fixed points can, however, be given a fully rigorous and non-perturbative characterization, and this is what is presented here in a model of symplectic fermions with a nonlocal ("long-range") kinetic term depending on a parameter $\varepsilon$ and a quartic interaction. We identify the Banach space of interactions to which the fixed point belongs, and we determine it via a convergent approximation scheme. The Banach space is not limited to relevant interactions, but contains all possible irrelevant terms with short-ranged kernels, decaying like a stretched exponential at large distances. As the model shares a number of features in common with $\phi^4$ or Ising models, the result can be used as a benchmark to test the validity of truncations and approximations in RG studies. The analysis is based on results coming from Constructive RG, to which we provide a tutorial and self-contained introduction. In addition, we prove that the fixed point is analytic in $\varepsilon$, a somewhat surprising fact relying on the fermionic nature of the problem.
    Renormalization group, Quartic interaction, Critical phenomena, Ising model, Fermion, ...
  • We study the holographic map in AdS/CFT, as modeled by a quantum error correcting code with exact complementary recovery. We show that the map is determined by local conditional expectations acting on the operator algebras of the boundary/physical Hilbert space. Several existing results in the literature follow easily from this perspective. The Black Hole area law, and more generally the Ryu-Takayanagi area operator, arises from a central sum of entropies on the relative commutant. These entropies are determined in a state-independent way by the conditional expectation. The conditional expectation can also be found via a minimization procedure, similar to the minimization involved in the RT formula. For a local net of algebras associated to connected boundary regions, we show the complementary recovery condition is equivalent to the existence of a standard net of inclusions -- an abstraction of the mathematical structure governing QFT superselection sectors given by Longo and Rehren. For a code consisting of algebras associated to two disjoint regions of the boundary theory we impose an extra condition, dubbed dual-additivity, that gives rise to phase transitions between different entanglement wedges. Dual-additive codes naturally give rise to a new split code subspace, and an entropy bound controls which subspace and associated algebra is reconstructable. We also discuss known shortcomings of exact complementary recovery as a model of holography. For example, these codes are not able to accommodate holographic violations of additivity for overlapping regions. We comment on how approximate codes can fix these issues.
    EntropyCommutantAdS/CFT correspondenceVon Neumann algebraEntanglementHolographic principlePhase transitionsQuantum channelHomomorphismConformal field theory...
  • In the context of the solar atmosphere, we re-examine the role of neutral and ionized species in dissipating the ordered energy of intermediate-mode MHD waves into heat. We solve conservation equations for the hydrodynamics and for hydrogen and helium ionization stages, along closed tubes of magnetic field. First, we examine the evolution of coronal plasma under conditions where coronal heating has abruptly ceased. We find that cool ($< 10^5$ K) structures are formed, lasting for several hours. MHD waves of modest amplitude can heat the plasma through ion-neutral collisions with sufficient energy rates to support the plasma against gravity. Then we examine a calculation starting from a cooler atmosphere. The calculation shows that warm ($> 10^4$ K), long ($>$ several Mm) tubes of plasma arise by the same mechanism. We speculate on the relevance of these solutions to observed properties of the Sun and similar stars whose atmospheres are permeated with emerging magnetic fields and stirred by convection. Perhaps this elementary process might help explain the presence of ``cool loops'' in the solar transition region and the production of broad components of transition region lines. The production of ionized hydrogen from such a simple and perhaps inevitable mechanism may be an important step towards finding the more complex mechanisms needed to generate coronae with temperatures in excess of $10^6$ K, independent of a star's metallicity.
    StarCoronaCoolingChromosphereSunIonizationSolar transition regionMHD wavesCoronal heatingRecombination...
  • Thomas Milton Liggett was a world renowned UCLA probabilist, famous for his monograph Interacting Particle Systems. He passed away peacefully on May 12, 2020. This is a perspective article in memory of both Tom Liggett the person and Tom Liggett the mathematician.
    GraphRandom walkContact processLattice (order)Stationary distributionPercolationPermutationDualityNearest-neighbor site...
  • Two proofs of the Koml\'os-Major-Tusn\'ady embedding theorems, one for the uniform empirical process and one for the simple symmetric random walk, are given. More precisely, what are proved are the univariate coupling results needed in the proofs, such as Tusn\'{a}dy's lemma. These proofs are modifications of existing proof architectures, one combinatorial (the original proof with many modifications, due to Cs\"{o}rg\H{o}, R\'{e}v\'{e}sz, Bretagnolle, Massart, Dudley, Carter, Pollard etc.) and one analytical (due to Sourav Chatterjee). There is one common idea to both proofs: we compare binomial and hypergeometric distributions among themselves, rather than with the Gaussian distribution. In the combinatorial approach, this involves comparing the Binomial(n,1/2) distribution with the Binomial(4n,1/2) distribution, which mainly involves comparison between the corresponding binomial coefficients. In the analytical approach, this reduces Chatterjee's method to coupling nearest neighbour Markov chains on integers so that they stay close.
    ArchitectureHypergeometric distributionBinomial coefficientRandom walkEmbeddingGaussian distributionMarkov chain...
  • We present a learning-based method for synthesizing novel views of complex outdoor scenes using only unstructured collections of in-the-wild photographs. We build on neural radiance fields (NeRF), which uses the weights of a multilayer perceptron to implicitly model the volumetric density and color of a scene. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. In this work, we introduce a series of extensions to NeRF to address these issues, thereby allowing for accurate reconstructions from unstructured image collections taken from the internet. We apply our system, which we dub NeRF-W, to internet photo collections of famous landmarks, thereby producing photorealistic, spatially consistent scene representations despite unknown and confounding factors, resulting in significant improvement over the state of the art.
    Multilayer perceptronField
  • We determine the smallest irreducible Brauer characters for finite quasi-simple orthogonal type groups in non-defining characteristic. Under some restrictions on the characteristic we also prove a gap result showing that the next larger irreducible Brauer characters have a degree roughly the square of those of the smallest non-trivial characters.
    UnipotentOrthogonal groupSubgroupDecomposition matrixModular decompositionSpin groupCosetRankCounting...
  • We introduce the multiset partition algebra $\mathcal{M}_k(\xi)$ over $F[\xi]$, where $F$ is a field of characteristic $0$ and $k$ is a positive integer. When $\xi$ is specialized to a positive integer $n$, we establish the Schur-Weyl duality between the actions of the resulting algebra $\mathcal{M}_k(n)$ and the symmetric group $S_n$ on $\text{Sym}^k(F^n)$. The construction of $\mathcal{M}_k(\xi)$ generalizes to any vector $\lambda$ of non-negative integers, yielding the algebra $\mathcal{M}_{\lambda}(\xi)$ over $F[\xi]$, so that there is Schur-Weyl duality between the actions of $\mathcal{M}_{\lambda}(n)$ and $S_n$ on $\text{Sym}^{\lambda}(F^n)$. We find the generating function for the multiplicity of each irreducible representation of $S_n$ in $\text{Sym}^\lambda(F^n)$, as $\lambda$ varies, in terms of a plethysm of Schur functions. As consequences we obtain an indexing set for the irreducible representations of $\mathcal{M}_k(n)$, and the generating function for the multiplicity of an irreducible polynomial representation of $GL_n(F)$ when restricted to $S_n$. We show that $\mathcal{M}_\lambda(\xi)$ embeds inside the partition algebra $\mathcal{P}_{|\lambda|}(\xi)$. Using this embedding, over $F$, we prove that $\mathcal{M}_{\lambda}(\xi)$ is a cellular algebra, and that $\mathcal{M}_{\lambda}(\xi)$ is semisimple when $\xi$ is not an integer or $\xi$ is an integer with $\xi\geq 2|\lambda|-1$. We give an insertion algorithm based on the Robinson-Schensted-Knuth correspondence realizing the decomposition of $\mathcal{M}_{\lambda}(n)$ as an $\mathcal{M}_{\lambda}(n)\times \mathcal{M}_{\lambda}(n)$-module.
    GraphSchur-Weyl dualityEmbeddingIrreducible representationYoung tableauCellular algebraMultigraphPermutationIsomorphismAlgebra homomorphism...
  • The Murnaghan--Nakayama rule is a combinatorial rule for the character values of symmetric groups. We give a new combinatorial proof by explicitly finding the trace of the representing matrices in the standard basis of Specht modules. This gives an essentially bijective proof of the rule. A key lemma is an extension of a straightening result proved by the second author to skew-tableaux. Our module theoretic methods also give short proofs of Pieri's rule and Young's rule.
    Young tableauSpecht moduleDominance orderPermutationCosetSubgroupIsomorphismStatisticsHomomorphismEmpty Lattice Approximation...
  • We realize the $\mathrm{GL}_n(\mathbb{C})$-modules $S^k(S^m(\mathbb{C}^n))$ and $\Lambda^k(S^m(\mathbb{C}^n))$ as spaces of polynomial functions on $n\times k$ matrices. In the case $k=3$, we describe explicitly all the $\mathrm{GL}_n(\mathbb{C})$-highest weight vectors which occur in $S^3(S^m(\mathbb{C}^n))$ and in $\Lambda^3(S^m(\mathbb{C}^n))$ respectively.
    Young tableauSubgroupSchur polynomialAlgebra homomorphismNonnegativeAutomorphismPermutationIsotypic componentUnipotentRepresentation theory...
  • We consider the adjoint representation of a Hopf algebra $H$, focusing on the locally finite part, $H_{\text{adfin}}$, defined as the sum of all finite-dimensional subrepresentations. For virtually cocommutative $H$ (i.e., $H$ is finitely generated as a module over a cocommutative Hopf subalgebra), we show that $H_{\text{adfin}}$ is a Hopf subalgebra of $H$. This is a consequence of the fact, proved here, that locally finite parts yield a tensor functor on the module category of any virtually pointed Hopf algebra. For general Hopf algebras, $H_{\text{adfin}}$ is shown to be a left coideal subalgebra. We also prove a version of Dietzmann's Lemma from group theory for Hopf algebras.
    Hopf algebraGroup algebraGroup theorySubgroupVector spaceTorsion tensorAlgebraTensorFieldAction...
  • We introduce special classes of irreducible representations of groups: thick representations and dense representations. Denseness implies thickness, and thickness implies irreducibility. We show that absolute thickness and absolute denseness are open conditions for representations, which allows us to construct the moduli schemes of absolutely thick representations and absolutely dense representations. We also present several results and examples that develop the theory of thick representations.
    Irreducible representationInvariant subspaceVector spaceIsomorphismRankInduced representationAbsolutely irreducibleSymplectizationFree groupMorphism...
  • The center $\mathscr{Z}_n(q)$ of the integral group algebra of the general linear group $GL_n(q)$ over a finite field admits a filtration with respect to the reflection length. We show that the structure constants of the associated graded algebras $\mathscr{G}_n(q)$ are independent of $n$, and this stability leads to a universal stable center with positive integer structure constants which governs the algebras $\mathscr{G}_n(q)$ for all $n$. Various structure constants of the stable center are computed and several conjectures are formulated. Analogous stability properties for symmetric groups and wreath products were established earlier by Farahat-Higman and the second author.
    Conjugacy classGroup algebraGraded algebraWreath productGalois fieldSubgroupEmbeddingCodimensionCohomologyVector space...
  • Let $H$ be an abelian subgroup of a finite group $G$ and $\pi$ the set of prime divisors of $|H|$. We prove that $|H O_{\pi}(G)/ O_{\pi}(G)|$ is bounded above by the largest character degree of $G$. A similar result is obtained when $H$ is nilpotent.
    SubgroupNilpotentAutomorphismNormal subgroupPermutationNilpotent groupGroup of Lie typeSolvable groupSemidirect productThe Classical Groups...
  • We first prove, for pairs consisting of a simply connected complex reductive group together with a connected subgroup, the equivalence between two different notions of Gelfand pairs. This partially answers a question posed by Gross, and allows us to use a criterion due to Aizenbud and Gourevitch, and based on Gelfand-Kazhdan's theorem, to study the Gelfand property for complex symmetric pairs. This criterion relies on the regularity of the pair and its descendants. We introduce the concept of a pleasant pair, as a means to prove regularity, and study, by recalling the classification theorem, the pleasantness of all complex symmetric pairs. On the other hand, we prove a method to compute all the descendants of a complex symmetric pair by using the extended Satake diagram, which we apply to all pairs. Finally, as an application, we prove that eight out of the twelve exceptional complex symmetric pairs, together with the infinite family $(\textrm{Spin}_{4q+2}, \textrm{Spin}_{4q+1})$, satisfy the Gelfand property, and state, in terms of the regularity of certain symmetric pairs, a sufficient condition for a conjecture by van Dijk and a reduction of a conjecture by Aizenbud and Gourevitch.
    Gelfand pairSubgroupReductive groupAutomorphismClassificationMaximal torusTorusWeyl groupVector spaceRank...
  • The Littlewood-Richardson coefficients describe the decomposition of tensor products of irreducible representations of a simple Lie algebra into irreducibles. Assuming the number of factors is large, one gets a measure on the space of weights. This limiting measure was extensively studied by many authors. In particular, Kerov computed the corresponding density in a special case in type A and Kuperberg gave a formula for the general case. The goal of this paper is to give a short, self-contained and pure Lie theoretic proof of the formula for the density of the limiting measure. Our approach is based on the link between the limiting measure induced by the Littlewood-Richardson coefficients and the measure defined by the weight multiplicities of the tensor products.
    Tensor productCartan matrixIrreducible representationRankWeyl groupKilling formGaussian distributionNonnegativeIrreducible componentWeyl character formula...
  • We introduce a family of multivariate continuous-time pure birth and pure death chains, with birth and death rates defined in terms of the generalized binomial coefficients for multiplicity free actions. The state spaces of some of the introduced processes are certain sets of partitions (equivalently, Young diagrams). The chains turn out to be the classical Markov processes obtained by restricting Biane's quantum Ornstein-Uhlenbeck semigroups to commutative C*-algebras related to Gelfand pairs built on Heisenberg groups.
    Binomial coefficientHeisenberg groupGelfand pairMarkov processJack functionSubgroupRankIntensityMultidimensional ArrayUnitary representation...
  • The inverses of indecomposable Cartan matrices are computed for finite-dimensional Lie algebras and Lie superalgebras over fields of any characteristic, and for hyperbolic (almost affine) complex Lie (super)algebras. We discovered three as-yet-unexplained new phenomena, of which (a) and (b) concern hyperbolic (almost affine) complex Lie (super)algebras, except for the 5 Lie superalgebras whose Cartan matrices have 0 on the main diagonal: (a) several of the inverses of Cartan matrices have all their elements negative (not just non-positive, as they should be according to an a priori characterization due to Zhang Hechun); (b) the 0s only occur on the main diagonals of the inverses; (c) the determinants of inequivalent Cartan matrices of the same simple Lie (super)algebra may differ (in any characteristic). We interpret most of the results of Wei Yangjiang and Zou Yi Ming, Inverses of Cartan matrices of Lie algebras and Lie superalgebras, Linear Alg. Appl., 521 (2017) 283--298, as inverses of the Gram matrices of non-degenerate invariant symmetric bilinear forms on the (super)algebras considered, not of Cartan matrices, and give more adequate references. In particular, the inverses of Cartan matrices of simple Lie algebras were already published, starting with Dynkin's paper in 1952; see also Table 2 in Springer's book by Onishchik and Vinberg (1990).
    Cartan matrixRoot systemKilling formClassificationColumn vectorUndirected graphMaximal torusRepresentation theorySuperspaceAttention...
  • Let $V$ be a free module of rank $n$ over a commutative unital ring $k$. We prove that tensor space $V^{\otimes r}$ satisfies Schur--Weyl duality, regarded as a bimodule for the action of the group algebra of the Weyl group of $\mathrm{GL}(V)$ and the partition algebra $P_r(n)$ over $k$. We also prove a similar result for the half partition algebra.
    PermutationSchur-Weyl dualityEndomorphismRankBlock matrixIsomorphismWeyl groupPermutation matrixGroup algebraCommutant...
  • In this manuscript we show that two partial orders defined on the set of standard Young tableaux of shape $\alpha$ are equivalent. In fact, we give two proofs for the equivalence of the box order and the dominance order for tableaux. Both are algorithmic. The first of these proofs emphasizes links to the Bruhat order for the symmetric group and the second provides a more straightforward construction of the cover relations. This work is motivated by the known result that the equivalence of the two combinatorial orders leads to a description of the geometry of the representation space of invariant subspaces of nilpotent linear operators.
    Dominance orderNilpotentYoung tableauPermutationInvariant subspaceExact sequenceNonnegativeIrreducible componentSubgroupGraph...
  • The classical Peter-Weyl theorem describes the structure of the space of functions on a semi-simple algebraic group. On the level of characters (in type A) this boils down to the Cauchy identity for the products of Schur polynomials. We formulate and prove the analogue of the Peter-Weyl theorem for the current groups. In particular, in type A the corresponding characters identity is governed by the Cauchy identity for the products of q-Whittaker functions. We also formulate and prove a version of the Schur-Weyl theorem for current groups. The link between the Peter-Weyl and Schur-Weyl theorems is provided by the (current version of) Howe duality.
    IsomorphismPeter-Weyl theoremVector spaceHighest weight categoryDualityWhittaker functionSchur-Weyl dualityEmbeddingProjective coverCurrent algebra...
  • Schur-Weyl duality is a ubiquitous tool in quantum information. At its heart is the statement that the space of operators that commute with the tensor powers of all unitaries is spanned by the permutations of the tensor factors. In this work, we describe a similar duality theory for tensor powers of Clifford unitaries. The Clifford group is a central object in many subfields of quantum information, most prominently in the theory of fault-tolerance. The duality theory has a simple and clean description in terms of finite geometries. We demonstrate its effectiveness in several applications: (1) We resolve an open problem in quantum property testing by showing that "stabilizerness" is efficiently testable: There is a protocol that, given access to six copies of an unknown state, can determine whether it is a stabilizer state, or whether it is far away from the set of stabilizer states. We give a related membership test for the Clifford group. (2) We find that tensor powers of stabilizer states have an increased symmetry group. We provide corresponding de Finetti theorems, showing that the reductions of arbitrary states with this symmetry are well-approximated by mixtures of stabilizer tensor powers (in some cases, exponentially well). (3) We show that the distance of a pure state to the set of stabilizers can be lower-bounded in terms of the sum-negativity of its Wigner function. This gives a new quantitative meaning to the sum-negativity (and the related mana) -- a measure relevant to fault-tolerant quantum computation. The result constitutes a robust generalization of the discrete Hudson theorem. (4) We show that complex projective designs of arbitrary order can be obtained from a finite number (independent of the number of qudits) of Clifford orbits. To prove this result, we give explicit formulas for arbitrary moments of random stabilizer states.
    QubitCommutantPermutationWigner distribution functionDe Finetti's theoremSchur-Weyl dualityPhase spaceIsometryIsomorphismQuantum information...
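The statement at the heart of the Schur-Weyl abstract above, that the operators commuting with the tensor powers of all unitaries are spanned by permutations of the tensor factors, can be checked numerically in the smallest nontrivial case of two factors, where the commutant is spanned by the identity and the swap operator. The following sketch (an illustration under that assumption, not code from the paper) verifies the commutation with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3
# Draw a Haar-like random unitary via QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
u, _ = np.linalg.qr(z)

# U tensor U acting on C^d (x) C^d, with index ordering i*d + j.
UU = np.kron(u, u)

# Swap operator: sends e_i (x) e_j to e_j (x) e_i.
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[i * d + j, j * d + i] = 1.0

# Both spanning elements of the commutant commute with U (x) U.
assert np.allclose(UU @ swap, swap @ UU)
assert np.allclose(UU @ np.eye(d * d), np.eye(d * d) @ UU)
```

For more tensor factors the same check applies to every permutation of the tensor legs; the Clifford-group analogue described in the abstract replaces these permutations by a larger commutant described in terms of finite geometries.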
  • We discuss the deformed function algebra of a simply connected reductive Lie group G over the complex numbers using a basis consisting of matrix elements of finite dimensional representations. This leads to a preferred deformation, meaning one where the structure constants of comultiplication are unchanged. The structure constants of multiplication are controlled by quantum 3j symbols. We then discuss connections to earlier work on preferred deformations that involved Schur-Weyl duality.
    Schur-Weyl dualityIsomorphismIrreducible representationHopf algebraQuantizationHecke algebraUniversal enveloping algebraSchur's lemmaAutomorphismWeighted space...
  • We review Morita equivalence for finite type $k$-algebras $A$ and also a weakening of Morita equivalence which we call stratified equivalence. The spectrum of $A$ is the set of equivalence classes of irreducible $A$-modules. For any finite type $k$-algebra $A$, the spectrum of $A$ is in bijection with the set of primitive ideals of $A$. The stratified equivalence relation preserves the spectrum of $A$ and also preserves the periodic cyclic homology of $A$. However, the stratified equivalence relation permits a tearing apart of strata in the primitive ideal space which is not allowed by Morita equivalence. A key example illustrating the distinction between Morita equivalence and stratified equivalence is provided by affine Hecke algebras associated to extended affine Weyl groups.
    Morita equivalenceMorphismIsomorphismIrreducible representationVector spaceWeyl groupAffine Hecke algebraCyclic homologyRoot datumComplex number...
  • In this paper, we study a new cyclic sieving phenomenon on the set $\mathsf{SST}_n(\lambda)$ of semistandard Young tableaux with the cyclic action $\mathsf{c}$ arising from its $U_q(\mathfrak{sl}_n)$-crystal structure. We prove that if $\lambda$ is a Young diagram with $\ell(\lambda) < n$ and $\gcd( n, |\lambda| )=1$, then the triple $\left( \mathsf{SST}_n(\lambda), \mathsf{C}, q^{- \kappa(\lambda)} s_\lambda(1,q, \ldots, q^{n-1}) \right) $ exhibits the cyclic sieving phenomenon, where $\mathsf{C}$ is the cyclic group generated by $\mathsf{c}$. We further investigate a connection between $\mathsf{c}$ and the promotion $\mathsf{pr}$ and show the bicyclic sieving phenomenon given by $\mathsf{c}$ and $\mathsf{pr}^n$ for hook shape.
    Young tableauCrystal structureWeyl groupCartan matrixHomomorphismPermutationNonnegativeMultidimensional ArrayStatisticsSubgroup...
  • A real representation $\pi$ of a finite group may be regarded as a homomorphism to an orthogonal group $\Or(V)$. For symmetric groups $S_n$, alternating groups $A_n$, and products $S_n \times S_{n'}$ of symmetric groups, we give criteria for whether $\pi$ lifts to the double cover $\Pin(V)$ of $\Or(V)$, in terms of character values. From these criteria, we compute the second Stiefel-Whitney classes of these representations.
    Orthogonal groupHomomorphismStiefel-Whitney classReal representation...
  • We construct an infinite tower of irreducible calibrated representations of periplectic Brauer algebras on which the cup-cap generators act by nonzero matrices. As representations of the symmetric group, these are exterior powers of the standard representation (i.e. hook representations). Our approach uses the recently-defined degenerate affine periplectic Brauer algebra, which plays a role similar to that of the degenerate affine Hecke algebra in representation theory of the symmetric group. We write formulas for the representing matrices in the basis of Jucys--Murphy eigenvectors and we completely describe the spectrum of these representations. The tower formed by these representations provides a new, non-semisimple categorification of Pascal's triangle. Along the way, we also prove some basic results about calibrated representations of the degenerate affine periplectic Brauer algebra.
    Irreducible representationBrauer algebraJucys-Murphy elementGraphAffine Hecke algebraCategorificationRepresentation theory of the symmetric groupRepresentation theoryVector spaceInflation...
  • The partition algebra is an associative algebra with a basis of set-partition diagrams and multiplication given by diagram concatenation. It contains as subalgebras a large class of diagram algebras including the Brauer, planar partition, rook monoid, rook-Brauer, Temperley-Lieb, Motzkin, planar rook monoid, and symmetric group algebras. We give a construction of the irreducible modules of these algebras in two isomorphic ways: first, as the span of symmetric diagrams on which the algebra acts by conjugation twisted with an irreducible symmetric group representation and, second, on a basis indexed by set-partition tableaux such that diagrams in the algebra act combinatorially on tableaux. The first representation is analogous to the Gelfand model and the second is a generalization of Young's natural representation of the symmetric group on standard tableaux. The methods of this paper work uniformly for the partition algebra and its diagram subalgebras. As an application, we express the characters of each of these algebras as nonnegative integer combinations of symmetric group characters whose coefficients count fixed points under conjugation.
    Young tableauMonoidPermutationRankGroup algebraPlanar algebraBrauer algebraIsomorphismNonnegativeRegular representation...
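The multiplication by diagram concatenation described in the abstract above can be made concrete in a few lines. The encoding below is a hypothetical choice for illustration, not taken from the paper: a diagram in the partition algebra P_k is a set partition of the labels 1..k (top row) and -1..-k (bottom row), stored as a list of blocks; concatenation glues the bottom row of the first diagram to the top row of the second, and each connected component trapped entirely in the middle contributes a factor of the parameter xi.

```python
def multiply(d1, d2, k):
    """Concatenate two partition-algebra diagrams.

    A diagram is a list of blocks; each block is a list of labels in
    1..k (top row) and -1..-k (bottom row).  Returns (m, diagram) so
    that the product equals xi**m times the resulting diagram.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Vertices: ('t', i) top of d1, ('m', i) glued middle row, ('b', i) bottom of d2.
    for i in range(1, k + 1):
        for v in (('t', i), ('m', i), ('b', i)):
            parent[v] = v
    for block in d1:
        verts = [('t', i) if i > 0 else ('m', -i) for i in block]
        for a, b in zip(verts, verts[1:]):
            union(a, b)
    for block in d2:
        verts = [('m', i) if i > 0 else ('b', -i) for i in block]
        for a, b in zip(verts, verts[1:]):
            union(a, b)

    # Induced partition on the surviving top and bottom rows.
    comps = {}
    for i in range(1, k + 1):
        comps.setdefault(find(('t', i)), set()).add(i)
        comps.setdefault(find(('b', i)), set()).add(-i)

    # Components containing only middle vertices each contribute a factor of xi.
    middle_roots = {find(('m', i)) for i in range(1, k + 1)}
    m = len(middle_roots - set(comps))
    return m, sorted(map(sorted, comps.values()))

# Example in P_2: the diagram A = {{1,2},{-1,-2}} squares to xi * A,
# since concatenating A with itself closes one middle component.
A = [[1, 2], [-1, -2]]
m, res = multiply(A, A, 2)
assert m == 1 and res == [[-2, -1], [1, 2]]
```

The same routine covers the diagram subalgebras listed in the abstract (Brauer, Temperley-Lieb, Motzkin, etc.), since those are spanned by subsets of the set-partition diagrams.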
  • Let $G$ be a finite group of odd order. We show that if $\chi$ is an irreducible primitive character of $G$, then for all primes $p$ dividing the order of $G$ there is a conjugacy class such that the $p$-part of $\chi(1)$ divides the size of that conjugacy class. We also show that for some classes of groups the entire degree of an irreducible primitive character $\chi$ divides the size of a conjugacy class.
    SubgroupConjugacy classNormal subgroupPrime numberCosetSymplectic formNilpotent groupNilpotentAttentionAutomorphism...