- Pauli matrices, by Prof. Hasan Keleş (16 Feb 2018 07:45)
- Almost disjoint family, by Dr. Jonathan Verner (08 Jan 2018 14:22)
- SSFM, by Emmanouil Markoulakis (19 Dec 2017 15:46)
- Ferrolens, by Emmanouil Markoulakis (19 Dec 2017 15:35)
- Neutrino trident production, by Dr. Oleg Ruchayskiy (21 Jul 2017 18:29)
- Quantum point contact, by Prof. Carlo Beenakker (08 Jan 2011 20:32)
- RKKY interaction, by Dr. Vadim Cheianov (31 Aug 2009 09:28)
- Gravitational lensing, by Prof. Koen Kuijken (05 Dec 2010 22:11)
- Fingers of God, by Dr. Ganna Ivashchenko (18 May 2011 22:42)
- Kaluza-Klein dark matter, by Dr. Geraldine Servant (05 Dec 2010 22:13)

- Although the prime numbers are deterministic, they can be viewed, by some measures, as pseudo-random numbers. In this article, we numerically study the pair statistics of the primes using statistical-mechanical methods, especially the structure factor $S(k)$ in an interval $M \leq p \leq M + L$ with $M$ large, and $L/M$ smaller than unity. We show that the structure factor of the prime-number configurations in such intervals exhibits well-defined Bragg-like peaks along with a small "diffuse" contribution. This indicates that the primes are appreciably more correlated and ordered than previously thought. Our numerical results strongly suggest an explicit formula for the locations and heights of the peaks. This formula predicts infinitely many peaks in any non-zero interval, similar to the behavior of quasicrystals. However, primes differ from quasicrystals in that the ratio between the locations of any two predicted peaks is rational. We also show numerically that the diffuse part decays slowly as $M$ and $L$ increase, suggesting that it vanishes in an appropriate infinite-system-size limit.
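The structure factor described above can be probed in miniature. The sketch below uses a much smaller interval than the paper's, and the two test wavenumbers (a rational multiple of $2\pi$ versus an irrational one) are our own illustrative choices, not the paper's peak formula:

```python
import cmath, math

# Illustrative interval parameters (assumption; far smaller than the paper's).
M, L = 10**6, 10**5

def primes_in(lo, hi):
    # Simple Eratosthenes sieve, returning the primes in [lo, hi).
    sieve = bytearray(b"\x01") * hi
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytes(len(range(i * i, hi, i)))
    return [n for n in range(lo, hi) if sieve[n]]

primes = primes_in(M, M + L)
N = len(primes)

def S(k):
    # Structure factor S(k) = |sum_p exp(i k p)|^2 / N over the interval.
    total = sum(cmath.exp(1j * k * p) for p in primes)
    return abs(total) ** 2 / N

# A Bragg-like peak at the rational wavenumber k = 2*pi/3 (primes in the
# interval avoid multiples of 3, so the phases cluster) versus a generic
# irrational background wavenumber:
peak, background = S(2 * math.pi / 3), S(math.sqrt(2))
```

At $k = 2\pi/3$ the phases concentrate on the two cube roots of unity, so $S(k)$ scales like $N/4$, while at a generic irrational $k$ it stays of order one.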
- This review discusses the experimental and theoretical status of rare flavour-changing neutral current $b$-quark decays as of the beginning of 2018. It includes a discussion of the experimental situation and details of the currently observed anomalies in measurements of flavour observables, including lepton flavour universality. Progress on the theory side, within and beyond the Standard Model, is also discussed, together with potential New Physics interpretations of the present measurements.
- We study the phenomenological constraints and consequences in the flavor sector of introducing a new fourth-generation $\mathcal{Z}_{2}$-odd vector-like lepton doublet along with a Standard Model (SM) singlet scalar and an $SU(2)_{L}$ singlet scalar leptoquark carrying electromagnetic charge $+2/3$, both odd under a $\mathcal{Z}_{2}$. We show that, with a little fine-tuning among the various Yukawa couplings in the new physics (NP) Lagrangian along with the CKM parameters, the model is able to push the theoretical value of $R(D^{*})^{th}$ from $0.252 \pm 0.003$ to $0.263 \pm 0.051$ and $R(D)^{th}$ from $0.300 \pm 0.008$ to $0.313 \pm 0.158$ relative to the SM values. In particular, the NP contributions substantially reduce the discrepancy between experiment and theory for $R(D^{*})$ compared to the SM. This is quite impressive given that the model satisfies all other very stringent constraints coming from neutral meson oscillations and precision $Z$-pole data.
- We address the presently reported significant flavor anomalies in the kaon and $B$ meson systems, such as the CP-violating kaon decay ($\epsilon'/\epsilon$) and lepton-flavor universality violation in $B$ meson decays ($R_{K^{(*)}},{R_{D^{(*)}}}$), by proposing flavorful and chiral vector bosons as the new physics at around the TeV scale. The chiral-flavorful vectors (CFVs) are introduced as a 63-plet of the global $SU(8)$ symmetry, identified as the one-family symmetry for left-handed quarks and leptons in the standard model (SM) forming an 8-dimensional vector. Thus the CFVs include massive gluons, vector leptoquarks, and $W',Z'$-type bosons, which are allowed to have flavorful couplings with left-handed quarks and leptons, and flavor-universal couplings to right-handed ones, where the latter arise from mixing with the SM gauge bosons. The flavor texture is assumed to possess a "minimal" structure consistent with the current flavor measurements on the $K$ and $B$ systems: thus the current $K$ and $B$ anomalies can simultaneously be interpreted by the presence of CFVs. Remarkably, we find that as long as both the $\epsilon'/\epsilon$ and $B$ anomalies persist beyond the SM, the CFVs predict enhanced $K^+ \to \pi^+ \nu \bar{\nu}$ and $K_L \to \pi^0 \nu \bar{\nu}$ decay rates compared to the SM values, which will readily be explored by the NA62 and KOTO experiments, as well as in new resonance searches at the Large Hadron Collider.
- Very recently the Belle Collaboration reported the observation of a narrow state called $\Omega(2012)$ with mass [$2012.4 \pm 0.7$~(stat)~$\pm 0.6$~(syst)]~MeV and width [$6.4^{+2.5}_{-2.0}$~(stat)~$\pm 1.6$~(syst)]~MeV. We calculate the mass and residue of the $\Omega(2012)$ state by employing the QCD sum rule method. Comparison of the obtained results with the experimental data allows us to interpret this state as the $1P$ orbital excitation of the ground-state $\Omega$ baryon, i.e. a state with quantum numbers $J^P=\frac{3}{2}^-$.
- The first measurement of the lifetime of the doubly charmed baryon $\Xi_{cc}^{++}$ is presented, with the signal reconstructed in the final state $\Lambda_c^+ K^- \pi^+ \pi^+$. The data sample used corresponds to an integrated luminosity of $1.7\,\mathrm{fb}^{-1}$, collected by the LHCb experiment in proton-proton collisions at a centre-of-mass energy of $13\,\mathrm{TeV}$. The $\Xi_{cc}^{++}$ lifetime is measured to be $0.256\,^{+0.024}_{-0.022}\,\mathrm{(stat)} \pm 0.014\,\mathrm{(syst)}\,\mathrm{ps}$.
- The reactor antineutrino anomaly might be explained by the oscillation of reactor antineutrinos towards a sterile neutrino of eV mass. To explore this hypothesis, the STEREO experiment measures the antineutrino energy spectrum in six different detector cells covering baselines between 9 and 11 meters from the compact core of the ILL research reactor. In this article, results from 66 days of reactor-on and 138 days of reactor-off data are reported. A novel method to extract the antineutrino rates has been developed, based on the distribution of the pulse shape discrimination parameter. The test of oscillation toward a sterile neutrino is performed by comparing ratios of cells, independent of the absolute normalization and of the prediction of the reactor spectrum. The results are found to be compatible with the null oscillation hypothesis, and the best fit of the reactor antineutrino anomaly is excluded at 97.5\% C.L.
- Lepton number violating processes can be induced by Majorana neutrino exchange, and would provide evidence for the Majorana nature of neutrinos. In addition to its natural explanation of the small neutrino masses, the Type-I seesaw mechanism predicts the existence of Majorana neutrinos. The aim of this work is to study the rare $B$ meson decays $B^{+} \to K^{(*)+}\mu^+\mu^-$ in the standard model and its extensions, and then to investigate the same-sign decay processes $B^{+}\to K^{(*)-}\mu^{+}\mu^+$. The corresponding dilepton invariant mass distributions are predicted. It is found that the dilepton angular distributions illustrate the properties of the new interactions induced by the Majorana neutrinos.
- The significance of the observed tensions in the angular observables in $B \to K^* \mu^+ \mu^-$ depends on the theory estimation of the hadronic contributions to these decays. We therefore discuss in detail the various available approaches for taking into account the long-distance hadronic effects and examine how the different estimations of these contributions result in distinct significances for the new physics interpretation of the observed anomalies. Furthermore, besides the various theory estimations of the non-factorisable contributions, we consider a general parameterisation which is fully consistent with the analyticity structure of the amplitudes. We make a statistical comparison to determine whether the most favoured explanation of the anomalies is new physics or underestimated hadronic effects within this general parametrisation. Moreover, assuming the source of the anomalies to be new physics, there is a priori no reason to believe that, in the effective field theory language, only one type of operator is responsible for the tensions. We thus perform a global fit in which all the Wilson coefficients that can effectively receive new physics contributions are considered, allowing for lepton flavour universality breaking effects as well as contributions from chirality-flipped and scalar and pseudoscalar operators.
- We show that stringent limits on leptoquarks that couple to first-generation quarks and left-handed electrons or muons can be derived from the spectral shape of the charged-current Drell-Yan process ($p p \to \ell^\pm \nu$) at Run 2 of the LHC. We identify and examine all six leptoquark species that can generate such a monolepton signal, including both scalar and vector leptoquarks, and find cases where the leptoquark exchange interferes constructively, destructively or not at all with the Standard Model signal. When combined with the corresponding leptoquark-mediated neutral-current ($p p \to \ell^+ \ell^-$) process, we find the most stringent limits obtained to date, outperforming bounds from pair production and atomic parity violation. We show that, with 3000 fb$^{-1}$ of data, combined measurements of the transverse mass in $p p \to \ell^\pm \nu$ events and invariant mass in $p p \to \ell^+ \ell^-$ events can probe masses between 8 TeV and 18 TeV, depending on the species of leptoquark, for electroweak-sized couplings. In light of such robust sensitivities, we strongly encourage the LHC experiments to interpret Drell-Yan (dilepton and monolepton) events in terms of leptoquarks, alongside usual scenarios like $Z'$ bosons and contact interactions.
- We examine the parameter space of supersymmetric models with $R$-parity violating interactions of the form $\lambda' L Q D^c$ to explain the various anomalies observed in $b \rightarrow s \ell \ell$ transitions. To generate the appropriate operator in the low energy theory, we are led to a region of parameter space where loop contributions dominate. In particular, we concentrate on parameters for which diagrams involving winos, which have not been previously considered, give large contributions. Many different potentially constraining processes are analyzed, including $\tau \rightarrow \mu \mu \mu$, $B_s-\bar{B}_s$ mixing, $B \rightarrow K^{(*)} \nu \bar{\nu}$, $Z$ decays to charged leptons, and direct LHC searches. We find that it is possible to explain the anomalies, but it requires large values of $\lambda'$, which lead to relatively low Landau poles.
- When a new heavy particle is discovered at the LHC, it will be interesting to study its decays into Standard Model particles using an effective field-theory framework. We point out that the proper effective theory cannot be constructed as an expansion in local, higher-dimensional operators; rather, it must be based on non-local operators defined in soft-collinear effective theory (SCET). For the interesting case where the new resonance is a gauge-singlet spin-0 boson, which is the first member of a new sector governed by a mass scale $M$, we show how a consistent scale separation between $M$ and the electroweak scale $v$ is achieved up to next-to-next-to-leading order in the expansion parameter $\lambda\sim v/M$. The Wilson coefficients in the effective Lagrangian depend in a non-trivial way on the mass of the new resonance and the masses of yet undiscovered heavy particles. Large logarithms of the ratio $M/v$ can be systematically resummed using the renormalization group. We develop a SCET toolbox with which it is straightforward to construct the relevant effective Lagrangians for new heavy particles with other charges and spins.
- In the light of the recent discovery of the $\Xi^{++}_{cc}$ by the LHCb collaboration, we study stable doubly heavy tetraquarks. These states are compact exotic hadrons which can be approximated as diquark-antidiquark correlations. Under flavor SU(3) symmetry, they form an SU(3) triplet or anti-sextet. The spectra of the stable doubly heavy tetraquark states are predicted by the Sakharov-Zeldovich formula. We find that the $T^+_{cc\bar{u}\bar{d}}({\bf 3})$ lies about 16 MeV below the $DD^*$ threshold, while the $T^{-}_{bb\bar{u}\bar{d}}({\bf 3})$ lies about 73 MeV below the $BB^*$ threshold. We then study the semileptonic and nonleptonic weak decays of the stable doubly heavy tetraquark states. The decay amplitudes are parametrized in terms of SU(3)-irreducible amplitudes. Ratios between decay widths of different channels are also derived. Finally, we collect the Cabibbo-allowed two-body and three-body decay channels, which are the most promising in which to search for the stable doubly heavy tetraquark states at the LHCb and Belle II experiments.
- There are four models of new physics (NP) that can potentially simultaneously explain the $b \to s \mu^+ \mu^-$ and $b \to c \tau^- {\bar\nu}$ anomalies. They are the S3, U3 and U1 leptoquarks (LQs), and a triplet of standard-model-like vector bosons (VBs). Under the theoretical assumption that the NP couples only to the third generation in the weak basis, previous analyses found that, when constraints from other processes are taken into account, the S3, U3 and VB models cannot explain the $B$ anomalies, but U1 is viable. In this paper, we reanalyze these models without any assumption about their couplings. We find that, even in this most general case, S3 and U3 are excluded. For the U1 model, we find that constraints from the semileptonic lepton-flavour-violating (LFV) processes $B \to K \mu \tau$, $\tau \to \mu \phi$ and $\Upsilon \to \mu \tau$, which have been largely ignored previously, are very important. Because of the LFV constraints, the pattern of couplings of the U1 LQ is similar to that obtained with the above theoretical assumption. The LFV constraints also render unimportant those constraints obtained using the renormalization group equations. As for the VB model, under the above theoretical assumption it is excluded by the additional constraints from $B_s^0$-${\bar B}_s^0$ mixing, $\tau\to 3\mu$ and $\tau \to \mu \nu {\bar\nu}$. By contrast, we find a different set of NP couplings that both explains the $b \to s \mu^+ \mu^-$ anomaly and is compatible with all constraints. However, it does not reproduce the measured values of the $b \to c \tau^- {\bar\nu}$ anomalies; it would be viable only if future measurements find that the central values of these anomalies are reduced. Even so, this VB model is excluded by the LHC bounds on high-mass resonant dimuon pairs, a conclusion reached without any assumptions about the NP couplings.
- Measurements of the $R_{D^*}\equiv\mathrm{Br}(B\rightarrow \tau\nu D^*)/\mathrm{Br}(B\rightarrow e\nu D^*)$ parameter remain in tension with the standard model prediction, despite recent results helping to close the gap. The standard model prediction with which it is compared treats the $D^*$ as an external particle, even though what is detected in experiments is the $D\pi$ pair it decays into, from which it is reconstructed. We argue that the experimental result must be compared with the theoretical prediction for the full 4-body decay $(B\rightarrow l\nu D^* \to l\nu D\pi)$. We show that the longitudinal degree of freedom of the off-shell $D^*$ helps to further close the disagreement gap with the experimental data. We find values for the ratio $R_{D\pi}^l \equiv {\mathrm{Br}(B\rightarrow \tau\nu_\tau D \pi)}/\mathrm{Br}(B\rightarrow l\nu_l D\pi)$ of $R_{D\pi}^e=0.271\pm0.003$ and $R_{D\pi}^\mu=0.273\pm0.003$, where the uncertainty comes from the uncertainty of the form-factor parameters. Comparing against $R_{D\pi}$ reduces the gap with the latest LHCb result from $0.94\sigma$ to $0.37\sigma$, while the gap with the latest Belle result is reduced from $0.40\sigma$ to just $0.04\sigma$, and with the world average from $3.4\sigma$ to $2.2\sigma$.
- The equations of motion for the Standard Model Effective Field Theory (SMEFT) differ from those in the Standard Model. Corrections due to local contact operators modify the equations of motion and impact matching results at sub-leading order in the operator expansion. As a consequence, a matching coefficient in $\mathcal{L}^{(n)}$ (for operators of dimension $n$) can be dependent on the basis choice for $\mathcal{L}^{(m<n)}$. We report the SMEFT equations of motion with corrections due to $\mathcal{L}^{(5,6)}$. We demonstrate the effect of these corrections when matching to sub-leading order by considering the interpretation of recently reported $B \to K^{(*)} \ell^+ \ell^-$ lepton universality anomalies in the SMEFT.
- We address the $B$-physics anomalies within a two scalar leptoquark model. The low-energy flavor structure of our set-up originates from two $SU(5)$ operators that relate Yukawa couplings of the two leptoquarks. The proposed scenario has a UV completion, can accommodate all measured lepton flavor universality ratios in $B$-meson decays, is consistent with related flavor observables, and is compatible with direct searches at the LHC. We provide prospects for future discoveries of the two light leptoquarks at the LHC and predict several yet-to-be-measured flavor observables.
- A partition of a positive integer $n$ is a representation of $n$ as a sum of a finite number of positive integers (called parts). A trapezoidal number is a positive integer that has a partition whose parts are a decreasing sequence of consecutive integers, or, more generally, whose parts form a finite arithmetic progression. This paper reviews the relation between trapezoidal numbers, partitions, and the set of divisors of a positive integer. There is also a complete proof of a theorem of Sylvester that produces a stratification of the partitions of an integer into odd parts and partitions into disjoint trapezoids.
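The divisor relation the review describes can be checked numerically for the common-difference-1 case: the number of ways to write $n$ as a sum of consecutive positive integers equals its number of odd divisors. A small sketch (function names are our own):

```python
def consecutive_runs(n):
    # All ways to write n as a sum of k >= 1 consecutive positive integers:
    # a + (a+1) + ... + (a+k-1) = k*a + k*(k-1)/2, so k must divide
    # n - k*(k-1)/2 with a positive quotient.
    runs, k = [], 1
    while k * (k + 1) // 2 <= n:
        rem = n - k * (k - 1) // 2
        if rem % k == 0:
            a = rem // k
            runs.append(list(range(a, a + k)))
        k += 1
    return runs

def odd_divisors(n):
    return [d for d in range(1, n + 1, 2) if n % d == 0]
```

For example, `consecutive_runs(15)` yields `[15]`, `[7, 8]`, `[4, 5, 6]` and `[1, 2, 3, 4, 5]`, matching the four odd divisors 1, 3, 5, 15.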
- The special features of CP violation in the Standard Model are presented. The significance of measuring CP violation in B, K and D decays is explained. The predictions of the Standard Model for CP asymmetries in B decays are analyzed in detail. Then, four frameworks of new physics are reviewed: (i) Supersymmetry provides an excellent demonstration of the power of CP violation as a probe of new physics. (ii) Left-right symmetric models are discussed as an example of an extension of the gauge sector. CP violation suggests that the scale of LRS breaking is low. (iii) The variety of extensions of the scalar sector are presented and their unique CP violating signatures are emphasized. (iv) Vector-like down quarks are presented as an example of an extension of the fermion sector. Their implications for CP asymmetries in B decays are highly interesting.
- There is growing evidence for deviation from the standard model predictions in the ratios between semi-tauonic and semi-leptonic $B$ decays, known as the $R(D^{(*)})$ puzzle. If the source of this non-universality is new physics, it is natural to assume that it also breaks CP symmetry. In this paper we study the possibility of measuring CP violation in semi-tauonic $B$ decays, exploiting interference between excited charm mesons. Given the current values of $R(D^{(*)})$, we find that our proposed CP-violation observable could be as large as about 10%. We discuss the experimental advantages of our method and propose carrying it out at Belle II and LHCb.
- We perform a flavour $SU(3)$ analysis of the recently discovered $\Omega(2012)$ hyperon. We find that the well-known (four-star) $\Delta(1700)$ resonance with quantum numbers $J^P=3/2^-$ is a good candidate for the decuplet partner of $\Omega(2012)$, provided the branching for the three-body decays of the latter is not too large ($\le 70\%$). That implies that the quantum numbers of $\Omega(2012)$ are $I(J^P)=0(3/2^-)$. Predictions for the properties of the still missing $\Sigma$ and $\Xi$ decuplet members are made. We also discuss the implications of the ${\overline{K} \Xi(1530)}$ molecular picture of $\Omega(2012)$, in which the $\Omega(2012)$ state can have quantum numbers of either $I(J^P)=0(3/2^-)$ or $I(J^P)=1(3/2^-)$. Crucial experimental tests to distinguish the various pictures of $\Omega(2012)$ are suggested.
- We investigate the reach of the LHC to probe light sterile neutrinos with displaced vertices. We focus on sterile neutrinos $N$ with masses $m_{N} \sim$ (5-30) GeV that are produced in rare decays of the Standard Model gauge bosons and decay inside the inner trackers of the LHC detectors. With a strategy that triggers on the prompt lepton accompanying the $N$ displaced vertex and considers the charged tracks associated with it, we show that the 13 TeV LHC with $3000$/fb is able to probe active-sterile neutrino mixings down to $|V_{lN}|^2\approx 10^{-9}$, with $l=e,\mu$, an improvement of up to four orders of magnitude over current experimental limits from trilepton and proposed lepton-jet searches. When $\tau$ mixing is present, mixing angles as low as $|V_{\tau N}|^2 \approx 10^{-8}$ can be accessed.
- The angular distribution of the flavor-changing neutral current decay B$^+$$\to$ K$^+\mu^+\mu^-$ is studied in proton-proton collisions at a center-of-mass energy of 8 TeV. The analysis is based on data collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 20.5 fb$^{-1}$. The forward-backward asymmetry $A_{\mathrm{FB}}$ of the dimuon system and the contribution $F_{\mathrm{H}}$ from the pseudoscalar, scalar, and tensor amplitudes to the decay width are measured as a function of the dimuon mass squared. The measurements are consistent with the standard model expectations.
- Massive QCD at $\theta=\pi$ breaks CP spontaneously and admits domain walls whose dynamics and phases depend on the number of flavors and their masses. We discuss these issues within the Witten-Sakai-Sugimoto model of holographic QCD. Besides showing that this model reproduces all QCD expectations, we address two interesting claims in the literature. The first is about the possibility that the QCD domain-wall theory is fully captured by three-dimensional physics, only. The second regards the existence of quantum phases in certain Chern-Simons theories coupled to fundamental matter. Both claims are supported by the string theory construction.
- We investigate an interesting correlation among dark matter phenomenology, neutrino mass generation and GUT baryogenesis, based on the scotogenic model. The model contains additional right-handed neutrinos $N$ and a second Higgs doublet $\Phi$, both of which are odd under an imposed $Z_2$ symmetry. The neutral component of $\Phi$, i.e. the lightest of the $Z_2$-odd particles, is the dark matter candidate. Due to a Yukawa coupling involving $\Phi$, $N$ and the Standard Model leptons, the lepton asymmetry is converted into the dark matter asymmetry so that a non-vanishing $B-L$ asymmetry can arise from $(B-L)$-conserving GUT baryogenesis, leading to a nonzero baryon asymmetry after the sphalerons decouple. On the other hand, $\Phi$ can also generate neutrino masses radiatively. In other words, the existence of $\Phi$ as the dark matter candidate resuscitates GUT baryogenesis and realizes neutrino masses.
- Future 5G systems will need to support ultra-reliable low-latency communications scenarios. From a latency-reliability viewpoint, it is inefficient to rely on average utility-based system design. We therefore introduce the notion of guaranteeable delay, the average delay plus three standard deviations. We investigate the trade-off between guaranteeable delay and throughput for point-to-point wireless erasure links with unreliable and delayed feedback, by bringing signal-flow techniques to the area of coding. We use tiny codes, i.e. sliding-window coding with just 2 packets, and design three variations of selective-repeat ARQ protocols, building on the baseline uncoded ARQ scheme developed by Ausavapattanakun and Nosratinia: (i) Hybrid ARQ with soft combining at the receiver; (ii) cumulative feedback-based ARQ without rate adaptation; and (iii) Coded ARQ with rate adaptation based on the cumulative feedback. Contrasting the performance of these protocols with uncoded ARQ, we demonstrate that HARQ performs only slightly better, cumulative feedback-based ARQ does not provide significant throughput gains though it has better average delay, and Coded ARQ can provide gains of up to about 40% in throughput. Coded ARQ also provides delay guarantees and is robust to various challenges such as imperfect and delayed feedback, burst erasures, and round-trip time fluctuations. These features may be preferable for meeting the strict end-to-end latency and reliability requirements of future ultra-reliable low-latency 5G use cases, such as mission-critical communications and industrial control.
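The notion of guaranteeable delay (mean plus three standard deviations) is easy to evaluate from simulated per-packet delays. A toy sketch for a plain uncoded ARQ over an erasure link; the parameters, the stop-and-wait delay model, and the use of the population standard deviation are our illustrative assumptions, not the paper's model:

```python
import random
import statistics

def guaranteeable_delay(delays):
    # The paper's notion: average delay plus three standard deviations.
    return statistics.mean(delays) + 3 * statistics.pstdev(delays)

def uncoded_arq_delays(eps, rtt, n, seed=0):
    # Delivery delay of each of n packets over an erasure link with erasure
    # probability eps: retransmit until success, one round-trip per attempt.
    rng = random.Random(seed)
    delays = []
    for _ in range(n):
        attempts = 1
        while rng.random() < eps:
            attempts += 1
        delays.append(attempts * rtt)
    return delays

delays = uncoded_arq_delays(eps=0.1, rtt=1.0, n=100_000)
gd = guaranteeable_delay(delays)
```

With a 10% erasure rate, the attempt count is geometric (mean about 1.11 RTT, standard deviation about 0.35 RTT), so the guaranteeable delay sits near 2.2 RTT even though the average delay is close to 1.1 RTT, which is exactly why a mean-only design understates tail latency.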
- We introduce the zip tree, a form of randomized binary search tree. One can view a zip tree as a treap (Seidel and Aragon 1996) in which priority ties are allowed and in which insertions and deletions are done by unmerging and merging paths ("unzipping" and "zipping") rather than by doing rotations. Alternatively, one can view a zip tree as a binary-tree representation of a skip list (Pugh 1990). Doing insertions and deletions by unzipping and zipping instead of by doing rotations avoids some pointer changes and can thereby improve efficiency. Representing a skip list as a binary tree avoids the need for nodes of different sizes and can speed up searches and updates. Zip trees are at least as simple as treaps and skip lists but offer improved efficiency. Their simplicity makes them especially amenable to concurrent operations.
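Insertion by unzipping can be sketched as follows: each node gets a geometric random rank (as in skip-list levels), the search path is followed while existing nodes outrank the new node (rank ties broken by key), and the remaining path is split ("unzipped") into the new node's two subtrees. A minimal Python version for distinct keys, following the description above and not the authors' reference code:

```python
import random

class Node:
    __slots__ = ("key", "rank", "left", "right")
    def __init__(self, key, rank):
        self.key, self.rank = key, rank
        self.left = self.right = None

def random_rank(rng):
    # Geometric rank: number of consecutive "heads", like a skip-list level.
    r = 0
    while rng.random() < 0.5:
        r += 1
    return r

def unzip(cur, key):
    # Split the search path below cur into a tree of keys < key and
    # a tree of keys > key, reusing the existing nodes.
    if cur is None:
        return None, None
    if cur.key < key:
        small, big = unzip(cur.right, key)
        cur.right = small
        return cur, big
    small, big = unzip(cur.left, key)
    cur.left = big
    return small, cur

def insert(root, key, rng):
    rank = random_rank(rng)
    node = Node(key, rank)
    cur, prev = root, None
    # Descend while the current node outranks the new one (ties go to the
    # node with the smaller key), then unzip the rest of the path.
    while cur and (cur.rank > rank or (cur.rank == rank and cur.key < key)):
        prev = cur
        cur = cur.left if key < cur.key else cur.right
    node.left, node.right = unzip(cur, key)
    if prev is None:
        return node
    if key < prev.key:
        prev.left = node
    else:
        prev.right = node
    return root

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

# Example: build a zip tree over shuffled keys.
rng = random.Random(1)
keys = list(range(100))
rng.shuffle(keys)
root = None
for k in keys:
    root = insert(root, k, rng)
```

No rotations are performed: the only pointer changes are along the unzipped path, which is the efficiency point the abstract makes.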
- The hulls of linear and cyclic codes over finite fields have been of interest and extensively studied due to their wide applications. In this paper, we focus on the hulls of cyclic codes of length $n$ over the ring $\mathbb{Z}_4$. Their characterization is established in terms of their generators, viewed as ideals in the quotient ring $\mathbb{Z}_4[x]/\langle x^n-1\rangle$. An algorithm for computing the types of the hulls of cyclic codes of arbitrary odd length over $\mathbb{Z}_4$ is given. The average $2$-dimension $E(n)$ of the hulls of cyclic codes of odd length $n$ over $\mathbb{Z}_4$ is determined: a general formula for $E(n)$ is provided together with upper and lower bounds. It turns out that $E(n)$ grows at the same rate as $n$.
- Cyclic codes are an important class of linear codes whose weight distributions have been extensively studied. Most previous results were for cyclic codes with no more than three zeroes. Inspired by the works \cite{Li-Zeng-Hu} and \cite{gegeng2}, we study two families of cyclic codes over $\mathbb{F}_p$ with an arbitrary number of zeroes of generalized Niho type: the first family (for $p=2$) with $t+1$ zeroes, and the second (for any prime $p$) with $t$ zeroes, for any $t$. We find that the first family has at most $2t+1$ non-zero weights, and the second at most $2t$. Their weight distributions are also determined in the paper.
- We devise a unified framework for the design of canonization algorithms. Using hereditarily finite sets, we define a general notion of combinatorial objects that includes graphs, hypergraphs, relational structures, codes, permutation groups, tree decompositions, and so on. Our approach allows for a systematic transfer of the techniques that have been developed for isomorphism testing to canonization. We use it to design a canonization algorithm for general combinatorial objects. This yields new fastest canonization algorithms, with asymptotic running times matching the best known isomorphism algorithms, for the following types of objects: hypergraphs, hypergraphs of bounded color class size, permutation groups (up to permutational isomorphism), and explicitly given codes (up to code equivalence).
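The difference between isomorphism testing and canonization can be illustrated on graphs: a canonical form is a single representative that comes out identical for all isomorphic copies, so equality of canonical forms decides isomorphism. A brute-force Python sketch (exponential in the number of vertices, for intuition only; this is not the paper's algorithm):

```python
from itertools import permutations

def canonical_form(n, edges):
    """Lexicographically smallest relabeled edge list of an n-vertex
    undirected graph, minimized over all vertex permutations."""
    es = [tuple(e) for e in edges]
    best = None
    for perm in permutations(range(n)):
        relabeled = sorted(tuple(sorted((perm[u], perm[v]))) for u, v in es)
        if best is None or relabeled < best:
            best = relabeled
    return best
```

Two isomorphic graphs map to the same canonical form, and non-isomorphic ones to different forms; the paper's contribution is doing this efficiently for far more general objects.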
- We describe a replacement for RAID 6 based on a new linear, systematic code that detects and corrects any combination of $E$ errors (unknown location) and $Z$ erasures (known location) provided that $Z+2E \leq 4$. We investigate some scenarios for error correction beyond the code's minimum distance using list decoding. We describe a decoding algorithm with quasi-logarithmic time complexity when parallel processing is used, $\approx O(\log N)$, where $N$ is the number of disks in the array (similar to RAID 6). By comparison, the error-correcting code implemented by RAID 6 allows error detection and correction only when $(E,Z)=(1,0)$, $(0,1)$, or $(0,2)$. Hence, when in degraded mode (i.e., when $Z \geq 1$), RAID 6 loses its ability to detect and correct random errors, a failure mode known as silent data corruption. In contrast, the proposed code does not experience silent data corruption unless $Z \geq 3$. These properties, the relative simplicity of implementation, the vastly improved data protection, and the low computational complexity of the decoding algorithm make this code a natural successor to RAID 6. Since the proposed code is based on quintuple parity, we propose to call it PentaRAID.
- In this paper, we compute and verify the positivity of the Li coefficients for the Dirichlet $L$-functions using an arithmetic formula established in Omar and Mazhouda, J. Number Theory 125 (2007) no.1, 50-58; J. Number Theory 130 (2010) no.4, 1109-1114. Furthermore, we formulate a criterion for the partial Riemann hypothesis and we provide some numerical evidence for it using new formulas for the Li coefficients.
- For a balanced card-counting system we study the random variable given by the true count after a number of cards have been removed from the remaining deck, and we prove a closed formula for its standard deviation. As expected, the formula shows that the standard deviation increases with the number of cards removed. This creates a "standard deviation effect" with a twofold consequence: a longer long run and presumably larger fluctuations of the bankroll, but a small gain in playing accuracy for the player sitting at third base. The opposite happens for the player sitting at first base. Thus the optimal position in casino blackjack, in terms of a shorter long run, is first base.
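The qualitative claim, that the spread of the true count grows with the number of cards removed, can be checked with a standard hypergeometric-variance computation. The formula below is our own derivation for a six-deck Hi-Lo shoe, not necessarily the closed formula proved in the paper:

```python
import math

def true_count_std(k, n_decks=6):
    """Standard deviation of the Hi-Lo true count after k cards are drawn
    without replacement from a full, balanced shoe.

    Hi-Lo assigns +1 to 20 cards per deck (2-6), -1 to 20 cards (T,J,Q,K,A)
    and 0 to the rest, so E[v] = 0 and E[v^2] = 40/52 per card.  The running
    count is a sum of k draws without replacement, hence
        Var(RC) = k * E[v^2] * (N - k) / (N - 1),
    and the true count divides RC by the number of decks remaining.
    """
    N = 52 * n_decks
    var_card = 40 / 52
    var_rc = k * var_card * (N - k) / (N - 1)
    decks_left = (N - k) / 52
    return math.sqrt(var_rc) / decks_left
```

Since the resulting expression is proportional to $\sqrt{k/(N-k)}$, it is strictly increasing in the number of removed cards $k$, which is exactly the "standard deviation effect" described above.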
- This is an article written in a popular science style, in which I will explain: (1) the famous Heisenberg uncertainty principle, expressing the experimental incompatibility of certain properties of micro-physical entities; (2) the Compton effect, describing the interaction of an electromagnetic wave with a particle; (3) the reasons behind Bohr's complementarity principle, which will be understood as a principle of incompatibility; (4) the Einstein, Podolsky and Rosen reality (or existence) criterion, and its subsequent revisitation by Piron and Aerts; (5) the mysterious non-spatiality of quantum entities of a microscopic nature, usually referred to as non-locality. This didactical text requires no particular technical knowledge to be read and understood, although the reader will have to do her/his part, as conceptually speaking the discussion can become at times a little subtle. The text has been written having in mind one of the objectives of the Center Leo Apostel for Interdisciplinary Studies (CLEA): that of a broad dissemination of scientific knowledge. However, as it also presents notions that are not generally well-known, or well-understood, among professional physicists, its reading may also be beneficial to them.
- I feel compelled to respond to the frequent references to spooky action at a distance that often accompany reports of experiments investigating entangled quantum mechanical states. Most, but not all, of these articles have appeared in the popular press. As an experimentalist I have great admiration for such experiments and the concomitant advances in quantum information and quantum computing, but accompanying claims of action at a distance are quite simply nonsense. Some physicists and philosophers of science have bought into the story by promoting the nonlocal nature of quantum mechanics. In 1964, John Bell proved that classical hidden variable theories cannot reproduce the predictions of quantum mechanics unless they employ some type of action at a distance. I have no problem with this conclusion. Unfortunately, Bell later expanded his analysis and mistakenly deduced that quantum mechanics and by implication nature herself are nonlocal. In addition, some of these articles present Einstein in caricature, a tragic figure who neither understood quantum mechanics nor believed it to be an accurate theory of nature. Consequently, the current experiments have proven him wrong. This is also nonsense.
- Euler showed that there are infinitely many triangular numbers that are three times another triangular number. In general, as we prove, it is an easy consequence of the Pell equation that for a given square-free $m > 1$, the relation $D = mD'$ is satisfied by infinitely many pairs of triangular numbers $D, D'$. However, due to the erratic behavior of the fundamental solution of the Pell equation, this problem is more difficult for more general polygonal numbers. We show that if one solution exists, then infinitely many exist; we also give an example showing that there are cases where no solution exists. Finally, we show that, given $m > n > 1$ (with obvious exceptions), the simultaneous relations $P = mP'$, $P = nP''$ have only finitely many solutions, not just for triangular numbers but for triplets $P, P', P''$ of polygonal numbers.
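Euler's special case $m = 3$ is easy to explore by brute force. The helper below is our own illustration, using the fact that $t$ is triangular iff $8t+1$ is a perfect square; the index pairs it finds, $(2,1), (9,5), (35,20), \dots$, are the start of the infinite Pell-generated family:

```python
import math

def is_triangular(t):
    # t = n(n+1)/2  <=>  8t + 1 is a perfect square
    r = math.isqrt(8 * t + 1)
    return r * r == 8 * t + 1

def index_of(t):
    # Inverse of T_n = n(n+1)/2 for a triangular number t.
    return (math.isqrt(8 * t + 1) - 1) // 2

def pairs_multiple(m, limit):
    """Brute-force index pairs (a, b) with T_a = m * T_b, for a < limit."""
    out = []
    for a in range(1, limit):
        Ta = a * (a + 1) // 2
        if Ta % m == 0 and is_triangular(Ta // m):
            out.append((a, index_of(Ta // m)))
    return out
```

For example, $T_9 = 45 = 3 \cdot 15 = 3\,T_5$ and $T_{35} = 630 = 3\,T_{20}$.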
- By means of $q$-series, we prove that any natural number is a sum of an even square and two triangular numbers, and that each positive integer is a sum of a triangular number plus $x^2+y^2$ for some integers $x$ and $y$ with $x\not\equiv y \pmod 2$ or $x=y>0$. The paper also contains some other results and open conjectures on mixed sums of squares and triangular numbers.
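The first claim, that every natural number is an even square plus two triangular numbers, is easy to verify numerically for small $n$. A brute-force check (our own sketch; the paper's actual proof is via $q$-series identities):

```python
def triangulars_upto(n):
    # All triangular numbers T_a = a(a+1)/2 with T_a <= n.
    ts, a = [], 0
    while a * (a + 1) // 2 <= n:
        ts.append(a * (a + 1) // 2)
        a += 1
    return ts

def has_decomposition(n):
    """True if n = (2k)^2 + T_a + T_b for some k, a, b >= 0."""
    ts = triangulars_upto(n)
    tset = set(ts)
    k = 0
    while (2 * k) ** 2 <= n:
        rest = n - (2 * k) ** 2
        for t in ts:
            if t > rest:
                break
            if rest - t in tset:
                return True
        k += 1
    return False
```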
- Reminiscences about I. M. Gel'fand on the 100th anniversary of his birth, and about mathematical life in Moscow in the former Soviet Union.
- Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method. We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations.
- Word embeddings are real-valued word representations able to capture lexical semantics, trained on natural language corpora. Models producing these representations have gained popularity in recent years, but the question of the most adequate evaluation method remains open. This paper presents an extensive overview of the field of word embedding evaluation, highlighting the main problems and proposing a typology of approaches to evaluation, summarizing 16 intrinsic and 12 extrinsic methods. I describe both widely used and experimental methods, systematize information about evaluation datasets, and discuss some key challenges.
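A typical intrinsic method covered by such overviews is the word-analogy test ("a is to b as c is to ?", scored by the 3CosAdd rule). A minimal sketch with hand-crafted toy vectors, not a trained model; the names below are our own illustration:

```python
def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def analogy(emb, a, b, c):
    """3CosAdd: return the vocabulary word closest to b - a + c,
    excluding the three query words themselves."""
    target = [y - x + z for x, y, z in zip(emb[a], emb[b], emb[c])]
    return max((w for w in emb if w not in (a, b, c)),
               key=lambda w: cosine(emb[w], target))

# Toy vectors chosen so that king - man + woman equals queen exactly.
toy = {"man": (1, 0, 0), "woman": (0, 1, 0),
       "king": (1, 0, 1), "queen": (0, 1, 1), "apple": (0, 0, -1)}
```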
- Naturalness is an extra-empirical quality that aims to assess the plausibility of a theory. Fine-tuning measures are one way to quantify it; however, rigor appears to require knowing statistical distributions on parameters, and such meta-theories are not yet known. A critical discussion of these issues is presented, including their possible resolution at fixed points. Skepticism of naturalness's utility remains credible, as does skepticism toward any extra-empirical theory assessment (SEETA) that claims to identify "more correct" theories among those that are equally empirically adequate. Applied to naturalness, SEETA implies that one must accept all concordant theory points as a priori equally plausible, with the practical implication that a theory can never have its plausibility status diminished even by a massive reduction of its viable parameter space, as long as a single theory point survives. A second implication of SEETA is that only falsifiable theories allow their plausibility status to change, but only after discovery or after null experiments with total theory coverage. A third implication is that theory preference then becomes not about which theory is more correct but about which is practically more advantageous: fewer parameters, easier calculations, or new experimental signatures to pursue.
- In this work we analyze the performance of two of the most widely used word embedding algorithms, skip-gram and continuous bag-of-words, on the Italian language. These algorithms have many hyper-parameters that have to be carefully tuned in order to obtain accurate word representations in vector space. We provide a detailed analysis and evaluation, showing the best parameter configurations for specific tasks.
- This paper presents a method of universal coding based on the Narayana series. The rules necessary to make such coding possible are established, and the length of the resulting code is shown to follow the Narayana count.
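The paper's exact coding rules are not reproduced here, but the underlying idea, representing integers over the Narayana sequence $a(n) = a(n-1) + a(n-3)$ in Zeckendorf-like fashion (analogous to Fibonacci coding), can be sketched as follows. This is a greedy-representation sketch of our own; the actual codewords would be bit strings derived from such a representation:

```python
def narayana_upto(limit):
    # Narayana's cows sequence: 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, ...
    seq = [1, 1, 1]
    while seq[-1] + seq[-3] <= limit:
        seq.append(seq[-1] + seq[-3])
    return seq

def narayana_parts(n):
    """Greedy representation of n >= 1 as a sum of distinct Narayana numbers."""
    parts = []
    for v in sorted(set(narayana_upto(n)), reverse=True):
        if v <= n:
            parts.append(v)
            n -= v
    return parts
```

For example, `narayana_parts(17)` returns `[13, 4]`, since $17 = 13 + 4$ with both terms in the sequence.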
- The Newcomb-Benford Law, also called the first-digit phenomenon, has applications in phenomena ranging from social and computer networks to engineering systems, the natural sciences, and accounting. In forensics, it has been used to detect intrusion in a computer server, based on the measured expectations of the first digits of time-varying data values, and to check whether the information in a database has been tampered with. There are slight deviations from the law in certain natural data, as in the fundamental physical constants. Here we propose a more general bin distribution, of which the Newcomb-Benford Law is a special case, that can provide a better fit to such data and also opens the door to a mathematical examination of the origins of such deviations.
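The law itself is easy to state and test: the leading digit $d$ appears with probability $\log_{10}(1 + 1/d)$. A quick empirical check against the powers of 2, a classic Benford-distributed sequence (our own illustration, not the paper's generalized bin distribution):

```python
import math

def benford_expected(d):
    # Probability of leading digit d under the Newcomb-Benford Law.
    return math.log10(1 + 1 / d)

def first_digit_freqs(values):
    counts = [0] * 10
    for v in values:
        counts[int(str(v)[0])] += 1
    total = len(values)
    return [c / total for c in counts]

# Leading-digit frequencies of 2^0, 2^1, ..., 2^999.
freqs = first_digit_freqs([2 ** n for n in range(1000)])
```

The frequency of leading digit 1 comes out close to $\log_{10} 2 \approx 0.301$, as the law predicts.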
- Eugene Wigner famously argued for the "unreasonable effectiveness of mathematics" for describing physics and other natural sciences in his 1960 essay. That essay has now led to some 55 years of (sometimes anguished) soul searching --- responses range from "So what? Why do you think we developed mathematics in the first place?", through to extremely speculative ruminations on the existence of the universe (multiverse) as a purely mathematical entity --- the Mathematical Universe Hypothesis. In the current essay I will steer an utterly prosaic middle course: Much of the mathematics we develop is informed by physics questions we are trying to solve; and those physics questions for which the most utilitarian mathematics has successfully been developed are typically those where the best physics progress has been made.
- One aspect of Chebyshev's bias is the phenomenon that a prime number $q$, modulo another prime number $p$, experimentally seems slightly more likely to be a quadratic nonresidue than a quadratic residue. We thought it would be interesting to model this residue bias as a "random" walk using Legendre symbol values as steps. Such a model allows us to easily visualize the bias and can be extended to other number fields. In this report, we first outline the underlying theory and some motivations for our research. In the second section, we present our findings for the rational primes. We found evidence that Chebyshev's bias, if modeled as a Legendre symbol $\left(\frac{q}{p}\right)$ walk, may be somewhat reduced by only allowing $q$ to iterate over primes that are quadratic nonresidues (mod $4$). In the final section, we extend our Legendre symbol walks to the Gaussian primes and present our main findings. Let $\pi_1 = \alpha+\beta i$ and $\pi_2 = \beta+\alpha i$. We observed strong ($\pm$) correlations between the Gaussian Legendre symbol walks for $\left[\frac{a+bi}{\pi_1}\right]$ and $\left[\frac{a+bi}{\pi_2}\right]$, where $N(\pi_1) = N(\pi_2)$ and $a+bi$ iterates over Gaussian primes in the first quadrant. We attempt an explanation of why, for some norms, the plots for $\pi_1$ and $\pi_2$ have strong positive correlation, while for other norms they have strong negative correlation. We hope to have written in a way that makes our observations accessible to readers without prior formal training in number theory.
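A Legendre symbol walk is straightforward to reproduce for the rational primes: each step is $\left(\frac{q}{p}\right) = \pm 1$, computed via Euler's criterion. A minimal sketch of our own, covering only the rational case, not the Gaussian extension:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) is 1 mod p for residues and p-1 for nonresidues."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_walk(p, qs):
    """Partial sums of (q/p) as q runs over the primes in qs;
    a persistently negative path visualizes Chebyshev-type bias."""
    pos, path = 0, []
    for q in qs:
        pos += legendre(q, p)
        path.append(pos)
    return path
```

For example, with $p = 7$ the steps for $q = 2, 3, 5$ are $+1, -1, -1$, giving the path $1, 0, -1$.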
- Word embeddings have been found to provide meaningful representations for words in an efficient way; therefore, they have become common in Natural Language Processing systems. In this paper, we evaluated different word embedding models trained on a large Portuguese corpus, including both Brazilian and European variants. We trained 31 word embedding models using FastText, GloVe, Wang2Vec and Word2Vec. We evaluated them intrinsically on syntactic and semantic analogies and extrinsically on POS tagging and sentence semantic similarity tasks. The obtained results suggest that word analogies are not appropriate for word embedding evaluation; task-specific evaluations appear to be a better option.
- We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
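The core construction, replacing a stack of discrete layers with a call to a blackbox ODE solver, can be sketched with a scalar toy model. The solver and "layer" below are our own minimal stand-ins (fixed-step RK4, no adjoint backpropagation), not the paper's implementation:

```python
import math

def rk4(f, h, t0, t1, steps=100):
    """Blackbox fixed-step RK4 solver: integrate dh/dt = f(h, t) from t0 to t1."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(h, t)
        k2 = f(h + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(h + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(h + dt * k3, t + dt)
        h += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return h

def ode_layer(h0, w):
    """A scalar 'continuous-depth layer': the dynamics dh/dt = tanh(w*h)
    play the role of the layer weights; one solver call replaces a stack
    of discrete residual layers."""
    return rk4(lambda h, t: math.tanh(w * h), h0, 0.0, 1.0)
```

Increasing `steps` trades speed for numerical precision, which is the adaptivity the abstract refers to (real implementations use adaptive solvers that choose the step size themselves).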
- For Majorana neutrino masses the lowest dimensional operator possible is the Weinberg operator at $d=5$. Here we discuss the possibility that neutrino masses originate from higher dimensional operators. Specifically, we consider all tree-level decompositions of the $d=9$, $d=11$ and $d=13$ neutrino mass operators. With renormalizable interactions only, we find 18 topologies and 66 diagrams for $d=9$, and 92 topologies plus 504 diagrams at the $d=11$ level. At $d=13$ there are already 576 topologies and 4199 diagrams. However, among all these there are only very few genuine neutrino mass models: At $d=(9,11,13)$ we find only (2,2,3) genuine diagrams and a total of (2,2,7) models. Here, a model is considered genuine at level $d$ if it automatically forbids lower order neutrino masses {\em without} the use of additional symmetries. We also briefly discuss how neutrino masses and angles can be easily fitted in these high-dimensional models.
- We study the scenario in which the Standard Model is augmented by three generations of right-handed neutrinos and a scalar doublet. The newly introduced fields share an odd charge under a $\mathbb{Z}_2$ parity symmetry. This model, commonly known as "Scotogenic", was designed to provide a mechanism for active neutrino mass generation as well as a viable dark matter candidate. In this paper we consider a scenario in which the dark matter particle is at the keV scale. Such a particle is free from X-ray limits due to the unbroken parity symmetry that forbids the mixing between active and right-handed neutrinos. The active neutrino masses are radiatively generated from the new scalars and the two heavier right-handed states with $\sim \mathcal{O}(100)$ GeV masses. These heavy fermions can produce the observed baryon asymmetry of the Universe through the combination of the Akhmedov-Rubakov-Smirnov mechanism and recently proposed scalar decays. To the best of our knowledge, this is the first time that these two mechanisms are shown to be successful in any radiative model. We identify the parameter space where successful leptogenesis is compatible with the observed abundance of dark matter as well as the measurements from the neutrino oscillation experiments. Interestingly, combining dark matter production and successful leptogenesis gives rise to strict limits from big bang nucleosynthesis which do not allow the mass of dark matter to lie above $\sim 10$ keV, providing a phenomenological hint for the considered low-scale dark matter. By featuring keV-scale dark matter free from stringent X-ray limits, successful baryon asymmetry generation and non-zero active neutrino masses, the model is a direct analogue of the $\nu$MSM model proposed by Asaka, Blanchet and Shaposhnikov. Therefore we dub the presented framework "The new $\nu$MSM", abbreviated as $\nu\nu$MSM.