The intricacies of realistic — namely: of classically controlled and (topologically) error-protected — quantum algorithms arguably make computer-assisted verification a practical necessity; and yet a satisfactory theory of classically-dependent quantum data types had been missing, certainly one that would be aware of topological error-protection.
To solve this problem we present Linear Homotopy Type Theory (LHoTT) as a programming and certification language for quantum computers with classical control and topologically protected quantum gates, focusing on (1.) its categorical semantics, which is a homotopy-theoretic extension of that of Proto-Quipper and a parameterized extension of Abramsky et al.'s quantum protocols, and (2.) its expression of quantum measurement as a computational effect induced from dependent linear type formation, reminiscent of Lee et al.'s dynamic lifting monad but recovering the interacting systems of Coecke et al.'s “classical structures” monads.
Namely, we have recently shown [arxiv.org/abs/2303.02382] that classical dependent type theory in its novel but mature full-blown form of Homotopy Type Theory (HoTT) is naturally a certification language for realistic topological logic gates. But given that the categorical semantics of HoTT is famously provided by parameterized homotopy theory, we had argued earlier for a quantum enhancement LHoTT of classical HoTT, now with semantics in parameterized stable homotopy theory. This linear homotopy type theory LHoTT has meanwhile been formally described [doi.org/10.14418/wes01.3.139]; here we explain it as the previously missing certified quantum language with monadic dynamic lifting.
Concretely, we observe that besides its support, inherited from HoTT, for topological logic gates, LHoTT intrinsically provides a system of monadic computational effects which realize what in algebraic topology is known as the ambidextrous form of Grothendieck’s “Motivic Yoga”; and we show how this naturally serves to code quantum circuits subject to classical control implemented via computational effects. Logically this emerges as a linearly-typed quantum version of epistemic modal logic inside LHoTT, which besides providing a philosophically satisfactory formulation of quantum measurement, makes the language validate the quantum programming language axioms proposed by Staton; notably the deferred measurement principle is verified by LHoTT.
Finally we indicate the syntax of a domain-specific programming language QS (an abbreviation both for “Quantum Systems” and for “QS^0-modules”, aka spectra) which sugars LHoTT into a practical quantum programming language with all these features; and we showcase QS-pseudocode for simple forms of key algorithm classes, such as quantum teleportation, quantum error-correction and repeat-until-success quantum gates.
In this work, we provide device-independent (DI) self-testing of an unsharp instrument through the quantum violation of two Bell inequalities, where the devices are uncharacterized and the dimension of the system remains unspecified. We introduce an elegant sum-of-squares approach to derive the dimension-independent optimal quantum violation of Bell inequalities, which plays a crucial role. Note that the standard Bell test cannot self-test the post-measurement states and consequently cannot self-test an unsharp instrument; the sequential Bell test, however, possesses the potential to do so. We demonstrate that there exists a trade-off between the maximum sequential quantum violations of the Clauser–Horne–Shimony–Holt inequality, and that they form an optimal pair enabling the DI self-testing of the entangled state, the observables, and the unsharpness parameter. Further, we extend our study to the elegant Bell inequality and argue that it has two classical bounds: the local bound and a non-trivial preparation non-contextual bound lower than the local bound. Based on the sharing of preparation contextuality by three independent sequential observers, we demonstrate the DI self-testing of two unsharpness parameters. Since an actual experimental scenario involves losses and imperfections, we demonstrate the robustness of our certification to noise.
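For orientation, the object being certified here can be made concrete: an unsharp qubit observable with unsharpness parameter λ is the two-outcome POVM E± = (I ± λZ)/2, with post-measurement states given by the Lüders rule. The following numpy sketch is illustrative only (the value λ = 0.8 and the input state are arbitrary choices, not from the paper):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

lam = 0.8  # unsharpness parameter (hypothetical value; 0 < lam <= 1)

# Two-outcome unsharp observable: E± = (I ± lam*Z)/2
E_plus = (I2 + lam * Z) / 2
E_minus = (I2 - lam * Z) / 2
assert np.allclose(E_plus + E_minus, I2)        # completeness
assert np.all(np.linalg.eigvalsh(E_plus) >= 0)  # positivity

def sqrtm_psd(E):
    # Matrix square root of a positive semidefinite matrix
    w, v = np.linalg.eigh(E)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# Lüders post-measurement state for outcome '+' on input |+><+|
rho = np.full((2, 2), 0.5)
K = sqrtm_psd(E_plus)
p = np.trace(E_plus @ rho).real  # outcome probability = 1/2
post = K @ rho @ K / p

# Unlike a sharp (lam = 1) measurement, coherence partially survives:
# |post[0,1]| = sqrt(1 - lam**2) / 2, which is what makes sequential
# Bell tests by later observers possible.
```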
Communication games are widely used tools designed to demonstrate quantum supremacy over classical resources, in which two or more parties collaborate to perform an information-processing task so as to achieve the highest probability of winning the game. We propose a specific two-party communication game in the prepare-and-measure scenario that relies on an encoding-decoding task for specific information. We first demonstrate that quantum theory outperforms classical preparation non-contextual theories and that the optimal quantum success probability of this communication game enables the semi-device-independent certification of qubit states and measurements. Further, we consider the sequential sharing of quantum preparation contextuality and show that at most two sequential observers can share the quantum advantage. The suboptimal quantum advantages of the two sequential observers form an optimal pair which certifies a unique value of the unsharpness parameter of the first observer. Since practical implementations inevitably introduce noise, we devise a scheme for the robust certification of the states and the unsharp measurement instruments of both sequential observers.
We introduce a family of positive linear maps in the algebra of 3×3 complex matrices, which generalizes the seminal positive non-decomposable map originally proposed by Choi. Necessary and sufficient conditions for decomposability are derived and demonstrated. Specifically, the proposed maps extend the circulant structure with 3 parameters to a doubly stochastic structure with 5 parameters, offering a new method for the analysis of bound entangled states of two qutrits.
We provide a machine-learning (ML) based certification technique for many-particle interference that includes validation of experimental data, a problem believed to be hard in terms of classical simulability. We benchmark our results against well-established statistical metrics and, through rigorous benchmarking, pinpoint specific regimes in which our ML-based method outperforms the conventional techniques. Finally, we extend our analysis to a lossy architecture with uniform loss, where the traditional statistical benchmarks from random matrix theory fall short. We demonstrate that our technique is robust against particle losses up to O(1/n^2), where n is the total particle count.
Device-independent protocols represent a novel approach to information-processing tasks in which minimal assumptions are made about the devices used. Typically, the security of such protocols relies on the fact that for certain nonlocal games, strategies employing quantum entanglement outperform any strategy that uses only classical resources. For the quantum computing cloud services offered by IBM, we implement a prediction-based ratio analysis to characterize how well the qubits can be accessed locally and independently of each other. More precisely, we perform a variety of Clauser-Horne-Shimony-Holt experiments on the different IBMQ systems. Afterward, we conduct a statistical hypothesis test on the results to determine whether they are compatible with an underlying no-signaling model, that is, whether the data show evidence of crosstalk between qubits. Unlike standard randomized benchmarking, this approach does not rely on unjustified assumptions such as gate-independent Markovian noise. Moreover, because we are testing for statistical evidence against no-signaling theories, we do not even assume that the devices strictly adhere to the laws of quantum theory. In particular, we do not assume that the correlations obtained necessarily satisfy Born's rule. Thus, our characterization of the local addressability of individual qubits is based almost solely on what can be deduced statistically from a finite set of observations.
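The no-signaling condition under test can be stated directly on the measured correlation table p(ab|xy): each party's outcome marginal must be independent of the other party's setting. A minimal sketch of such a point check (illustrative only; the paper's prediction-based ratio analysis is a proper statistical hypothesis test on finite data, not this exact check):

```python
import numpy as np

def is_no_signaling(p, tol=1e-9):
    """p[a, b, x, y] = P(a, b | x, y) for binary outcomes and settings.
    Checks that Alice's marginal is independent of Bob's setting y,
    and Bob's marginal is independent of Alice's setting x."""
    for x in range(2):
        if not np.allclose(p[:, :, x, 0].sum(axis=1),
                           p[:, :, x, 1].sum(axis=1), atol=tol):
            return False
    for y in range(2):
        if not np.allclose(p[:, :, 0, y].sum(axis=0),
                           p[:, :, 1, y].sum(axis=0), atol=tol):
            return False
    return True

# PR-box correlations: p(ab|xy) = 1/2 iff a XOR b = x AND y (no-signaling)
pr = np.zeros((2, 2, 2, 2))
for a in range(2):
    for b in range(2):
        for x in range(2):
            for y in range(2):
                if (a ^ b) == (x & y):
                    pr[a, b, x, y] = 0.5

# A table where Alice's outcome copies Bob's setting: signaling (crosstalk)
sig = np.zeros((2, 2, 2, 2))
for x in range(2):
    for y in range(2):
        sig[y, 0, x, y] = 1.0
```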
Introduced by Pawłowski and Winter in 2012, Hyperbit Theory has emerged as a captivating alternative to quantum theory, offering new avenues for redefining entanglement and classical communication paradigms. In this presentation, I delve into a comprehensive reevaluation of Hyperbit Theory, unearthing critical operational limitations that cast doubt on its equivalence to quantum mechanics. Notably, the equivalence between Hyperbit Theory and Quantum Theory hinges on the receiver possessing inaccessible additional knowledge about the sender's laboratory. Through this research, we shed light on the constraints of hyperbits in information processing, while also illuminating the efficacy of quantum communication. This work advances our understanding at the intersection of classical and quantum communication, highlighting the potential and limitations of both paradigms.
We identify and study a particular class of distinguishability problems for quantum observables (POVMs) involving observables with permuted effects, which we refer to as the labeling problem. Different ramifications of the problem are studied.
As with a Bell inequality, Hardy's paradox manifests a contradiction between the predictions of quantum theory and those of local hidden-variable theories. In this work, we generalize Hardy's arguments to an arbitrary Bell scenario involving two observers. Our construction reduces to that of Meng et al. [Phys. Rev. A 98, 062103 (2018)] and can be naturally interpreted as a demonstration of the failure of the transitivity of implications. Our generalization is equivalent to a ladder-proof-type argument for Hardy's paradox. Furthermore, it provably exhibits a higher success probability compared with existing proposals. Moreover, this advantage persists even if we allow imperfections in realizing the zero-probability constraints in such paradoxes.
We investigate the objective properties of a simple harmonic oscillator interacting with a large collection of spin-1/2 environment particles, based on an objective quantum state, the so-called spectrum broadcast structure (SBS). Assuming that the quantum state of the harmonic oscillator is close to a classical state, a time-dependent effective Hamiltonian is applied to the spin environment. The objectivity measures for the SBS, the decoherence factor and a generalized overlap (fidelity), are calculated using a high-frequency expansion.
Non-stabilizerness, also known as magic, is a crucial resource for quantum computation. As programmable devices grow in size and depth, there has been a collective effort to find (i) robust and (ii) scalable witnesses of non-stabilizerness. We introduce a novel notion of non-stabilizerness defined for sets of quantum states, which we term set-magic. We show that recently introduced witnesses of coherence [Phys. Rev. A 101, 062110 (2020)] also constitute witnesses of n-qubit relational non-stabilizerness. We then show how violations of these inequalities can serve as simple benchmark tests of non-stabilizerness for pre-fault-tolerant quantum information processing devices.
Quantum resources may provide an advantage over their classical counterparts in communication tasks. Here we consider a communication task where it is possible to exhibit an advantage of quantum communication. The task is as follows. There are three parties: the Manager, Alice, and Bob. The Manager sends the value of a random variable to Alice, and at the same time Bob receives some partial information about that value. Initially, neither Alice nor Bob knows the other's input received from the Manager. The goal of the task is achieved if and only if Bob identifies the value of the random variable sent to Alice with success probability greater than one-half for every input; a non-zero error probability is thus allowed. To help Bob, Alice sends him a limited amount of classical or quantum information (a cbit or a qubit). We show that the goal can be achieved when Alice sends a qubit, whereas a cbit communication is not sufficient; this establishes a quantum advantage. We further determine the optimal success probabilities of the overall process. As for applications, we connect our task with semi-device-independence and show how it can be used to detect the quantumness of a communication in a secure way.
The amount of nonlocality, measured by the violation of a Bell inequality, is crucial to real-world implementation of quantum information processing tasks such as randomness certification and device-independent quantum key distribution. Maximizing inequality violation, or even certifying nonlocality, in the presence of inefficient detectors presents a challenge since, in such cases, the occurrence of "no-click" events may be exploited by local-realistic models, a feature known as the \emph{detection loophole}. Incorporating the "no-click" events in the measurement statistics via an assignment strategy results in a tilting of the Bell inequality to be violated to certify (detection) loophole-free nonlocality. Moreover, the quantum strategies that maximally violate these tilted forms of Bell inequalities attain the maximum (detection) loophole-free nonlocality with respect to the original Bell inequalities with imperfect detectors. In this work, we study such tilted versions of the CHSH inequality that arise from non-ideal detectors and the realizations maximally violating them. In the space defined by the detector efficiencies $\eta_A, \eta_B\in [0, 1]$, we find the region where nonlocal correlations can be realized with quantum systems. Within this region, we numerically study the lowest level at which the Navascu\'es--Pironio--Ac\'in upper bounds coincide with the maximal violation. Surprisingly, we find cases for which increasingly high levels are needed. Furthermore, while the dependence on the efficiencies $\eta_A, \eta_B$ of both the maximal violation and the optimal quantum realizations attaining it turns out to be quite complicated, we nevertheless retrieve the optimal realizations attaining the maximal violation and show that they self-test. Apart from these practical contributions, our results shed new light on the geometry of the quantum set of nonlocal correlations in the simplest scenario, highlighting the complexity of its characterization.
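As a baseline for the ideal-detector corner ($\eta_A = \eta_B = 1$), the untilted CHSH expression on a maximally entangled state with the standard optimal settings reaches the Tsirelson bound $2\sqrt{2}$. A minimal numpy sketch of this textbook computation (an illustration, not code from the paper):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# Optimal CHSH settings for |Phi+>
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

def corr(A, B):
    # Correlator <A (x) B> in state rho
    return np.trace(rho @ np.kron(A, B)).real

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
# S = 2*sqrt(2) ≈ 2.828, exceeding the classical bound 2
```

The tilted inequalities studied in the abstract deform this expression as the efficiencies drop below 1, which is where the nontrivial structure appears.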
Spin chains, the archetypal many-body quantum systems, are used for prototype quantum computing, ultra-precise sensing, or as quantum simulators. Hence, they attract theoretical and experimental attention, focusing on analyzing their dynamic and static properties, their structure, and the ``quantum content'', namely how -- and in what sense -- a particular chain is non-classical. We address these points and show how to generate a potent resource for quantum technologies, namely the many-body Bell correlations in spin chains, with controllable short-range two-body interactions. Subsequently, we classify the depth of the produced Bell correlations. We identify a critical range necessary to generate many-body Bell correlations in the system and provide the physical mechanism behind this critical behaviour. Remarkably, the critical range is short and universal for every length of the chain, establishing the viability of this protocol. In contrast, the critical time at which the many-body Bell correlations emerge in the system depends on its size, as our analysis reveals. Importantly, we show that these Bell correlations are fully determined by just a single element of the density matrix, and can be measured by existing state-tomography methods. The fully analytical findings presented here provide novel insight into the methodology of generating strong many-body non-classical correlations with short-range two-body interactions, offering promising prospects for quantum technologies.
Using tools from quantum information theory, we present a general theory of indistinguishability of identical bosons in experiments consisting of passive linear interferometers followed by particle number detection. Our results rely neither on additional assumptions about the input state of the interferometer, such as a fixed mode occupation, nor on any assumption about the degrees of freedom that potentially make the particles distinguishable. We identify the expectation value of the projector onto the N-particle symmetric subspace as an operationally meaningful measure of indistinguishability, and derive tight lower bounds on it that can be efficiently measured in experiments. Moreover, we present a consistent definition of distinguishability and characterize the corresponding set of states. In particular, we show that these states are diagonal in the computational basis up to a permutationally invariant unitary.
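For the special case of a two-particle product input, the proposed measure reduces to a closed form: the expectation of the symmetric-subspace projector $(I + \mathrm{SWAP})/2$ is $(1 + |\langle\varphi_1|\varphi_2\rangle|^2)/2$. The following numpy sketch verifies this special case numerically (an illustration under that assumption, not the paper's general N-particle bound):

```python
import numpy as np

d = 3  # internal-state dimension (arbitrary choice for illustration)

def sym_projector(d):
    # Projector onto the symmetric subspace of C^d (x) C^d: (I + SWAP)/2
    swap = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            swap[i * d + j, j * d + i] = 1.0
    return (np.eye(d * d) + swap) / 2

rng = np.random.default_rng(0)
def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

phi1, phi2 = random_state(d), random_state(d)
psi = np.kron(phi1, phi2)          # two-particle product state
P = sym_projector(d)
expect = (psi.conj() @ P @ psi).real
overlap2 = abs(phi1.conj() @ phi2) ** 2
# Two-particle case: <P_sym> = (1 + |<phi1|phi2>|^2)/2,
# ranging from 1/2 (fully distinguishable) to 1 (indistinguishable)
```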
Reference: Englbrecht, M., Kraft, T., Dittel, C., Buchleitner, A., Giedke, G., Kraus, B. arXiv:2307.06626 (2023)
We study hidden nonlocality in a linear network with independent sources. In the usual paradigm of Bell nonlocality, there are certain states which exhibit nonlocality only after the application of suitable local filtering operations, which in turn are special stochastic local operations assisted with classical communication (SLOCC). In the present work, we introduce the notion of hidden non-n-locality. The notion is detailed using a bilocal network. We provide instances of hidden non-bilocality and non-trilocality, where we notice quite intriguingly that non-bilocality is observed even when one of the sources distributes a mixed two-qubit separable state. Furthermore, a characterization of hidden non-bilocality is also provided in terms of the Bloch-Fano decomposition, wherein we conjecture that to witness hidden non-bilocality, one of the two states (used by the sources) must have non-null local Bloch vectors.
Bohr’s complementarity principle is quantitatively formulated in terms of the distinguishability of the various paths a quanton can take and the measure of interference it produces. This phenomenon results from the interference of single-quanton amplitudes for the various paths, and the distinguishability of the paths puts a bound on the sharpness of the interference the quanton can produce. However, other kinds of quantum phenomena exist where interference of two-particle amplitudes results in two-particle interference if the particles are indistinguishable; the Hong-Ou-Mandel (HOM) and Hanbury-Brown-Twiss (HBT) effects are well-known examples. Two-particle interference is not as easy to define as its single-particle counterpart, and the realization that it involves the interference of two-particle amplitudes came much later. This work derives a duality relation between particle distinguishability and the visibility of two-particle interference. The distinguishability of the two particles, arising from some internal degree of freedom, puts a bound on the sharpness of the two-particle interference they can produce in a HOM or HBT kind of experiment. It is argued that the existence of this kind of complementarity can be used to characterize two-particle interference, which in turn leads one to the conclusion that the HOM and the HBT effects are equivalent in essence and may be treated as a single two-particle-interference phenomenon.
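The textbook HOM formula makes the duality concrete: for two photons meeting at a balanced beam splitter, the coincidence probability is $(1 - |\langle\varphi_1|\varphi_2\rangle|^2)/2$, so the dip visibility is exactly the squared overlap of the internal states, complementing their distinguishability. A minimal sketch of this standard relation (illustrative, not the abstract's general duality relation):

```python
import numpy as np

def hom_coincidence(phi1, phi2):
    """Coincidence probability for two photons with internal states
    phi1, phi2 at a balanced beam splitter:
    P = (1 - |<phi1|phi2>|^2) / 2."""
    overlap2 = abs(np.vdot(phi1, phi2)) ** 2
    return (1 - overlap2) / 2

e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# Fully indistinguishable photons: complete HOM dip (no coincidences)
p_same = hom_coincidence(e0, e0)     # 0

# Fully distinguishable photons: classical coincidence rate 1/2
p_orth = hom_coincidence(e0, e1)     # 0.5

# Partial distinguishability interpolates between the two extremes
theta = np.pi / 6
partial = np.array([np.cos(theta), np.sin(theta)])
p_partial = hom_coincidence(e0, partial)
```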