For both quantum and classical processes, it is usually assumed that if two parties interact only once with a physical medium, then only one-way influences are possible. Remarkably, this turns out to be an unnecessary restriction on the possible observable correlations. In the process matrix framework [1], where quantum theory is taken to hold locally but no global causal structure is assumed, some processes allow parties to establish correlations incompatible with a definite causal order.
Causal nonseparability is the property underlying processes incompatible with a definite causal order. It can be certified in a device-independent way, i.e. from knowledge of the observed statistics only, through the violation of causal inequalities by noncausal correlations. However, not all causally nonseparable process matrices are noncausal in this strong sense. Indeed, a large class of quantum-realisable processes cannot violate causal inequalities [2], and it is still unclear whether any physical process can. This class includes the "quantum switch" [3], the canonical example of a quantum circuit with quantum control of causal order, in which the causal order between two parties, Alice and Bob, is coherently controlled by a quantum system given to a third party, Fiona.
In this work [4], we present a method, based solely on the observed correlations, that certifies the causal nonseparability of all processes that can induce a causally nonseparable distributed measurement; this includes all bipartite causally nonseparable processes and the celebrated quantum switch. This certification, which we label "network-device-independent" (NDI), is achieved by introducing a network of untrusted space-like separated operations that allows the quantum inputs received by the distributed measurement to be self-tested.
Another protocol, the "DRF" certification of the quantum switch [5], is based on the violation of a "local-causal inequality" by correlations generated by Alice, Bob, Fiona, and a space-like separated party, Charlie. It consists of certifying that Charlie remotely controls the causal order of Alice's and Bob's operations by maximally winning a heralded GYNI game, and certifying that this control is non-classical by violating a CHSH inequality between Fiona and Charlie. Unlike our theory-dependent NDI certification, the DRF method certifies a theory-independent notion of non-classical control of causal order. As noncontextuality is generally considered a standard notion of classical explainability for operational phenomena, this raises the question of whether the causal nonseparability of the quantum switch might be explained by contextuality.
[1] Oreshkov et al., Nat. Commun. 3, 1092 (2012)
[2] Wechs, Dourdent, Abbott & Branciard, PRX Quantum 2, 030335 (2021)
[3] Chiribella et al., PRA 88, 022318 (2013)
[4] Paper in preparation based on Dourdent et al., PRL 129, 090402 (2022)
[5] Van Der Lugt et al., arXiv:2208.00719 [quant-ph]
The quantum switch, which applies two operations to a target system in a superposition of orders, is incompatible with any quantum circuit in which the operations are applied in a fixed order. In this talk I show that this "indefinite causal order" can also be certified at the device- and theory-independent level. My main focus will lie on the way that the proof makes use of (yet certifies something distinct from) Bell nonlocality. I will show how the same idea can be used to devise a proof of indefinite causal order, based instead on GHZ nonlocality, which is both possibilistic and maximal (in the same sense as GHZ nonlocality is). I will also raise the question of which other phenomena can be device-independently certified using this proof technique.
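The possibilistic character of GHZ nonlocality mentioned above can be illustrated by brute force. The following sketch (an illustration of the standard GHZ parity argument, not part of the talk's results) verifies that the four GHZ-state predictions admit no local deterministic model:

```python
import itertools

# For |GHZ> = (|000> + |111>)/sqrt(2), quantum mechanics predicts with
# certainty:  XXX = +1,  XYY = -1,  YXY = -1,  YYX = -1.
# A local deterministic model would pre-assign +/-1 values to each party's
# X and Y measurements; we check every such assignment.
def satisfies(ax, ay, bx, by, cx, cy):
    return (ax * bx * cx == +1 and
            ax * by * cy == -1 and
            ay * bx * cy == -1 and
            ay * by * cx == -1)

solutions = [s for s in itertools.product([-1, +1], repeat=6)
             if satisfies(*s)]
print(len(solutions))  # 0: no assignment works, a possibilistic contradiction
```

The contradiction is visible analytically too: multiplying the last three constraints gives ax·bx·cx = (−1)³ = −1, directly contradicting the first.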
Seen through the modern lens of causal inference, Bell's theorem is nothing other than a proof that a specific classical causal model cannot explain quantum correlations. It is thus natural to move beyond Bell's paradigmatic scenario and consider different causal structures. For the specific case of three observable variables, it is known that there are three non-trivial causal networks. Two of those are known to give rise to quantum non-classicality: the instrumental and the triangle scenarios. Here we analyze the third and remaining one, which we name the Evans scenario, akin to the causal structure underlying the entanglement-swapping experiment. We prove a number of results about this elusive scenario and introduce new and efficient computational tools for its analysis that can also be adapted to more general causal structures. We do not solve its main open problem, whether quantum non-classical correlations can arise from it, but we take a significant step in this direction by proving that post-quantum correlations, analogous to the paradigmatic Popescu-Rohrlich box, do violate the constraints imposed by a classical description of the Evans causal structure.
Incompatible, i.e. not jointly measurable, quantum measurements are a necessary resource for many information processing tasks. It is known that increasing the number of distinct measurements usually enhances the incompatibility of a measurement scheme. However, it is generally unclear how large this enhancement is and on what it depends. Here, we show that the incompatibility gained via additional measurements is upper and lower bounded by certain functions of the incompatibility of subsets of the available measurements. We prove the tightness of some of our bounds by providing explicit examples based on mutually unbiased bases. Finally, we discuss the consequences of our results for the nonlocality that can be gained by enlarging the number of measurements in a Bell experiment.
The last three decades have witnessed a resurgence of Bell nonlocality as an indispensable tool for harnessing the power of nonclassical correlations for information processing. For instance, it is used to certify intrinsic randomness or to prove the security of cryptographic protocols. Since Bell inequality violations can be observed from the correlations between measurement outcomes alone, without knowing the physical details of the underlying systems and measurements, certification based only on observing Bell nonlocality is called device-independent. The device-independent paradigm can be used for the certification of particular quantum states. The task of certifying quantum states or measurements in a device-independent manner is known as self-testing.
As we learn how to self-test more quantum states, the question naturally arises of whether it is possible to self-test all pure entangled states. This question is related to the limits of the device-independent paradigm: there exist state transformations that cannot be identified device-independently, and all such transformations are captured by the notion of local isometries.
In this work, we aim to give a complete characterization of self-testing in the multipartite qubit case. While the bipartite case has been completely characterized for states of any local dimension, little is known for the multipartite case. One important difference is that, while in the bipartite case all states are equivalent, up to local isometries, to their complex conjugates in any basis, this is not true in the multipartite case. Already in the tripartite scenario there exist states which cannot be transformed into their complex conjugate by performing local basis changes. That such a phenomenon exists in states of N qubits for N>2 has already been proven. Recently, it has been proven that all pure entangled states can be self-tested if placed in a quantum network. The protocol can self-test any pure entangled state, but it requires a number of uncharacterized maximally entangled pairs. The problem of whether all pure entangled states can be self-tested in the standard Bell scenario still remains open.
In this contribution, we come closer to answering this open problem positively. In particular, we show that it is possible to self-test every pure entangled state of N qubits. That is, we give a complete characterization of self-testing in the multipartite qubit case: we exhibit a correlation that self-tests any pure multipartite entangled state of qubits which is equivalent to its complex conjugate. Moreover, for any state which is not equivalent to its complex conjugate, we exhibit a correlation that singles out the state up to complex conjugation.
Self-testing has been extensively studied in multipartite Bell-type scenarios. Here, we propose a definition of self-testing in the sheaf-theoretic framework for contextuality. We carefully compare this definition with pre-existing ones in the setting of multipartite Bell-type scenarios and the graph-theoretic framework for contextuality. This points towards potential new connections between the hierarchy of contextuality and the structure of the convex set of quantum correlations. We argue that these connections might be exploited to devise a self-testing protocol for almost all multipartite entangled states. We then prove a theorem that makes use of the idea of simulations between empirical models to derive sufficient conditions under which the existence of one self-test guarantees the existence of another. Using this theorem we are able to prove novel self-testing results for certain synchronous correlations.
Self-testing is the most accurate form of certification of quantum devices. While self-testing in bipartite Bell scenarios has been thoroughly studied, self-testing in the more complex multipartite Bell scenarios remains largely unexplored. We present a simple and broadly applicable self-testing scheme for N-partite correlation Bell inequalities with two binary-outcome observables per party. To showcase the versatility of our proof technique, we obtain self-testing statements for the MABK and WWWŻB families of linear Bell inequalities and Uffink's family of quadratic Bell inequalities. In particular, we show that the N-partite MABK and Uffink's quadratic Bell inequalities self-test the GHZ state and anti-commuting observables for each party. While the former uniquely specifies the state, the latter allows for an arbitrary relative phase. To demonstrate the operational relevance of the relative phase, we introduce Uffink's complex-valued N-partite Bell expression, whose extremal values self-test the GHZ states and uniquely specify the relative phase.
Published in npj Quantum Information: https://doi.org/10.1038/s41534-023-00735-3
The emergence of quantum devices has raised a significant issue: how to certify the quantum properties of a device without trusting it. To characterise quantum states and measurements in a device-independent way, up to some degree of freedom, we can make use of a technique known as self-testing. While schemes have been proposed to self-test all pure multipartite entangled states and real local projective measurements, little has been done to certify mixed entangled states, composite measurements, or non-projective measurements. By employing the framework of quantum networks, we propose a scheme that can be used to self-test any quantum state and measurement. The quantum network considered in this work is the simple star network, which is implementable using current technologies. For our purposes, we also construct a family of Bell inequalities that can be used to self-test the two-dimensional tomographically complete set of measurements with an arbitrary number of parties.
A Random Access Code (RAC) is a communication task in which the sender encodes a message into a character to be decoded by the receiver so that a randomly chosen character from the original message is recovered. Both the message and the character to be recovered are assumed to be uniformly distributed. In this work, we extend this protocol by allowing more general distributions of these inputs, which alters the encoding and decoding strategies optimising the protocol performance, with either classical or quantum resources. We approach the problem of optimising the performance of these biased RACs with both numerical and analytical tools. On the numerical front, we present algorithms that allow a numerical evaluation of the optimal performance over both classical and quantum strategies and provide a Python package designed to implement them, called RAC-tools. We then use this numerical tool to investigate single-parameter families of biased RACs in the n^2 --> 1 and 2^d --> 1 scenarios. For RACs in the n^2 --> 1 scenario, we derive a general upper bound for the cases in which the inputs are not correlated, which coincides with the quantum value for n=2 and, in some cases, for n=3. Moreover, it is shown that attaining this upper bound self-tests pairs or triples of rank-1 projective measurements, respectively. An analogous upper bound is derived for the value of RACs in the 2^d --> 1 scenario, which is shown to be always attainable using mutually unbiased measurements if the distribution of input strings is unbiased.
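As a minimal illustration of the kind of optimisation involved (the unbiased 2 --> 1 case only, with deterministic strategies; this is a sketch, not the RAC-tools package or the paper's biased setting):

```python
import itertools

# Brute-force evaluation of the unbiased 2 -> 1 random access code over all
# deterministic classical strategies: Alice encodes her bits (x0, x1) into a
# single bit m; Bob, given a uniformly random index y, must output x_y.
pairs = list(itertools.product([0, 1], repeat=2))

def value(encode, decode):
    wins = 0
    for x0, x1 in pairs:
        m = encode[(x0, x1)]
        for y, target in enumerate((x0, x1)):
            wins += decode[(m, y)] == target
    return wins / 8  # 4 input pairs x 2 queries, all uniform

best = max(
    value(dict(zip(pairs, enc)), dict(zip(pairs, dec)))
    for enc in itertools.product([0, 1], repeat=4)   # all 16 encoding maps
    for dec in itertools.product([0, 1], repeat=4)   # all 16 decoding maps
)
print(best)  # 0.75: the classical value of the unbiased 2 -> 1 RAC
```

The exhaustive search confirms the well-known classical value 3/4 (e.g. always sending m = x0), against which the quantum value (1 + 1/sqrt(2))/2 ≈ 0.854 is compared.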
Quantum technologies promise interesting new approaches to areas such as computing and communication. A branch that is becoming increasingly interesting is that of quantum networks. The technological assets for quantum networks have been developing rapidly in recent years and many implementations, often geared towards quantum cryptography, have been reported. In order to demonstrate the security of quantum cryptographic protocols, a necessary condition is to guarantee that the observations of the parties cannot be reproduced when classical systems are distributed instead (i.e., to observe non-locality). However, the standard notion of multipartite non-locality only guarantees that something quantum is happening somewhere in the network, in contrast to guaranteeing a completely non-classical behavior of all its elements.
In this talk, I will summarize recent progress on the quest for operationally meaningful and practically useful definitions of genuinely non-local phenomena in networks. First, I will describe the concept of full network nonlocality, which guarantees that non-classical behavior is present everywhere in the network. Importantly, this notion does not assume quantum mechanics. Therefore, the guarantees of non-classicality provided by the observation of full network nonlocality do not rely on quantum mechanics accurately describing nature. Second, I will describe several experimental observations of full network nonlocality in scenarios that are especially relevant as building blocks of large-scale quantum communication networks. The first one is a star-shaped network where three branch parties each share a bipartite quantum state with a central node that performs tripartite entanglement swapping. This is an important scenario for realizing multipartite quantum cryptographic protocols mediated by a central authority. The second one is a quantum repeater scenario where we strictly enforce the network structure via space-like separation of its components. The fact that fully non-classical phenomena can be certified in a significant manner, in demanding scenarios, and with state-of-the-art technology, strongly motivates the development of multipartite quantum cryptographic protocols in networks and proofs of security based on full network nonlocality.
This talk is based on arXiv:2105.09325, arXiv:2212.09765, arXiv:2302.02472 and arXiv:2306.15717.
Distributing quantum information in networks is essential for global secure quantum communication; moreover, distributed entanglement is an important resource for clock synchronization, magnetic field sensing and blind quantum computation. For the analysis and benchmarking of different network implementations, however, it is crucial to characterize the topology of networks by revealing the nodes between which entanglement can reliably be distributed.
In this contribution we will demonstrate an efficient scheme for topology certification of networks. Theoretically, our scheme allows one to distinguish different network configurations consisting of bipartite and multipartite entanglement sources in a scalable manner. This can be done for different levels of trust in the measurement devices or network nodes, that is, in a device-dependent or device-independent way.
Experimentally, we demonstrate the feasibility of our method using a six-photon source [1], which produces flexible entanglement configurations at high rate, employing active feed-forward and time multiplexing. Our approach can be seen as an example of a simultaneous test of multiple hypotheses with few measurements and may therefore be applicable to other certification scenarios in quantum technologies.
This contribution is joint work with K. Hansenne, S. Denker, O. Gühne, N. Prasannan, J. Sperling, B. Brecht, and C. Silberhorn.
[1] E. Meyer-Scott et al., Phys. Rev. Lett. 129, 150501 (2022)
In this work, we prove that the theoretical maximum of 2*log(d) bits of randomness can be certified device-independently from a locally d-dimensional bipartite quantum state, as long as symmetric informationally complete (SIC) POVMs exist in dimension d. We prove this in the Bell scenario proposed in [Tavakoli et al., Sci. Adv. 7, eabc3847 (2021)], which uses a locally d-dimensional maximally entangled state and measurements on Bob's side whose first outcomes are projections from a SIC POVM. Alice, on the other hand, has measurements that project onto the intermediate vectors of pairs of the SIC POVM vectors, and one additional measurement which is a proper SIC POVM (for randomness generation).
In this work, we lift all assumptions on the state and the measurements and show that the maximal violation of the Bell inequalities introduced by Tavakoli et al. implies projective measurements on the local support of the state (apart from the SIC POVM setting of Alice). Furthermore, Bob's measurements must satisfy the "operational SIC" conditions introduced by Tavakoli et al. We then show that the existence of finite-dimensional operators satisfying the operational SIC conditions is equivalent to the existence of SIC POVMs in dimension d, a result that is of independent interest given the long-standing open problem of the existence of SIC POVMs in every dimension. We then construct operators on Alice's side that act on the state exactly as the SIC operators on Bob's side. Using this, together with the operational SIC characterisation, we show that the shared state must contain a maximally entangled state and that the above operators of Alice and the SIC operators of Bob act non-trivially only on the maximally entangled state.
Following from this structure and from the maximal violation of the Bell inequality, we show that dichotomic measurements formed from the SIC POVM measurement of Alice and the SIC measurements of Bob give rise to synchronous correlations. This implies that the SIC POVM operators of Alice also act non-trivially only on the maximally entangled state. Using this structure, we show that the classical-quantum state between Alice and the eavesdropper, after Alice has measured the SIC POVM, must factorise between Alice and the eavesdropper. This means that the randomness from the outcome of this measurement is private, that is, secure against collective attacks (and by the entropy accumulation theorem, against the most general coherent attacks). We can also show that the outcome of the SIC POVM measurement is perfectly random, that is, Alice and Bob can certify 2*log(d) bits of device-independent randomness locally, the theoretical maximum from a locally d-dimensional quantum state. While the protocol requires the existence of a SIC POVM in the given dimension, this existence is known up to dimension 53 (with many more known examples in higher dimensions) and is conjectured to be true in every dimension.
Nonlocal tests on multipartite quantum correlations form the basis of protocols that certify randomness in a device-independent (DI) way. Such correlations admit a rich structure, making the task of choosing an appropriate test difficult. For example, extremal Bell inequalities are tight witnesses of nonlocality; however, achieving their maximum violation places constraints on the underlying quantum system, which can reduce the rate of randomness generation. As a result there is often a trade-off between maximum randomness and the amount of violation of a given Bell inequality. Here, we explore this trade-off for more than two parties. More precisely, we study the maximum amount of randomness that can be certified by correlations exhibiting a violation of the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality. We find that maximum quantum violation and maximum randomness are incompatible for any even number of parties, with the incompatibility diminishing as the number of parties grows, and conjecture the precise trade-off. We also show that maximum MABK violation is not necessary for maximum randomness for odd numbers of parties. To obtain our results, we derive new families of Bell inequalities certifying maximum randomness from a technique for randomness certification, which we call "expanding Bell inequalities". Our technique allows one to take a bipartite Bell expression, known as the seed, and transform it into a multipartite Bell inequality tailored for randomness certification, showing how intuition learned in the bipartite case can find use in more complex scenarios. We envisage this technique to be widely applicable in DI cryptography, and leave open an intriguing question: namely, whether such expanded Bell inequalities go beyond the certification of randomness and also certify the underlying quantum state. Due to their versatile construction, this could lead to a general approach for self-testing wide classes of multipartite states in the future.
The full article is available at arXiv:2308.07030.
Bell's theorem is a fundamental result for our modern understanding of quantum mechanics, showing that our intuitive understanding of causality is inadequate to describe quantum phenomena.
This incompatibility can be demonstrated experimentally by performing a Bell test, which allows us to exclude the possibility that the observations could be explained by a classical model.
Apart from its foundational significance, the ability to detect Bell nonlocality also has several important applications in quantum communication and cryptography, enabling protocols with an unprecedented level of security.
However, the proper implementation of a Bell test requires very stringent conditions, including space-like separation between observers and detection efficiency above a certain threshold, which still represent a serious challenge for its realization.
Among these conditions, the so-called assumption of "freedom of choice" or "measurement independence", necessary for Bell's theorem to hold, is notably difficult to justify experimentally.
Recent studies have shown that quantum networks with several independent sources allow us to replace the assumption of freedom of choice with the assumption of independence of sources, making possible the demonstration of nonclassicality without the need for external, freely chosen inputs.
The simplest example is represented by the triangle network with three independent sources of correlations, which has the advantage of being a robust and experimentally feasible network topology.
It can be demonstrated that this network allows not only for the detection of nonlocality, but also for explicit estimation of the degree of "freedom of choice" of a Bell experiment.
In this talk, we present two experimental realizations of this kind of network using a bulk photonic setup, and make use of different state-of-the-art techniques to detect nonlocality.
Moreover, we describe two novel methods to estimate the presence of measurement dependence, using entropic techniques and via robust lower and upper bounds.
In particular, we show how the latter approach is capable of detecting the nonclassicality of correlations even when a high degree of measurement dependence is present, and even beyond Tsirelson's bound.
Losses in the transmission channel, which increase with distance, pose a major obstacle to photonic demonstrations of quantum nonlocality. They also present a daunting challenge for long-distance applications that rely on nonlocality, such as device-independent quantum key distribution.
Recently, Chaturvedi, Viola, and Pawlowski (CVP) [arXiv:2211.14231] introduced the concept of routed Bell experiments with the goal of extending the range over which quantum nonlocality can be demonstrated.
These experiments involve two distant parties, Alice and Bob, and allow Bob to route his quantum particle along two possible paths and measure it at two distinct locations – one near and another far from the source.
The premise is that a high-quality Bell violation along the short path should limit the possible strategies underlying the experiment, thereby weakening the conditions required to detect nonlocal correlations along the long path. Building on this idea, CVP showed that there are certain quantum correlations in routed Bell experiments such that the outcomes of the remote measurement device cannot be classically predetermined, even when the detection efficiency is arbitrarily low.
In this paper, we show that the correlations considered by CVP, though they cannot be classically predetermined, do not require the transmission of quantum systems to the remote measurement device.
This leads us to define and formalize the concept of "short-range" and "long-range" quantum correlations in routed Bell experiments. We show that these correlations can be characterized through standard semidefinite-programming hierarchies for non-commutative polynomial optimization. We then explore the conditions under which short-range quantum correlations can be ruled out and long-range quantum nonlocality can be certified in routed Bell experiments. We point out that there exist fundamental lower bounds on the critical detection efficiency of the distant measurement device, implying that routed Bell experiments cannot yield an absolute advantage in terms of detection efficiency over standard Bell experiments, and in particular cannot demonstrate long-range quantum nonlocality at arbitrarily large distances. However, we do find that routed Bell experiments allow for reducing the detection efficiency threshold necessary to certify particular long-range quantum correlations. The improvements, while meaningful, are significantly smaller than those suggested by CVP's analysis.
Along the way, we introduce novel Bell inequalities tailored to the routed Bell setup. When the close parties have perfect detectors, one of these inequalities yields an optimal reduction in the efficiency requirement of the distant party (50%), compared to standard CHSH tests (~71%), using the same resources (a shared EPR pair and pairs of anti-commuting Pauli measurements).
Bell inequalities are natural tools used as witnesses of nonlocality in composite systems. Interestingly, finding the classical bound for a given Bell inequality is in one-to-one correspondence with finding the minimal-energy configuration of an associated classical spin system, a well-established NP-hard problem.
We show that finding the classical bound of broad families of Bell inequalities can be naturally framed as the contraction of an associated tensor network, but in tropical algebra, where the sum is replaced by the minimum and the product by arithmetic addition [arXiv:2208.02798]. We illustrate our method with paradigmatic examples both in the multipartite scenario and in the bipartite scenario with multiple outcomes. We showcase how the method extends to the thermodynamic limit for some translationally invariant (TI) systems.
For Bell inequalities in one-dimensional, infinite TI systems with O(1) correlator length, we show how to find their corresponding tensors and establish a connection between the notion of a tropical eigenvalue and the classical bound per particle as a fixed point of a tropical tensor renormalization procedure. Further, we show the relation between the tropical eigenspaces of the tensor of local deterministic strategies and their characterization as irreducible domino loops proposed in [PRL 118, 230401]. This allows us to recover their result in an elegant way, extending it to a finite number of parties and, in the infinite case, as a fixed point of a tropical tensor renormalization procedure. The procedure is also applicable to Bell inequalities with many outcomes, such as the CGLMP [PRL 88, 040404] and SATWAP [PRL 119, 040402] inequalities, as a recursive tropical tensor network contraction. Lastly, for the multipartite scenario with any number of inputs and outputs, we show that the upper bound on the number of vertices of the projected Bell local polytope is independent of the number of parties (contrary to, e.g., cases with much more symmetry, such as permutational invariance [Science 344, 1256; Ann. Phys. 362, 370]) and we provide methods to upper bound its number of vertices.
Our study opens many questions that we leave for future work: in particular, the relation between the tropical eigenspaces of the tensor of local deterministic strategies and the facet structure of the TI local polytope, and the study of multipartite nonlocality in systems with, e.g., hyperbolic or fractal geometries, which can arise in the context of symmetry-protected topological order [Stephen, D. T., PhD thesis, TUM, 2021]. On the numerical side, index slicing and heuristic tensor network contraction orders can improve overall efficiency. The formalism introduced here captures the dynamic-programming algorithms used for approximating the global optimum of highly nonconvex functions.
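A toy version of the idea, for the CHSH inequality alone rather than a full tensor network (and in the max-plus rather than min-plus convention, so the "tropical sum" is a max), can be sketched as follows:

```python
from itertools import product

# Tropical (max-plus) evaluation of the classical bound of CHSH:
#   S = <A0B0> + <A0B1> + <A1B0> - <A1B1>  <=  2.
# In the tropical semiring the sum over deterministic strategies becomes a
# max, and products of weights become additions, so the local bound is a
# max-plus "contraction" of the two parties' strategy tensors.
coeff = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1}  # game coefficients c_xy

def score(a, b):  # additive weight of one joint deterministic strategy
    return sum(c * a[x] * b[y] for (x, y), c in coeff.items())

strategies = list(product([-1, 1], repeat=2))  # outcomes for settings 0 and 1
# max-plus contraction: tropical "sum" (= max) over Alice, then over Bob
bound = max(max(score(a, b) for a in strategies) for b in strategies)
print(bound)  # 2, while quantum strategies reach 2*sqrt(2)
```

With two settings per party this is a 4x4 max-plus matrix contraction; the point of the tensor-network formulation is that for many parties the same contraction factorises instead of scaling exponentially.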
Bell inequalities play a key role in certifying quantum properties for device-independent quantum information protocols. It is still a major challenge, however, to devise Bell inequalities tailored for an arbitrary given quantum state. Existing approaches based on sums of squares provide results in this direction, but they are restricted by the necessity of first choosing measurement settings suited to the state. Here, we show how the sum-of-squares property can be enforced for an arbitrary target state by making an appropriate choice of nullifiers, which is made possible by leaving freedom in the choice of measurement. Using our method, we construct simple Bell inequalities for several families of quantum states, including partially entangled multipartite GHZ states and qutrit states. In most cases we are able to prove that the constructed Bell inequalities achieve self-testing of the target state. We also use the freedom in the choice of measurement to self-test partially entangled two-qubit states with a family of settings with two parameters. Finally, we show that some statistics can be self-tested with distinct Bell inequalities, hence obtaining new insight on the shape of the set of quantum correlations.
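For the simplest case, the sum-of-squares idea can be illustrated with the well-known SOS certificate of the Tsirelson bound for CHSH (a standard textbook decomposition, not the construction of this work):

```python
import numpy as np

# For the CHSH operator S = A0 B0 + A0 B1 + A1 B0 - A1 B1 with +/-1-valued
# observables such that B0, B1 anticommute, one has the SOS identity
#   2*sqrt(2)*I - S = (P1^2 + P2^2)/sqrt(2),
# with nullifiers P1 = A0 - (B0+B1)/sqrt(2), P2 = A1 - (B0-B1)/sqrt(2)
# (tensor factors left implicit), which certifies max(S) <= 2*sqrt(2).
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

A0, A1 = X, Z
B0, B1 = (X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)  # anticommuting pair

def op(A, B):  # A on Alice's qubit, B on Bob's
    return np.kron(A, B)

S = op(A0, B0) + op(A0, B1) + op(A1, B0) - op(A1, B1)
P1 = op(A0, I2) - op(I2, (B0 + B1) / np.sqrt(2))
P2 = op(A1, I2) - op(I2, (B0 - B1) / np.sqrt(2))

lhs = 2 * np.sqrt(2) * np.eye(4) - S
rhs = (P1 @ P1 + P2 @ P2) / np.sqrt(2)
print(np.allclose(lhs, rhs))                                   # True
print(np.isclose(np.linalg.eigvalsh(S).max(), 2 * np.sqrt(2)))  # True
```

Since a sum of squares of Hermitian operators is positive semidefinite, the identity immediately bounds the largest eigenvalue of S by 2*sqrt(2); the method described above turns this logic around, choosing nullifiers (and measurements) so that a target state is annihilated by them.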
Testing and verifying Bell inequalities on quantum devices is not only a challenging task that reveals fundamental aspects of quantum mechanics, but it also gives rise to direct applications in quantum cryptography and self-testing. This work aims to describe tools and methods to study Bell inequalities for graph states (https://arxiv.org/abs/2212.07133). We will present methods to compute the local, quantum and Svetlichny bounds as well as other characteristics of the inequality, such as the noise robustness and min-fidelity. In particular, four-partite inequalities for the absolutely maximally entangled (AME) state and the GHZ state with local dimension 3 are explicitly derived, and their characteristics are computed with the purpose of observing the violation experimentally with a large-scale integrated optics device (https://www.science.org/doi/10.1126/science.aar7053).
A useful tool to detect and quantify entanglement is the witness operator. A standard way to design entanglement witnesses for two or more particles is based on the fidelity of a pure quantum state; in mathematical terms, this construction relies on the Schmidt decomposition of vectors. In this contribution, we first present a method to build entanglement witnesses based on the Schmidt decomposition of observables [1]. This scheme works in both the bipartite and the multipartite case and is found to be strictly stronger than the concept of fidelity-based witnesses. We discuss in detail how to improve known witnesses for genuine multipartite entanglement for various multipartite quantum states and demonstrate that our approach can also characterize the dimensionality of entanglement.
Second, we will present a related method to construct multipartite entanglement witnesses based on the symmetric and antisymmetric structure constants of Lie algebras. The resulting observables are connected to projectors on highly entangled sub-spaces and are strong entanglement criteria for high-dimensional tripartite systems.
This contribution is joint work with Satoya Imai, Chengjie Zhang, Ali Asadian and Otfried Gühne.
[1] Chengjie Zhang, Sophia Denker, Ali Asadian and Otfried Gühne, arXiv:2304.02447
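For context, a minimal numerical sketch of the standard fidelity-based witness that the Schmidt-decomposition-of-observables method is stated to strictly improve upon (this is the textbook construction, not the new one): $W = \alpha\mathbb{1} - |\psi\rangle\langle\psi|$, with $\alpha$ the largest squared Schmidt coefficient of $|\psi\rangle$.

```python
import numpy as np

# Fidelity-based witness for the Bell state |phi+> = (|00>+|11>)/sqrt(2):
# W = alpha*I - |phi+><phi+|, with alpha = 1/2 the largest squared
# Schmidt coefficient.  Tr(W rho) >= 0 holds for all separable rho.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
W = 0.5 * np.eye(4) - np.outer(phi, phi)

# A product state |00>: witness value 0 (the separable bound is tight).
rho_sep = np.zeros((4, 4))
rho_sep[0, 0] = 1.0
print(np.trace(W @ rho_sep))   # 0.0

# The Bell state itself: negative value -1/2 detects its entanglement.
rho_ent = np.outer(phi, phi)
print(np.trace(W @ rho_ent))   # -0.5
```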
As quantum technologies advance, the ability to engineer increasingly large quantum devices has developed rapidly. In this context, verifying large entangled systems represents one of the main challenges in employing such systems for reliable quantum information processing. Though the most complete technique is undoubtedly full tomography, the inherent exponential increase of experimental and post-processing resources with system size makes this approach infeasible at even moderate scales. For this reason, there is currently an urgent need to develop novel methods that surpass these limitations. In this talk, I will review novel techniques [1] that require only a fixed number of resources (sampling complexity) and thus prove suitable for systems of arbitrary dimension. Specifically, a probabilistic framework for single-copy entanglement detection and few-copy device-independent quantum state verification is reviewed. I will also present the selective quantum state tomography method, which enables estimating arbitrary elements of an unknown state with a number of copies independent of the system's size. These hyper-efficient techniques define a dimension demarcation for partial tomography and open a path for novel applications.
References
[1] J. Morris, V. Saggio, A. Gočanin and B. Dakić, Quantum Verification and Estimation with Few Copies, Adv. Quantum Technol. 2100118 (2022), https://doi.org/10.1002/qute.202100118.
Proving the presence of spectral gaps in quantum many-body systems is important for understanding various properties, such as the ground-space structure, the presence of area-law correlations, and the applicability of the adiabatic theorem. It has been shown that in the 2D case, the spectral gap is undecidable (Cubitt et al., Nature 528, 207–211 (2015)). Furthermore, in the simpler 1D case, even for 2-local Hamiltonians, the problem of finding the gap is QMA-complete if the local dimension is $\geq 13$ (arXiv:0705.4067). In this study, we focus on the 1D AKLT model and establish a framework to analytically demonstrate the presence of a gap as well as provide a lower bound on it.
Previous techniques include the finite-size criteria introduced by Knabe (S. Knabe, Journal of statistical physics 52, 627 (1988)), which employ bounds on finite sizes to demonstrate the presence of a positive gap in the thermodynamic limit. This approach has been extensively explored across different lattice dimensions and various geometries (Anshu, PRB 101, 165104 (2020); Lemm et al, Journal of Statistical Physics 177, 1077 (2019)).
We formulate two constructions based on semidefinite programming to obtain lower bounds on the spectral gap of local, frustration-free, translation-invariant Hamiltonians. In the first construction, we find positive-operator decompositions of $H^2-\delta H$ as a sum of translation-invariant operators. The outcome of the SDP depends on the support size of the operators: the larger the support, the more accurate the lower bound on the gap. The second approach employs the moment-matrix method to determine a sum-of-squares decomposition of a polynomial in non-commuting variables represented as Pauli operators. Analogous in essence to the NPA hierarchy, this method yields a tighter lower bound when the support of the Pauli operators describing the final polynomial is larger. We are exploring the nature of the bounds obtained from these two methods and comparing their relative computational efficiency. It is important to note that the dual SDP provides information about the reduced density matrices of the first excited state.
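The feasibility condition behind the first construction can be checked on a toy model without any SDP solver: for a frustration-free $H$ with ground energy $0$, $H^2 - \delta H \succeq 0$ holds exactly when $\delta$ is at most the spectral gap. A minimal sketch, using exact diagonalization of the three-site spin-1/2 singlet-projector chain (an illustrative stand-in for the AKLT model):

```python
import numpy as np

# Frustration-free toy model: H = P_12 + P_23 on three qubits, where P
# projects a neighbouring pair onto the singlet.  Ground energy is 0,
# so H^2 - delta*H >= 0 exactly when delta <= spectral gap.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
P = np.outer(singlet, singlet)
I2 = np.eye(2)

H = np.kron(P, I2) + np.kron(I2, P)

evals = np.linalg.eigvalsh(H)
gap = min(e for e in evals if e > 1e-9)   # smallest nonzero eigenvalue

# Eigenvalues of H^2 - delta*H are e*(e - delta) for eigenvalues e of H,
# so positive semidefiniteness holds iff delta <= gap.
def is_psd(delta):
    return np.linalg.eigvalsh(H @ H - delta * H).min() > -1e-9

print(gap)               # 0.5 for this chain
print(is_psd(gap))       # True:  delta = gap is feasible
print(is_psd(gap + 0.1)) # False: any larger delta is infeasible
```

The SDP in the abstract searches for such a decomposition using only operators of bounded support, which is what makes the certificate scale to the thermodynamic limit.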
Additionally, we also explore the effect of SU(2) symmetries in the AKLT model. Due to the applicability of the Schur-Weyl duality, the problem is greatly simplified. We devise a set of basis vectors that facilitate the block-diagonalization of SU(2)-symmetric Hamiltonians. As a result, the SDP can be confined to the smaller diagonal blocks, rather than the full matrices of size exponential in the support of the operators.
Our approach will serve as a precursor to numerous practical applications, including the adiabatic preparation of tensor network states. Given that the ground states of 1D gapped local Hamiltonians are expressible in terms of matrix product states, certifying the gap with a lower bound will aid in finding the best path for an effective adiabatic algorithm.
We study the problem of learning the Hamiltonian of a many-body quantum system from experimental data. We show that the rate of learning depends on the amount of control available during the experiment. We consider three control models: one where time evolution can be augmented with instantaneous quantum operations, one where the Hamiltonian itself can be augmented by adding constant terms, and one where the experimentalist has no control over the system's time evolution. With continuous quantum control, we provide an adaptive algorithm for learning a many-body Hamiltonian at the Heisenberg limit: $T = \mathcal{O}(\epsilon^{-1})$, where $T$ is the total amount of time evolution across all experiments and $\epsilon$ is the target precision. This requires only preparation of product states, time-evolution, and measurement in a product basis. In the absence of quantum control, we prove that learning is standard quantum limited, $T = \Omega(\epsilon^{-2})$, for large classes of many-body Hamiltonians, including any Hamiltonian that thermalizes via the eigenstate thermalization hypothesis. These results establish a quadratic advantage in experimental runtime for learning with quantum control.
Joint work with Thomas E. O'Brien and Thomas Schuster
arXiv:2304.07172
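A schematic, noiseless sketch of the time-doubling idea behind Heisenberg-limited estimation (a single-parameter caricature, not the paper's adaptive many-body algorithm): each round doubles the evolution time and unwraps the accumulated phase against the coarser estimate, so total time $T \sim 2^K$ buys precision $\sim 1/T$, versus $1/\sqrt{T}$ for repeated fixed-time experiments. With shot noise each round's phase is only known to constant precision, which is where the doubling becomes essential; here the phases are exact, so the unwrapping recovers the frequency to machine precision.

```python
import math

def phase_mod(omega, t):
    """Accumulated phase omega*t, reduced mod 2*pi (ideal measurement)."""
    return (omega * t) % (2 * math.pi)

def estimate(omega, rounds):
    # Round k uses evolution time 2^k; each measured phase is unwrapped
    # by picking the 2*pi branch closest to the running estimate.
    est = phase_mod(omega, 1)
    for k in range(1, rounds):
        t = 2 ** k
        theta = phase_mod(omega, t)
        m = round((est * t - theta) / (2 * math.pi))
        est = (theta + 2 * math.pi * m) / t
    return est

omega_true = 1.2345   # assumed to lie in [0, 2*pi)
print(abs(estimate(omega_true, 10) - omega_true))  # ~0 in the ideal case
```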
We introduce a semidefinite programming algorithm to find the minimal quantum Fisher information compatible with an arbitrary dataset of mean values. This certification task allows one to quantify the resource content of a quantum system for metrology applications without complete knowledge of the quantum state. We implement the algorithm to study quantum spin ensembles. We first focus on Dicke states, where our findings challenge and complement previous results in the literature. We then investigate states generated during the one-axis twisting dynamics, where in particular we find that the metrological power of the so-called multi-headed cat states can be certified using simple collective spin observables, such as fourth-order moments for small systems, and parity measurements for arbitrary system sizes.
arXiv:2306.12711 [quant-ph]
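For pure states the quantity being certified can be computed exactly: the quantum Fisher information with respect to a generator $A$ is $F_Q = 4(\langle A^2\rangle - \langle A\rangle^2)$. A minimal check for the $N$-qubit GHZ state with the collective spin $J_z$ (a direct computation from the state, not the SDP of the abstract, which works from partial mean-value data only):

```python
import numpy as np

# For the N-qubit GHZ state and J_z = (1/2) * sum_i Z_i, the pure-state
# QFI is F_Q = 4*Var(J_z) = N^2, beating the shot-noise value N.
N = 3
Z = np.diag([1.0, -1.0])

def embed(op, site):
    """Place a single-qubit operator at `site` in an N-qubit register."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, op if i == site else np.eye(2))
    return out

Jz = 0.5 * sum(embed(Z, i) for i in range(N))

ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|000...> + |111...>)/sqrt(2)

mean = ghz @ Jz @ ghz
var = ghz @ (Jz @ Jz) @ ghz - mean ** 2
print(4 * var)   # 9.0 = N^2: Heisenberg scaling
```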
With the advent of cloud-based quantum computing, it has become vital to provide strong guarantees that computations delegated by clients to quantum service providers have been executed faithfully. Secure - blind and verifiable - Delegated Quantum Computing (SDQC) has emerged as one of the key approaches to address this challenge, yet current protocols lack at least one of the following three ingredients: composability, noise-robustness and modularity.
To tackle this question, our paper lays out the fundamental structure of SDQC protocols, namely mixing a client's desired computation with test computations that are designed to detect a server's malicious behaviour. Using this abstraction, our main technical result is a set of sufficient conditions on these components which imply the security and noise-robustness of generic SDQC protocols in the composable Abstract Cryptography framework. This is done by establishing a correspondence between these security properties and the error-detection capabilities of classes of test computations. Changing classes of test computations and how they are mixed yields automatically new SDQC protocols with different security and noise-robustness capabilities.
This approach thereby provides the desired modularity: our sufficient conditions on test computations simplify the steps required to prove the security of the protocols and allow one to focus on designing and optimising test rounds for specific situations. We showcase this by systematising the search for improved SDQC protocols for BQP computations. The resulting protocols do not require more hardware on the server's side than what is necessary to blindly delegate the computation without verification, and they outperform all previously known results.
A proper quantum memory is argued to be a quantum channel that cannot be simulated by a measurement followed by classical information storage and a final state preparation, i.e. by an entanglement-breaking (EB) channel. The verification of quantum memories (non-EB channels) is a task in which an honest user wants to test the quantum memory of an untrusted, remote provider. This task is inherently suited to the class of protocols with trusted quantum inputs, sometimes called measurement-device-independent (MDI) protocols. Here, we study the MDI certification of non-EB channels in continuous-variable (CV) systems. We provide a simple witness based on adversarial metrology, and describe an experimentally friendly protocol that can be used to verify all quantum memories that are not Gaussian-incompatibility-breaking. Our results can be tested with current technology and can be applied to test other devices that realise non-EB channels, such as CV quantum transducers and transmission lines.
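The EB/non-EB distinction at the heart of this task can be illustrated in finite dimension (a two-qubit toy model, not the CV witness of the abstract): a measure-and-reprepare channel is entanglement breaking, so applying it to half of a Bell state leaves a PPT output, while a perfect memory (the identity channel) preserves the negativity of the partial transpose.

```python
import numpy as np

# Bell state |phi+> shared between the user (A) and the memory (B).
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)

def partial_transpose(r):
    """Partial transpose on the second qubit of a 4x4 density matrix."""
    return r.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# Entanglement-breaking simulation: measure B in the computational
# basis and reprepare |k><k| -- output is sum_k (I ⊗ |k><k|) rho (I ⊗ |k><k|).
kets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
out = np.zeros((4, 4))
for k in (0, 1):
    Mk = np.kron(np.eye(2), np.outer(kets[k], kets[k]))
    out += Mk @ rho @ Mk

print(np.linalg.eigvalsh(partial_transpose(rho)).min())  # -0.5: NPT (memory kept entanglement)
print(np.linalg.eigvalsh(partial_transpose(out)).min())  # ~0: PPT (EB channel destroyed it)
```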
We present a strategy to gain confidence in the correct performance of a quantum computer in realizing a particular computation. This approach makes use of the fact that matchgate circuits can be classically efficiently simulated, but become universal when supplemented with additional resources. Given a universal circuit of interest, we derive verification circuits that strongly resemble the original one but can be classically efficiently simulated. Crucially, our method allows for the inclusion of errors in the simulation. Comparing the output of the quantum computer with a weak classical simulation thereof then allows one, for instance, to extract the error parameters.
Quantum key distribution (QKD) is at the forefront of current technological applications of quantum theory. With its deployment underway, it is imperative that we are able to design protocols that are both secure and efficient. However, finite-size effects (e.g. statistical fluctuations) present a major difficulty in producing efficient security proofs -- with current techniques giving key rates that are only asymptotically tight. In this work we give a generic security proof construction for device-dependent protocols with rates that are not only asymptotically tight but also effectively tight at the leading-order correction. Applied to example protocols, we demonstrate significant improvements over existing techniques, boosting the practicality of QKD. Mathematically, our work involves the development of new entropy-like quantities that allow us to handle the finite-size fluctuations.
1) Introduction
Recently, there has been a growing focus on quantum communication due to its inherent security advantages. However, there are still many challenges that need to be addressed, such as the distribution of entangled states over long distances, closing the Bell test loopholes, and increasing the key rate.
In this work, we present an innovative device-independent quantum key distribution (DI-QKD) protocol based on the distribution of near-maximally entangled multiphoton states across long distances. Our proposed strategy employs the resources available within the current landscape of integrated quantum photonic technology, including the utilization of squeezed vacuum states and photon-number-resolving detectors.
Moreover, this protocol enables entanglement sharing and quantum communication in free space.
2) Entanglement distribution
The distributed multi-photon bipartite entangled states are Generalized Holland-Burnett (GHB) states, while losses in the optical setup are modeled with beam splitters.
Alice and Bob each locally generate a photon-number-entangled two-mode squeezed vacuum state, obtained by spontaneous parametric down-conversion (SPDC). They send their idler modes to a remote station, Charlie, who interferes the two modes on a balanced beam splitter (BS) and performs photon-number-resolving (PNR) detection on the BS output modes. The state and the amount of shared entanglement are parametrized by the outcomes of Charlie's measurement. After Charlie classically announces the measurement outcome, Alice and Bob know which state they possess and may employ it for quantum applications.
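A minimal Fock-basis sketch of the SPDC resource state feeding this scheme (the squeezing parameter is illustrative): the two-mode squeezed vacuum is $|\mathrm{TMSV}\rangle = \sum_n c_n |n,n\rangle$ with $c_n = \tanh(r)^n/\cosh(r)$, so its photon-number distribution is geometric with mean photon number $\sinh^2(r)$ per mode.

```python
import math

# Two-mode squeezed vacuum: p(n) = |c_n|^2 = (1 - lambda^2) * lambda^(2n)
# with lambda = tanh(r).  Truncating the Fock sum at n = 200 is far
# beyond the support that matters for r = 0.5.
r = 0.5
lam = math.tanh(r)
p = [(1 - lam**2) * lam**(2 * n) for n in range(200)]

mean_photons = sum(n * pn for n, pn in enumerate(p))
print(sum(p))            # ~1: normalized
print(mean_photons)      # ~0.2715
print(math.sinh(r)**2)   # matches sinh(r)^2
```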
3) Key generation and extraction
Alice and Bob interfere their signal modes with coherent states using variable beam splitters. Alice repeatedly chooses between three coherent states as her setting, while Bob uses two. In the homodyne limit, this setup allows them to perform a displacement operation on their modes. They interpret the readout obtained with the PNR detectors as binary outcomes.
Alice and Bob apply post-selection on their key generation rounds in which they randomly and independently retain bits “0” with probability p, and keep all bits “1”. Then, they announce the discarded rounds through an authenticated classical channel.
Finally, they apply error correction and privacy amplification. To find the key rate using the Devetak-Winter formula, they need to calculate their conditional entropy and bound Eve's guessing probability based on the full behavior of their statistics. This can be done by certifying quantum correlations via SDP (i.e., the NPA hierarchy).
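To make the structure of the Devetak-Winter formula $r = H(A|E) - H(A|B)$ concrete, here is the textbook BB84-like special case in which both terms reduce to binary entropies of a single error rate $Q$, giving $r = 1 - 2h(Q)$ (a pedagogical sketch only; the protocol above instead bounds $H(A|E)$ via the NPA hierarchy on the observed behavior):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def dw_rate(Q):
    # Devetak-Winter rate r = H(A|E) - H(A|B) in the symmetric
    # BB84-like case: both terms are binary entropies of the QBER Q.
    return 1 - 2 * h(Q)

print(dw_rate(0.05))   # ~0.427 secret bits per round
print(dw_rate(0.11))   # ~0: the familiar ~11% QBER threshold
```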
We present a characterization of the set of nonsignaling correlations in terms of a two-dimensional representation involving the maximal value of a Bell functional and the mutual information between the parties. In particular, we apply this representation to the bipartite Bell scenario with two measurements and two outcomes. In terms of these physically meaningful quantities, and through numerical optimization methods (as well as some analytical results), we investigate the boundaries of the different subsets of the nonsignaling correlations, focusing on the frontier between the quantum and the post-quantum ones. Our analysis shows that there is a trade-off between the amount of mutual information shared by the parties and the magnitude of the violation of a given Bell inequality. Notably, the Tsirelson bound appears as a singular point of this trade-off without resorting to the formalism of quantum mechanics. Finally, we discuss our ongoing attempts to study whether this kind of "informational singularities" can be found in more general Bell scenarios.
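One axis of this two-dimensional representation, the Bell-functional value, can be sketched for the extremal nonsignaling point of the (2,2,2) scenario (the mutual-information axis is omitted here): the PR box $p(ab|xy) = 1/2$ iff $a \oplus b = xy$ reaches the algebraic CHSH maximum of 4, beyond the Tsirelson bound $2\sqrt{2}$ of quantum correlations.

```python
import math
from itertools import product

def pr_box(a, b, x, y):
    """PR box: uniform outputs correlated so that a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def chsh(p):
    # S = sum_{x,y} (-1)^{xy} <A_x B_y>,
    # with <A_x B_y> = sum_{a,b} (-1)^{a+b} p(ab|xy).
    S = 0.0
    for x, y in product((0, 1), repeat=2):
        corr = sum((-1) ** (a ^ b) * p(a, b, x, y)
                   for a, b in product((0, 1), repeat=2))
        S += (-1) ** (x * y) * corr
    return S

print(chsh(pr_box))      # 4.0: the nonsignaling maximum
print(2 * math.sqrt(2))  # 2.828...: the quantum (Tsirelson) bound
```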