Literature DB >> 31485254

Experimental few-copy multi-particle entanglement detection.

Valeria Saggio1, Aleksandra Dimić2, Chiara Greganti1,3, Lee A Rozema1, Philip Walther1, Borivoje Dakić1,4.   

Abstract

Many future quantum technologies rely on the generation of entangled states. Quantum devices will require verification of their operation below some error threshold, but the reliable detection of quantum entanglement remains a considerable challenge for large-scale quantum systems. Well-established techniques for this task rely on the measurement of expectation values of entanglement witnesses, which, however, require many measurement settings to extract. Here we develop a generic framework for efficient entanglement detection that translates any entanglement witness into a resource-efficient probabilistic scheme, whose confidence grows exponentially with the number of individual detection events, namely copies of the quantum state. To benchmark our findings, we experimentally verify the presence of entanglement in a photonic six-qubit cluster state generated using three single-photon sources operating at telecommunication wavelengths. We find that the presence of entanglement can be certified with at least 99.74% confidence by detecting 20 copies of the quantum state. Additionally, we show that genuine six-qubit entanglement is verified with at least 99% confidence by using 112 copies of the state. Our protocol can be carried out with a remarkably low number of copies and in the presence of experimental imperfections, making it a practical and applicable method to verify large-scale quantum devices.

Year:  2019        PMID: 31485254      PMCID: PMC6726491          DOI: 10.1038/s41567-019-0550-4

Source DB:  PubMed          Journal:  Nat Phys        ISSN: 1745-2473            Impact factor:   20.034


Introduction

The reliable verification of quantum entanglement [1] is an essential task for quantum technologies, but it remains a considerable challenge for large-scale quantum systems. The generation of large entangled states [2-9] is required to investigate new quantum phenomena and develop novel applications. At the same time, this makes the problem of reliable verification both more important and significantly more demanding in time and resources. The most exhaustive method for inferring quantum entanglement is to reconstruct density matrices via quantum state tomography [10]. However, the number of measurement settings required to characterize a generic quantum state grows exponentially with the size of the system, making this approach unfeasible for large devices. In many cases the full density matrix is not needed, and alternative approaches for entanglement detection, such as witness-based methods, have been developed (see [11] and references therein). Although these techniques significantly reduce the number of measurement settings [12-15], they still require many detection events (i.e. many copies of the quantum state) to extract the expectation values of the different operators used to construct a witness. Moreover, almost all the standard techniques assume that every detection event is identical and independent, a situation that is challenging to achieve in practice. For these reasons, as large quantum devices move closer to practical realization, novel methods that are both reliable and resource-efficient are urgently needed. In the past few years, new approaches exploiting various random sampling techniques have been developed, such as randomized benchmarking [16], quantum state tomography via compressed sensing [17] and machine learning [18, 19], direct fidelity estimation [20], self-testing methods [21-26], quantum state verification [27, 28], entanglement verification [29-33], and many others.
Most of these techniques are focused on minimizing the number of measurement settings, while an increasing number of copies is needed when higher accuracy in parameter estimation (for example the expectation value of an entanglement witness) is required. These parameters are compared to a certain threshold to conclude whether or not the state is entangled. In contrast here, instead of doing parameter estimation with a certain accuracy, we ask the following: given a certain number of experimental runs, what is the statistical significance that the state is entangled? Remarkably, in this case it has been shown in [34] that even a single copy of the quantum state can be considered as a meaningful resource for entanglement detection. Although parameter estimation reveals much more information about the actual state, it requires significantly more resources than our protocol. Here we develop a generic framework to translate any entanglement witness into a reliable and resource-efficient procedure and apply it to a real experimental situation. We show that our approach detects entanglement with an exponentially-growing confidence in the number of copies of the quantum state, is implemented via local measurements only, and does not require the assumption of independent and identically distributed (i.i.d.) experimental runs. Furthermore, we show that in certain cases our procedure works even if the number of available copies is less than the total number of measurement settings needed to extract the mean value of the witness operator, i.e. even if the corresponding witness-based method is not logically possible. We demonstrate the applicability of our method by validating the presence of quantum entanglement in a six-photon cluster state. This state, produced for the first time at telecommunication wavelengths, is generated with three high-quality single-photon sources and detected with pseudo-number resolving superconducting nanowire detectors. 
We obtain a fidelity between the produced state and the ideal one of 0.75 ± 0.06, which is comparable to fidelities obtained in state-of-the-art photonic experiments [2]. We verify the presence of entanglement with at least 99.74% confidence by using around 20 copies of the quantum state, and also show that 112 copies suffice to certify genuine six-qubit entanglement with at least 99% confidence. In this way, we lay the foundation for a new, efficient detection scheme, providing a key tool to characterize quantum devices with minimal resources. While our work shows similarities with Ref. [34], substantial improvements have been made. Ref. [34] focuses only on reducing the resources down to a single copy of the state, and therefore identifies only some suitable classes of quantum states for which the theory works. Furthermore, the reduction to a single copy there is made possible by increasing the size of the system to tens of qubits, which is not practical in realistic situations. In contrast, here we develop a new theory applicable to any quantum state (of arbitrary system size) for which one can construct an entanglement witness. Moreover, Ref. [34] does not discuss different types of entanglement (e.g. genuine multipartite entanglement), while we provide a tool to explicitly distinguish between them. In many cases this distinction is essential as, for example, genuine multipartite entanglement is required for many quantum information protocols.

Probabilistic entanglement verification

We start by clarifying some basic definitions and types of entanglement. A bipartite quantum state is called separable if it is a mixture of product states (i.e. states of the type |ψ1〉|ψ2〉). A non-separable state is called entangled. For multipartite systems, one can define various types of entanglement [11]. For a multipartite quantum system we say that the state is biseparable if we can divide the system into two parts such that the state is separable with respect to that bipartition. If this is not possible, the state exhibits genuine multipartite entanglement. A state is fully separable if it is a mixture of product states over all subsystems. In the standard witness-based approach (a witness operator always specifies the type of entanglement), the presence of entanglement is verified by measuring the mean value of the witness operator W to be less than zero; here 〈W〉 = Tr(Wρ) ≥ 0 for any separable state ρ. W is in general not locally accessible: one has to decompose it into a sum of local observables, W = Σ_{k=1}^L W_k, where each W_k needs to be measured in a separate experimental run, requiring one to estimate several mean values and therefore demanding a large number of copies. Thus, this technique is not reliable when few copies are available. Moreover, even for a limited number of copies N, one has to use L independent measurement settings and ensure that for every individual detection event the source provides exactly the same copy of the quantum state (this is the i.i.d. assumption). Neither of these two requirements is very practical. We overcome both of these difficulties by using a probabilistic framework for entanglement detection. More precisely, our protocol is centred on a set ℳ = {M_1, M_2, …, M_L} of binary local multi-particle observables, which we will show can be derived from any entanglement witness. Each M_k (with k = 1, …, L) returns a binary outcome m_k = 1, 0, associated with the success or failure of the measurement, respectively.
The procedure consists of randomly drawing measurements N times from the set ℳ (each M_k drawn with some probability ε_k) and applying each of them to the quantum state, obtaining the outcomes m_k. The set ℳ is tailored such that the probability of success (i.e. of obtaining m_k = 1 for a randomly chosen M_k) for any separable state is upper bounded by a certain value p_s < 1, which we call the separable bound. On the other hand, the probability of success is maximized to a value p_e, called the entanglement value, if a certain entangled state (the target state) has been prepared. The entanglement value is strictly greater than the separable bound, i.e. the difference δ0 = p_e − p_s > 0. In a realistic setting, we prepare a certain state ρ and assume that the application of the M_k to it returns S successful outcomes in N runs. The observed deviation from the separable bound therefore reads

δ = p_obs − p_s = S/N − p_s,   (1)

where p_obs = S/N is the observed success probability. It has been shown in [34] that the probability P(δ) of observing a deviation δ > 0 for any separable state is upper bounded as P(δ) ≤ e^(−N D(p_s + δ ‖ p_s)), which goes to zero exponentially fast in the number of copies N. Here D(x ‖ y) = x log(x/y) + (1 − x) log[(1 − x)/(1 − y)] is the Kullback-Leibler divergence. Therefore, the confidence C(δ) of detecting quantum entanglement is lower bounded by Cmin(δ) as follows:

C(δ) ≥ Cmin(δ) = 1 − e^(−N D(p_s + δ ‖ p_s)),   (2)

and converges exponentially fast to unity in N. From (2) we can estimate the average number of copies N needed to achieve a certain confidence C0: for a target-state preparation (δ = δ0) we find

N ≤ −D(p_s + δ0 ‖ p_s)^(−1) log(1 − C0),   (3)

which grows logarithmically at the rate K = D(p_s + δ0 ‖ p_s)^(−1) as C0 approaches unity. If δ evaluates to a positive number, we can use (1) to calculate Cmin(δ) from (2). We summarize the entanglement detection procedure in Fig. 1.
Fig. 1

Illustration of the entanglement detection protocol.

The first step consists of randomly drawing the measurements M_k from the set ℳ N times. Next, they are applied to the experimental state ρ, which then returns binary outcomes 1 or 0 (success or failure, respectively). The superscripts in ρ^(1), …, ρ^(N) account for possible variations of the state between runs due to experimental imperfections. After N runs, the protocol returns S successful outcomes. If the deviation δ = S/N − p_s > 0, entanglement is verified in the system with a confidence of at least Cmin(δ). Otherwise, the protocol is inconclusive.

Additionally, owing to the random sampling of the measurement settings, our protocol does not require the i.i.d. assumption (see [34] for the proof). This is an important feature of our procedure, as the experimental state is necessarily subject to variations over time caused by experimental conditions such as source drift. It is known that in such cases other schemes can lead to inadequate results [35, 36], whereas our protocol never yields false positives.
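The confidence bound in (2) can be evaluated directly from the run counts. The following is a minimal sketch in Python (the function names and example numbers are ours, not the authors'), assuming the separable bound p_s and the counts N and S are known:

```python
import math

def kl_divergence(x: float, y: float) -> float:
    """Kullback-Leibler divergence D(x||y) between two Bernoulli distributions."""
    d = x * math.log(x / y) if x > 0 else 0.0
    d += (1 - x) * math.log((1 - x) / (1 - y)) if x < 1 else 0.0
    return d

def min_confidence(n_copies: int, successes: int, p_sep: float) -> float:
    """Lower bound Cmin on the confidence that the state is entangled.

    Returns 0 when the observed deviation from the separable bound is
    non-positive (the protocol is then inconclusive)."""
    delta = successes / n_copies - p_sep
    if delta <= 0:
        return 0.0
    return 1.0 - math.exp(-n_copies * kl_divergence(p_sep + delta, p_sep))

# Illustrative numbers: 20 runs, all successful, against a separable
# bound of 3/4; the confidence already exceeds 99%.
print(min_confidence(20, 20, 0.75))
```

Note that for a perfect run record (S = N) the divergence reduces to D(1‖p_s) = −log p_s, so the bound simplifies to Cmin = 1 − p_s^N.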

Translating entanglement witnesses into the probabilistic framework

Any entanglement witness can be translated into our probabilistic verification protocol. Therefore, our method can detect any type of entanglement (e.g. genuine multipartite, bipartite) for which there exists a corresponding witness. Here we will show how to construct the set ℳ and find the corresponding separable bound p_s for any entanglement witness (see Methods, Section I for the detailed proof). We start with the observation that for every witness W, one can define a new equivalent one W′, whose mean value is always positive and bounded by 1, by using the equivalence transformation W′ = aW + b𝟙 for suitable constants a and b. The mean value of this new witness is the probability of success of our protocol, which is upper bounded by p_s for any separable state and achieves p_e > p_s for a certain entangled state. To illustrate the translation procedure, we consider the example of multipartite entanglement detection in an n-qubit graph state |G〉 via the witness

W = 𝟙/2 − |G〉〈G|,

for which we have 〈W〉 ≥ 0 for any biseparable state. This witness W can be easily transformed into the equivalent one W′ = (𝟙 + |G〉〈G|)/2, for which we get 〈W′〉 ≤ 3/4 = p_s for any biseparable state. The graph state can be decomposed as the sum of its stabilizers S_k,

|G〉〈G| = (1/2^n) Σ_{k=1}^{2^n} S_k,

where the S_k are certain products of local Pauli observables. Therefore, the new witness reads

W′ = Σ_{k=1}^{2^n} ε_k M_k,

where M_k = (𝟙 + S_k)/2 are the binary observables needed in our probabilistic protocol. The sampling is uniform, i.e. the probabilities equal ε_k = 1/2^n. As the S_k stabilize the state, p_e = 1 for an ideal graph state. This procedure also leads to an estimate of the fidelity F = 〈G|ρ|G〉 between the experimentally generated state ρ and the ideal one ρ_G = |G〉〈G|, as in [20]. Note that we can also use our experimental data for quantum state verification [27]. Given p_s and p_e we can obtain the average number of copies needed to achieve a certain confidence C0 from (3). We get N ≤ −D(1 ‖ 3/4)^(−1) log(1 − C0) ≈ −3.48 log(1 − C0).
Therefore, to achieve confidence of C0 = 0.99 we need at most Nmax ≈ 16 copies of |G〉, which is a remarkably low number. Furthermore, this number is independent of the system size (i.e. number of qubits n). Notice that different local decompositions of the witness will lead to different scaling constants K in (3), and finding the optimal decomposition is an open challenge [31]. Reduction of resources down to a single copy can be achieved in certain cases [34] by considering a particular dependence of the separable bound on n (see Methods, Section II for a detailed discussion). Once we have found the M’s and p, we can apply the protocol illustrated in Fig. 1 and find the minimum confidence for entanglement detection.
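Equation (3) can be evaluated numerically for the graph-state example above (p_s = 3/4, p_e = 1). A small sketch with our own function names, using natural logarithms as in the ≈ −3.48 log(1 − C0) expression:

```python
import math

def kl(x: float, y: float) -> float:
    """Binary Kullback-Leibler divergence D(x || y); for x = 1 it
    reduces to log(1 / y)."""
    d = x * math.log(x / y)
    if x < 1:
        d += (1 - x) * math.log((1 - x) / (1 - y))
    return d

def copies_needed(p_sep: float, p_ent: float, confidence: float) -> float:
    """Average number of copies N from equation (3), assuming the target
    state is prepared so that the deviation is delta_0 = p_ent - p_sep."""
    return -math.log(1 - confidence) / kl(p_ent, p_sep)

# Graph-state witness: p_s = 3/4, p_e = 1 -> about 16 copies for 99%
# confidence, independent of the number of qubits n.
print(copies_needed(0.75, 1.0, 0.99))
```

The qubit number n never enters the formula, which is the size-independence noted in the text.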

Entanglement verification tailored for a six-qubit cluster state

We will now translate two different witnesses, tailored to our experimental state, into our probabilistic framework. Our ideal experimental six-qubit cluster state is |Cl6〉 (equation (4)), which is equivalent to the state shown in Fig. 2 up to local unitary transformations.
Fig. 2

Schematic of an H-shaped six-qubit cluster state.

The standard way to represent a graph state is to draw a set of vertices and edges. Each vertex is drawn as a disk representing a single qubit prepared in the eigenstate |+〉 of the Pauli operator X. Edges are solid lines representing pairwise controlled phase gates applied to the connected qubits. As a result of the application of these gates, entanglement is created between the linked qubits.

We consider the two following witnesses, defined to detect genuine six-qubit entanglement. The first is the witness presented in [12], composed of only two measurement settings,

W1 = 3𝟙 − 2[Π_{k odd}(𝟙 + G_k)/2 + Π_{k even}(𝟙 + G_k)/2],

where the G_k (with k = 1, …, 6) are the experimental generators of the cluster state [37], listed in the Methods, Section III. The second is the standard witness tailored for our cluster state [38],

W2 = 𝟙/2 − |Cl6〉〈Cl6|,

which requires 2^6 = 64 measurement settings (since |Cl6〉〈Cl6| = (1/64) Σ_{k=1}^{64} S_k, analogously to the previous graph-state example). For both witnesses 〈W1〉, 〈W2〉 ≥ 0 for any biseparable state, thus allowing detection of genuine six-qubit entanglement. Nevertheless, both can also be used to distinguish fully separable states from entangled ones, i.e. to detect only some entanglement, and the corresponding separable bounds can be evaluated numerically [39]. We can then distinguish two types of separable bounds: one is the so-called biseparable bound p_b, which can be directly extracted from our translation protocol and is therefore used for detection of genuine six-qubit entanglement; the other is the fully separable bound p_fs, which is evaluated numerically and used to detect some entanglement. Following the procedure shown in the Methods, Section I, we find for W1 the set ℳ_W1 = {M1, M2}, where M1 = Π_{k odd}(𝟙 + G_k)/2 and M2 = Π_{k even}(𝟙 + G_k)/2 are the binary local observables, and the corresponding biseparable bound is p_b = 3/4. For W2, the binary observables constituting the set ℳ_W2 are M_k = (𝟙 + S_k)/2 (with k = 1, …, 64) and the biseparable bound is again p_b = 3/4 (see the example of the graph state discussed in the previous section). The derived fully separable bounds read p_fs = 9/16 for W1 and p_fs = 5/8 for W2. The entanglement values are p_e = 1 for both witnesses.
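Under the idealizing assumption that every run succeeds (p_obs = 1, so that D(1‖p) = −log p), equation (3) gives the copy budget implied by each of the four bounds above. This comparison is our own illustration, not taken from the paper; experimental imperfections lower p_obs and increase these numbers:

```python
import math

def copies_for_confidence(p_bound: float, confidence: float) -> float:
    """Copies needed when every run succeeds (p_obs = 1), so the
    divergence in equation (3) reduces to -log(p_bound)."""
    return math.log(1 - confidence) / math.log(p_bound)

bounds = {
    "W1, fully separable (9/16)": 9 / 16,
    "W2, fully separable (5/8)": 5 / 8,
    "W1 and W2, biseparable (3/4)": 3 / 4,
}
for label, p in bounds.items():
    n = copies_for_confidence(p, 0.99)
    print(f"{label}: ~{n:.1f} copies for 99% confidence")
```

The smaller the bound, the fewer copies are needed, which is why excluding full separability (bounds 9/16 and 5/8) requires fewer runs than excluding biseparability (bound 3/4).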

The experimental setup

The experimental setup used for the cluster state generation is shown in Fig. 3a.
Fig. 3

Experimental setup.

(a) A picosecond Ti:Sapphire laser outputs a beam that is temporally multiplexed to double the repetition rate and reduce contributions from unwanted SPDC high-order emissions. Two beams, equally split at the third BS, pump the first and third single-photon source, while the beam exiting the right output of the second BS passes through a HWP and a PBS before pumping the second source. In this way the power of the second source can be tuned. Movable translation stages are used as delay lines for temporal synchronization. A HWP and a QWP are placed along each beam path to set the needed polarization. Each beam pumps a single-photon source, which emits a polarization-entangled photon pair via type-II SPDC. At each PBS, two photons from different sources interfere. All the photons are then sent to a tomographic system composed of a QWP, a HWP and a PBS. Eventually, photons exiting both outputs of the PBSs reach the single-photon detectors. (b) Schematic of a single-photon source. A PPKTP crystal placed into a Sagnac interferometer is used to generate single photons. DM, Dichroic Mirror; DPBS, Dual wavelength PBS; DHWP, Dual wavelength HWP. Narrow-Band and Longpass filters are respectively used to increase the spectral purity of the photons and cut the residual pump.

In the Preparation stage, a Ti:Sapphire pulsed laser is temporally multiplexed [40, 41] to a repetition rate of 152 MHz with two beam splitters (BSs). It then pumps three identical single-photon sources, each built in a Sagnac configuration [42-45]. Each source produces a polarization-entangled photon pair at telecommunication wavelengths via collinear type-II Spontaneous Parametric Down-Conversion (SPDC), specifically the singlet state |ψ−〉i,j = (|H〉i|V〉j − |V〉i|H〉j)/√2, where |H〉, |V〉 denote the horizontal and vertical photon polarization states and i, j the photons' spatial modes. A schematic of one single-photon source is shown in Fig. 3b (see Methods, Section IV for details). It is possible to switch between different Bell states with a half-waveplate (HWP) placed along one photon path (see Fig. 3b) and/or by rotating the HWP positioned along the pump path right before the source. In the Generation stage, after switching from |ψ−〉1,2 and |ψ−〉3,4 to |ϕ−〉1,2 and |ϕ−〉3,4, and from |ψ−〉5,6 to |ϕ+〉5,6, photon pairs from different sources interfere at two polarizing BSs (PBSs), at which they are temporally synchronized with the help of delay lines placed along the second and third pump paths. A HWP placed in the path of the third photon is needed to generate the target cluster state. In the Detection stage, each photon passes through a tomographic system (a motorized quarter-waveplate (QWP) and HWP followed by a PBS) that enables measurements in different polarization bases, and is then sent to the detection apparatus, which consists of twelve pseudo-number resolving multi-element superconducting detectors [46, 47]. Lenses to adjust the beam size, fibers and manual polarization controllers (to compensate for polarization changes in the fibers) are not shown in the figure. When the HWP in the third photon path is set to perform a Hadamard gate, the simultaneous detection of the six photons at the outputs nominally produces the state (4).

Results

For the witness W1, we applied N = 150 measurement settings randomly sampled from the set ℳ_W1. For each measurement setting, we acquired data for 40 seconds. To ensure that our sampling was random, we analyzed only the first six-photon event in each setting. In 12 of the settings, no six-photon events were detected (see Methods, Section IV), resulting in 138 copies of the state being used. Fig. 4a,b show plots of the minimum confidence Cmin(δ) versus the number of copies N when the fully separable bound p_fs and the biseparable bound p_b are used, respectively. The points are obtained by plugging the experimentally observed δ into (2) to find Cmin(δ).
Fig. 4

Growth of confidence of entanglement with the number of copies of the quantum state.

Blue dots represent Cmin extracted from (2). (a), (b) show the results for the witness W1; (c), (d) for the witness W2. (a) and (c) show the minimum confidence when the fully separable bound is used (meaning Cmin(S/N − 9/16) and Cmin(S/N − 5/8), respectively), while (b) and (d) are extracted by using the biseparable bound (meaning Cmin(S/N − 3/4) in both cases). The observed deviations δ are positive for all the points in the four plots. The region in which the confidence stabilizes is highlighted and shown in the insets, where areas marked with different colors indicate different thresholds for the confidence level. Red dotted lines emphasize the different levels.

For the witness W2, we acquired data in the same manner, randomly choosing N = 160 measurement settings from the set ℳ_W2. As before, Fig. 4c,d show the increase in the minimum confidence in the full-separability (where p_fs is used) and biseparability (where p_b is used) cases, respectively. The experimental plots confirm the efficiency of our entanglement verification method by showing an exponential growth of the confidence. The insets show that the confidence stabilizes towards a certain value with N. For the ideal state (cluster state with fidelity of 1), the expression for the minimum confidence in (2) is a monotonic function of the number of copies because all the binary outcomes evaluate to 1. However, since usual technical imperfections decrease the fidelity, occasional events with the binary outcome 0 can occur at random. Such an outcome will occasionally pull the confidence down, while an outcome 1 will pull it up. Naturally, the fluctuations in the confidence values are linked to the number of measured copies, such that a higher number of copies suppresses these fluctuations. All of this can be seen in Fig. 4. In Fig. 4a the confidence stabilizes to at least 99.12% with only 36 copies. Already 58 copies suffice to exclude full separability in the system with at least 99.99% confidence. Fig. 4b shows verification of genuine six-qubit entanglement with at least 91% confidence with 75 copies, and already 126 copies suffice to reach at least 97%. In Fig. 4c we see that only 20 copies suffice to reveal the presence of entanglement with at least 99.74% confidence, and 50 copies provide more than 99.99%. Fig. 4d shows that biseparability can be excluded with more than 97% confidence with 50 copies, and 112 copies provide more than 99%. Interestingly, in contrast to the standard witness-based method, in this case our protocol works with fewer copies than the total number of measurement settings, i.e. 64.
As previously discussed, in this last case we can also estimate the fidelity F = 〈Cl6|ρ|Cl6〉 = 0.75 ± 0.06. The areas marked with different colours in the plots and the red dotted lines help visualize the different confidence levels. In our new approach we bypass the measurement of mean values altogether. Our results clearly show that we are able to detect entanglement with a very high confidence using only a few copies of the quantum state. The practicability of our method may prove essential for entanglement detection in large-scale systems in future experiments. It should also be advantageous to apply our techniques to entanglement verification in other physical systems, such as trapped ions [3], superconducting circuits [4], or continuous-variable systems [7-9].

Methods

Formal proof for generic witness translation

Here, we show how to translate any entanglement witness into our probabilistic protocol. Conventionally, a witness operator W is normalized such that 〈W〉 = Tr(Wρ) ≥ 0 for any separable state ρ. An equivalent form reads W = g𝟙 − O, where O is a Hermitian operator for which 〈O〉 = Tr(Oρ) ≤ g holds for any separable ρ [48]. Now, let us consider the local decomposition

O = Σ_{k=1}^q W_k,

where q is the number of local settings needed to measure 〈O〉. We are free to add a constant term to each local component such that they all become non-negative observables. This transformation leads to the new components

W′_k = W_k + a𝟙,

where we choose a ≥ 0 to take the minimum possible value for which all W′_k are non-negative; accordingly, O′ = Σ_k W′_k = O + aq𝟙. Altogether, we can rewrite the separability condition as

Tr(O′ρ) ≤ g + aq for every separable state ρ.

Our main aim is to test this inequality in practice via our probabilistic procedure. Note that this inequality is violated for a certain entangled (target) state ρ_t, i.e. Tr(O′ρ_t) = g_e + aq, with g_e − g > 0. We proceed by writing the spectral decomposition

W′_k = Σ_{i=1}^{µ_k} λ_{ki} M_{ki},

where the M_{ki} are eigen-projectors (binary observables), with λ_{ki} > 0 since the W′_k are non-negative operators. The number µ_k counts the non-zero eigenvalues of W′_k. Furthermore, we define the constant

τ = Σ_{k=1}^q Σ_{i=1}^{µ_k} λ_{ki}.

We have all we need to set up our verification procedure. As the W′_k are local observables, the binary operators M_{ki} are local as well. They constitute the set ℳ, which contains in total L = Σ_{k=1}^q µ_k elements. The probability weights for the M_{ki} are set to ε_{ki} = λ_{ki}/τ. For a given copy of a separable state ρ, the probability to obtain success for a randomly drawn measurement from the set ℳ is given by

Pr[success] = Σ_{k,i} ε_{ki} Tr(M_{ki}ρ) = Tr(O′ρ)/τ ≤ (g + aq)/τ.

Therefore, the separable bound is given by p_s = (g + aq)/τ. Clearly, for the target state preparation we obtain p_e = (g_e + aq)/τ, with the strict separation δ0 = p_e − p_s = (g_e − g)/τ > 0. Once we have defined the set ℳ and found p_s, we can apply the protocol illustrated in Fig. 1 and find the minimum confidence for detecting quantum entanglement. We would like to point out that our protocol could possibly be applied to device-independent entanglement witnesses as well. In this case our procedure would need to be adapted to a device-independent framework.
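The construction above can be checked numerically on a small instance. The sketch below (all function and variable names are ours) applies the translation to the three-qubit linear graph state, taking as local components W_k = S_k/8 of O = |G〉〈G| with g = 1/2; it recovers the biseparable bound p_s = 3/4 and the entanglement value p_e = 1:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizer generators of the 3-qubit linear graph state: G_k = X on
# qubit k, Z on its neighbours.
gens = [kron(X, Z, I2), kron(Z, X, Z), kron(I2, Z, X)]

# All 2^3 = 8 stabilizers S_k (products of subsets of the generators).
stabs = []
for bits in product([0, 1], repeat=3):
    s = np.eye(8)
    for b, g in zip(bits, gens):
        if b:
            s = s @ g
    stabs.append(s)

def translate(components, g):
    """Translate O = sum_k W_k, with Tr(O rho) <= g for separable rho,
    into binary observables M, sampling weights eps and the bound p_s."""
    q = len(components)
    # Smallest constant a >= 0 making every shifted component non-negative.
    a = max(0.0, -min(np.linalg.eigvalsh(w).min() for w in components))
    ms, lams = [], []
    for w in components:
        vals, vecs = np.linalg.eigh(w + a * np.eye(len(w)))
        for lam in sorted(set(np.round(vals, 10))):
            if lam > 1e-10:  # keep eigen-projectors of non-zero eigenvalues
                cols = vecs[:, np.isclose(vals, lam)]
                ms.append(cols @ cols.T.conj())
                lams.append(float(lam))
    tau = sum(lams)
    eps = [lam / tau for lam in lams]
    p_sep = (g + a * q) / tau
    return ms, eps, p_sep

# Local components W_k = S_k / 8, since |G><G| = (1/8) sum_k S_k.
ms, eps, p_sep = translate([s / 8 for s in stabs], g=0.5)

# Entanglement value on the target state rho_G = |G><G|.
rho_g = sum(stabs) / 8
p_ent = sum(e * np.trace(m @ rho_g).real for m, e in zip(ms, eps))
print(round(p_sep, 6), round(p_ent, 6))  # 0.75 and 1.0
```

Here a = 1/8 and τ = 2, so p_s = (1/2 + 1)/2 = 3/4, in agreement with the graph-state example in the main text.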

Scaling of resources with the size of the system

The example of the graph state discussed in the section “Translating entanglement witnesses into the probabilistic framework” shows a constant gap between p_s and p_e that does not depend on the number of qubits n. For this reason, the number of copies required to achieve a certain confidence does not grow with the number of qubits (we recall that only 16 copies are required to achieve 99% confidence, regardless of the number of qubits). In this case, the standard witness-based approach would require 2^n measurement settings, and each setting would demand a large number of copies, whereas our procedure provides reliable detection with a constant overhead. Thus, our method applies even if the number of settings exceeds the number of available copies. A further reduction of copies (even to a single one) was shown for certain classes of large multi-qubit states [34]. More precisely, in [34] examples were presented with p_s = e^(−αn) (where α is a constant), which vanishes exponentially fast in n, while p_e remains constant in n. In this case, we can approximate K ≈ 1/(αn), thus even a single copy of the quantum state suffices to verify entanglement with high confidence (provided that n is sufficiently large). On the other hand, as long as δ0 does not vanish when increasing the system size, we still have exponential efficiency of the procedure at the constant rate K. Finally, an interesting case occurs if δ0 approaches zero as we increase the number of qubits. In this case, we can approximate D(p_s + δ0 ‖ p_s) ≈ δ0²/[2p_s(1 − p_s)], leading to K ≈ 2p_s(1 − p_s)/δ0². Therefore, as long as 1/δ0² grows moderately in n, the procedure remains resource-efficient as the size of the system grows.
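For small δ0 the binary divergence admits the quadratic approximation D(p_s + δ0 ‖ p_s) ≈ δ0²/[2p_s(1 − p_s)], giving K ≈ 2p_s(1 − p_s)/δ0². A quick numerical check of this approximation (our own illustration, with an arbitrary p_s = 1/2):

```python
import math

def kl(x, y):
    """Binary Kullback-Leibler divergence D(x || y)."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

p_s = 0.5  # illustrative separable bound
for delta0 in (0.1, 0.01, 0.001):
    exact_rate = 1.0 / kl(p_s + delta0, p_s)        # K = D(p_s+d0||p_s)^-1
    approx_rate = 2 * p_s * (1 - p_s) / delta0 ** 2  # quadratic approximation
    print(delta0, exact_rate, approx_rate)
```

The ratio of the two rates tends to 1 as δ0 shrinks, confirming that the copy budget grows as 1/δ0² in this regime.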

Generators of the six-qubit cluster state and witness decomposition

Our six-qubit cluster state (4) is uniquely defined by six generators G_k (k = 1, …, 6), each a product of the standard Pauli operators X and Z acting locally on the six qubits [37]. From this set, we can construct all products of the G_k; there are in total 2^6 = 64 independent operators, which are called stabilizers. The witness W1 allows one to combine three of the six generators of the cluster state into one measurement setting, reducing the number of measurement settings from six to two. To translate the witness W1 (see main text) into our procedure, we start with O = 2(M1 + M2) and g = 3. The operator O is already in spectral form: the eigen-projectors are M1 = Π_{k odd}(𝟙 + G_k)/2 and M2 = Π_{k even}(𝟙 + G_k)/2, each with eigenvalue λ = 2, and the local components are non-negative, therefore a = 0. We get τ = 4, and the sampling is uniform from the set ℳ_W1 = {M1, M2}. For the biseparable bound we clearly get p_b = 3/4. For full separability, we used the algorithm presented in [39] to obtain p_fs = 9/16. The translation procedure for the witness W2 is explained in detail in the main text. For this witness we obtain a biseparable bound of p_b = 3/4. Also in this case, we numerically found the fully separable bound to be p_fs = 5/8.
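The counting step above (six independent generators produce a group of 2^6 = 64 stabilizers) can be verified with a few lines of code. Since the exact generators of the H-shaped state are not reproduced here, the sketch below uses the generators of a hypothetical six-qubit linear cluster instead; only the group size matters for the count:

```python
from itertools import product

def linear_cluster_generators(n):
    """Hypothetical generators G_k = X_k * prod_{j~k} Z_j of a LINEAR
    n-qubit cluster (the paper's H-shaped state has different neighbourhoods).
    Each Pauli string is encoded, up to phase, as GF(2) bit tuples (x, z)."""
    gens = []
    for k in range(n):
        x = tuple(1 if j == k else 0 for j in range(n))
        z = tuple(1 if abs(j - k) == 1 else 0 for j in range(n))
        gens.append((x, z))
    return gens

def stabilizer_group(gens):
    """All products of subsets of the generators; since the generators
    commute, multiplication reduces to XOR of the GF(2) encodings."""
    n = len(gens[0][0])
    group = set()
    for bits in product([0, 1], repeat=len(gens)):
        x, z = (0,) * n, (0,) * n
        for b, (gx, gz) in zip(bits, gens):
            if b:
                x = tuple(p ^ q for p, q in zip(x, gx))
                z = tuple(p ^ q for p, q in zip(z, gz))
        group.add((x, z))
    return group

# Six independent generators yield 2^6 = 64 distinct stabilizers.
print(len(stabilizer_group(linear_cluster_generators(6))))  # 64
```

Because the generators are independent, every one of the 2^6 subset products is a distinct Pauli string, which is exactly the 64-stabilizer count used by the witness W2.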

Experimental details

We implement the random measurements M’s with our tomography setup. We only analyze measurement results consisting of six-fold coincidence events. When more than one six-fold event is detected during the same measurement setting, we only use the first coincidence event, to ensure that only one copy of the state is used per measurement. We will now give a detailed explanation of Fig. 3a, providing a technical overview of our setup.

Preparation stage

A mode-locked Ti:Sapphire Coherent Mira 900 laser emits pulsed light at a repetition rate of 76 MHz and an average power of 1.2 W. The pulses have a central wavelength of 772.9 nm and a duration of 2.1 ps. The first two BSs along the pump path are used to double the repetition rate of the laser and at the same time decrease the power of each pulse, such that unwanted contributions from SPDC higher-order emissions are reduced [40]. This approach is referred to as passive temporal multiplexing [41]. One output of the second BS is sent to a third BS, which equally splits the pump power. The other output passes through a HWP and a PBS, whose reflected port is stopped by a beam block; this allows us to adjust the pump power along this path if needed. The two output beams from the third BS and the one from the PBS each pass through a HWP and a QWP so that their polarization can be adjusted, and are then used to pump three single-photon sources. Delay lines in the second and third beam paths are needed later for temporal synchronization. A photon pair is generated from each source via collinear type-II SPDC in a 30 mm long periodically poled KTiOPO4 (PPKTP) crystal placed inside a Sagnac interferometer, which has the advantages of compactness and phase stability. A schematic of a single-photon source is shown in Fig. 3b. It is composed of a dichroic mirror (DM) reflecting the pump and transmitting the photons, a dual-wavelength PBS (DPBS) and a dual-wavelength HWP (DHWP), which work for both pump and photon wavelengths, and the PPKTP crystal. The crystal temperature, set to 24 °C, enables photon wavelength degeneracy at 1545.8 nm. The photons generated from the crystal pass through ultra-narrow filters with a bandwidth of 3.2 nm that improve their spectral purity, and are eventually coupled into single-mode fibers (not shown in the figure). The residual pump beam is removed using longpass filters.

Generation stage

Each pair of photons coming from different sources is sent to a PBS, at which it has been temporally synchronized using the delay lines discussed above. The photons exit in fibers — not shown in the figure — and propagate in free space through the PBSs, before being coupled into fibers again. A HWP placed along the third photon path is used to generate the cluster state.

Detection stage

Photons from each output are sent back into free space, where each passes through a system composed of a motorized QWP and HWP followed by a PBS. They are then re-coupled into fibers and sent to a detection system composed of 12 multi-element superconducting detectors. Each multi-element detector consists of four nanowires on the same chip, providing pseudo-number resolution and a high detection efficiency (0.87 on average at around 1550 nm). The detectors operate at a temperature of 0.9 K. Photon coincidences are registered using a custom 64-channel time-tagging and logic module. Our six-fold coincidence rate is limited primarily by filter imperfections and by coupling losses incurred in the Generation stage, where the photons propagate in free space through the PBSs before being coupled back into fibers. As the coupling losses are largest in the second source, we compensate by doubling the pump power of the second source, rotating the HWP placed before the PBS in the Preparation stage. Our final six-fold rate is around 0.1 Hz. To maximize the probability that at least one copy of the state is detected in every measurement basis, we set the measurement time to 40 seconds. The tomography waveplates are automated using PCB motors.
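The choice of 40 s per measurement setting can be motivated by assuming Poissonian counting statistics: at a six-fold rate of about 0.1 Hz, a 40 s window yields on average four copies of the state, so the probability of detecting at least one copy exceeds 98%. A quick check:

```python
import math

rate = 0.1   # six-fold coincidence rate in Hz (from the text)
t = 40.0     # measurement time per setting in seconds

mean_copies = rate * t                         # expected copies: 4.0
p_at_least_one = 1.0 - math.exp(-mean_copies)  # Poisson P(N >= 1), about 0.982
```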
Related references (19 in total)

1.  Reducing multi-photon rates in pulsed down-conversion by temporal multiplexing.

Authors:  M A Broome; M P Almeida; A Fedrizzi; A G White
Journal:  Opt Express       Date:  2011-11-07       Impact factor: 3.894

2.  Increasing the statistical significance of entanglement detection in experiments.

Authors:  Bastian Jungnitsch; Sönke Niekamp; Matthias Kleinmann; Otfried Gühne; He Lu; Wei-Bo Gao; Yu-Ao Chen; Zeng-Bing Chen; Jian-Wei Pan
Journal:  Phys Rev Lett       Date:  2010-05-24       Impact factor: 9.161

3.  Detecting genuine multipartite entanglement with two local measurements.

Authors:  Géza Tóth; Otfried Gühne
Journal:  Phys Rev Lett       Date:  2005-02-17       Impact factor: 9.161

4.  A wavelength-tunable fiber-coupled source of narrowband entangled photons.

Authors:  Alessandro Fedrizzi; Thomas Herbst; Andreas Poppe; Thomas Jennewein; Anton Zeilinger
Journal:  Opt Express       Date:  2007-11-12       Impact factor: 3.894

5.  14-Qubit entanglement: creation and coherence.

Authors:  Thomas Monz; Philipp Schindler; Julio T Barreiro; Michael Chwalla; Daniel Nigg; William A Coish; Maximilian Harlander; Wolfgang Hänsel; Markus Hennrich; Rainer Blatt
Journal:  Phys Rev Lett       Date:  2011-03-31       Impact factor: 9.161

6.  Quantum state tomography via compressed sensing.

Authors:  David Gross; Yi-Kai Liu; Steven T Flammia; Stephen Becker; Jens Eisert
Journal:  Phys Rev Lett       Date:  2010-10-04       Impact factor: 9.161

7.  Direct fidelity estimation from few Pauli measurements.

Authors:  Steven T Flammia; Yi-Kai Liu
Journal:  Phys Rev Lett       Date:  2011-06-08       Impact factor: 9.161

8.  Multipartite entanglement verification resistant against dishonest parties.

Authors:  Anna Pappa; André Chailloux; Stephanie Wehner; Eleni Diamanti; Iordanis Kerenidis
Journal:  Phys Rev Lett       Date:  2012-06-26       Impact factor: 9.161

9.  Pulsed Sagnac polarization-entangled photon source with a PPKTP crystal at telecom wavelength.

Authors:  Rui-Bo Jin; Ryosuke Shimizu; Kentaro Wakui; Mikio Fujiwara; Taro Yamashita; Shigehito Miki; Hirotaka Terai; Zhen Wang; Masahide Sasaki
Journal:  Opt Express       Date:  2014-05-19       Impact factor: 3.894

10.  Experimental realization of multipartite entanglement of 60 modes of a quantum optical frequency comb.

Authors:  Moran Chen; Nicolas C Menicucci; Olivier Pfister
Journal:  Phys Rev Lett       Date:  2014-03-26       Impact factor: 9.161

