James Schneeloch, Christopher C. Tison, Michael L. Fanto, Paul M. Alsing, Gregory A. Howland.
Abstract
Entanglement is the powerful and enigmatic resource central to quantum information processing, which promises capabilities in computing, simulation, secure communication, and metrology beyond what is possible for classical devices. Exactly quantifying the entanglement of an unknown system requires completely determining its quantum state, a task which demands an intractable number of measurements even for modestly sized systems. Here we demonstrate a method for rigorously quantifying high-dimensional entanglement from extremely limited data. We improve an entropic, quantitative entanglement witness to operate directly on compressed experimental data acquired via an adaptive, multilevel sampling procedure. Only 6,456 measurements are needed to certify an entanglement of formation of 7.11 ± 0.04 ebits shared by two spatially entangled photons. With a Hilbert space exceeding 68 billion dimensions, we need 20-million-times fewer measurements than the uncompressed approach and 10¹⁸-times fewer measurements than tomography. Our technique offers a universal method for quantifying entanglement in any large quantum system shared by two parties.
Year: 2019 PMID: 31239445 PMCID: PMC6592913 DOI: 10.1038/s41467-019-10810-z
Source DB: PubMed Journal: Nat Commun ISSN: 2041-1723 Impact factor: 14.919
Fig. 1 Experimental set-up for adaptive measurements. a An entangled photon source produces spatially entangled photon pairs, which are separated and routed through basis-selection optics that switch between measuring transverse position or transverse momentum. Computer-controlled digital micromirror devices and photon-counting detectors perform joint spatial projections at up to 512 × 512 pixel resolution. b shows a simulated, true joint position distribution P(Xa, Xb) at 128 × 128 pixel resolution, while c–g show its simulated, adaptively decomposed estimate as it is refined to higher detail via quad-tree decomposition. When the joint intensity in a block exceeds a user-defined threshold, the block is split into four sub-quadrants and the process repeats recursively, rapidly partitioning the space to obtain a compressed distribution from very few measurements.
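The recursive splitting rule described in the caption can be sketched as follows. This is a minimal illustration of quad-tree decomposition of a 2D intensity distribution, not the authors' implementation; the function name, the total-intensity splitting criterion, and the threshold value are assumptions made for the example.

```python
import numpy as np

def quadtree_partition(block, threshold, x0=0, y0=0):
    """Recursively split a square intensity block into quadrants.

    A block whose total intensity exceeds `threshold` (and which is
    larger than one pixel) is split into four sub-quadrants; otherwise
    it is kept whole. Returns a list of (x, y, size, intensity) tuples
    that tile the block.
    """
    total = block.sum()
    n = block.shape[0]
    if total <= threshold or n == 1:
        return [(x0, y0, n, total)]
    h = n // 2
    parts = []
    for dy in (0, h):
        for dx in (0, h):
            parts += quadtree_partition(block[dy:dy + h, dx:dx + h],
                                        threshold, x0 + dx, y0 + dy)
    return parts

# Example: intensity concentrated along the diagonal (mimicking the
# tight position correlations of the photon pairs) is refined down to
# single pixels, while empty off-diagonal regions stay coarse.
grid = np.zeros((128, 128))
for i in range(128):
    grid[i, i] = 1.0
partitions = quadtree_partition(grid, threshold=0.5)
print(len(partitions), "partitions instead of", 128 * 128, "pixels")
```

For a perfectly diagonal distribution the partition count grows only linearly with resolution (here 382 blocks versus 16,384 pixels), which is the source of the compression the caption describes.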
Fig. 2 Measured joint probability distributions at 512 × 512 pixel resolution. a–d show the four estimated joint probability distributions with their single-party marginal distributions overlaid, showing tight correlations. e shows an enlarged version overlaid with the adaptive partitioning, with f showing a small central region to reveal fine detail. The histogram g shows the number of partitions as a function of their area. Only 6,456 measurements are needed instead of 2 × 512⁴.
Fig. 3 Entanglement quantification versus acquisition time. The entanglement of formation Ef is given as a function of acquisition time per partition for unaltered coincidence data and for accidental-subtracted data. Error bars enclosing two standard deviations are determined by propagating error from photon-counting statistics. We confirm the validity of this error-analysis strategy via Monte Carlo simulation in Supplemental Material: Monte Carlo error analysis (see Supplemental Fig. 1).
Fig. 4 Entanglement quantification versus maximum resolution. a shows the number of partitions required as a function of maximum allowed resolution, along with the improvement over the uncompressed approach. b shows the amount of entanglement captured as the maximum resolution increases. The technique is progressive: it witnesses entanglement with few measurements at low resolution and quantifies it more accurately with further refinement. Our results approach the ideal maximum measurable value Ef = 7.68 ebits for our source.