Machine learning for real-world quantum-enhanced phase estimation

One of the most immediate practical applications of quantum information processing is performing precise quantum measurements. Important examples include the measurement of time with atomic clocks, the measurement of spatial displacements with optical interferometry, and super-resolved imaging beyond the diffraction limit. Heisenberg's uncertainty principle sets a fundamental bound on the amount of information a measurement can extract. Measurement schemes employing adaptive feedback are a promising strategy for reaching the Heisenberg limit. However, devising adaptive measurement procedures is complicated and often relies on clever guesswork. I present an automated technique, based on machine learning, that replaces this guesswork with a logical, fully automatic, programmable routine. I explain our method for the case of interferometric phase estimation, which has applications such as atomic clocks and gravitational-wave detection. Our algorithm autonomously learns to perform phase estimation from experimental trial runs, which can be either simulated or carried out on a real-world experiment. The algorithm requires no prior knowledge about the experiment and is effective even if the quantum system is a black box. In addition, our algorithm can learn to account for systematic experimental imperfections, thereby making time-consuming error modeling and extensive calibration unnecessary. We show that our method yields measurement procedures that outperform the best known adaptive scheme for interferometric phase estimation.
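To make the idea concrete, here is a minimal sketch of how a feedback policy for adaptive single-photon interferometric phase estimation might be learned from simulated trial runs. This is an illustration, not the method of this work: the interferometer model, the sharpness figure of merit, the policy parameterization, and the hill-climbing optimizer (a simple stand-in for a more capable population-based learner) are all assumptions made for the example.

```python
import math
import random

def simulate_trial(policy, true_phase, rng):
    """One adaptive phase-estimation trial: photons traverse a
    Mach-Zehnder interferometer one at a time, and after each
    detection the controllable phase is nudged by a policy-dependent
    feedback increment. Returns the final phase estimate."""
    phi = 0.0  # controllable feedback phase
    for increment in policy:
        # Single-photon detection probability at output port 1
        p1 = math.cos((true_phase - phi) / 2.0) ** 2
        outcome = 1 if rng.random() < p1 else 0
        # Feedback rule: add or subtract this step's increment
        phi += increment if outcome == 1 else -increment
    return phi % (2 * math.pi)

def sharpness(policy, n_trials, rng):
    """|<exp(i(estimate - truth))>| averaged over random true phases;
    1.0 means perfect estimation, 0.0 means no phase information."""
    s_re = s_im = 0.0
    for _ in range(n_trials):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        est = simulate_trial(policy, theta, rng)
        s_re += math.cos(est - theta)
        s_im += math.sin(est - theta)
    return math.hypot(s_re, s_im) / n_trials

def learn_policy(n_photons, n_iters=100, n_trials=100, seed=1):
    """Treat the simulator as a black box: perturb the feedback
    policy and keep any change that raises the measured sharpness."""
    rng = random.Random(seed)
    policy = [math.pi / (k + 1) for k in range(n_photons)]  # heuristic start
    best = sharpness(policy, n_trials, rng)
    for _ in range(n_iters):
        candidate = [p + rng.gauss(0.0, 0.1) for p in policy]
        score = sharpness(candidate, n_trials, rng)
        if score > best:
            policy, best = candidate, score
    return policy, best
```

Because the optimizer only ever queries the simulator for trial outcomes, the same loop could in principle drive a real experiment instead, which is the black-box property the abstract refers to.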