29 August 2022 to 2 September 2022
Albano Building 3
Europe/Stockholm timezone

How to learn a quantum state (and how not to)

1 Sept 2022, 16:55
55m
Conference center, room ... (Albano Building 3)

Albanovägen 29
Invited talk

Speaker

Yihui Quek

Description

streamed
Learning an unknown n-qubit quantum state ρ is a fundamental challenge in quantum computing. Full tomography, however, requires a number of copies of ρ that grows exponentially in n to produce a good estimate. Can this exponential tax on resources be circumvented? We consider two variants of this question:
1. “Pretty-good tomography” (based on https://arxiv.org/abs/2102.07171, NeurIPS 2021 (Spotlight)): Aaronson and others introduced several “reduced” models of learning quantum states that impose weaker requirements on the learner, including PAC learning, shadow tomography (learning “shadows” of a quantum state), and online learning, whose complexities scale only linearly in n. We show implications and reductions between the many models in this menagerie, and further introduce a combinatorial parameter that characterizes the complexity of learning. As an application, we improve shadow tomography for certain classes of quantum states.
2. Probabilistic modelling (based on https://arxiv.org/abs/2110.05517 and https://arxiv.org/pdf/2207.03140.pdf): Deep generative models have recently empowered many impressive scientific feats, ranging from predicting protein structure to atomic accuracy (AlphaFold) to achieving human-level language comprehension (GPT-3). At the heart of these models is the question: by drawing very few samples from a probability distribution, can we learn an algorithm that generates more samples from the same distribution? Even more intriguingly: could there be a quantum advantage for such a task? We present both go and no-go results for this setting.
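The shadow-tomography item above builds on the classical-shadows primitive of Huang, Kueng, and Preskill: measure the state in a random Pauli basis, then invert the measurement channel to get an unbiased snapshot of the state. This is not code from the talk; it is a minimal single-qubit sketch, with all variable names illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2)
paulis = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}
# Unitaries rotating each Pauli eigenbasis into the computational basis
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
basis_rot = {"X": H, "Y": H @ S.conj().T, "Z": I2}

def classical_shadow(rho, n_snapshots):
    """Collect single-qubit classical-shadow snapshots of the state rho."""
    snapshots = []
    for _ in range(n_snapshots):
        p = rng.choice(["X", "Y", "Z"])          # random Pauli basis
        U = basis_rot[p]
        probs = np.real(np.diag(U @ rho @ U.conj().T))
        b = rng.choice(2, p=probs / probs.sum())  # simulated measurement
        ket = U.conj().T[:, b].reshape(2, 1)      # observed eigenstate
        # Inverse of the single-qubit measurement channel: 3|psi><psi| - I
        snapshots.append(3 * (ket @ ket.conj().T) - I2)
    return snapshots

# Example: the |+> state; true <X> = 1 and <Z> = 0
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
shadow = classical_shadow(plus, 20000)
est_X = np.mean([np.real(np.trace(paulis["X"] @ s)) for s in shadow])
est_Z = np.mean([np.real(np.trace(paulis["Z"] @ s)) for s in shadow])
```

Each snapshot is an unbiased (though unphysical) estimator of ρ, so averaging Tr(O ρ̂) over snapshots estimates Tr(O ρ) for many observables O at once, which is the resource saving the abstract alludes to.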
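The probabilistic-modelling question in item 2, stripped to its classical core, is: given a handful of samples, fit a model from which fresh samples can be drawn. A hedged toy sketch (not the papers' method; a maximum-likelihood fit over a finite alphabet, with all names illustrative):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def fit_categorical(samples):
    """Maximum-likelihood estimate of a distribution over a finite alphabet."""
    counts = Counter(samples)
    total = sum(counts.values())
    symbols = sorted(counts)
    probs = np.array([counts[s] / total for s in symbols])
    return symbols, probs

def generate(symbols, probs, n):
    """Draw fresh samples from the learned model."""
    return list(rng.choice(symbols, size=n, p=probs))

# Hidden "true" distribution the learner only sees through 1000 samples
true_p = {0: 0.5, 1: 0.3, 2: 0.2}
data = list(rng.choice(list(true_p), size=1000, p=list(true_p.values())))

symbols, probs = fit_categorical(data)
new_samples = generate(symbols, probs, 5000)
```

The interesting regimes in the cited papers are exactly where this naive approach fails, i.e. where the sample budget is far too small to estimate the distribution directly, and where a quantum sampler might or might not help.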
