Now that experiments on quantum computing have reached the level of about ten qubits or more (at least for experiments using ions and photons), data analysis has become an exponentially hard task, and this forms a serious bottleneck (computational, not experimental). As a solution to this problem, information criteria will be used to determine the simplest model, i.e., the one containing the minimum number of parameters, that describes the data statistically correctly. Entanglement and other quantum properties of one's system can then be analyzed much more efficiently within that model. Moreover, the few parameters thus identified will reveal the main noise and decoherence mechanisms. In summary, the project will quantify how much information a finite amount of data provides about the entanglement of N qubits, and thereby answer two questions: (1) What can a finite amount of data tell us about the entanglement of N qubits? (2) Can one design an efficient entanglement verification method that does not require an amount of data growing exponentially with N?
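The information-criterion idea can be sketched in a few lines. The following is a minimal illustration, not the project's actual analysis: the simulated single-qubit data, the 10% bit-flip rate, and the two candidate models are all hypothetical, and the Akaike Information Criterion (AIC) stands in for whichever criterion is used.

```python
# Minimal sketch: pick the simplest model that describes the data,
# using the Akaike Information Criterion (AIC = 2k - 2 ln L; lower wins).
# The data and both candidate models below are illustrative assumptions.
import math
import random

def log_likelihood(counts, probs):
    """Log-likelihood of observed outcome counts under outcome probabilities."""
    return sum(n * math.log(p) for n, p in zip(counts, probs) if n > 0)

def aic(k, log_l):
    """AIC penalizes each extra parameter k on top of the fit quality."""
    return 2 * k - 2 * log_l

# Simulated measurement record: outcomes 0/1 from a hypothetical noisy qubit.
random.seed(0)
p_true = 0.9  # qubit prepared in |0> with a 10% bit-flip rate (assumed)
data = [0 if random.random() < p_true else 1 for _ in range(1000)]
counts = [data.count(0), data.count(1)]

# Model A (0 parameters): ideal, noiseless preparation, p(0) = 1.
# A single observed '1' would make its likelihood zero, so a tiny floor
# probability keeps the comparison finite.
aic_ideal = aic(0, log_likelihood(counts, [1 - 1e-9, 1e-9]))

# Model B (1 parameter): bit-flip rate estimated from the data itself.
p_hat = counts[0] / sum(counts)
aic_noise = aic(1, log_likelihood(counts, [p_hat, 1 - p_hat]))

# The one-parameter noise model wins despite its extra parameter.
print(aic_noise < aic_ideal)
```

The same comparison extends to richer noise models; whichever model attains the lowest criterion value is the simplest statistically adequate description.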
The research involves two graduate students, both of whom happen to be female. One of these students will participate in an outreach program organized by the Optics Center at the University of Oregon: a science camp that aims to introduce middle-school and high-school girls to science (this summer, 15 girls participated). The same student has also entered the GK-12 program this year. The PI is involved in the Oregon SciencePub program, explaining his work to the local community.
Quantum computers are believed to be able to solve certain problems much more efficiently than is possible on ordinary classical computers. This also means that quantum computers cannot be simulated efficiently on an ordinary computer (if that were possible, we would simply simulate a quantum computer and would never have to build one!). But this leads to a conundrum: suppose we are running a (complicated!) experiment to test whether a particular quantum system (say, trapped ions) could act as a quantum computer. How can we check whether it runs correctly, and, more importantly, what sorts of errors occur, given that we cannot simulate the process? The project solved this problem by exploiting a statistical test that compares different descriptions of the whole process. One can test whether a "simple" description with a few parameters characterizing the errors works better than, for example, a description that contains parameters for all possible errors. Even though the latter description cannot be analyzed completely, bounds on its performance can be calculated, and the simple model merely has to beat those bounds. If the simple model does not beat those bounds, then we know that errors are actually occurring that were not included in the simple description. By considering different models we can learn which errors occur in our candidate quantum computer, and can subsequently eliminate or suppress them. See http://arxiv.org/abs/1307.0858 for details.
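The "beat the bound" comparison can be sketched concretely. Everything specific below is a hypothetical illustration, not the numbers or models of the cited paper: the outcome counts, the one-parameter error model, and the parameter count assigned to the fully general model are all assumptions. The key trick shown is that no model can fit the data better than the empirical frequencies do, which yields a computable bound on the full model's score without ever fitting it.

```python
# Hedged sketch: compare a simple few-parameter error model against a
# bound on the best score any fully general model could achieve.
# All counts and parameter numbers here are illustrative assumptions.
import math
from collections import Counter

def aic(k, log_l):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_l

# Hypothetical outcome counts from many repetitions of a test circuit.
counts = Counter({0: 880, 1: 90, 2: 30})
n = sum(counts.values())

# No model's log-likelihood can exceed that of the empirical frequencies,
# so this bounds the full model's fit without analyzing it completely.
log_l_max = sum(c * math.log(c / n) for c in counts.values())

# Simple model (1 parameter): total error rate eps, with assumed outcome
# probabilities [1 - eps, 0.75*eps, 0.25*eps]; the maximum-likelihood
# estimate of eps is just the observed error fraction.
eps = (counts[1] + counts[2]) / n
probs = [1 - eps, 0.75 * eps, 0.25 * eps]
log_l_simple = sum(c * math.log(p)
                   for c, p in zip((counts[0], counts[1], counts[2]), probs))

# Stand-in parameter count for a model with one parameter per possible
# error; in a real N-qubit experiment this number grows exponentially.
k_full = 100
aic_bound_full = aic(k_full, log_l_max)  # lower bound on the full model's AIC
aic_simple = aic(1, log_l_simple)

# If the simple model beats the bound, no amount of extra parameters can
# justify the full model; if it does not, unmodeled errors are present.
print(aic_simple < aic_bound_full)
```

The large parameter penalty on the full model is exactly what makes the comparison decisive even when the full model's fit can only be bounded, not computed.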