Interest in possible applications of machine learning in physics has been growing rapidly for a while now, and there is by now a sea of literature. A couple of months ago we decided to review this literature and hold a weekly seminar on the papers we found interesting. There is a dedicated blog post for each of these on our group website.
Recently I contributed a discussion of the paper by E. M. Stoudenmire and D. J. Schwab, Supervised Learning with Quantum-Inspired Tensor Networks (arXiv:1605.05775 (2017)). In this paper the authors propose training a tensor network with DMRG-like sweeps. You can read my full post here.
In our latest work, now on arXiv, we show how to use a convolutional neural network to extract physical parameters (even the quantum ones!) from experimental currents.
In my PhD I was mainly concerned with monitoring and parameter estimation of quantum systems. These elements are crucial for efficiently functioning quantum devices, and, in contrast to on-chip quantum operations, there is still a long way to go towards efficient readout at reasonable timescales. The ability to extract the maximum amount of information from an experimental record is therefore essential.
In practice, the experimental noise is sometimes so stubborn and viciously correlated that it may be hard, if not impossible, to construct a quantum model that describes it. In our work we show that, even in cases where traditional parameter estimation methods fail, a convolutional network can still find the parameters governing the dynamics of the system.
Lately I have been working a lot with Google’s TensorFlow library for machine learning. It has a really nice tool for data visualisation, TensorBoard, which can be very useful for understanding how the training and evaluation of your model is progressing. One small bottleneck, though, is that its built-in data-export tool only works for scalar summaries and unfortunately not for more complex visualisations such as histograms. I find the histograms particularly useful because they show how your probability distribution narrows as a function of training steps, so they are a really handy figure of merit for understanding training and evaluation. This is why I thought it would be useful to export the histograms and customise them, for example in Matlab. I would like to share here the code for exporting the histograms from a TensorFlow model. Hopefully you will find it useful! You can download it here.
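To give a flavour of what such an export can look like, here is a minimal sketch (not the downloadable script itself) of reading histogram summaries out of a TensorBoard event directory and writing them to CSV. It assumes the EventAccumulator class shipped with the tensorboard package; the tag name "weights" and the file paths in the usage comment are placeholders.

```python
import csv


def histogram_rows(step, bucket_limits, bucket_counts):
    """Flatten one histogram summary into (step, bucket_upper_edge, count) rows."""
    return [(step, limit, count) for limit, count in zip(bucket_limits, bucket_counts)]


def export_histograms(event_dir, tag, out_csv):
    # Imported lazily so the pure helper above also works without TensorFlow/TensorBoard installed.
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    acc = EventAccumulator(event_dir)
    acc.Reload()  # parse the event files found in event_dir
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["step", "bucket_upper_edge", "count"])
        for event in acc.Histograms(tag):
            hv = event.histogram_value  # carries bucket_limit and bucket arrays
            writer.writerows(histogram_rows(event.step, hv.bucket_limit, hv.bucket))


# Example usage (placeholder paths and tag):
# export_histograms("./logs", "weights", "weights_hist.csv")
```

The resulting CSV has one row per bucket per training step, which is straightforward to reshape and plot in Matlab or any other plotting tool.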