Abstract
Google’s tensor processing units (TPUs) are integrated circuits specifically built to accelerate and scale up machine learning workloads. They can perform fast distributed matrix multiplications and can therefore be repurposed for other computationally intensive tasks. In this work we demonstrate the use of TPUs for accelerating and scaling up the density matrix renormalization group (DMRG), a powerful numerical approach to compute the ground state of a local quantum many-body Hamiltonian. The cost of DMRG scales with system size N as O(ND^3), where the so-called bond dimension D regulates how expressive the underlying matrix product state (MPS) variational ansatz is. We consider lattice models in two spatial dimensions, with square lattices of size 10 × 10 (free fermions) and 20 × 20 (transverse field Ising model), for which the required MPS bond dimension is known to scale at least as exp(√N). Using half of a TPU v3 pod (namely 1024 TPU v3 cores), we reach an unprecedentedly large bond dimension D = 65536, for which optimizing a single MPS tensor takes about 2 min.
- Received 28 April 2022
- Revised 11 January 2023
- Accepted 26 January 2023
DOI: https://doi.org/10.1103/PRXQuantum.4.010317
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Popular Summary
Tensor network methods are a computational framework for approximating ground states and low-lying excited states of strongly correlated quantum systems. Their accuracy is controlled by the so-called bond dimension, with higher values yielding higher accuracy. While these methods offer a computationally efficient route to treating strongly correlated quantum systems, their cost often scales as a high power of the bond dimension, which can severely limit their applicability in practice. On the other hand, the past decade has seen tremendous progress in the development and commoditization of hardware accelerators, for example, graphics processing units. A prominent example is Google's Tensor Processing Units (TPUs), custom-built processors used to train and run large-scale machine-learning tasks, for example, AlphaGo. In this work, we investigate how TPUs can be leveraged to scale tensor network algorithms to unprecedented bond dimensions, speed, and accuracy.
As a paradigmatic tensor network method, we focus on the density-matrix renormalization group (DMRG) algorithm. DMRG and DMRG-related techniques have a wide range of applications in condensed-matter physics, quantum chemistry, materials science, statistical mechanics, and machine learning, and serve as stepping stones toward more advanced algorithms such as projected entangled pair states (PEPS) or the multiscale entanglement renormalization ansatz (MERA). Common to all applications is a desire to run DMRG at the largest possible bond dimension. We benchmark a novel DMRG implementation on a 10 × 10 lattice of spinless fermions, and on the transverse field Ising model on a 20 × 20 lattice, both exceedingly hard problems for DMRG. We show that TPUs are extremely well suited to scaling up DMRG calculations to unprecedented speed and size.
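To make the cost scaling concrete: the dominant step in DMRG applies an "effective Hamiltonian" to a single MPS tensor, built from left and right environment tensors and a matrix product operator (MPO) tensor. The following NumPy sketch uses our own toy shapes and index conventions (not the authors' implementation) to show why each sweep step costs O(D^3) in the bond dimension D:

```python
# Hedged sketch (illustrative shapes/conventions, not the paper's code):
# apply the DMRG effective Hamiltonian to one MPS tensor A, using the
# left/right environments L, R and the MPO tensor W, without ever
# forming the effective Hamiltonian explicitly. Each contraction below
# costs at most O(D^3) in the bond dimension D.
import numpy as np

D, w, p = 32, 4, 2  # toy bond, MPO-bond, and physical dimensions
rng = np.random.default_rng(1)
L = rng.standard_normal((D, w, D))      # left environment
R = rng.standard_normal((D, w, D))      # right environment
W = rng.standard_normal((w, w, p, p))   # MPO tensor
A = rng.standard_normal((D, p, D))      # MPS tensor being optimized

def apply_heff(L, W, R, A):
    """Apply the effective Hamiltonian to A via pairwise contractions."""
    T = np.einsum('xak,kpr->xapr', L, A)     # absorb A into the left env
    T = np.einsum('xapr,abqp->xbqr', T, W)   # act with the MPO tensor
    return np.einsum('xbqr,ybr->xqy', T, R)  # close with the right env
```

Contracting the network pairwise in this order keeps the cost at O(D^3 w p) per step, which is why the bond dimension, rather than the physical lattice size, dominates the runtime.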
Our results also indicate that large-scale hardware accelerators like TPUs can be used to speed up tensor network methods beyond DMRG, for example, PEPS or MERA.
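The hardware primitive these methods lean on is a large matrix multiplication distributed across many cores. As a minimal JAX illustration (our own names and toy sizes, not the paper's code), the product can be sharded across whatever devices JAX sees — TPU cores on a pod, or a single local device elsewhere — with a `psum` collective summing the per-device partial products:

```python
# Illustrative sketch: a matmul sharded across all available devices.
# A is split by columns and B by rows, so A @ B equals the sum of the
# per-device partial products, combined here with jax.lax.psum.
import functools

import jax
import numpy as np

n_dev = jax.device_count()
d = 512  # toy matrix size, divisible by typical device counts
assert d % n_dev == 0

rng = np.random.default_rng(0)
A = rng.standard_normal((d, d)).astype(np.float32)
B = rng.standard_normal((d, d)).astype(np.float32)
A_shards = np.stack(np.split(A, n_dev, axis=1))  # (n_dev, d, d // n_dev)
B_shards = np.stack(np.split(B, n_dev, axis=0))  # (n_dev, d // n_dev, d)

@functools.partial(jax.pmap, axis_name="dev")
def sharded_matmul(a_slice, b_slice):
    partial = a_slice @ b_slice          # local partial product
    return jax.lax.psum(partial, "dev")  # all-reduce across devices

out = sharded_matmul(A_shards, B_shards)  # every device now holds A @ B
```

On a TPU pod the same pattern scales to the very large matrices that appear at high bond dimension; the communication cost of the all-reduce is what distributed DMRG implementations must amortize against the local matmul work.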