Open Access

Density Matrix Renormalization Group with Tensor Processing Units

Martin Ganahl, Jackson Beall, Markus Hauru, Adam G.M. Lewis, Tomasz Wojno, Jae Hyeon Yoo, Yijian Zou, and Guifre Vidal
PRX Quantum 4, 010317 – Published 16 February 2023

Abstract

Google’s tensor processing units (TPUs) are integrated circuits specifically built to accelerate and scale up machine learning workloads. They can perform fast distributed matrix multiplications and therefore be repurposed for other computationally intensive tasks. In this work we demonstrate the use of TPUs for accelerating and scaling up the density matrix renormalization group (DMRG), a powerful numerical approach to compute the ground state of a local quantum many-body Hamiltonian. The cost of DMRG scales with system size N as O(ND³), where the so-called bond dimension D regulates how expressive the underlying matrix product state (MPS) variational ansatz is. We consider lattice models in two spatial dimensions, with square lattices of size 10×10 (free fermions) and 20×20 (transverse field Ising model), for which the required MPS bond dimension is known to scale at least as exp(√N). Using half of a TPU v3 pod (namely 1024 TPU v3 cores), we reach an unprecedentedly large bond dimension D = 2¹⁶ = 65536, for which optimizing a single MPS tensor takes about 2 min.
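The numbers in the abstract invite a quick back-of-the-envelope check. The sketch below (plain Python; the physical dimension d = 2 per site and single-precision storage are assumptions, not stated in the abstract) estimates the memory footprint of a single MPS tensor at D = 2¹⁶, illustrating why a calculation at this bond dimension must be distributed across many TPU cores:

```python
# Back-of-the-envelope size of one MPS tensor at the bond dimension reached
# in the paper. Assumptions (not from the abstract): physical dimension d = 2
# (a spin-1/2 site) and float32 storage (4 bytes per entry).

D = 2**16            # bond dimension reached on 1024 TPU v3 cores
d = 2                # assumed physical dimension per lattice site
bytes_per_entry = 4  # assumed single-precision storage

entries = d * D * D  # an MPS tensor has shape (D, d, D)
gigabytes = entries * bytes_per_entry / 1e9

print(f"entries per tensor : {entries:,}")   # 8,589,934,592
print(f"memory per tensor  : {gigabytes:.1f} GB")  # 34.4 GB
```

A single such tensor (tens of gigabytes under these assumptions) far exceeds the memory of one accelerator core, which is consistent with the paper's use of half a TPU v3 pod.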

  • Received 28 April 2022
  • Revised 11 January 2023
  • Accepted 26 January 2023

DOI:https://doi.org/10.1103/PRXQuantum.4.010317

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.


Physics Subject Headings (PhySH)

Condensed Matter, Materials & Applied Physics

Authors & Affiliations

Martin Ganahl1,2,*, Jackson Beall1,2, Markus Hauru2,3, Adam G.M. Lewis1,2, Tomasz Wojno1,2, Jae Hyeon Yoo2,4,5, Yijian Zou2,6, and Guifre Vidal2,4,7

  • 1SandboxAQ, Palo Alto, California, USA
  • 2Sandbox@Alphabet, Mountain View, California 94043, USA
  • 3The Alan Turing Institute, 96 Euston Road, London, England NW1 2DB, United Kingdom
  • 4X, the Moonshot Factory, Mountain View, California 94043, USA
  • 5Google Core, Mountain View, California 94043, USA
  • 6Stanford Institute for Theoretical Physics, Stanford University, Palo Alto, California 94305, USA
  • 7Google Quantum AI, Mountain View, California 94043, USA

  • *martin.ganahl@gmail.com

Popular Summary

Tensor network methods are a computational framework for approximating ground states and low-lying excited states of strongly correlated quantum systems. Their accuracy is controlled by the so-called bond dimension, with higher values yielding higher accuracy. While tensor networks offer a computationally efficient route to treating strongly correlated quantum systems, the effort often scales as a high power of the bond dimension, which can severely limit the applicability of these methods in practice. On the other hand, the past decade has seen tremendous progress in the development and commoditization of hardware accelerators, for example, graphics processing units. A prominent example is Google's Tensor Processing Units (TPUs), custom-built processors used to train and run large-scale machine-learning tasks, for example, AlphaGo. In this work, we investigate how TPUs can be leveraged to scale tensor network algorithms to unprecedented bond dimensions, speed, and accuracy.

We focus on the density-matrix renormalization group (DMRG) algorithm as a paradigmatic tensor network method. DMRG and DMRG-related techniques have a wide range of applications in condensed-matter physics, quantum chemistry, materials science, statistical mechanics, and machine learning, and serve as stepping stones toward more advanced algorithms such as projected entangled-pair states (PEPS) or the multiscale entanglement renormalization ansatz (MERA). Common to all applications is a desire to run DMRG at the largest possible bond dimension. We benchmark a novel DMRG implementation on a 10 × 10 lattice of spinless fermions and on the transverse field Ising model on a 20 × 20 lattice, both exceedingly hard problems for DMRG. We show that TPUs are extremely well suited to scaling up DMRG calculations to unprecedented speed and size.
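To illustrate where the O(ND³) cost quoted in the abstract comes from, the following NumPy sketch (a toy stand-in, not the paper's TPU implementation; the names and tiny dimensions are illustrative) performs the transfer-matrix contraction that dominates a DMRG sweep, in which one MPS tensor of shape (D, d, D) is absorbed into a left environment of shape (D, D):

```python
import numpy as np

# Toy version of the contraction dominating a DMRG sweep. Each einsum below
# costs O(d * D^3); repeating it across N sites gives the O(N D^3) total
# quoted in the abstract. Tiny D here; the paper reaches D = 2**16.

rng = np.random.default_rng(0)
D, d = 8, 2

# Build a left-isometric MPS tensor A of shape (D, d, D) via a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((D * d, D)))
A = Q.reshape(D, d, D)

L = np.eye(D)  # left environment; the identity for a left-canonical MPS

# Absorb one site: L'[b, c] = sum_{a, a', s} A*[a, s, b] L[a, a'] A[a', s, c]
T = np.einsum('ax,xsc->asc', L, A)             # O(d D^3)
L_new = np.einsum('asb,asc->bc', A.conj(), T)  # O(d D^3)

# For a left-isometric tensor the environment stays the identity.
print(np.allclose(L_new, np.eye(D)))  # True
```

On TPUs, such einsums reduce to the large distributed matrix multiplications the hardware is built for, which is the repurposing the abstract describes.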

Our results also indicate that large-scale hardware accelerators like TPUs can be used to speed up tensor network methods beyond DMRG, for example, PEPS or MERA.

Issue

Vol. 4, Iss. 1 — February - April 2023

