ASCR Report on a Quantum Computing Testbed for Science
Sponsored by
U.S. Department of Energy
Office of Science
Advanced Scientific Computing Research Program
DOE POC
Claire Cramer
Abstract
This report details the findings of the Quantum Testbed Stakeholder Workshop sponsored by the
Department of Energy’s Advanced Scientific Computing Research Program to identify
opportunities and challenges in establishing a quantum testbed to advance quantum computing
hardware and software systems that will enable science investigations. The workshop was held
on February 14–16, 2017, in Washington, DC, and served as a forum for individual stakeholders
from academia, industry, national laboratories, and government to provide their perspectives and
ideas on the overarching goals and objectives of a quantum testbed, technical considerations, and
how a quantum testbed program would be synergistic with other nascent quantum computing
efforts in the US and worldwide. This report summarizes discussions on best practices for
management of various types of collaborative research activities, including topics such as
workforce training and building strong relationships with the research community. It also reviews
specific technologies that have been identified by the ASCR Program as being important for the
success of a quantum testbed whose overall goal is advancing quantum computing for scientific
applications in the next five years.
Executive Summary
Advances in computing are continuously enabling ever more powerful modeling and simulation
tools that increase scientific understanding of complex phenomena relevant to the Department of
Energy (DOE). Today, ASCR is developing platforms that will enable exascale class computing
within 4-5 years. As the community moves toward exascale, a fascinating and challenging question
emerges: what lies beyond exascale?
Quantum Computing (QC) represents the next frontier in world-changing computational
capability. QC is a promising early-stage technology with the potential to provide scientific
computing capabilities far beyond what is possible with even an exascale computer. The classes
of problems that are uniquely suited to solution using QC are at the core of DOE’s missions in
materials science, high-energy physics, nuclear physics, and quantum chemistry. Research groups
around the world are recognizing the game-changing capabilities of quantum computation and are
investing in the long-term research needed to enable that promise.
Realizing quantum computers that will solve DOE-relevant problems beyond the scope of
contemporary classically-based machines is a complex opportunity that presents formidable
challenges. Interdisciplinary scientific research and technology development efforts involving
physicists, applied mathematicians, computer scientists, materials scientists, and engineers will be
required to harness the laws of quantum mechanics to build a quantum computer. Entirely new
algorithms, architectures, and languages will be required. At present, commercial QC systems are
not yet available, and the technical maturity of current QC hardware, software, algorithms, and
systems integration is incomplete. Hence, there is a significant opportunity for DOE to provide
the long-term leadership that defines the technology building blocks, and solves the system
integration issues to enable this revolutionary tool.
The Quantum Testbed Stakeholder Workshop engaged the collective strengths of industry,
academia, national laboratories, and government to discuss a path for quantum computing within
DOE’s Office of Science. This workshop builds upon the community engagement initiated in the
2015 ASCR workshop on Quantum Computing for Science. ASCR is committed to maintaining a
high level of community engagement as the field and DOE’s mission needs evolve.
At the outset of the workshop, the attendees recognized the value of a quantum computing testbed
as a catalyzing force to bring the community together and focus it on the challenges of blending
cutting-edge science with systems engineering to enable the application of QC to solve problems
of interest to DOE. A testbed can also serve as a bridge between industry and academia to
encourage transparency while these precompetitive technologies are being developed, lower the
barriers for entry into the field by medium and small sized companies or academic research groups,
and foster the establishment of metrics and standards to effectively compare the performance of
engineering design tradeoffs.
One of the major barriers that was identified was the immaturity of not only the individual QC
technology building blocks (including hardware and software), but also the system architectures
and general infrastructure required to make effective design choices. The implication of this
current state-of-the-art is that the principle of co-design arguably becomes even more important
than it might be for design of conventional high performance computing (HPC) systems.
This workshop report provides a summary and assessment of the many elements that will need to
be incorporated into a testbed to effectively weave together a diverse, high performing team to
advance the state of the art in quantum computing and realize the long-term economic and scientific
promise of QC. ASCR looks forward to continuing to engage the broader community to help
define the future of QC.
1 Introduction and Motivation
1.1 Purpose of the Meeting and Agenda Overview
Quantum computing (QC) is a promising early-stage technology with the potential to provide
scientific computing capabilities far beyond what is possible with even an exascale computer to
solve specific problems of relevance to the Office of Science. Such capabilities include (but are
not limited to) materials science, high-energy physics, nuclear physics, and quantum chemistry.
Despite the rapid pace of progress in the underlying technologies of QC hardware, software,
algorithms, and systems integration, commercial quantum computers capable of solving DOE-
relevant computational challenges are not expected in the near term. Thus, there is a significant
opportunity for DOE to define the necessary technology building blocks, and solve the system
integration issues to enable a revolutionary tool optimized for DOE mission problems. Once
realized, QC will have a world-changing impact on the scientific enterprise, economic
competitiveness, and information processing in general.
Prior to this workshop, the DOE Office of Science’s Advanced Scientific Computing Research
(ASCR) program office hosted a workshop in 2015 to explore QC scientific applications. The
goal of that workshop was to assess the viability of QC technologies in meeting the computational
requirements needed to support DOE’s science and energy mission, and to identify the potential
impact of these technologies. That ASCR workshop report commented that research into QC
technologies was progressing rapidly and that it was important for ASCR to understand the
potential use of these new technologies for DOE-relevant applications as well as their impact on
conventional computing systems. It also noted that scientific application development would make
significant advances only when QC systems are available, even at the few-qubit level.
Subsequently, in February of 2017, ASCR sponsored a second workshop, the Quantum Testbed
Stakeholder Workshop (QTSW), which brought together a diverse group of stakeholders from
academia, industry, government, and DOE laboratories. The purpose of the QTSW was to identify
opportunities and challenges in establishing a quantum computing testbed to advance QC hardware
and software systems that could be used to enable scientific investigations. Prior to the workshop,
whitepapers were solicited in a number of topic areas (the full list is in the Appendix). For
example, stakeholders were asked to outline their individual capabilities and interests in QC
hardware and software; comment on best practices for management of collaborative research
facilities, including topics such as workforce training and building strong relationships with the
research community; and review specific technologies that might be important for the success of
a quantum testbed whose overall goal is advancing QC for scientific applications in the next five
years.
The 2017 workshop was structured to serve as a forum for all stakeholders to provide their
perspectives and ideas on the overarching goals and objectives of a quantum testbed platform and
how that platform would be synergistic with other nascent QC efforts in the US and worldwide.
The first day provided an introduction to the goals of the meeting, four plenary technical talks
which provided a status update on key elements of the possible testbed design, and presentations
by the DOE laboratories wherein they identified their individual capabilities and interests in QC
and their use for science applications. The second day focused on programmatic issues, technical
challenges, and the role of co-design in achieving an operational quantum computing testbed.
Sessions on the third day explored the roles of industry and government in shaping a coordinated
national QC vision, and the challenges in constructing functional QC by combining the work of
many subfields.
1.2 Plenary Talks
This session consisted of four presentations covering representative technologies, applications, and
verification and validation techniques. The first two plenary speakers discussed two quantum
computing technologies in detail: trapped-ion QC, and superconducting-qubit QC. These leading
quantum computing technologies achieve gate and readout fidelities at the 99.9% level or better,
but have different advantages and disadvantages. For example, ions have demonstrated a higher
single-qubit gate fidelity, but superconducting qubits have a faster clock speed. While the gate
fidelity numbers are impressive, these qubits remain significantly less reliable than conventional CMOS and
other transistor-based technologies. This led to the third plenary talk, which surveyed possible
applications of imperfect near-term technologies for QC, and the need for verification and
validation techniques for QC devices, the topic of the fourth plenary talk.
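To put these fidelity figures in context, the following back-of-the-envelope sketch (an illustration assuming independent gate errors and a representative 10^-3 error rate, not numbers quoted in the talks) shows how quickly even 99.9% gate fidelity limits usable circuit depth:

# Illustrative only: probability that a depth-N circuit runs with no error,
# assuming an independent per-gate error rate eps.
eps = 1e-3                        # assumed per-gate error (99.9% fidelity)
for depth in (100, 1000, 10000):
    p_clean = (1 - eps) ** depth
    print(f"depth {depth:>6}: P(no error) ~ {p_clean:.5f}")
# depth 100: ~0.905; depth 1000: ~0.368; depth 10000: ~0.00005.
# By contrast, error rates in CMOS logic are many orders of magnitude lower.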
Ion Trap Quantum Computing, Dr. Christopher Monroe, University of Maryland
Trapped ions are one of the leading candidate technologies for implementing quantum information
processing. In this technology, atomic ions are confined in free space with electromagnetic fields
supplied by nearby electrodes. Qubits stored in trapped atomic ions are represented by two stable
electronic levels within each ion. Using laser cooling, these effective spin states can be initialized
and detected with near-perfect accuracy using well-established optical techniques.
A particular advantage of trapped-ion qubits is that, whereas most physical platforms have nearest-
neighbor interactions only, a multi-qubit trapped-ion system features an intrinsic long-range
interaction that is optically gated and connects any pair of qubits, resulting in a highly-connected
graph. These gates are decomposed into laser pulses that are pre-calculated to implement the
desired qubit operation through the Coulomb-coupled motion along with an appropriate
deconvolution at the end. Linear chains of order 10 qubits have been successfully used in proof-
of-principle demonstrations of canonical quantum algorithms and quantum simulations.
Practical applications of QC require quantum control of large networks of qubits to realize gains
and speed increases over conventional devices. Current laboratory implementations of ion trap
qubits require many optical elements and complex lasers. The field is currently attempting to
miniaturize such setups with robust components that can be connected via a modular approach.
Entanglement within a module can be achieved with deterministic near-field interactions through
phonons, while remote entanglement between modules can be achieved with a probabilistic
interaction through photons. Such an architecture paves a path towards a flexible, large-scale QC
platform that promises less spectral crowding and thus potentially less decoherence as the number
of qubits increases. Demonstrations indicate that generating such modular entanglement can be
faster than the observed remotely entangled qubit-decoherence rate, thus motivating the feasibility
of this approach.
Superconducting Circuit Quantum Computing, Dr. William Oliver, Massachusetts Institute of
Technology & MIT Lincoln Laboratory
Superconducting qubits are another leading candidate for a quantum testbed. Superconducting
qubits are anharmonic oscillators that feature transition frequencies around 5 GHz. Their main
features are lithographic scalability, nanosecond-scale gate speeds, and the need for cryogenic
temperatures.
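As a concrete, minimal sketch of the anharmonic-oscillator picture described above (the specific frequency and anharmonicity values below are illustrative assumptions, not figures from the talk), the unequal spacing between the 0-1 and 1-2 transitions is what allows microwave pulses to address only the lowest two levels as a qubit:

# Duffing-oscillator approximation of a weakly anharmonic superconducting qubit.
f01 = 5.0       # assumed 0->1 transition frequency, GHz
alpha = -0.30   # assumed anharmonicity, GHz (illustrative transmon-like value)

def level_ghz(n):
    # E_n / h ~ n*f01 + (alpha/2) * n * (n - 1) in the weakly anharmonic limit
    return n * f01 + 0.5 * alpha * n * (n - 1)

f12 = level_ghz(2) - level_ghz(1)
print(f"f01 = {f01:.2f} GHz, f12 = {f12:.2f} GHz")
# A drive resonant at 5.00 GHz is ~300 MHz detuned from the 1->2 transition,
# which suppresses leakage out of the computational (qubit) subspace.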
To date, the most advanced superconducting qubit demonstrations feature linear chains of 9 qubits.
Prototype error detection protocols have been demonstrated by UCSB/Google, IBM, and Delft.
These demonstrations store quantum information in the superconducting qubits. An alternative
approach used by the Schoelkopf group at Yale stores the quantum information in a microwave
cavity that is made slightly anharmonic by a qubit. This “resonator cat-state memory” has
demonstrated a prototype error correction scheme as well.
Looking forward, it would be highly beneficial to develop testbeds of 10-100 qubits in order to
test new algorithms, identify roadblocks to scalability, and address those issues. Testbeds of this
size will almost certainly require 3D integration. Ideally, one would like the flexibility to control
each and every qubit, but wiring laterally from the edges is challenging. Additionally, such
testbeds need to be controlled with microwave electronics, and for larger numbers of qubits, the
form factor of the electronics becomes an issue.
Near-term Practical Applications of Quantum Devices, Dr. Jarrod McClean, Lawrence Berkeley
National Laboratory
To set near-term expectations, first-generation quantum testbeds, comprising 5-15 qubits, will not
solve useful algorithmic problems. The ideal behavior of such a testbed can be simulated with
relative ease on classical computers. So what is a first-generation quantum testbed good for? This
is often a question in rapidly evolving and promising scientific and technical fields, and the realm
of quantum computing is no different.
In QC, one of the first key milestones is “quantum supremacy” - defined as the completion of a
well-defined computational task that all the classical resources on earth today could not complete
within a reasonable window of time, by using elements that could be used to comprise a universal
quantum computer. While some of the earliest milestones generate excitement both in the science
and technology community and in the public eye, such milestones might not correspond to tasks
that are practically useful in an application sense. There will likely be a gap between the first
demonstration of quantum supremacy and the first use of a quantum computer for a practical
application that exceeds the capabilities of conventional HPC. This gap, sometimes called the
post-supremacy gap, could be surmounted through the development of near-term applications. A
focus on applications can help maintain momentum and government focus between the pending
demonstrations of quantum supremacy and the development of the error-corrected QC systems
that will be required to solve relevant DOE problems.
Promising candidates for quantum speedups on near-term devices have been identified in the areas
of optimization, quantum simulation, and representations in machine learning. A particular focus
of interest is the topic of quantum simulation for quantum chemistry. One example of an
application for such a device is the study of the nitrogenase enzyme that is responsible for nitrogen
fixation in nature. Recent developments include coherence-time friendly methods that have the
potential to get us to this valuable milestone without full quantum error correction. A key aspect
of these algorithms is the careful use of quantum-classical couplings and co-design. Such an
achievement has the potential to cement quantum computers into the landscape of applied
technology.
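One way to picture the quantum-classical coupling emphasized above is a variational loop in which a classical optimizer repeatedly adjusts the parameters of a short quantum circuit. The sketch below is a minimal illustration of that loop only; estimate_energy is a hypothetical stand-in for submitting a parameterized circuit to hardware and averaging measured energies, mocked here with a classical surrogate so the example runs:

import numpy as np
from scipy.optimize import minimize

def estimate_energy(params):
    # Placeholder: on a real device this would prepare the ansatz state with
    # `params`, measure the Hamiltonian terms, and return the averaged energy.
    theta = np.asarray(params)
    return float(np.cos(theta[0]) + 0.5 * np.sin(theta[1]))  # mock landscape

# Classical outer loop; gradient-free optimizers suit noisy estimates.
result = minimize(estimate_energy, x0=[0.1, 0.1], method="COBYLA")
print("optimal parameters:", result.x, " estimated minimum energy:", result.fun)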
Characterization and Control of Quantum Testbed Devices, Dr. Robin Blume-Kohout, Sandia
National Laboratories
A critical capability that testbeds would provide is the ability to probe and characterize the noise,
errors, and other imperfections that occur in the real-world gates that will make up the testbed.
Since mitigating those errors (via quantum error correction) is expected to occupy the
overwhelming majority of future quantum processors’ time, characterization of real-world noise
is one of the most important applications for the testbed. To achieve this goal, testbeds must be
equipped with fast, flexible control systems that can implement arbitrary quantum circuits, and are
not hamstrung by memory limitations.
A variety of techniques have been used to characterize qubits and the operations (gates) performed
on them. Currently, the two dominant techniques are randomized benchmarking (RB), and gate
set tomography (GST). In these methods, data is gathered by performing and repeating a variety
of quantum circuits on the testbed device; it is then collated and analyzed by a classical algorithm
to estimate how the qubits are misbehaving. Both RB and GST admit multiple variations, but do
not yet address gate sets that involve more than two qubits. This is an area of active research,
as is expanding these protocols to probe specific noise/error properties. Future protocols that
directly probe the effect of changing how we implement gates will be critical for improving our
devices and stabilizing them against drift.
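As a minimal sketch of the classical analysis step in randomized benchmarking (using synthetic data, not testbed results), survival probability versus sequence length m is fit to the standard decay model A*r**m + B, and the fitted r is converted to an average error per gate for a single qubit:

import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, B, r):
    return A * r**m + B

lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
rng = np.random.default_rng(0)
# Synthetic survival probabilities generated from an assumed decay r = 0.995.
survival = rb_model(lengths, 0.5, 0.5, 0.995) + rng.normal(0, 0.005, lengths.size)

(A, B, r), _ = curve_fit(rb_model, lengths, survival, p0=[0.5, 0.5, 0.99])
d = 2                                   # single-qubit Hilbert-space dimension
error_per_gate = (1 - r) * (d - 1) / d  # standard RB conversion
print(f"fitted decay r = {r:.4f}, average error per gate ~ {error_per_gate:.2e}")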
In recent years, characterization protocols have rapidly grown more complex. There is a trend
towards both larger, more powerful noise models, and larger, more structured sets of circuits. For
example, current 2-qubit GST experiments use ~ 27,000 different circuits to estimate ~1,200
distinct noise parameters. We expect these numbers to grow. This trend supports increasingly
powerful debugging and QEC design, but it also demands certain features of the testbed’s control
system. Moreover, future characterization tools must also scale in a resource-friendly manner. A
testbed’s control systems should be designed with these, and other, future needs in mind.
1.3 DOE Laboratory Capabilities for Quantum Computing
The goal of this discussion was to provide a forum for the DOE laboratories to share their
capabilities and interests in quantum computing and testbed operations. Participating labs were
Argonne National Laboratory (ANL), Fermi National Accelerator Laboratory (FNAL), Los
Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL),
Lawrence Livermore National Laboratory (LLNL), Oak Ridge National Laboratory (ORNL),
Pacific Northwest National Laboratory (PNNL), SLAC National Accelerator
Laboratory (SLAC), and Sandia National Laboratories (SNL).
The number and maturity of the technologies that the individual laboratories have to contribute to
achieving the goals of the quantum computing testbed varies. Several of the laboratories have
made significant and sustained investments that are directly relevant, using their internal LDRD
funding coupled with institutional facility resources. However, the breadth of technologies and
skills that will be required to achieve the goals of the quantum testbed and its later spinoff systems
may require robust teaming to engage the best from many of the laboratories represented at the
workshop. A few notable areas are below.
Materials characterization. Many of the DOE laboratories have complementary and unique
materials characterization capabilities that can be used to understand the fundamental performance
of qubits and the materials from which they are constructed. This capability ranges from designing
and optimizing nanoscale materials parameters specifically for qubits to modeling of quantum
devices. In some cases, the labs have also invested in unique facilities that could guide qubit
construction and help provide insight into failure modes.
Device development and manufacturing infrastructure. DOE has major investments in
cleanrooms, fabrication facilities, and engineering support for nanodevices, integrated circuits, and
superconducting sensors and detectors that can currently produce ion-trap, superconducting-circuit, and
silicon-based dot qubits. Many of these capabilities are one of a kind. For example, Sandia’s
microelectronics fabrication facility is a flexible production facility capable of processing silicon
and III-V materials to produce finished devices. The DOE lab complex also has a number of
prototyping facilities to make one-off devices for answering basic science questions about device
characteristics and to support a fast cycle for testing and improving novel devices.
Applications and algorithm users. The DOE laboratories have physical science applications that
will benefit from quantum simulation, with the primary candidates being in quantum chemistry,
nuclear physics, high-energy physics, and material science. Circuit-based QC algorithm
development has focused on basic operations, such as factoring or linear algebra, but most of these
gate-based computing efforts show great promise in supporting DOE missions in the future. The
DOE labs also have a long history of fielding, testing, and benchmarking emerging technologies
on real mission applications, whether they are new accelerators, new computers, or new sensors.
User facilities and operations experience. Many of the DOE laboratories have long, successful
track records in building, operating, and upgrading large-scale facilities and open user programs
for computing, light sources, neutron sources, and other capabilities. These world-class facilities
have experience fielding equipment that requires massive power and cooling, and supporting
thousands of users with complex workflows. Smaller facilities, such as the nanoscale science
research centers offer a somewhat different user model with closer interaction between users and
facility staff and a suite of unique fabrication and characterization capabilities that could prove
valuable for developing novel QC devices. In contrast to many of these facilities, the cycle time
of a QC testbed is expected to be in the range of 3-6 months, owing to rapid technology
refreshes. To support these shorter timeframes, a QC testbed may need to adopt a
modular, easily upgradable design, with in-house expertise and on-call user support for rapidly
debugging user experiments in real time.
The combination of the nanoscale science research centers and other resources across the DOE lab
complex, such as large clean rooms, production-class fabs, and high performance computing
centers, has the potential to accelerate progress in QC for science.
1. The process of defining metrics and program goals must include the needs of the
user community. While this may seem obvious, there were multiple examples from
the panel discussion in which this point was not observed, to the detriment of the
facility.
2. Input from a group representing the entire user community is important to guide
investments and directions. This seemed especially important to the panel given the
relative immaturity of QC technology.
3. Past experiences strongly suggest that an agile and flexible management team that
develops a roadmap and works closely with all relevant stakeholders is better
positioned for long term success.
4. It takes time to build a user community for a facility or testbed. The existence of a
physical location where people can visit in person and interact in multiple ways has
been shown to facilitate co-design by accelerating the development of a vibrant
community. If only virtual access to a testbed were available, then the time required
to build the user community would likely be increased.
5. Moreover, the management/access model must be flexible and change as the
technology and testbed mature. With the accelerated cycles of learning expected
in the early development of a testbed, QC system modifications would be expected
on a 6-month cycle, not a multiyear cycle. As noted above, flexibility and agility
in designing and implementing facilities-related improvements on significantly
reduced cycles would also be needed to support this flexible management/access
model.
In addition to the summary points above, the whitepapers and group discussion included the
following key points:
Lessons learned from experiences at other user facilities. The NSF representative shared an
example of a prior NSF center focused on development of microwave interferometer technology
that he believed presented challenges analogous to those facing a quantum testbed program. Both
programs involve relatively immature technology pushing the boundaries of what is possible with
an unclear system integration path for ultimately blending that technology into a functional system.
The NSF program was successful for a number of reasons: the community identified the most
challenging engineering issues and took ownership of the final objective; the goals and metrics
evolved as the technology matured; the program was led by agile program leaders who were also
practitioners in the field; and importantly, the ambition and goals were matched by the available
budget. Another facility manager offered a negative counter-example regarding a facility that
developed a new capability only to have very few users. In retrospect, it is believed this occurred
because the user base was not surveyed to uncover and document the community needs prior to
facility planning.
The role of a testbed in bridging the gap between the academic and industry communities. In
general, discussion participants felt that the national laboratories can be an effective bridge
between these communities because they offer professional staff that can be tasked with
transitioning academic advances into initial practice. Because the path to maturing qubit
assemblies and the interface layers between hardware and software is ill-defined, there is a need
for an environment where accelerated cycles of learning, or “failing fast,” are rewarded without
possible repercussions for careers. The national laboratories provide just such a working
environment and therein provide value at the early stages of technology maturation. Another
observation about the testbed is that it can lower the barriers for academia and industry to enter
the field by providing a way to prove (or not) their technology building block by injecting it into
an operational system. New users would not require a fully functioning system of their own to test
their building block concept. Others noted a tension in the testbed concept between hardware
development time, when the QC will not be operational, and the time when the testbed is
being used for scientific simulations. While both are important, many participants considered
making qubit systems available to users a high priority for a testbed, and that new technology could
be incorporated into the testbed when available and appropriate. There is much to be learned about
how ensembles of qubits work that can help inform the next generation of hardware. Finally, it
was noted in this session and others, that the testbed could enable training for students who might
then launch careers in industry.
The impact and importance of interface standards. There was a diversity of thought about the
importance of, and priority for, setting interface standards. Standards can help broaden the user
base by clarifying how new technology building blocks can be integrated into an operating QC
system, thereby leading to a more engaged user community and driving innovation in all the
stakeholder communities. However, standards can also impede progress by unduly constraining
the solution space, especially if not broadly agreed to by the user community. Thus, helping to
recognize when a standard may be required, and then helping to foster a community-based
standards development process could be one of the roles for a quantum testbed advisory panel. In
both cases, standards should be flexible enough to evolve rapidly as the hardware/software stack
matures. A secondary benefit of the identification of standards could be the development of a
language to describe QC performance. Currently, there is no standard or even rough agreement on
what constitutes a QC, how to compare one QC to another, or how to describe the performance of
individual elements within the QC system. As the hardware and interface software layers (stack)
become available, the community’s lexicon will also evolve to allow performance of system
elements to be compared so the QC can be optimized for the problem at hand.
Metrics for success. The panel members described what their centers used for metrics of success.
In many cases, metrics were the number of publications and number of users served. These are
reasonable metrics for an established facility mostly stocked with mature tools but may not be the
best metrics for a quantum testbed where the technology is rapidly evolving, especially if
accelerating system-integration innovations is a testbed goal. Pulling from the NSF example
mentioned above, technical performance metrics may be the most appropriate in the first few years.
It was noted that there is an inherent tension between maintaining the testbed strictly for user
access and the continual churning of hardware and software that is at the heart of co-design. One
suggestion for the early stages of the testbed was to run in a campaign mode, alternating between
cycles of operation and engineering improvement. This suggestion was supported by the success
of an NSF center that operated in technology operation/improvement cycles until a reasonably
stable prototype was available to serve as a secondary testbed.
Possible role of an Advisory Panel (AP). Both the panel and discussion participants expressed
support for an advisory panel to counsel the testbed management on success metrics, oversee
progress, and serve a significant role in growing the user base through academic interactions and
their own potential involvement in the testbed program. One of the examples from the NSF center
highlighted the value added by advisory panel members who are leading experts in the field in
helping to set and achieve near-term and long-term objectives for the testbed. These functions are
critical to maintaining user engagement, utilizing the best contributors from across the DOE
complex, and maintaining the support of the broader community. The AP can also help evaluate
and prioritize proposals for user access in the context of the overall roadmap when the requests for
access begin to exceed the ability of the testbed to accommodate them. A frequently repeated
suggestion was to not get bogged down in process. Finally, the early stages of a quantum testbed
are likely to be capital-intensive; prioritizing available capital resources and leveraging other
programs such as hardware/software programs in the academic, industry and national labs will be
critical.
The balance between onsite and virtual access to the testbed. Transparency of the system design
and full access to the controls of the QC were believed to be critical to achieving rapid progress.
These principles were balanced by the recognition that full access carries risk to the operation of
the hardware. This tension can be mitigated by a graded approach to access that will evolve
with time. While the first testbeds are being assembled, access could be via onsite visits or email
discussions with the professional team that is in daily contact with the hardware/software stack.
As the systems mature or as more testbeds are developed, significant advantages could be realized
by opening up access to one of the testbeds via a virtual access portal. The IBM experience offers
several lessons—the virtual access portal generated a large number of users, an engaged public
response, and valuable metadata on how the system was actually used. However, the QC experts
expected to be the testbed’s early users will likely desire to operate with greater transparency and
with the ability to control more qubit parameters. The IBM web interface was also achieved at a
significant overhead cost that may not be the best use of resources in the short term. One of the
roles of the testbed management team and advisory panel would be to monitor the demand signals
for offsite access and adjust resources to meet that user desire. Finally, given the worldwide
interest in QC technologies, the QC testbed should both expect and actively encourage engagement
with the international community, with due regard for export control restrictions.
2.2 Staffing and Workforce Development
It is anticipated that staffing an experimental quantum computing testbed will require a cross-
disciplinary approach with an engaged workforce from varied backgrounds. This section
summarizes staffing and workforce development considerations raised in a group discussion and
in the whitepapers.
Composition of testbed staff. Developing and realizing a successful testbed will require domain
experts in computing hardware, cryogenics, vacuum design, atomic and molecular physics, lasers
and optics, electronics, physics, and engineering, as well as quantum algorithm developers and
implementers. National Laboratories possess a broad set
of personnel who can be tasked with working on several projects simultaneously. While a
dedicated full-time staff may be optimal for an unconstrained budgetary environment, staffing of
the testbed could initially be drawn from existing talent at the Laboratories.
Appropriate staffing depends on the phase of testbed development. Workshop participants
identified phases of testbed development with different staffing requirements: 0) planning for the
hardware; 1) building the physical testbed; 2) transitioning the testbed to operations; 3) a longer
period of standard testbed operations; 4) upgrade of the testbed (which repeats the cycle). Note
that this testbed cycle mimics the typical developments found in many of the ASCR computing
centers. Scientists and engineers would work together to provide requirements and planning during
phase 0. Significant PhD-level engagement across several disciplines will likely be required
during phases 1 and 2, including electronics, vacuum, and cryogenics engineers, atomic and molecular physicists,
HPC experts, and fabrication experts. Some participants suggested that it could require several
QC hardware experts to get a testbed up and running. During phase 3, the testbed would be
operational and require the engagement of hardware technicians, research scientists, and software
engineers to effectively utilize the hardware. It is possible that phase 3 might require involvement
of fewer individuals than the other phases; however, a significant effort to engage the broader
research community will also likely be required during this phase. Effective community
engagement will likely involve staff at several levels, ranging from instrument scientists to science
domain experts and interface developers. Planning for future upgrades (phase 4) is similar to phase
0.
Software development can proceed in parallel. As plans develop for successive generations of the
testbed, it would be logical to pursue a parallel strategy for testbed software development. This
effort could involve slightly different skill sets. Users of the testbed will need to be informed about
plans for software development for the quantum computing system. It may be possible to utilize
other successful open-source software approaches from the Office of Science, such as the one
utilized by the NWChem code development community.
Finally, participants noted that QC is not yet sufficiently well-developed to predict staffing needs
and that any model for a testbed should build in flexibility to adapt to this rapidly changing field.
2.3 User Community Development and Interactions
A panel of experts from across various industries participated in a discussion outlining user
communities across these industries in order to provide useful examples for a quantum testbed.
Panel members were from Los Alamos National Laboratory, the University of Washington’s
Institute for Nuclear Theory, the Association of Universities for Research in Astronomy (AURA),
and IBM. Panelists made the following key points:
1. A lesson from the astronomy community is that user facilities, especially those based on
new or cutting-edge technology, are best-served by a world-class staff who are both
professionally invested in the facility and at the same time transparent and accountable to
the users.
2. Longer-term face-to-face interactions focused on solving research problems have proven
more effective than short, presentation-heavy workshops in accelerating progress in areas
that cross disciplinary boundaries. The INT representative presented an example of a visitor
program that has been effective in bringing new ideas into nuclear physics as well as
sharing concepts in nuclear physics that have enriched other disciplines.
3. Exploring applications of quantum computing is often a high-risk endeavor for scientists
not already working in the field. Lowering the stakes by providing opportunities for
researchers to do small projects has been an effective means of engaging new users.
4. Well-developed interfaces are critical for non-expert users. As discussed below, testbed
staff may provide an adequate interface for a first-generation testbed while a more
sophisticated remote access capability is being developed.
The subsequent discussion highlighted how development of a user community and fostering a
vibrant, community-wide interaction are critical to the success of a testbed.
The importance of the user / operator interaction. The panel shared key insights on the important
elements of a functional user interface, noting the role of testbed staff. In particular, it was
emphasized that the QC systems operators who interact with users are one of the testbed’s most
important resources. It will be important for users to have a set of experts who operate the
testbed devices to serve as intermediaries between the users and the testbed. In the same vein,
most users will likely not interact with the testbed directly or be physically present in the same
room as the qubit devices. This mediated-access model will ensure the broadest reach to
potential users, especially beginners, and will ensure that the users cannot physically damage the
testbed. Before a fully automated, remote interaction model is developed, it may be useful to
have staff guide the users, in a way similar to how the nanoscience centers such as CNMS, or
certain telescope facilities managed by AURA, work. At CNMS, users may tell a staff scientist
that they want to make nanostructure X. Users may rely on staff to execute much of the
lithography to carry out that process. In a similar way, early users may tell the quantum testbed
staff to set up the quantum processor to run simulation Y. The staff can take a leading role in
actually making sure the machine is ready to go in the proper configuration to run Y, if such
tasks have not yet been automated. Nonetheless, many of the users will also be expected to have
expertise in quantum information science, and in the particular qubit technology used. These
advanced users may wish to propose experimental upgrades to the testbed and contribute to
future software and hardware development. A model for testbed operation that allows advanced
users greater flexibility could significantly enhance the testbed’s scientific productivity. Further, since one
important function the testbed could serve is to discover the failure modes of early-stage devices, it
is conceivable that more knowledge could be gained from early device failures than from successes.
Advanced users may be allowed to physically interact with the testbed devices and in doing so,
might be provided access to an enhanced set of controls versus those which a lay-user might
experience through the remote interface. While none of the existing user models discussed by the
panel appear to be a perfect fit for a quantum testbed, discussion participants felt that a
combination of the IBM Quantum Experience’s remote access model and AURA's flexibility in
supporting a broad spectrum of users - from mediated-access users to power users who interact
directly with the hardware - could be effective.
The role of early adopters in driving innovation. A first-generation testbed is likely to have two
distinct user groups: the first being expert quantum information researchers, and the second being
early adopters seeking to learn about QC in order to develop applications better suited to later-
generation testbeds. Avid users of the early testbed may be those who build or are capable of
building an operating testbed itself, with a focus on verification, validation, fault tolerance, and
other aspects of making a functional system. These “power users” can also be particularly valuable
members of a user group that aims to provide grassroots troubleshooting and community-based
support for new users. At the same time, forward-thinking domain scientists from nuclear physics,
chemistry, quantum field theory, and other disciplines are also likely to be among the early
adopters. These fields represent those in which quantum algorithms could provide substantial
advances if a scalable testbed capable of running large scale algorithms were available. The early
testbed would likely see a large proportion of physicists and quantum algorithm developers testing
new algorithm snippets and demonstrating fault-tolerant schemes (including encoding and
quantum error correction). As the testbed grows, primarily in number of qubits available, and
becomes capable of running more complex algorithms, the user community is expected to evolve
and take on a larger number of domain scientists. The users at this stage should not need to be
experts in quantum information science in order to take advantage of such a testbed.
Enabling the user community. The panel provided ample successful strategies for engaging new
users and developing a community of scientists from outside the quantum information science
field. Several user models currently in use, including one from Los Alamos National Laboratory,
provide hands-on courses to train users in this new technology. At IBM, extensive online tutorials
are provided, including videos with robust documentation. The astronomy community user model
has benefitted greatly from having expert users work in tandem with new and inexperienced users
in order to streamline access to complicated instruments while simultaneously propagating
knowledge across the broader user community. In quantum computing, each domain science might
have its own approach to user community development. The challenge of training QC users is
even observed in the current few-qubit systems that exist in laboratories. For example, there is a
significant learning curve for new users of these systems to implement even modest gate set
tomography algorithms. A model that draws from each of these strategies and provides a broad spectrum of
outreach and community development will have the highest probability of ensuring the testbed
mission’s success.
Fully engaging a user community. Multiple examples were discussed that describe how user
communities become invested and participate in the smooth operation and productivity of a
testbed. A model espoused by the astronomy community, in which users are active developers of
the testbed in terms of both future software and hardware, potentially has many parallels with a
quantum computing testbed. This model ensures that users help develop a testbed into something
that maximizes usefulness, and it ensures that the testbed operators and management remain
cognizant of new qubit technologies that could be integrated into the system. On the other hand,
while such a model is well-suited for a narrower domain science where most users are astronomers,
it may not be fully compatible with the broad base of different domain sciences expected to
comprise the quantum computing testbed user community because different domain sciences may
have different ideas of the direction that testbed development could take. A hardware upgrade may
be suitable for one domain, but not another. In contrast, current quantum computing testbeds at
IBM and LANL have more abstract levels of interaction, in which users do not have access to low
level hardware and do not participate in testbed development. This model is advantageous in that
the chances of breaking a device are minimized and a broader developer community could likely
be nurtured without requiring users to understand all aspects of hardware operation. At the same
time, it would be important to ensure that new improved technologies could be incorporated and
the testbed were not frozen into suboptimal solutions. Helping strike a balance between these
modes of operation could be a role for an advisory panel.
areas from combustion to materials modeling. Currently, a new set of Co-design Centers focused
on computational technologies like adaptive mesh refinement (AMR) is funded as part of the
Exascale Computing Project. Co-design of a quantum computer is as important as, or arguably even
more important than, co-design of a classical computer. Because of the relative immaturity of QC hardware,
architectures, and software, QC system development will both learn from the classical co-design
process and promote even newer development methods.
Speakers from LANL and the University of Maryland initiated a group discussion by presenting
lessons learned from classical co-design and thoughts on how quantum co-design might proceed.
Key points presented include:
1. Establishing a common vocabulary and building a community across diverse disciplines
are both challenging and absolute necessities, especially with non-traditional partners.
2. An expanded multi-scale co-design approach spanning all the way from materials science
to programming could be very effective for early-stage post-Moore’s Law technologies
such as QC. A testbed strategy could make this large endeavor more tractable by breaking
the problem into manageable chunks.
3. Exploring QC architectures, multi-qubit gate implementations, and qubit connectivities
through co-design will accelerate the development of useful devices. Specific hardware
implementations may not be universal, but lessons learned could apply to other
applications.
The subsequent group discussion identified key elements for testbed operation that will enable
rapid and effective iteration of the co-design loop and therefore improvements in capabilities:
1. Development of a broad range of benchmark problems based on DOE mission needs.
2. Agile and effective planning for deployment of improved qubit architectures, accounting
for any asymmetries in relative progress in hardware and software and allowing for
intercomparison of hardware instantiations.
3. Local instantiations of hardware, permitting much lower-level interactions, parameter
adjustments, and optimizations than would be available through pre-defined APIs.
4. A flexible software architecture that supports multiple hardware instances, a variety of
programming languages adapted to different developer communities (for example, domain-
specific languages), and transformations between various intermediate representations (see
the sketch after this list).
5. Performance modeling, emulation, and simulation tools that can use classical compute
resources to the extent possible to develop algorithms and test the software stack and aid
in hardware design.
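As an illustration of item 4, the sketch below shows one small link in such a software architecture: lowering a device-agnostic intermediate representation to an assumed hardware-native gate set. The gate names and rewrite rules are illustrative assumptions, not a proposed standard; a real testbed stack would need far richer representations, calibration hooks, and verification:

# Hypothetical abstract program: gates as (name, qubit indices...) tuples.
ABSTRACT_PROGRAM = [
    ("H", 0),          # Hadamard on qubit 0
    ("CNOT", 0, 1),    # controlled-NOT, control 0, target 1
]

def lower_to_native(program):
    """Rewrite abstract gates into an assumed native set {RZ, SX, CZ}."""
    native = []
    for gate in program:
        if gate[0] == "H":              # H = RZ(pi/2) SX RZ(pi/2), up to phase
            q = gate[1]
            native += [("RZ", q, 1.5708), ("SX", q), ("RZ", q, 1.5708)]
        elif gate[0] == "CNOT":         # CNOT = (I x H) CZ (I x H) on the target
            c, t = gate[1], gate[2]
            native += lower_to_native([("H", t)]) + [("CZ", c, t)] + lower_to_native([("H", t)])
        else:
            native.append(gate)
    return native

print(lower_to_native(ABSTRACT_PROGRAM))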
Principles of quantum computing co-design are evolving. In contrast to the classical computing
co-design space, the current co-design space for quantum computing is in many ways much less
mature. The number of algorithms for a given application (even the number of algorithms for any
application) is far fewer, and the understanding of their utility for important use cases, as well as their
resource consumption, is not as well known as in the classical case. In addition, though
considerable research, development, and engineering efforts have been made in the fabrication
of physical qubits, such investment has mostly targeted improving the number and quality of qubits
rather than exploring tradeoffs in system design and ties to application needs. Moreover,
differentiation in the type of error correction that can be implemented for the leading qubit systems
would be a significant addition to the co-design space. Altogether, each class of qubit hardware is
very different from the others, much more so than different CMOS-based chip designs. A testbed
could enable a library of well-defined results, including chip, gate, and qubit characterization
data, that will fuel future research and spur the design of benchmarks enabling QC
system comparison.
QC systems will combine classical and QC elements. Co-design for quantum computing is likely
to be a more challenging endeavor than in the classical digital space for the reasons mentioned
above, and also because an early testbed will likely involve both classical and quantum
components where the classical computer controls and interprets the results of the quantum
submodule. The optimization that will be required to join both technologies is largely unexplored.
Compared to the standards and engineering capital available in the classical computing world
(fabrication processes, circuit design libraries, instruction set architectures, compiler theory,
programming models, etc.) much more needs to be developed for classical/quantum hybrid
systems.
Co-design as a tool to bridge communities. The communities that must be brought together for
successful quantum co-design are more diverse than those required for classical computing co-
design. In addition to the application domain scientists, applied mathematicians, computer
scientists, and classical software and hardware engineers, the community needs to be augmented
with scientists (typically physicists) and engineers (microwave, optical) who can design and build
quantum hardware, implement error correction, and carry out validation and verification protocols. The
skill sets of the domain scientists, applied mathematicians, and computer scientists involved must
start to evolve to span the models, algorithms, and programming paradigms appropriate for
quantum computing. An early testbed device will likely involve the co-design of both quantum
and classical hardware, so the classical and quantum skill sets should not be siloed. Creating a
focal point around which a quantum co-design community can form, collaborate, and begin to
share their experiences and goals will maximize return on investment and accelerate progress.
The importance of control algorithms and the software stack. Until very recently, the software
control systems that drive current QC experiments have been restricted to labs where the devices
are fabricated, were not often shared, and had minimal formal structure. For co-design to be
successful, a common software stack must be expanded upwards from the hardware layers to
enable easier composition of simulations by domain scientists—for example, allowing remote
web-based access, and expanding downward to enable robust and efficient connections to the
hardware itself. In addition to a complete software stack, quantum hardware must start to evolve
towards common interface standards not just for input/output, but also for calibration data and
hardware metadata. In addition, the testbed community must also educate itself broadly on the key
challenges and tradeoffs at each level of the design process; doing this will require the testbed
community to develop a common shared vocabulary that bridges disciplines that have thus far
been unfamiliar with each other.
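A minimal sketch of the kind of shared calibration/metadata record such interface standards might expose (the field names below are illustrative assumptions, not an agreed schema):

from dataclasses import dataclass, asdict
import datetime
import json

@dataclass
class QubitCalibration:
    qubit_id: int
    t1_us: float                    # energy-relaxation time, microseconds
    t2_us: float                    # dephasing time, microseconds
    single_qubit_gate_error: float  # e.g. from randomized benchmarking
    readout_error: float
    calibrated_at: str              # ISO-8601 timestamp

record = QubitCalibration(
    qubit_id=0, t1_us=85.0, t2_us=60.0,
    single_qubit_gate_error=1.2e-3, readout_error=2.5e-2,
    calibrated_at=datetime.datetime.utcnow().isoformat() + "Z",
)
print(json.dumps(asdict(record), indent=2))  # serialized for a user-facing API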
The technical breakout sessions focused on the following topics:
4.1 Design and emulation tools - Output of these tools can characterize the impact of
computing model, size, performance, and qubit connectivity on testbed
performance.
4.2 Hardware characterization, verification and validation - A summary of available
verification and validation tools, and a discussion of their ability to predict the
performance of quantum algorithms on a testbed system.
4.3 Analog simulation - Using one quantum many-body system to simulate the
properties of another.
4.4 Tools for making a quantum testbed useful - Properties of a user interface and
operating system to facilitate the efficient implementation of quantum algorithms,
adaptations of compiler and optimizer to the strengths of disparate qubit
implementations, as well as the quantum control capabilities needed for
calibration, verification and validation and to implement algorithms.
4.5 Superconducting qubits - Properties, status, and challenges with this qubit
technology.
4.6 Trapped ion qubits - Properties, status, and challenges with this qubit technology.
4.7 Emerging qubit technologies - Survey of promising qubit technologies in addition
to superconducting or trapped-ion qubits.
4.8 Interconnects - Long range interconnects between qubits and their impact on the
ability to scale the number of qubits in a testbed system.
4.1 Design and Emulation Tools
In this session, participants discussed the status and efficacy of software tools used for the design
and performance evaluation of QC devices. The discussion was initiated by speakers from the
Georgia Tech Research Institute (GTRI), Lawrence Berkeley National Laboratory (LBNL), and
Pacific Northwest National Laboratory (PNNL). Take-away points from the discussion included:
New modes of quantum computing emulation needed. Performing a “brute-force” Hilbert-space
simulation of multi-qubit systems is computationally expensive—each extra qubit doubles the
simulation effort. Since 2010, the world record for the quantum circuit simulation by a
supercomputer was 42 [3]. Only recently (April, 2017, after the workshop completed), has this
increased to 45, using the supercomputer ranked #5 on the Supercomputing Top 500 list, Cori [4].
When the effects of general Markovian noise and decoherence are incorporated into the simulation,
this record drops down to closer to 20 qubits. With companies such as Google planning to
demonstrate a 49-qubit quantum computer before 2017 ends, it will not be long before
supercomputers are no longer able to keep up. New modes of quantum computing simulation and
emulation are needed.
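The memory cost alone makes the doubling concrete. A minimal sketch (straightforward arithmetic, assuming 16-byte complex amplitudes) of the storage needed for a full state vector:

def statevector_memory_gib(n_qubits):
    # 2**n complex128 amplitudes at 16 bytes each, expressed in GiB.
    return (2 ** n_qubits) * 16 / 2**30

for n in (30, 42, 45, 49):
    print(f"{n} qubits: ~{statevector_memory_gib(n):,.0f} GiB")
# 30 qubits: ~16 GiB; 42 qubits: ~65,536 GiB (64 TiB);
# 45 qubits: ~524,288 GiB (0.5 PiB); 49 qubits: ~8,388,608 GiB (8 PiB)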
By following ideas developed for simulating classical computers, several possibilities exist. These
include doing analytical calculations, Monte Carlo numerical studies, and multi-scale simulations.
The challenge with any non-exact method is building accuracy and trust. This is an area in which
expertise in emulating complex classical hardware could be helpful. Eventually, just as classical
computers are used to design and evaluate advanced classical computers, the expectation from the
group was that small quantum computers would be used to design and evaluate advanced quantum
computers.
Leveraging early-generation classical computing design and evaluation tools. It has been many
decades since classical computers were at the 5-10 logical gate level, and few remember what
design and evaluation tools were utilized at that time. Quantum computers, on the other hand,
currently consist of a relatively small number of physical qubits that have yet to demonstrate error-
corrected operations. To overcome this, it was observed that an effective design and evaluation
tool for a quantum testbed might productively center on the various error mitigation strategies one
might employ. These include optimal and quantum control methods, dynamical decoupling and
pulse-sequence methods, and quantum error-correction methods.
Benchmarking a quantum testbed. Because benchmarking suites have been influential in steering
the development of classical computers, it is natural to ask about developing a standard suite of
quantum algorithms to serve as benchmarks for a quantum testbed. Because relevant applications
for DOE will likely involve many logical qubits, quantum algorithms on physical qubits will
probably not provide a path for architects to design for these large-scale applications of interest.
It was suggested that perhaps the most important benchmarks would measure how well logical
qubits perform at relevant tasks.
Related to benchmarking is the need to establish relevant specifications for quantum computers.
Because quantum-computing technologies can differ substantially from classical ones, it is unclear
which tangible properties are the right ones to compare; developing them is likely to be an organic
process as the quantum testbed develops, and not one that can be prescribed ahead of time.
4.2 Hardware Characterization, Verification and Validation
One of the important reasons to build a quantum testbed is to use it to learn about issues relevant
to the design of larger, DOE mission-scale machines. This includes learning about failure modes,
controllability, and challenges to scalability. For this reason, an important task for a quantum
testbed is to execute protocols for characterization (“What did we build?”), verification (“Did we
build it correctly?”) and validation (“Did we build the right thing?”). Only when a quantum testbed
passes these protocols can we trust it to provide us with the information we seek.
The discussion of hardware quantum characterization, verification, and validation (QCVV) was
initiated by speakers from the University of Sydney, LANL, and SNL. Take-away points from the
discussion included:
1. One of the important near-term testbed applications is fault-tolerant quantum error
correction (FTQEC). Some mission-scale algorithms will only run on logical (error-
corrected) qubits.
2. One valuable testbed objective would be to develop control and error models. If this
objective were achieved, the understanding of the science and engineering path to achieve
larger machines will be greatly improved.
3. Low-level QCVV tools play an important role in debugging FTQEC. These tools are
needed to troubleshoot and learn why FTQEC fails if/when it does.
4. Analog simulators and quantum annealers have significantly different QCVV issues than
digital quantum information processors. QCVV for these three types of QC systems must
be considered separately. A digital QC component to a testbed will be essential for
exploring QCVV for science and energy applications of QC.
Learning control and error models with a quantum testbed. One of the primary applications for a
testbed-scale system could be to develop a path from understanding the performance of
physical qubit assemblies to the development of error-corrected logical qubit building blocks.
Each error-corrected logical qubit, composed of multiple physical qubits, is the analog of a logical
bit in a conventional electronic computer that reliably processes or stores digital data. Because
quantum computers are not just extremely powerful but also extremely fragile, FTQEC will be
required, which in turn demands fully understanding and then optimizing a QC at the physical qubit level. The panel
discussion showed that many FTQEC codes are under consideration, each with their own pros and
cons, but no clear leader has emerged. Hence, a quantum testbed would be an excellent crucible
within which to determine which quantum codes work best in the realistic noisy environments of
the technologies employed in that testbed, or that it can simulate faithfully. Moreover, to indicate
how the performance of FTQEC might scale to mission-scale machines, it is essential that multiple
FTQEC codes and protocols be investigated, at different sizes. Learning the control model and the
error model for one or more quantum devices would be a significant testbed accomplishment.
There is no substitute for direct testing of multiple QEC protocols. While a handful of QEC
experiments have been demonstrated to date, none have run fully fault-tolerant QEC (including
preparation and measurement), and none have done so in an iterated fashion that could sustain a
logical qubit for a user-specified amount of time. Because of this, virtually all of the work on
FTQEC has been theoretical and has rested on numerous assumptions about what kinds of control
capabilities will be available in a real quantum computer and what kinds of errors will afflict the
hardware executing FTQEC. Building an actual quantum testbed
would reveal the “ground truth” for both the control and error models studied by theorists. Many
of the discussion participants suggested that the models currently used by most theorists are
probably too “simple,” and that real data are needed from a quantum testbed to assist theorists in
generating more valuable and practical FTQEC solutions, as well as developing error suppression
strategies that could even be implemented in the physical layer of a QC device. Examples of errors
whose parameters are not necessarily well known for all technologies or whose error-mitigation
strategies would benefit from further development include non-Markovian errors, burst errors, and
leakage errors.
Additional QCVV and FTQEC research will be required as QC devices advance. Looking deeper at
FTQEC, there was an expectation that the current tools will fail at some point and will then require
finer-grained QCVV tools for debugging qubit performance. The group discussed several options,
including randomized benchmarking, gate-set tomography, and Hamiltonian identification. This
is a very active area of research, and there is no consensus in the field about which method might
be best. There was substantive discussion about how fine-grained the models need to be. On the
one hand, while noise affecting different gates and qubits is generally expected to become
independent in time and space beyond some scale, QCVV protocols would need to be executed at larger
and larger scales to determine what that scale is. Because the cost of tomographic characterization
protocols scales exponentially with system size, this is expected to be an arduous task. On the
other hand, it may not be necessary to know the details of the noise model at such a fine scale.
There could be many aspects of the noise model to which the key quantum testbed application—
FTQEC—is insensitive. From an operational viewpoint, the only aspects of the noise model that
need to be known are those that impact FTQEC. The upshot of this discussion was that there is a
need to better connect low-level QCVV tools with FTQEC.
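As a minimal sketch of what connecting a low-level QCVV tool to a concrete error figure looks like, the following example fits the standard randomized-benchmarking decay model A*p^m + B to synthetic single-qubit survival data (all numbers assumed for illustration) and converts the fitted decay into an average error per Clifford.

```python
# Illustrative randomized-benchmarking analysis on synthetic single-qubit data.
# Survival probability versus sequence length m is modeled as A * p**m + B;
# for one qubit the average error per Clifford is r = (1 - p) / 2.
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(1)
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
p_true, A_true, B_true = 0.995, 0.5, 0.5                 # assumed "device" values
data = rb_model(lengths, A_true, p_true, B_true)
data = data + rng.normal(0.0, 0.01, size=lengths.size)   # finite-shot scatter

popt, _ = curve_fit(rb_model, lengths, data, p0=[0.5, 0.99, 0.5],
                    bounds=([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))
A_fit, p_fit, B_fit = popt
print(f"fitted p = {p_fit:.4f}, error per Clifford r = {(1 - p_fit) / 2:.2e}")
```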
Analog quantum simulators and quantum annealers. The panel discussed how QCVV would work
on a testbed operated in an analog quantum simulation mode. The view of many participants in
this discussion was that there is no general-purpose QCVV methodology known, and whether it is
possible to develop one is an open research question. A lack of QCVV would limit (but not
eliminate) the value of these types of QC devices for some domain science applications. In the
absence of a general methodology, several partial cross-checks could be tried: comparing two
distinct analog simulators targeting the same problem, or comparing against results of known
accuracy, or perhaps related results, obtained from classical simulation techniques. Similarly, evaluating the major sources of
error in an analog quantum simulator emulating a many-body Hamiltonian is a very difficult task.
Typically, the Hamiltonian used to describe the dynamics of the laboratory (simulator) system is
an approximation, introducing errors compared to that of the simulated system; ideally there
should be enough classically tractable limits or special cases to diagnose possible errors in the
simulator.
Overall, discussion participants had mixed opinions about the appropriate role for analog quantum
simulation in a testbed. Analog simulation could be a powerful testbed component, provided that
QCVV concerns are adequately addressed and the simulator hardware is sufficiently flexible to
address several computationally well-defined and broadly interesting classes of problems.
4.4 Tools for Making a Testbed Useful
While quantum algorithms [9], quantum complexity [50], and programming quantum computers
[10-12] have been active topics of research for more than 20 years, only recently have software
architectures or workflows to implement quantum programs on actual devices been studied
[13-14]. Figure 4.1 illustrates what such a system could look like: a software stack with tools
supporting every step from a simulation described at a level of abstraction accessible to a domain
scientist down to execution on quantum, or simulated quantum, hardware.
Speakers from GTRI, LBNL, and ORNL initiated a discussion that yielded the following key
points:
A testbed both needs and is needed to create QC software. A modular software architecture is an
important element for creating a usable testbed and is a necessary part of being able to execute
hardware/software co-design for more effective next-generation systems. In parallel, a testbed
could provide both an environment to evaluate tools in the software stack and a community to
guide the development of standards between software layers. Such standards would allow the
development of additional software components that could provide users with higher levels of
abstraction and guarantee interoperability with middleware components and multiple hardware
platforms.
Many gaps exist in the hardware / software stack. One of the gaps identified in many current
implementations of quantum computing workflows is modular software to control hardware
devices at a fundamental level. These systems tend to be specific to the laboratory where the
device is built, and disconnected from other tool chains. There is also a lack of software in the
community for specifying problems at the top level of Figure 4.1, i.e., describing a
simulation in terms an application scientist would intuitively grasp, such as specifying a Hamiltonian.
Similarly, software for mapping one Hamiltonian (that of the simulation) to another (the one that
describes the device) is largely absent.
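As a minimal sketch of what a top-of-stack problem specification could look like (a hypothetical format, not an existing tool), a domain scientist might describe a small transverse-field Ising model as a list of weighted Pauli strings, leaving it to lower layers of the stack to map those terms onto device-level operations.

```python
# Hypothetical top-of-stack problem description: a transverse-field Ising
# Hamiltonian on a few spins, written as weighted Pauli strings. Lower layers
# of the stack would be responsible for mapping these terms onto the device.
from typing import List, Tuple

PauliTerm = Tuple[float, str]   # (coefficient, Pauli string such as "ZZI")

def tfim_hamiltonian(n: int, j: float = 1.0, h: float = 0.5) -> List[PauliTerm]:
    terms: List[PauliTerm] = []
    for i in range(n - 1):               # nearest-neighbor ZZ couplings
        label = ["I"] * n
        label[i] = label[i + 1] = "Z"
        terms.append((-j, "".join(label)))
    for i in range(n):                   # transverse field on each spin
        label = ["I"] * n
        label[i] = "X"
        terms.append((-h, "".join(label)))
    return terms

for coeff, pauli in tfim_hamiltonian(3):
    print(f"{coeff:+.2f} * {pauli}")
```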
Modular software tools and standard APIs encourage development. It is likely that in the next two
to five years, progress in physical hardware will enable the deployment of systems that we are not
currently able to emulate or simulate, and improved tools will help prepare users for future
systems. It is important to keep the software and tools ecosystem as flexible as possible to
encourage a broad community and shared software ecosystem development.
While the list of relevant software is growing, a comprehensive catalog is lacking. There is active
work on describing a software stack at a conceptual level [14], but there are few comprehensive
assessments of the usability and utility of current QIS software and tools, spanning from the
application developer down to qubit control. The following list of computer science tools that could be
useful for testbed implementation was developed during and after the session. The list is meant to
be illustrative rather than comprehensive; a brief usage sketch follows the list.
XACC (ORNL) – Programming framework for hybrid quantum/classical applications
Circuit Scheduler and Target Translator (GTRI) – Scheduling and mapping
instructions to quantum hardware
ProjectQ (ETH, et al.) – Embedded Domain Specific Language;
Simulation/emulation; Compiler with interface to multiple gate-based intermediate
representations
Scaffold (U Chicago, et al.) – C-like language and compiler
Quipper (Applied Communication Sciences, et al.) – Functional language and
compiler
ARTIQ (NIST) – Embedded Domain Specific Language for control for trapped-ion
qubits
FermiLib (Google / LBNL) – Library to facilitate fermion simulations with ProjectQ
PyQuil, Quil & Forest (Rigetti Computing) – Embedded Domain Specific Language;
Intermediate Representation; Instruction Set Architecture and Simulation
LIQUi|⟩ (Microsoft) – Embedded Domain Specific Language (in .NET);
Simulation/emulation; compiler & runtime
QCoDeS (Copenhagen / Delft / Sydney / Microsoft) – Python-based data acquisition
framework
QX, QISKit (IBM) – Visual user interface, open quantum assembler language
specification
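As an indication of how lightweight the entry points to some of these tools are, the sketch below prepares and measures a Bell state using ProjectQ's embedded domain-specific language and built-in simulator; the calls reflect the publicly documented ProjectQ interface from around the time of the workshop and are shown only as an illustration, not as an endorsement of any particular tool.

```python
# Minimal ProjectQ sketch: prepare and measure a Bell state on the package's
# built-in simulator (interface as publicly documented circa 2017).
from projectq import MainEngine
from projectq.ops import All, CNOT, H, Measure

eng = MainEngine()                # defaults to the bundled simulator backend
qureg = eng.allocate_qureg(2)

H | qureg[0]                      # put qubit 0 into an equal superposition
CNOT | (qureg[0], qureg[1])       # entangle qubit 1 with qubit 0
All(Measure) | qureg              # measure both qubits
eng.flush()                       # send the circuit to the backend

print("measured:", [int(q) for q in qureg])   # 00 or 11, each about half the time
```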
Software Validation and Verification is also important. Several workshop participants pointed out
that verification and validation for software tools is just as important as the hardware V&V
discussed previously. Established techniques such as formal methods, satisfiability modulo
theories (SMT), etc., may need to be extended.
Figure 4.1. Illustrative Sketch of Key Components for an End-to-end Software Stack
4.5 Superconducting Qubits
Macroscopic superpositions in cryogenic (T ~ 20 mK) electronic circuits form the basis of the
superconducting qubit [15]. Although many variations of the qubit have been investigated, such
as charge, flux, or transmon devices, they all crucially leverage superconducting metals to realize
ultra-low-loss linear elements that are combined with one or more Josephson junctions operating
as a nonlinear element to realize a tunable artificial atom. Superconducting qubits have now been
demonstrated by many groups, with applications including factoring [16], quantum annealing [17,
18], and quantum simulation [19]. Moreover, the applicability of many traditional semiconductor
manufacturing and high speed electrical signal processing techniques support the idea of a scalable
quantum processor architecture using these qubits.
Superconducting qubit technology was discussed at length in this breakout session, which began
with presentations from LBNL and MIT Lincoln Laboratory, addressing advantages and
disadvantages, enabling technologies, and scaling challenges. Key points include:
Movement of quantum information is a technical challenge to scaling to larger numbers of
interconnected qubits. Current iterations of superconducting qubit technology have shown
impressive results in systems ranging from 2 to 9 qubits, and technological developments are in
progress that will allow scaling beyond this point. A crucial step for scaling this architecture is
the movement of large quantities of both classical and quantum information, and in
particular the challenge of resource-efficient and robust quantum/classical interfaces on the control
and readout lines in a many-qubit architecture. For example, measurement and back action, both
due to engineered circuitry and the uncontrolled electromagnetic environment, play a crucial role
in determining processor coherence and operation fidelity. Maintaining such application-imposed
constraints on quantum behavior in a scalable device is a topic of active research.
Circuit topology affects scalability. Depending on the circuit used, limitations may arise from the
fundamental topology of the qubits. Devices thus far have essentially been limited to 1D
connectivity, or have had too few elements for this to be considered.
qubit topology, implementations of leading quantum error correcting codes may not be possible,
and other simulation algorithms may also suffer from unnecessary overhead in qubit swap
operations. An enabling technology to overcome this technological gap that is currently being
pursued by many research groups is 3D integration. This integration process can include a number
of steps and components, including through-silicon-vias (TSVs), bump bonds, and thermo-
compressive bonding. A fully controlled 2D lattice with flexible connectivity has yet to be
experimentally demonstrated.
Cryogenic electronics are a mid-to-long-term need. As the number of qubits grows, some suspect
a move in the control electronics from room temperature hardware to cryogenic electronics may
be required. However, others believe this is beyond the timescale of the testbed, perhaps 5-10
years out, and these electronics could be introduced only as needed and not before. It was
speculated that current control and fridge electronics will be sufficient for up to 100 or 100s of
qubits, but that fundamentally new solutions may be required to reach the level of 1000s. Some
work has already been done to integrate other quantum technologies with cryoelectronics and the
difficulties were not insurmountable; however, more research is required to evaluate the specific
impact on superconducting technology.
A testbed may yield critical insight into mitigating crosstalk. A potential challenge in the
scalability of superconducting qubits using microwave control is crosstalk between the qubits.
Some work has been done on 3D integration over the past decade in an attempt to reduce crosstalk
to reasonable levels, and some believe that a potential path for mitigating this crosstalk
is local shielding structures. Planar architectures may always suffer from some degree of crosstalk,
but it is an open question as to how much of an impact this will have at the level of 100 or 1000
qubits. There is concern that it may increase correlated errors within systems that are not fully
compatible with current error correcting codes.
The concern with respect to crosstalk is closely related to the attendees’ view on the optimal
outcomes of a testbed system within the next few years. In particular, the construction of a system
with 100-1000 qubits will allow, for the first time, a glimpse into the types of errors we expect in
a scalable implementation of superconducting qubits. This information will be essential both for
engineering future devices and for theoretical algorithmic development. Ideally, these devices
would allow theorists to more rapidly prototype algorithmic innovations, such as those needed to
realize simulation of quantum systems on near term devices with real noise.
In summary, the attendees felt positive about the scalability of superconducting qubits in both the
short and long term, and that current implementations were well suited for a testbed system. The
challenges to overcome, such as 3D integration, scaling of electronics, and crosstalk, would be
made more tractable through the development of a testbed device, whose ideal outcome would
provide widespread benefit for experimentalists and theorists alike.
4.6 Trapped Ion Qubits
It has long been recognized that qubits can be encoded in the clock states of trapped ions. These
states are well isolated from the environment, resulting in long coherence times [20], while enabling
efficient, high-fidelity qubit interactions mediated by the Coulomb-coupled motion of the ions in
the trap. Quantum states can be prepared with high fidelity and measured efficiently using
fluorescence detection. State preparation and detection with 99.93% fidelity have been realized in
multiple systems [20, 21]. Single-qubit gates have been demonstrated below rigorous fault-
tolerance thresholds [20, 22]. Two-qubit gates have been realized with more than 99.9% fidelity
[23, 24]. Quantum algorithms have been demonstrated on systems of 5 to 15 qubits [25-27].
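A simple back-of-the-envelope estimate, treating the fidelities quoted above as nominal inputs and assuming (crudely) that gate errors are independent and multiplicative, illustrates why such numbers matter for uncorrected circuits of increasing depth.

```python
# Crude success-probability estimate for an uncorrected circuit, assuming gate
# and SPAM errors are independent and multiplicative (a strong simplification).
spam_fidelity = 0.9993        # state preparation and measurement, per qubit
two_qubit_fidelity = 0.999    # entangling-gate fidelity, per gate

def success_probability(n_qubits: int, n_two_qubit_gates: int) -> float:
    return (spam_fidelity ** n_qubits) * (two_qubit_fidelity ** n_two_qubit_gates)

for gates in (10, 100, 1000):
    print(f"5 qubits, {gates:5d} two-qubit gates: "
          f"P(no error) ~ {success_probability(5, gates):.3f}")
```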
Speakers from MIT Lincoln Laboratory, Innsbruck, and SNL initiated a discussion to explore the
current status, challenges and promise of trapped ion based quantum computing that yielded the
following key points:
There are multiple paths to increasing the number of interconnected qubits. A hierarchy of
approaches can be used to scale trapped ion quantum processors [28]. First, ions can be trapped
and manipulated in chains of moderate length. Even though these are 1-dimensional chains,
interactions between any pair of ions can be realized leading to a fully connected graph [25].
Second, on a single trap chip, larger systems can be realized by shuttling ions, and thus qubits, in
microfabricated trap structures [29]. Shuttling not only enables larger systems, but allows one to
realize dynamically reconfigurable systems that can then be adapted to the requirements of
different quantum algorithms. Finally, using the optically active ion qubits and remote
entanglement, trapped ion systems can be scaled beyond a single chip [30]. This approach enables
the assembly of a large quantum information processor from identical elementary logic units.
Technical challenges to be addressed to realize scaling are: the anomalous heating of ions in close
proximity to trap electrodes [31]; realizing an excellent vacuum to achieve long lifetime of ion
chains; mastering the control complexity in shuttling ions between different sites with minimal
heating of their motion and directing the necessary control laser beams onto individual ions; and
realizing a sufficiently strong atom-light interaction for efficient generation of remote
entanglement.
Integrated photonics, electronics, and systems engineering are critical enablers. Realizing ion traps
capable of trapping ions in multiple locations and shuttling ions between different locations relies
on microfabrication. Current fabrication technologies enable one to build almost any surface ion
trap [32]. Important next steps will be the integration of light delivery [33] and detection systems
with the traps. While this is a challenging task, a good balance between monolithic and hybrid
integration techniques will make these integrated devices possible. Finally, integration of voltage
generation and optical modulation systems would reduce the number of necessary
control lines per qubit and thus be of great value for increasing system size while keeping control
complexity manageable. Systems engineering will be an important aspect in balancing monolithic
and hybrid integration and achieving the best system performance.
Trapped ion systems are flexible and reconfigurable. In small systems, fully connected interaction
graphs can be realized with important advantages for algorithm performance [34], while shuttling
of ions can provide a means of dynamically reconfiguring the system for optimal performance of
a quantum algorithm. With the same system, analog quantum simulation as well as fault tolerant
digital quantum computation can be achieved.
Clock speed is a potential limiting factor. While trapped ion systems offer large coherence time
to gate time ratios and high-fidelity operations, the currently achieved clock speeds are
considerably slower than in superconducting qubit systems. Although these clock speeds might be
sufficient to realize algorithms, the statistics necessary to calibrate and characterize a trapped ion
quantum processor will take correspondingly longer to collect.
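To illustrate the point about statistics, the rough sketch below estimates the wall-clock time needed to collect a fixed set of characterization data, using assumed, order-of-magnitude gate times and per-shot overheads rather than measured values from any particular system.

```python
# Order-of-magnitude wall-clock estimate for collecting characterization data.
# Gate times and per-shot overheads below are assumptions for illustration,
# not measured values from any particular system.

def campaign_seconds(shots, circuits, gates_per_circuit, gate_time_s, overhead_s):
    """Total time = circuits * shots * (circuit duration + per-shot overhead)."""
    return circuits * shots * (gates_per_circuit * gate_time_s + overhead_s)

shots, circuits, depth = 1000, 500, 100
slow = campaign_seconds(shots, circuits, depth, gate_time_s=100e-6, overhead_s=1e-3)
fast = campaign_seconds(shots, circuits, depth, gate_time_s=100e-9, overhead_s=10e-6)

print(f"ion-trap-like timescales:         {slow:10,.0f} s (~{slow / 3600:.1f} h)")
print(f"superconducting-like timescales:  {fast:10,.0f} s (~{fast / 3600:.2f} h)")
```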
In summary, a trapped ion testbed could take immediate advantage of the high coherence time to
gate time ratios, high fidelity operation, and reconfigurability. As discussed elsewhere, a testbed
could also facilitate overcoming technical challenges to scalability.
4.7 Emerging Qubit Technologies
While there was significant focus at the workshop on superconducting cavity and trapped ion
technologies, there are a number of emerging qubit technologies that may overcome some of the
possible scaling issues with these more mature technologies. Speakers from Stanford and SNL
guided a discussion about the status, promise, and challenges of these alternative qubit
technologies that yielded the following key points:
There is inherent value in maintaining diversity among qubit technologies. Many of the session
participants felt that it is too early to choose a winning qubit technology, and that
precompetitive basic research into a variety of approaches should continue. Of the admittedly
non-exhaustive set of emerging technologies reviewed during this session, silicon quantum dots were
considered to be moving very quickly because two-qubit gate constructions have become more
reliable in only the past couple of years. Trapped neutral atoms have very simple and scalable two-
qubit gate implementations. Finally, a third technology, photonics, was considered to be emerging
as both a qubit candidate and as an interconnect between separated qubits.
Other qubit technologies discussed included:
1. Various quantum dot systems, including alternatives to silicon. Silicon-based
qubits with bismuth donors in place of phosphorus promise to solve the problem of
nondeterministic defect placement, potentially leading to scalable manufacturing
[35].
2. Nitrogen vacancy centers in diamond, which offer long coherence times and a route
towards scalable quantum memories [36].
3. Other spin donors [37].
4. Trapped electrons on liquid helium [38].
5. Photonic quantum annealers [39] were discussed in the context of non-universal
QC, in which a coprocessor can be used to accelerate a certain task, such as machine
learning, without the need for a universal gate set. These devices could also lead to
a general quantum Hamiltonian simulation paradigm [40], but more research is
needed on where the quantum speedup occurs in order for quantum annealers
(optical or otherwise) to be truly useful.
Each of these qubit technologies has inherent limitations and potential advantages that will require
additional research to explore more thoroughly. It is likely that some of them will be particularly suited to
certain classes of applications. The group generally considered photonic qubits the most mature of
the technologies discussed, with a larger number of algorithms and error correction schemes
demonstrated to date (including Grover’s algorithm [41], homomorphic encryption [42], machine
learning [43], a surface code demonstration [44], and various simulators [45-47]). However,
progress on all of them should be monitored for possible inclusion into future testbed architectures.
4.8 Interconnects
Speakers from the University of Illinois and Raytheon BBN initiated a discussion of the
importance of connecting qubits, interconnect technologies, and technical challenges. Key points
included:
Optical interconnects could be an important element of a fault-tolerant quantum computer. This
need arises from the belief that only a certain number of physical qubits will fit into a single
vacuum chamber/dilution refrigerator, likely far fewer than required to carry out a fault-tolerant
quantum operation. Since fault-tolerant qubits require entanglement among many qubits in a
stabilizer state in order to carry out quantum error correction, connections between qubits in
different chambers will be required if such codes are to be implemented across the number of
qubits needed to make a fault-tolerant device useful for a meaningful computation. The need can
be reduced by placing as many qubits as possible within one device, but stakeholders present at
the session, and more broadly at the workshop, stated that somewhere between a few dozen and
one hundred qubits could likely be expected to fit into a single device, depending on the
technology.
The appropriate interconnect technology depends on distance. For the long distance interconnects
between two separate chambers, photonics-based interconnects may be the appropriate
technology. However, it was also noted that over short distances, and within the same physical
device, microwaves are good interconnects for qubits such as superconducting current loops,
because microwaves are a natural way to address and control such qubits.
Scalability remains a challenge. In either case, interconnect technology has obstacles to overcome
before becoming scalable. Current first generation interconnect proposals based on schemes such
as entanglement swapping and joint Bell measurements have very low success probabilities,
making them inadequate for scaling. With a low chance of successfully mediating entanglement
between two distant qubits, a computation that requires more than one chamber/qubit device is
itself infeasible. This is an active area of research, and the technology for scalable interconnects
is expected to emerge.
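A rough calculation shows why low success probabilities are limiting: a heralded link that succeeds with probability p requires on average 1/p attempts per entangled pair, so the expected time per pair grows rapidly as p falls. The numbers in the sketch below are assumptions chosen only to show the scaling.

```python
# Expected time to herald one remote entangled pair over a photonic link,
# assuming independent attempts at a fixed repetition rate. All numbers are
# assumptions chosen only to illustrate the scaling with success probability.

def seconds_per_pair(success_prob: float, attempt_rate_hz: float) -> float:
    expected_attempts = 1.0 / success_prob     # mean of a geometric distribution
    return expected_attempts / attempt_rate_hz

attempt_rate_hz = 1e5                          # attempts per second (assumed)
for p in (1e-2, 1e-4, 1e-6):
    t = seconds_per_pair(p, attempt_rate_hz)
    print(f"success probability {p:.0e}: ~{t:.3g} s per entangled pair")
```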
5 Industry Perspectives
The development and demonstration of operational quantum computing systems will require the
close coupling of industry, academia, national labs, and government within a common framework.
Each player brings strengths to the challenge, and leveraging these is one of the key tenets of the
co-design practice. To begin to build understanding between these players, industry
representatives from large and small companies were invited to share their perspectives on the
near-term promise of quantum computing, the major challenges to developing this technology, and
the role government might play in advancing the development of quantum computers.
The panel included representatives from IBM, Google, Rigetti Computing, IonQ, ColdQuanta, and
Quantum Circuits. Each of these companies has made significant internal investments to develop
quantum computing systems. In addition, Sandia National Labs shared their experience on
partnering with industry to develop high-performance computing architectures.
Decide on the overarching goal and initial applications. A number of common themes emerged
from both the presentations and subsequent discussions. One of the first themes to emerge, even
before this focused industry session, was that a clear definition of the overarching goals and the
application space of a quantum testbed was needed. By having general community agreement on
the objectives of an initial 6 - 10 physical qubit system, possible partners can see how their
technologies can integrate into a system. There was recognition that general community
agreement will never equate to complete agreement, but that significant forward momentum can
be generated and sustained by the establishment of a community practice to share and debate ideas.
This initial community understanding can then be used to roadmap the future to define the
technologies and major application space that subsequent QC systems will fill. Further expanding
on the need for specific goals, one of the common points of discussion was the quest for succinct
articulation of the major problems a QC might solve. In the context of this DOE-sponsored
workshop, participants discussed DOE-centric applications of QC such as materials design,
molecular structure, and possibly material subjected to extreme environments. This was balanced
by the comment that this focus does not need to preclude other applications, even though there is
a need to focus efforts in the early stages of QC system development to achieve early program
wins, and thereby build a broad base of support.
Define the metrics for measuring progress. Another theme to emerge from the panel and
discussion was determining the role the testbed would serve in fostering an ecosystem where the
technical language to properly describe QC technologies and their performance can grow. One of
the major goals of industry is to sell QC systems to a wide variety of users, including the US
government. However, at this early stage in the technology development, there are neither tools
nor even a common language to describe system performance analogous to the commonly used
HPC benchmarking standards. These benchmarks are important because they provide one basis
for making the myriad technology and architecture co-design tradeoffs that will likely be required
to develop specialized QC systems. They also serve as a way to make fair and balanced
comparisons between technologies, including between conventional HPC and QC. One way to
foster the development of a comparison language is by forging a closely coupled workforce in a
work ecosystem wherein to be successful, individuals must develop a common language to
describe the technology, architectures and performance of these currently underdeveloped
systems. In addition, the ecosystem provides a training ground for people who will eventually move
to industry.
The importance of standards and transparency. One possible outgrowth of a common benchmark
language is the development of appropriate interface standards. Standards are recognized to lower
the barrier to entry for new ideas and technology into existing systems. The innovation of smaller
technology firms, or academia, can be supported because they are not required to have a complete
operational QC system to test their building block ideas. Another positive aspect of standards is
that they encourage transparency and open source solutions. A number of the companies present
noted that open source efforts and transparency will be important to engage the broader community
and share the cost of system development. It is believed that transparency will also encourage the
development of unique applications for which QC systems are the only viable solution. Finally,
standards, like the relationships between institutions that generated the need for standards, need to
be flexible and time dependent. A testbed could facilitate this by opening up options to test new
ideas and enable a user to fail fast so that they can quickly recover and try again—and eventually
succeed.
5.1 Roles for Government and Industry in a National Ecosystem
Historically, industry and government have teamed to accomplish significant goals that increase
national economic competitiveness, ensure national security, and improve citizens’ lives. The goal
of this session was to understand some of the elements of a successful partnership. Speakers from
the DOE partnerships office and Intel helped guide the discussion with the following take-away:
DOE is committed to providing a flexible organizational framework that ensures that the legal
needs of all the participants are recognized and protected while not stifling the technical
interactions that are at the heart of the envisioned testbed community. There are a number of
possible agreement mechanisms that can be mixed and matched to provide the level of IP
protection and federal program oversight required. There is not one particular model for user
facilities that was viewed as most appropriate, but rather a suite of tools that could be used and
changed as needs evolve. For example, Memorandums of Understanding (MOUs) can be
combined with Cooperative Agreements and Cooperative Research and Development Agreements
(CRADAs) in a flexible structure, which can also be modified as needed. DOE also has experience
with a number of management models, such as the hub model and various user facility models. In
addition, there are models for international interactions, such as those with CERN. The upshot of
the discussion was that DOE can establish a legal construct for collaboration among stakeholder
groups appropriate to the goals and changing needs of a quantum testbed and its user community.
Industry works best when needs are clear. The representative from Intel shared philosophical
insight on what makes for high-performing government/industry interactions. Those comments
were prefaced by the understanding that they represented a single company’s perspective, although
several of the other industry representatives in the room also agreed in principle. One key tenet
was that the industrial sector works best when it has a clear understanding of the needs of
government. Thus, one of the roles that government can play is to set strategy, assemble the larger
team, and define roles and responsibilities. This is critically important when the technology is
immature and the timeline is decadal, like in the field of QC. The vision for a quantum testbed
could help catalyze this type of long-term planning. Industry would be both a user of the
technology advances produced by the testbed team and an active participant in providing
technology to the quantum testbed.
There was discussion about the places that various industries would contribute in the context of
the hardware/software stack. The lower layers are hardware-centric, and composed of individual
qubits, qubit assemblies, memory elements, error corrected logical qubits, and finally the
architecture of logical qubits. Interwoven through this stack are control layers to drive individual
physical qubits and build the logical qubit architectures. Sitting above the hardware layer(s) are a
number of as yet undefined software interface levels culminating in a traditional high level
programming interface layer that could look, for example, like Python. One often-mentioned role
for government is to help develop these upper layers, including standardized tests to compare
machines and stress test them on challenging applications. There was a recognition that
government also has an interest in understanding how the hardware layers function and in funding
basic research on those topics.
5.2 Constructing Functional Quantum Computers
The coupling of industry, academia and government engenders a successful ecosystem in which
to develop a functional quantum computer. Since quantum computers are today at a low technical
readiness level, the design and delivery of a system that can tackle DOE-relevant problems
becomes both an exciting research opportunity and a challenging path toward functionality.
Working together, industry, academia and the national laboratories have ideas for the initial
objectives but after that the path forward becomes less clear. The goal of this session was to
explore the near term objectives and look for common themes. Speakers from D-Wave and
QxBranch were invited to share their perspectives and lead a group discussion that led to the
following observations:
Hurdles depend on the hardware. For example, the D-Wave representative noted that control of
the annealing cycle and a need to increase connectivity among qubits remains an important
challenge. Another hurdle identified during this session involves the control of correlated errors
which are relevant to atomic and superconducting instantiations of a quantum computer. In
classical Monte Carlo algorithms, the control of bias and error correlation must be handled with
care. In quantum computing, we do not yet have a clear understanding of how errors correlate or
propagate on larger systems, and we cannot extrapolate from current few-qubit systems. One clear
objective of a QC testbed could be to explore this important problem and quantify how errors affect
answers.
Application and algorithm development will be an important aspect of a QC testbed. The
construction of algorithms will depend on the underlying technology, at least at the early stages of
quantum computing. Different hardware platforms could enable the testing of multiple algorithms
that we know work classically and provide a way to engage a broad community to utilize the
platform. Questions for users might be: my algorithm works on a classical machine; can a QC
provide a time-to-solution that is faster, or enable a larger problem to be solved? Classically, we
know what to expect from certain algorithms; what will be the quantum analogue, and how does
one validate results from a QC simulation? This user-driven approach to the testbed can also enable
an early adopter model within DOE which has a similar feel to the early user programs at
Leadership Computing Facilities.
Discussion of how the response to technical challenges and the unpredictable outcome of current
scientific research might influence the future of the field pointed toward several other
considerations. A carefully constructed and managed feedback loop should be developed so that
hardware can be improved upon to solve problems of interest to the DOE scientific community.
While industry is a willing partner, at some point QCs will need to generate profit. This also argues
for a multiple testbed approach as the government needs to consider level playing fields as QC
technology matures. For the DOE, the determining factor for the future of the field will be whether
specific mission relevant problems can be solved. Success for industry involves turning a research
opportunity into a profitable business.
6 Summary
The Quantum Testbed Stakeholder Workshop brought together a broad constituency of
practitioners from industry, academia, national laboratories, and government to explore the critical
program elements that are required to realize the promise of quantum computing. The goals of the
workshop included identifying individual institutional capabilities in quantum computing hardware,
assessing their use for near-term applications, and then identifying the critical elements that will
be needed to advance quantum computing for scientific applications in the
next five years. Equally important to assessing the technical state of the field was the sharing of
best practices for the organization and management of a testbed system, including elements of
workforce development and building stronger relationships within the broader research
community. With input from presentations and discussion, this workshop report provides a
summary and assessment of the many elements that will need to be incorporated into a testbed to
effectively weave together a diverse, high performing team to advance the state of the art in
quantum computing and realize the long term economic and scientific promise of QC.
The US government, and the ASCR office in particular, has a leading role to play in providing a
framework for intentional development of quantum computing systems. A testbed could serve as
a catalyzing force to bring the community together and focus it on the challenges of blending
cutting edge science with the systems engineering to enable the application of QC to solve
problems of interest to DOE. Those problems, in areas such as quantum chemistry, materials
science, or matter in extreme environments, which are more fully discussed in prior ASCR workshops
[48, 49], are far beyond the capability of any envisioned HPC machine and rely on the unique
entangled processing of a QC.
academia to encourage transparency while these precompetitive technologies are being developed,
lower the barriers for entry into the field by medium and small sized companies or academic
research groups, and foster the establishment of metrics and standards to effectively compare the
performance of engineering design tradeoffs. This extends to the development of a common
language to characterize performance, as well as the training of the next cadre of quantum
computing scientists and engineers who will carry the field forward.
One of the major barriers that was identified was the immaturity of not only the individual QC
technology building blocks (including hardware and software), but also the system architectures
and general infrastructure required to make effective design choices. The implication of this
current state-of-the-art is that the principle of co-design arguably becomes even more important
than it might be for conventional HPC design. Current QC technologies are working at the
individual physical qubit level and have yet to demonstrate the robust, error-corrected logical gates
that will be required for future architectures. Thus, a quantum testbed will require an extension of
the principles of co-design.
The development of a quantum testbed and the realization of the many economic and scientific
benefits of having an operational system are well aligned with the long-term nurturing of advanced
computing technologies within the ASCR office. It is fully expected that the problems that can be
addressed by a QC will find applications in many of the other offices of DOE/NNSA. The
demonstration and growth of a quantum computing ecosystem is a long-term undertaking, with a
time horizon of a decade or more, which will require the close coupling of many diverse skills
and entities. This is precisely the type of challenge that ASCR is effectively structured to bring to
completion.
References
[1] On the Role of Co-design in High Performance Computing, Richard F. Barrett, Shekhar
Borkar, Sudip S. Dosanjh, Simon D. Hammond, Michael A. Heroux, X. Sharon Hu, Justin
Luitjens, Steven G. Parker, John Shalf, and Li Tang, Transition of HPC Towards Exascale
Computing, E.H. D’Hollander et al. (Eds.) IOS Press, 2013
[2] Co-Designing a Scalable Quantum Computer with Trapped Atomic Ions, K. R. Brown, J.
Kim, and C. Monroe, arXiv:1602.02840v1, npj Quantum Information (2016) 2, 16034
[3] T. Häner and D. S. Steiger, 0.5 Petabyte Simulation of a 45-Qubit Quantum Circuit,
arXiv:1704.00127 (2017).
[4] D. B. Trieu, Large-scale simulations of error prone quantum computation devices, Fortsh.
Jülich 2 (2009).
[5] Feynman, R. P., Int. J. Theor. Phys. 21, 467 (1982)
[6] J. Ignacio Cirac and Peter Zoller, Goals and opportunities in quantum simulation, Nature
Physics, 8, 264, (2012).
[7] I. Buluta and F. Nori, Quantum Simulators, Science 326, 108 (2009)
[8] Jae-yoon Choi, Sebastian Hild, Johannes Zeiher, Peter Schauß, Antonio Rubio-Abadal,
Tarik Yefsah, Vedika Khemani, David A. Huse, Immanuel Bloch, Christian Gross,
Exploring the many-body localization transition in two dimensions, Science 352, 1547
(2016).
[9] Quantum algorithms: an overview, Ashley Montanaro, npj Quantum Information (2016) 2,
15023
[10] A brief survey of quantum programming languages, P. Selinger, Proc. Seventh Int’l Symp.
Functional and Logic Programming, 1, 2004.
[11] Quantum programming languages: survey and bibliography, S. Gay, Math. Structures in
Computer Science 16:581–600, 2006.
[12] Foundations of Quantum Programming, Mingsheng Ying, Morgan Kaufmann (2016)
[13] A layered software architecture for quantum computing design tools, Krysta M Svore,
Alfred V Aho, Andrew W Cross, Isaac Chuang, and Igor L Markov, Computer, 74 (2006)
[14] A Software Methodology for Compiling Quantum Programs, Thomas Haner, Damian S.
Steiger, Krysta Svore, and Matthias Troyer, arXiv:1604.01401v1
[15] Superconducting Circuits for Quantum Information: An Outlook. M. H. Devoret and R. J.
Schoelkopf. Science 339, 1169 (2013).
[16] Computing prime factors with a Josephson phase qubit quantum processor. E. Lucero, R.
Barends, Y. Chen, J. Kelly, M. Mariantoni, A. Megrant, P. O’Malley, D. Sank, A.
Vainsencher, J. Wenner, T. White, Y. Yin, A. N. Cleland and John M. Martinis. Nature
Physics 8, 719–723 (2012).
[17] Coherent coupled qubits for quantum annealing. S. J. Weber et al., arXiv:1701.06544.
[18] Evidence for quantum annealing with more than one hundred qubits. S. Boixo, T. F.
Rønnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis and M. Troyer.
Nature Physics 10, 218–224 (2014).
[19] On-chip quantum simulation with superconducting circuits. A. A. Houck, H. E. Türeci and
J. Koch. Nature Physics 8, 292–299 (2012).
[20] T. P. Harty, D. T. C. Allcock, C. J. Ballance, L. Guidoni, H. A. Janacek, N. M. Linke, D.
N. Stacey, and D. M. Lucas, Phys. Rev. Lett. 113, 220501 (2014).
[21] R. Noek, G. Vrijsen, D. Gaultney, E. Mount, T. Kim, P. Maunz, and J. Kim, Opt. Lett. 38,
4735 (2013).
[22] R. Blume-Kohout, J. K. Gamble, E. Nielsen, K. Rudinger, J. Mizrahi, K. Fortier, and P.
Maunz, Nat. Commun. 8 (2017).
[23] J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley,
E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 117, 060505 (2016).
[24] C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, Phys. Rev. Lett.
117, 060504 (2016).
[25] S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Nature
536, 63 (2016).
[26] D. Nigg, M. Müller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-
Delgado, and R. Blatt, Science 345, 302 (2014).
[27] T. Monz, P. Schindler, J. T. Barreiro, M. Chwalla, D. Nigg, W. A. Coish, M. Harlander,
W. Hänsel, M. Hennrich, and R. Blatt, Phys. Rev. Lett. 106, 130506 (2011).
[28] C. Monroe and J. Kim, Science 339, 1164 (2013).
[29] D. Kielpinski, C. Monroe, and D. J. Wineland, Nature 417, 709 (2002).
[30] C. Monroe, R. Raussendorf, A. Ruthven, K. R. Brown, P. Maunz, L.-M. Duan, and J. Kim,
Phys. Rev. A 89, 022317 (2014).
[31] D. A. Hite, Y. Colombe, A. C. Wilson, K. R. Brown, U. Warring, R. Jördens, J. D. Jost, K.
S. McKay, D. P. Pappas, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 109, 103001
(2012).
[32] P. L. W. Maunz, High Optical Access Trap 2.0. Sandia National Laboratories (SNL-NM),
Albuquerque, NM (United States), 2016.
[33] K. K. Mehta, C. D. Bruzewicz, R. McConnell, R. J. Ram, J. M. Sage, and J. Chiaverini,
Nat. Nanotechnol. 11, 1066 (2016).
[34] N. M. Linke, D. Maslov, M. Roetteler, S. Debnath, C. Figgatt, K. A. Landsman, K. Wright,
and C. Monroe, arXiv:1702.01852 (2017).
[35] Sims, Hunter, et al. "Understanding the behavior of buried Bi nanostructures from first
principles." Bulletin of the American Physical Society 62 (2017).
[36] Dutt, MV Gurudev, et al. "Quantum register based on individual electronic and nuclear
spin qubits in diamond." Science 316, 1312-1316 (2007).
[37] Trauzettel, Björn, et al. "Spin qubits in graphene quantum dots." Nature Physics 3, 192-
196 (2007).
[38] F. R. Bradbury, Maika Takita, T. M. Gurrieri, K. J. Wilkel, Kevin Eng, M. S. Carroll, and
S. A. Lyon, Efficient Clocked Electron Transfer on Superfluid Helium, Phys. Rev. Lett.
107, 266803 (2011).
[39] McMahon, Peter L., et al. "A fully-programmable 100-spin coherent Ising machine with
all-to-all connections." Science, aah5178 (2016).
[40] Takeda, Yutaka, et al. "Boltzmann sampling for an XY model using a non-degenerate
optical parametric oscillator network." arXiv preprint arXiv:1705.03611 (2017).
[41] P. G. Kwiat, J. R. Mitchell, P. D. D. Schwindt, and A. G. White, “Grover's search
algorithm: An optical approach,” Journal of Modern Optics 47 , 2-3, (2000).
[42] Barz, Stefanie, et al. "Demonstration of blind quantum computing." Science 335, 303-308
(2012).
[43] Cai, X-D., et al. "Entanglement-based machine learning on a quantum computer." Physical
Review Letters 114, 110504 (2015).
[44] Yao, Xing-Can, et al. "Experimental demonstration of topological error correction." Nature
482, 489-494 (2012).
[45] Lanyon, Benjamin P., et al. "Towards quantum chemistry on a quantum computer." Nature
Chemistry 2, 106-111 (2010).
[46] Ma, Xiao-song, et al. "Quantum simulation of the wavefunction to probe frustrated
Heisenberg spin systems." Nature Physics 7, 399-405 (2011).
[47] D.G. Angelakis et al, “Mimicking Interacting Relativistic Theories with Stationary Pulses
of Light”, Phys. Rev. Lett. 110, 100502 (2013).
[48] ASCR Report on Quantum Computing for Science. Alán Aspuru-Guzik, Wim van Dam,
Edward Farhi, Frank Gaitan, Travis Humble, Stephen Jordan, Andrew Landahl, Peter
Love, Robert Lucas, John Preskill, Richard Muller, Krysta Svore, Nathan Wiebe, Carl
Williams, Ceren Susut. (2015)
[50] Quantum Complexity Theory. E. Bernstein and U. Vazirani. SIAM J. Comput. 26, 1411-
1473 (1997).
Appendix A: Program Committee
The workshop agenda was developed by a program committee from Industry, National
Laboratory and Government stakeholders.
Jonathan Carter Lawrence Berkeley National Laboratory
David Dean Oak Ridge National Laboratory
Greg Hebner Sandia National Laboratory
Jungsang Kim Duke University
Andrew Landahl Sandia National Laboratory
Peter Maunz Sandia National Laboratory
Raphael Pooser Oak Ridge National Laboratory
Irfan Siddiqi Lawrence Berkeley National Laboratory and
U.C. Berkeley
Jeffrey Vetter Oak Ridge National Laboratory
Appendix B: Workshop Agenda
Tuesday, February 14, 2017
8:00-9:00 Continental breakfast and registration
9:00-9:30 Welcome and Introduction – DOE Perspective
9:30-10:00 Plenary 1: Quantum Processors Based on Ion Traps – Chris Monroe, University of Maryland
10:00-10:30 Plenary 2: Quantum Processors Based on Superconducting Qubits – Will Oliver, MIT/Lincoln Labs
10:30-11:00 Break
11:00-11:30 Plenary 3: Near-term Practical Applications of Quantum Devices – Jarrod McClean, LBNL
11:30-12:00 Plenary 4: Evaluating the Efficacy of Quantum Hardware – Robin Blume-Kohout, SNL
12:00-1:00 Working lunch
1:00-2:15 Lab Presentations 1 ANL, FNAL, LANL
2:15-2:30 Break
2:30-3:45 Lab Presentations 2 LBNL, LLNL, ORNL
3:45-4:00 Break
4:00-5:15 Lab Presentations 3 PNNL, SLAC, SNL
5:15-5:30 Wrap-up, instructions for Day 2
2. Tools for making a testbed usable
3. Trapped ion qubits
4. Interconnects
5:15-5:30 Wrap-up, instructions for Day 3
Appendix C: Breakout session guidance
The following is the guidance that was provided to the breakout session leaders to aid in leading
their sessions.
Wednesday, February 15, 2017 breakout topics
Programmatic Breakout Discussions (9:30 – 12:00)
Topic 1: Best practices for management of and access to a quantum computing testbed
In this breakout session, we will discuss how existing facility models and practices could be
adapted for use by a quantum computing testbed. The session will begin with thoughts from a
panel with familiarity of the operation of a number of facilities and then flow into a free discussion
of the following questions:
1. How can lessons learned from other facilities and testbeds inform implementation
of a quantum computing testbed? Does the management and access model depend
on the testbed implementation?
2. What are the advantages and disadvantages of various approaches to
implementing a quantum testbed and providing user access?
3. How does the answer to the questions above depend on the technical readiness
level of quantum computing technology? Given that quantum computers are not
yet off-the-shelf systems, what is the proper balance between efforts in hardware,
software, architecture, and systems engineering?
4. If the quantum testbed were to grow beyond its initial implementation, what
would be important factors to consider in evaluating and prioritizing future
technologies, scaling paths, and possible upgrades?
5. What are the advantages and disadvantages to different processes, including peer-
reviewed proposals, to give users access to the testbed?
Topic 2: Staffing and workforce considerations for a quantum computing testbed
In this breakout session, we will discuss staffing concerns for a quantum testbed. We will also
discuss possible roles for a quantum testbed in developing a workforce to meet future DOE needs
in quantum information science. To facilitate discussions, we will flesh out answers to a set of
questions.
1. What types of scientific and technical expertise will be required during the first year
of testbed deployment and operation? How does this depend on the model chosen
for testbed implementation?
2. How will the initial staffing needs evolve over time as qubit technologies mature
and commercial devices become available?
3. What staffing models from other facilities or testbeds could be adapted for a
quantum testbed and what are their advantages and disadvantages?
4. How could one utilize the testbed for workforce development?
5. How could one design a successful mode of operation that produces both science
and well-trained scientists who would then continue R&D in the quantum
computing arena?
Topic 3: User community development and interactions for a quantum computing testbed
In this breakout session, we will discuss the user community for a quantum computing testbed and
strategies for successful interaction between the testbed and its users. The session will begin with
thoughts from a panel representing a variety of different perspectives and flow into a free
discussion of the following questions:
1. Which scientific communities are likely to be the first users of a quantum
computing testbed and why? How will the relevance of a quantum testbed to
various domain sciences change as technical capabilities of a testbed improve
over time?
2. What are different strategies for engaging new user communities and bringing
them up to speed? To what extent might different strategies be appropriate for
different communities?
3. User groups can serve many functions in keeping a facility or testbed productive.
What are the advantages and disadvantages of different user group models?
4. What are the most significant hurdles, technical or otherwise, to making early-stage
quantum devices accessible to users who are not themselves hardware experts?
In this session, there will be two talks, one on classical digital co-design and one on some of the
first efforts in quantum co-design. This will be followed by a discussion which will attempt to
address the following questions:
1. What communities must be brought together for effective co-design of a quantum
testbed? How can a testbed help to bring these communities together?
2. What standards, interfaces, etc., for hardware, software, theoretical and mathematical
models are needed to enable the co-design community to effectively communicate
goals, requirements, tradeoffs, limitations, etc. with each other?
3. What are the elements of testbed operation that will enable rapid and effective
iteration and improvement of hardware, software, and simulations?
4. Model and algorithm development; fundamental device engineering; and software
(end-user facing, "middleware", and at the device level) are all key for successful
co-design. What are some of the key advances required in these areas for the next
2- and 5-years?
Technical Breakouts (2:30 – 3:45)
Models for system design and testing
In this breakout session, we will discuss software tools for designing and evaluating the
performance of quantum computing hardware. The session will begin with a few brief
presentations that lead into a discussion of the following questions:
1. What software tools exist for design and evaluation of systems of qubits, quantum
simulation algorithms, etc.? What are the inherent limitations of these tools? What
problems are they well-suited to address and what problems can only be explored
with hardware?
2. To what extent are tools and techniques for design and evaluation of early-stage
classical computing technology applicable to quantum computing?
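For concreteness, the following is a minimal sketch, assuming only Python and NumPy (and not tied to any of the tools named here), of the classical state-vector simulation that underlies many design and evaluation tools for small systems of qubits. Because the state vector doubles in size with every added qubit, direct simulation of this kind is limited to small systems, which is one reason some questions can only be explored with hardware.

import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)                                   # Identity

# Two-qubit CNOT (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to qubit 0, then CNOT: this prepares the Bell state (|00> + |11>)/sqrt(2)
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I) @ state
state = CNOT @ state

# Measurement probabilities in the computational basis
probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], np.round(probs, 3))))
# Expected: {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}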
Speakers:
1. Adam Meier, GTRI
Testbed Modeling and Validation
2. Anastasiia Butko, Lawrence Berkeley National Lab
Towards Scalable Quantum Architecture Simulation
3. Adolfy Hoisie, Pacific Northwest National Lab
The CENATE Approach to Testbeds
Superconducting qubits
In this breakout session, we will discuss superconducting qubits as a potential technological
foundation for a quantum testbed. The session will begin with a few brief presentations that lead
into a discussion of the following questions:
1. What is the scaling potential for quantum computing devices based on
superconducting qubits? What factors limit scalability?
2. What enabling technology will be important for advancing quantum computing
with superconducting qubits? Please be specific.
3. What are the advantages and disadvantages of superconducting qubits for a
quantum testbed?
4. What computing model, size, performance, and qubit connectivity are of value for
a superconducting qubit testbed?
5. Are there scientific applications to which superconducting qubits are particularly
well or poorly suited?
Speakers:
1. Irfan Siddiqi, Lawrence Berkeley National Lab
Scaling up Multi-qubit Circuits for Quantum Simulation
2. Will Oliver, MIT/Lincoln Lab
Superconducting Qubit Testbed Facility
Characterization, validation, and verification
In this breakout session, we will discuss the characterization, verification, and validation needs of
a quantum testbed as well as the role a testbed could potentially play in developing validation and
verification capabilities. The session will begin with a few brief presentations that lead into a
discussion of the following questions:
1. What verification and validation protocols are available for use with a quantum
testbed? To what extent are these able to predict the performance of quantum
algorithms on a testbed system?
2. What characterization capabilities will a quantum testbed need? How does this
depend on the specific hardware instantiation?
3. How might a testbed be used to advance research in validation and verification?
What are the hardware requirements for a testbed capable of advancing this
research?
4. What are the quantum control capabilities needed for calibration, verification, and
validation? Do these differ from control capabilities needed to implement
algorithms?
Speakers:
1. Andrew Landahl, Sandia National Lab
Demonstrating Fault-Tolerant Quantum Error Correction with a Small Testbed
2. Michael Biercuk, University of Sydney
Quantum Control Engineering for Quantum Testbeds
3. Scott Pakin, Los Alamos National Lab
Physical Characterization of Quantum Testbeds
Session Chair: Robin Blume-Kohout, Sandia National Lab
Trapped ion qubits
In this breakout session, we will discuss trapped ions as a potential technological foundation for
a quantum testbed. The session will begin with a few brief presentations that lead into a
discussion of the following questions:
1. What is the scaling potential for quantum computing devices based on trapped ions?
What factors limit scalability?
2. What enabling technology will be important for advancing quantum computing
with trapped ions? Please be specific.
3. What are the advantages and disadvantages of trapped ions for a quantum testbed?
4. What computing model, size, performance, and qubit connectivity are of value for
a trapped ion testbed?
5. Are there scientific applications to which trapped ions are particularly well or
poorly suited?
Speakers:
1. Thomas Monz, Innsbruck
Technical Considerations for an Ion-trap-based Quantum Testbed
2. Jeremy Sage, MIT/Lincoln Lab
Technologies for a Robust, Scalable Trapped-ion Quantum Testbed
3. Matthew Blain, Sandia National Lab
Micro-fabricated Ion Traps for Scalable Quantum Information Processing
Thursday, February 16, 2017 Breakout Topics
Industry Panel
Representatives from large and small companies engaged in developing quantum computing and
related technologies will share their individual views on the near-term promise of quantum
computing, the major challenges to developing this technology, and the role government agencies
might play in advancing the development of quantum computers. The panel will also address
questions from the audience.
The panel will include representatives from: IBM, Google, Rigetti Computing, IonQ, ColdQuanta,
and Quantum Circuits, as well as a representative from one of DOE’s labs with significant
experience partnering with industry in the classical digital computing space.
Industry Breakouts:
After the initial industry panel, workshop participants will bring their lunch to one of the following
breakout sessions. Each breakout will begin with one or two presentations that lead into a general
discussion of the breakout theme.
Topic 1: Roles for Government and Industry in a National Quantum Computing Ecosystem
Speakers and Discussion Leaders: Anne Matsuura (Intel) and Brian Lally (DOE)
The discussion will focus on two areas:
1. Ways individual workshop participants envision government getting involved
2. Specific examples of government-industry partnerships that could be developed and what
these partnerships could accomplish.
Additional discussion topics could include (but are not limited to): industry interactions with DOE
testbeds, IP management, and government role in transitioning technology.
Additional discussion topics could include (but are not limited to): system design and operating system
development, collaboration between system integrators and key component vendors, and the roles of
national labs and government agencies.
Appendix D: Acronym List
ALCF Argonne Leadership Computing Facility
ANL Argonne National Laboratory
AP Advisory Panel
API Application Program Interface
ASCR Advanced Scientific Computing Research Program
BES Basic Energy Sciences
CINT Center for Integrated Nanotechnologies
CMOS Complementary Metal Oxide Semiconductor
CRF Combustion Research Facility
DOE Department of Energy
FNAL Fermi National Accelerator Laboratory
FTQEC Fault-Tolerant Quantum Error Correction
GST Gate Set Tomography
GTRI Georgia Tech Research Institute
HPC High Performance Computing
LANL Los Alamos National Laboratory
LBNL Lawrence Berkeley National Laboratory
LLNL Lawrence Livermore National Laboratory
MESA Microsystems and Engineering Sciences Applications Complex
NSF National Science Foundation
ORNL Oak Ridge National Laboratory
OS Office of Science
PNNL Pacific Northwest National Laboratory
QC Quantum Computing
QCVV Quantum Characterization, Verification, and Validation
QTSW Quantum Testbed Stakeholder Workshop
SLAC SLAC National Accelerator Laboratory
SNL Sandia National Laboratories
STEM Science, Technology, Engineering and Math