“We are focused not only on achieving exascale computing but on ushering in a new era of computational science for the nation.”
So announced Doug Kothe, project director for the Exascale Computing Project (ECP) and deputy associate lab director of the Computing and Computational Sciences Directorate at the Oak Ridge National Laboratory (ORNL), the opening speaker at the 2018 Oil and Gas High Performance Computing Conference at Rice University. “We want exascale systems to do things we can’t do today, and that’s a very big ambition,” he said, sounding a theme echoed by other speakers.

Hosted March 12-13 by the Ken Kennedy Institute for Information Technology, the 11th annual event drew more than 500 leaders from the oil and gas industry, the high-performance computing and information technology industries, and academia.

“We are an annual conference, a place where people in business, academia and the national labs can regularly get together and network,” said Jan Odegard, executive director of the Ken Kennedy Institute and associate vice president for research computing and cyberinfrastructure at Rice.
A computing system with exascale capacity can perform a billion billion calculations per second. That’s a 1 followed by 18 zeroes. The U.S. Department of Energy’s (DOE) first exascale computer, A21, built by Intel and Cray, is expected to become operational in 2021 at Argonne National Laboratory. Frontier at ORNL and El Capitan at Lawrence Livermore National Laboratory are scheduled to go online shortly thereafter.

Analysts expect China to reach exascale capability by 2020, followed soon after by the European Union and Japan. “Exascale is more than merely a technological achievement. It will become an important, necessary part of our lives, with applications in national security, energy security, economics, scientific discovery, Earth systems and health care, among other things,” Kothe said.

Andrew Siegel, director of application development for the ECP and a senior scientist at Argonne, said, “We are developing and enhancing the predictive capability of applications critical to the DOE. The good news is that standards are evolving aggressively to meet exascale.”

Exascale’s greater computing power should enable researchers to run more complicated simulations faster, improving their predictions and reducing models’ reliance on guesswork. “We expect to streamline the modeling process. DOE has 25 mission-critical energy applications,” Siegel said. The ECP comprises 25 application teams and 60 software projects.

Ahmed Hashmi, the global head of upstream technology for BP, said his company, in response to the drop in oil prices, has revised its production optimization strategy, with a new emphasis on improving algorithms and boosting computer capacity.

“BP works toward improving our ability to predict outcomes. That means dealing with the inevitable subsurface uncertainties when starting a new drilling project,” Hashmi said.

Like other speakers, Hashmi cited the growing importance of data science and data scientists: “Our people are devoting more time to analyzing data and less to looking for it.” BP now owns one of the world’s largest supercomputers dedicated to commercial research. The machine delivers 9 petaflops and has 1,140 terabytes of memory and 30 petabytes of storage, the equivalent of more than 500,000 iPhones. As Hashmi put it, the system is “more than 18 times more powerful than the fastest supercomputer a decade ago.”

“A data scientist can’t fix a broken business model. He can’t solve all your big problems,” said Jeremy L. Graybill, senior manager of data science and advanced analytics for the Anadarko Petroleum Co., “but when we’re integrated with subject-matter experts, we have shown we can reduce high-quality screening time from six months to 30 days.”

Graybill said Anadarko has roughly 1.72 billion barrels of oil equivalent in proven reserves, putting it among the world’s largest independent exploration and production companies.

As moderator of a panel discussion, Addison Snell, CEO of Intersect360 Research, said, “Industry is ready for exascale, and I think oil and gas will get there first,” and Siegel agreed. Other panelists suggested such applications as risk management analysis in finance, precision medicine and modeling the human brain.
“Think of all the things we can’t do today that we will be able to do in the future. People will soon be doing things with exascale they never could before,” said Nicolas Dubé, the chief strategist for high-performance computing at Hewlett Packard. He noted that the cost of operating an exascale supercomputer for one day could reach $100,000.
Also speaking were Kevin Kissell, technical director in the Office of the CTO at Google, who discussed “New Paradigms for Large-Scale Computing on the Google Cloud Platform” and the future of quantum computing, and Scott Morton, former head of Geophysical Technology Development at Hess and an adjunct faculty member in the Department of Computational and Applied Mathematics at Rice.
“Our brains are wired for Newtonian physics,” Kissell said. “Quantum phenomena defy all our intuitions.”

This year’s conference included two tutorial sessions, three keynotes, 10 plenary talks, 12 papers in two parallel sessions and 25 student poster presentations.
In addition to Odegard, the workshop organizers included Sverre Brandsberg-Dahl, PGS; Simanti Das, ExxonMobil; Erik Engquist, Rice University; Keith Gray, BP; Maxime Hugues, Total; Alex Loddoch, Chevron; Scott Morton, Rice University; Ernesto Prudencio, Schlumberger; Michael Ritter, Shell; Paul Singer, Statoil. Twenty-nine sponsors helped fund the conference.