“Deciding where to drill hasn’t been getting any easier. We still need accurate, detailed subsurface images, computed from seismic survey data.”
So said Alan Lee, corporate vice president of research and advanced development at AMD, stating the recurrent theme of the 10th annual Oil & Gas High-Performance Computing Conference at Rice University. Hosted March 15-16 by the Ken Kennedy Institute for Information Technology, the event drew more than 435 leaders from the oil and gas industry, the high-performance computing and information technology industries, and academia.
Calling the 10th-anniversary gathering “a family reunion,” Jan E. Odegard, executive director of the Kennedy Institute and associate vice president, office of information technology, said in his opening remarks: “We’re not a fly-by-night operation. I have referred to 2015 and 2016 as the ‘Wile E. Coyote years’ for the oil and gas industry, but we’re not plummeting.”
In his plenary talk, “Big Compute: Under the Hood,” Lee stressed the need for what he called “big compute architectures”: “All the easy oil and gas has already been found. We need much better velocity models and we need to look at things more probabilistically.”
John Eastwood, geophysics manager at ExxonMobil for Seismic Imaging/Processing/FWI Research and Acquisition Research, echoed the theme in his opening keynote address, “High Performance Computing and Full Waveform Inversion”:
“We’re seeing a paradigm shift from the conventional processing we’ve used for building models of the subsurface. We need to use the entire seismic wavefield to generate high-resolution velocity models for imaging.”
Greater accuracy of imaging, Eastwood said, reduces the expense and environmental cost of drilling additional, sometimes unproductive wells. ExxonMobil’s proprietary algorithms and use of supercomputers enable the company to exploit the promise of full wavefield inversion and reveal the actual geological and geophysical properties of subsurface rock layers.
“The trend as we see it is to use more bandwidth the more complicated the geology becomes,” Eastwood said. “This technology requires a collaboration between geophysical researchers, software engineers and systems engineers. We have to maximize HPC capabilities. Computing advances enable imaging technology to progress.”
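The full waveform inversion workflow Eastwood describes is, at its core, an iterative loop: forward-model synthetic data from a candidate velocity model, measure the misfit against the recorded wavefield, and update the model to shrink that misfit. The sketch below is a deliberately simplified stand-in, not ExxonMobil’s proprietary algorithm: real FWI forward-models the full wavefield through the wave equation, while here the hypothetical “forward model” is just vertical travel times through flat layers, which preserves the same predict-misfit-update structure.

```python
import numpy as np

# Minimal illustrative sketch of an FWI-style inversion loop.
# Stand-in physics: cumulative vertical travel time through flat layers.

def forward(thickness, slowness):
    """Predicted travel time (s) to the base of each layer."""
    return np.cumsum(thickness * slowness)

def invert(thickness, observed, n_iter=500, lr=0.5):
    """Recover layer slownesses by gradient descent on the data misfit."""
    s = np.full(len(observed), 1.0 / 2000.0)   # initial guess: 2000 m/s everywhere
    scale = lr / np.sum(thickness ** 2)        # keeps the step size stable
    for _ in range(n_iter):
        residual = forward(thickness, s) - observed
        # Gradient of 0.5*||residual||^2 w.r.t. each layer's slowness:
        # layer j affects every arrival at or below it, hence the reversed cumsum.
        grad = thickness * np.cumsum(residual[::-1])[::-1]
        s -= scale * grad
    return 1.0 / s                             # back to velocities (m/s)

# Synthetic truth: three 100 m layers at 1500, 2500 and 3500 m/s
thickness = np.array([100.0, 100.0, 100.0])
true_v = np.array([1500.0, 2500.0, 3500.0])
data = forward(thickness, 1.0 / true_v)
recovered = invert(thickness, data)
```

Even in this toy setting the trade-off Eastwood points to is visible: each iteration costs one forward simulation, so the fidelity of the recovered model is bounded by how much forward modeling the available compute can afford.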
In her plenary talk, “Things to Consider: the Changing Landscape of HPC and Data Center,” Debra Goldfarb, chief analyst and senior director of market intelligence for Intel’s Data Center Group, predicted that within two years as much as 60 percent of the world’s data may have migrated to the Cloud. At present, the amount totals about four percent.
“The non-Cloud architecture is shrinking. The industry must be ready for the shift. I can tell you the top companies are offering kids contracts in their sophomore and junior years. They want to be ready,” Goldfarb said.
In just the last year, she noted, much progress has been reported in the development of machine learning (ML) and deep learning (DL). Operators of high-performance computers are developing and running ML/DL workloads on their systems.
“Users and algorithm scientists are optimizing their codes and techniques that run their algorithms, and system architects are working out the challenges they’re facing on various system architectures,” Goldfarb said.
The other keynote speaker, David Keyes, director of the Extreme Computing Research Center at King Abdullah University of Science and Technology in Saudi Arabia, spoke on “Algorithmic Adaptations to Extreme Scale.”
The U.S. Department of Energy’s Exascale Computing Project (ECP) expects the first post-petascale system to be deployed by 2021. That’s faster than the original timeline, and will make the U.S. project more competitive with similar projects underway in China and Japan.
“We must be ready for what is coming. We need the algorithms for where the new architectures are going to be. There will be more burdens on software than on hardware,” Keyes said, and added:
“Algorithms must span the widening gap between ambitious applications and austere architectures. With great computing power comes great algorithmic responsibility.”
Jim Kahle, an IBM Fellow and CTO and chief architect for data-centric deep computing systems at IBM Research in Austin, spoke on “Data Centers Impacts from the Convergence of High Performance and Cognitive Computing.”
“Data is the new basis of computational value,” he said. “We have a lot of work to do. The new technologies may not be ready for high-end applications in time to meet the end of scaling.”
He added: “We have massive data requirements driving a composable architecture for big data, complex analytics, modeling and simulation. Cognitive solutions are getting high-performance computing to work smart, not hard. The fastest calculation is the one you don’t run.”
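Kahle’s line about the fastest calculation being the one you don’t run is commonly realized through result caching, among other techniques. The snippet below is an illustration of that general idea only (the `simulate` kernel is hypothetical, not an IBM technology): an expensive computation runs once per distinct input, and repeats are answered from memory.

```python
from functools import lru_cache

calls = 0  # counts how many times the expensive kernel actually runs

@lru_cache(maxsize=None)
def simulate(params):
    """Stand-in for a costly model run; cached by input tuple."""
    global calls
    calls += 1
    return sum(p * p for p in params)

# 1000 requests, but the identical input means only one real computation.
results = [simulate((1, 2, 3)) for _ in range(1000)]
```

Here `lru_cache` keys results by the argument tuple, so the 999 repeated requests never reach the kernel at all.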
In his plenary talk, Peter Ungaro, president and CEO of Cray Inc., spoke on “Supercomputing: Yesterday, Today and Tomorrow.” He characterized the industry’s present state as “transitional,” and said, “Consider the transition from a ‘lowest-cost’ perspective to a ‘competitive-edge and return-on-investment’ perspective.”
Ungaro said parallelism in computing is now considered mandatory to achieve processing efficiency. “It’s the new normal. Architectures will change to harness the power of ‘wider’ computers. Interconnects must improve rapidly to deal with congestion and throughput.”
Ungaro offered another series of predictions: “Exascale will be the end of the CMOS era. Within five years we will see 10-plus teraflops on a single node. The world is shifting.”
The Oil & Gas High-Performance Computing Conference closed with a panel discussion moderated by John Mellor-Crummey, professor of computer science at Rice. One of the panelists, Peter Braam, CEO of Campaign Storage, stressed the importance of introducing students to Big Computing as early as possible. “It has become the main thing that has happened in the last 30 years,” Braam said.
“Peter Braam really captured the program committee’s larger vision, one of bringing together the three communities -- oil & gas, IT and academics -- to address technology needs, build a community, and support workforce development, the much-needed talent pipeline we will depend on over the next decade,” Odegard said.
This year’s workshop included three tutorial sessions, a mini-workshop, two keynotes, six plenary sessions, six “disruptive technology talks” and 25 student poster presentations.
In addition to Odegard, the workshop organizers included Sverre Brandsberg-Dahl, PGS; Henri Calandra, Total; Simanti Das, ExxonMobil; Erik Engquist, Rice; Keith Gray, BP; Alex Loddoch, Chevron; Ligang Lu, Shell; Scott Morton, Hess Corp.; Ernesto Prudencio, Schlumberger; Paul Singer, Statoil. Thirty-one sponsors helped fund the workshop.