High performance computing is supercharging research into clean energy solutions, providing data to speed or even replace costly experimental research, Vanessa Zainzinger reports
With most of us carrying a powerful little electronic device in our pocket, it’s hard to imagine that there are computers that still take up entire rooms. Yet the world’s most powerful computer, Summit, occupies 5600ft² of floor space at the Oak Ridge National Laboratory in Tennessee. It has a peak performance of 200 petaflops, or 200,000 trillion calculations/sec.
Summit and other so-called supercomputers – systems holding thousands of processors and servers in one machine – are changing the way scientists research gravitational waves, combat neurodegenerative diseases, monitor climate change and even make energy production more sustainable.
‘High performance computing [HPC] is another tool in the toolbox of all researchers, and in this digital age where information is more valuable than oil, it is essential that scientists are capable of exploiting its power for their own benefits,’ says David Schibeci, head of supercomputing at the Pawsey Supercomputing Centre in Perth, Australia. ‘For those in the energy sector, it is key to solving the energy needs of the world, from making combustion engines more efficient to harnessing the power of the ocean to provide clean energy.’
Supercomputers can, for example, make wind farms more competitive by providing a better understanding of atmospheric dynamics. In geoscience, they can help map seismic activity and provide essential data to aid the oil and gas industry in drilling operations. And they can help harness electricity from wave power by producing mathematical models of the fluid mechanics of water, air and membranes.
Image: Pawsey supercomputer
At Pawsey, several research teams are using some of the most powerful supercomputers in the Southern Hemisphere to take their sustainable energy projects to the next level. One of them, headed by Richard Sandberg from the University of Melbourne, has focused its efforts on gas turbines. Using HPC to simulate and understand gas turbines will help industry design more efficient and reliable systems, Sandberg says. ‘The benefits will be reduced fuel consumption and emissions and better reliability. Some models will also be able to be applied to renewable applications, mainly wind turbines, to improve efficiency and reduce noise emission.’
Gas turbines are already a comparatively clean way to generate power: their CO2 emissions are relatively low, but they release other harmful compounds, such as oxides of nitrogen. Better design is one way to tackle this, Sandberg says. His work also aims to gain more insight into the physical mechanisms responsible for losses and heat transfer in gas turbines. Data collected from HPC simulations has already influenced the design of low-pressure turbines, in terms of how far apart to space rotor and stator blades, he says.
Blue energy
One of the most dynamic areas embracing HPC is blue energy. Otherwise known as osmotic power, blue energy is obtained from the salinity difference between salty sea water and fresh river water in the estuaries where the two mix. It has the potential to become a significant source of electricity but, so far, we lack a way to harness it efficiently.
Researchers in France are using HPC to try to find one. In a breakthrough in 2018, they used two supercomputers to produce molecular simulations showing that a method based on new carbon materials can produce osmotic energy more efficiently than anything tried before (Physical Review X, DOI: 10.1103/PhysRevX.8.021024).
Osmotic power is normally harvested via a semipermeable membrane, which generates energy from the pressure difference created as water molecules pass through it. But this method dissipates too much energy in the process and the membranes are prone to fouling.
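To get a sense of the energy on offer, the sketch below makes a back-of-the-envelope estimate of the osmotic pressure between seawater and fresh water using the ideal van ’t Hoff relation; the salt concentration and temperature are illustrative assumptions, not figures from the French study.

```python
# Rough estimate of the osmotic pressure between seawater and fresh water,
# using the ideal van 't Hoff relation pi = i * c * R * T.
# The concentration and temperature are illustrative assumptions.

R = 8.314           # gas constant, J/(mol*K)
T = 298.0           # temperature, K (about 25 degC, assumed)
c_salt = 600.0      # NaCl concentration of seawater, mol/m^3 (~0.6 M, assumed)
i = 2               # van 't Hoff factor for NaCl (Na+ and Cl- ions)

pressure = i * c_salt * R * T       # osmotic pressure, Pa
head = pressure / (1000.0 * 9.81)   # equivalent height of a water column, m

print(f"Osmotic pressure: {pressure / 1e5:.0f} bar")   # roughly 30 bar
print(f"Equivalent water head: {head:.0f} m")          # roughly a 300 m dam
```

In other words, every litre of river water flowing into the sea carries, in principle, roughly as much energy as if it had dropped from a dam around 300m high.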
The French team introduced an alternative technology for blue energy production, using capacitors with nanoporous carbon electrodes and aqueous electrolytes. With capacitive mixing, which links electricity generation directly to the mixing of salt and fresh water without converters such as turbines, they exploited charge and discharge cycles in solutions of different salinity. When run in reverse, their technique also turned out to be an efficient way to desalinate water, in a process known as capacitive deionisation.
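Broadly speaking, the double-layer capacitance of a nanoporous carbon electrode is higher in salt water than in fresh water, which is what makes such a cycle pay. The sketch below illustrates the idealised energy balance of one capacitive-mixing cycle using made-up capacitance and charge values; predicting realistic values is exactly what the molecular simulations described next are for.

```python
# Idealised capacitive-mixing (CapMix) cycle with made-up numbers.
# The cell is charged in seawater (high capacitance, low voltage for a given
# charge), the electrolyte is swapped for fresh water at constant charge
# (capacitance drops, voltage rises), and the cell is discharged at the
# higher voltage, returning more energy than was put in.

Q = 1.0          # charge moved onto the electrodes per cycle, C (assumed)
C_sea = 12.0     # cell capacitance in seawater, F (illustrative)
C_fresh = 8.0    # cell capacitance in fresh water, F (illustrative)

energy_in = Q**2 / (2 * C_sea)      # work needed to charge the cell in seawater
energy_out = Q**2 / (2 * C_fresh)   # work recovered on discharge in fresh water
net_gain = energy_out - energy_in   # energy harvested from the salinity difference

print(f"Charging cost:   {energy_in * 1000:.1f} mJ")
print(f"Discharge yield: {energy_out * 1000:.1f} mJ")
print(f"Net gain/cycle:  {net_gain * 1000:.1f} mJ")
```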
With help from the MareNostrum supercomputer in Barcelona and the Curie supercomputer near Paris, they have now predicted the capacitance of devices that contain nanoporous carbon materials as the electrode and salty water as the electrolyte.
‘Compared with other molecular simulations in the literature, we are able to perform realistic simulations of these systems taking two key features into account: the complex structure of the carbon electrode and its polarisation by the electrolyte in the presence of a voltage applied between the electrodes,’ explains project lead Benjamin Rotenberg of the French National Centre for Scientific Research (CNRS) and Sorbonne Université in Paris.
Their biggest challenge was that the small size of water molecules forced the team to simulate a huge number of them, at a much higher computational cost, Rotenberg says. Next, the team will simulate other salts to address ion-specific effects, and other carbon structures to investigate their effects on charge storage and salt adsorption.
‘We can now simulate much more complex systems because our code is adapted not only to the GENCI French supercomputers, but also to European supercomputers such as the one in Barcelona,’ Rotenberg says.
Super-simulations
Elsewhere in Europe, the energy industry is already using supercomputers at an industrial scale. Some 60km outside of Milan, Italian energy company Eni’s Green Data Center houses the HPC4, one of the world’s most powerful industrial systems.
The oil and gas industry is well aligned with supercomputing because of the staggering complexity and huge datasets involved in seismic imaging and reservoir simulation. Exploration begins by creating vibrations at the surface of the earth, or on the seabed, and recording the reflected sound waves as data. Large-scale numerical processing then turns these data into images, which geoscientists analyse to determine whether a prospective reservoir contains hydrocarbons and where they are located.
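The imaging step is, at heart, a colossal signal-processing problem. As a rough illustration only (not Eni’s workflow), the classic one-dimensional convolutional model below shows the idea: a reflectivity series describing the contrasts between rock layers is convolved with a source wavelet to give the signal a receiver would record, and imaging is essentially the inverse of that operation at massive scale.

```python
import numpy as np

# Toy 1D convolutional model of a seismic trace (illustrative only).
# A reflectivity series, representing contrasts between rock layers, is
# convolved with a Ricker source wavelet to synthesise the signal a
# geophone would record; real imaging inverts this at enormous scale.

def ricker(freq_hz, dt, length_s=0.128):
    """Ricker ('Mexican hat') wavelet, a common synthetic seismic source."""
    t = np.arange(-length_s / 2, length_s / 2, dt)
    a = (np.pi * freq_hz * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.002                 # 2 ms sampling interval
n_samples = 500            # one second of recording
reflectivity = np.zeros(n_samples)
reflectivity[[120, 260, 410]] = [0.25, -0.15, 0.10]  # three assumed layer boundaries

trace = np.convolve(reflectivity, ricker(25.0, dt), mode="same")
print(f"Synthetic trace: {trace.size} samples, peak amplitude {trace.max():.3f}")
```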
The process is time-consuming and expensive, but the high cost of operating drilling rigs, and the financial losses that come from drilling a well that turns out to be dry, make it invaluable. The companies that most consistently strike energy reserves have a huge competitive advantage.
Keeping this in mind, Eni seems to have struck gold. In 2018, the company announced that HPC4 had executed 100,000 high-resolution reservoir model simulation runs, taking geological uncertainties into account, in a record time of 15 hours. In comparison, most reservoir engineers in the industry can run just a single simulation in a few hours with CPU-based hardware and software. With a peak performance of 18.6 petaflops, HPC4 simulates 15 years’ worth of oil reservoir production in an average of 28 minutes.
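Taken at face value, those figures imply a striking throughput. The quick back-of-the-envelope check below uses only the numbers quoted above:

```python
# Rough throughput implied by the figures Eni quotes (back-of-the-envelope only).

runs = 100_000        # high-resolution reservoir simulations in the ensemble
wall_hours = 15       # wall-clock time for the whole ensemble
print(f"~{runs / wall_hours:,.0f} simulation runs per hour")   # ~6,667 per hour

simulated_years = 15  # reservoir production history covered by one simulation
wall_minutes = 28     # average wall-clock time per simulation
speedup = (simulated_years * 365.25 * 24 * 60) / wall_minutes
print(f"~{speedup:,.0f} times faster than real time")          # roughly 280,000x
```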
The system allows Eni to store and process enormous quantities of data for geophysical imaging, the modelling of oil systems and reservoirs. And it has the potential to be applied much more widely, an Eni spokesperson explains: ‘Just to mention a few of them, [HPC could be applied in the] design of more efficient materials for solar panels, smart energy distribution grid design and management, seismic data processing and imaging oriented toward underground carbon storage monitoring.
‘Moreover, our industry is starting to use machine learning with very promising initial results,’ Eni says. ‘The challenge is the size of the dataset, which can be enormous, hence working on the convergence between high performance computing and machine learning will be crucial.’
As its name suggests, HPC4 is Eni’s fourth supercomputer. HPC1 began running in 2013 but was soon outgrown by the sheer size of the datasets involved, and by the accuracy the industry expected from its results, according to Eni. The company expects this trend to continue. As such, ‘we are already working on the heirs of HPC4’, it says.
Evidently, HPC will have to live up to great expectations in the coming years. Today’s petaflop machines are able to perform more than one quadrillion floating point operations/sec (FLOPS). A quadrillion is a figure so enormous that, every second, such a machine handles a number of operations far exceeding the number of stars in the Milky Way.
And HPC systems are inching towards exascale – supercomputers that will be a thousand times more powerful than today’s petaflop machines. These exaflop machines will have a computing speed equating to a billion billion – a quintillion – calculations/sec.
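To put those prefixes into perspective, the short arithmetic check below assumes the commonly cited estimate of a few hundred billion stars in the Milky Way; everything else follows from the prefixes and the figures quoted earlier in the article.

```python
# How the prefixes compare. The Milky Way star count is the commonly cited
# estimate of a few hundred billion; the rest follows from the prefixes.

petaflop = 1e15                  # floating point operations per second
exaflop = 1e18
summit_peak = 200 * petaflop     # Summit's quoted peak of 200 petaflops

stars_in_milky_way = 4e11        # upper end of the usual estimate (assumed)
print(f"Petaflop vs Milky Way stars: {petaflop / stars_in_milky_way:,.0f}x")   # ~2,500x
print(f"Exaflop vs petaflop:         {exaflop / petaflop:,.0f}x")              # 1,000x
print(f"Summit peak: {summit_peak:.0e} operations/sec")                        # 2e+17
```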
No one seems more excited about this than the researchers using them. Schibeci speculates that supercomputers could find their way into the space of quantum computing, or at least fuse with other flourishing technologies. ‘I think you will see cloud computing, HPC and machine learning infrastructure merging into one so that instead of having to move their code from one system to another, researchers can access a single system to satisfy all their disparate needs,’ he says.
Sandberg sees supercomputers providing data good enough to replace costly experimental research in the energy sector. ‘I do not have a crystal ball, but I would think that we will see even more widespread adoption of new computing architectures worldwide that will allow us to tackle more and more realistic applications numerically,’ he suspects.
And when they do, the science leading us to more sustainable energy will speed up, Sandberg says: ‘This is one of the key advantages of using HPC: reducing the time to solution.’
Powering data centres
While supercomputers are key to more efficient energy generation, they also consume enormous amounts of power. To address this paradox, data centres need to be powered by sustainable, renewable energy themselves.
German automotive firm Daimler has an ambition to bring hydrogen fuel cells to data centres. Currently used in cars, they combine hydrogen and oxygen to produce electricity, with water created as a byproduct. The fuel cells are therefore entirely clean in use, although their manufacture can lead to the release of greenhouse gases. ‘The deployment of an automotive fuel cell in data centres is a new application [but] we see a lot of similar requirements to the systems and economies of scale which both sectors can benefit from,’ says Dietrich Thoss from the Daimler Innovations Lab 1886.
Daimler started the pilot phase of the project, together with the US National Renewable Energy Lab (NREL), in 2018. The project team has now installed one fuel cell system at the NREL site in Colorado and plans to install several more this year, Thoss says. ‘Our goal together with NREL is to subject the fuel cell to a number of different duty cycles and then study the response and wear and lifetime of the fuel cell,’ he explains.
Daimler envisages hydrogen storage and fuel cell systems supplying power directly to the racks of computer servers housed in data centres. Its concept of a ‘hydrogen-based’, carbon-free data centre combines hydrogen fuel cells, electrolysers, storage, photovoltaic cells and wind turbines, resulting in a significantly smaller carbon footprint.
‘To use the fuel cell across other automotive segments and in large volumes in the future, our task is now to develop our basic module further so that it can be even more efficiently integrated into our group-wide electric modular system for alternative powertrains. This would give us maximum flexibility,’ Thoss says.