Science Made Simple: What Is Exascale Computing?

Exascale computing is the next milestone in the development of supercomputers. Capable of processing information far faster than today's most powerful supercomputers, exascale computers will give researchers a powerful new tool for tackling some of the biggest challenges facing our world, from climate change to understanding cancer to designing new kinds of materials.

Exascale computers are digital computers, broadly similar to today's supercomputers and personal computers but with much more powerful hardware. This distinguishes them from quantum computers, which represent an entirely new approach to building a computer, one suited to particular types of problems.

How does exascale computing compare to other computers? Scientists measure computer performance in floating-point operations per second (FLOPS), which counts simple arithmetic such as addition and multiplication. A person can typically solve an addition problem with pencil and paper at a speed of about 1 FLOP; that is, it takes us one second to do one basic addition problem. Computers are much faster than people. Because their performance in FLOPS has so many zeros, researchers use metric prefixes instead. For example, the prefix "giga" represents a number with nine zeros. A modern personal-computer processor runs in the gigaFLOPS range, at about 150,000,000,000 FLOPS, or 150 gigaFLOPS. "Tera" means 12 zeros. Computers hit the terascale milestone in 1996 with the Department of Energy's (DOE) Intel ASCI Red supercomputer. ASCI Red's peak performance was 1,340,000,000,000 FLOPS, or 1.34 teraFLOPS.
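The relationship between raw FLOPS figures and these prefixes can be sketched in a few lines of Python. The `describe` helper below is purely illustrative (not part of any real benchmarking tool); it just picks the largest prefix that fits:

```python
# Illustrative sketch: expressing raw FLOPS figures with metric prefixes.
PREFIXES = {
    "giga": 10**9,   # 9 zeros
    "tera": 10**12,  # 12 zeros
    "peta": 10**15,  # 15 zeros
    "exa":  10**18,  # 18 zeros
}

def describe(flops: float) -> str:
    """Express a raw FLOPS figure using the largest prefix that fits."""
    for name, scale in sorted(PREFIXES.items(), key=lambda kv: -kv[1]):
        if flops >= scale:
            return f"{flops / scale:.2f} {name}FLOPS"
    return f"{flops:.0f} FLOPS"

print(describe(150_000_000_000))    # a modern PC processor: 150.00 gigaFLOPS
print(describe(1_340_000_000_000))  # ASCI Red's 1996 peak:  1.34 teraFLOPS
```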

Oak Ridge Frontier Supercomputer

The Frontier supercomputer at Oak Ridge National Laboratory (ORNL) is expected to be the first exascale computer in the United States. Credit: Image courtesy of Oak Ridge National Laboratory

Exascale computing is vastly faster than that. "Exa" represents 18 zeros, which means an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP. That is roughly 750,000 times faster than ASCI Red's peak performance in 1996.
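That speedup factor follows directly from the two figures quoted above, as this quick arithmetic check shows:

```python
# One exaFLOP compared with ASCI Red's 1996 peak of 1.34 teraFLOPS.
EXAFLOP = 10**18
ASCI_RED_PEAK_FLOPS = 1.34 * 10**12

speedup = EXAFLOP / ASCI_RED_PEAK_FLOPS
print(f"{speedup:,.0f}x faster")  # roughly 746,269x
```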

Building a computer this powerful is not simple. When researchers began thinking seriously about exascale computers, they predicted these machines could need as much energy as 50 homes would use. Thanks to ongoing research with computer vendors, that figure has been slashed. Scientists also need ways to guarantee exascale computers are reliable despite the enormous number of components they contain. In addition, they must find ways to move data between processors and storage fast enough to prevent slowdowns.

Why do we need exascale computers? The challenges facing our world and the most complex scientific questions demand ever more computing power to address. Exascale supercomputers will allow researchers to produce more realistic Earth system and climate models. They will help scientists understand the nanoscience behind new materials. Exascale computers will help us design future fusion power plants. They will power new investigations of the universe, from particle physics to the formation of stars. These computers will also help ensure the safety and security of the United States by supporting tasks such as the maintenance of our nuclear deterrent.

Fast Facts

  • Performance in computers has increased steadily since the 1940s.
  • The world's first electronic computer, Colossus, was a vacuum-tube machine. Built in Britain during the Second World War, Colossus performed at 500,000 FLOPS.
  • The first supercomputer, the CDC 6600 in 1964, ran at 3 megaFLOPS.
  • The first supercomputer to reach over 1 gigaFLOP was the Cray-2 in 1985.
  • The first massively parallel computer to reach over a teraFLOP was ASCI Red in 1996.
  • The first supercomputer to reach 1 petaFLOP was Roadrunner in 2008.

DOE Contributions to Exascale Computing

The Department of Energy (DOE) Office of Science's Advanced Scientific Computing Research program has worked for decades with U.S. technology companies to build supercomputers that push the boundaries of scientific discovery. Lawrence Berkeley, Oak Ridge, and Argonne National Laboratories host DOE Office of Science user facilities for high-performance computing. These facilities give scientists access to computing based on the potential benefits of their research. DOE's Exascale Computing Initiative, co-led by the Office of Science and DOE's National Nuclear Security Administration (NNSA), began in 2016 with the aim of accelerating the development of an exascale computing ecosystem. One element of the initiative is the seven-year Exascale Computing Project.

The project aims to prepare researchers and computing facilities for exascale. It focuses on three broad areas:

  • Application Development: building applications that take full advantage of exascale computers.
  • Software Technology: creating new tools for managing systems, handling huge quantities of data, and integrating future computers with existing computing systems.
  • Hardware and Integration: establishing partnerships to produce new components, new training, standards, and continuous testing so these new tools work at our other facilities and national laboratories.

DOE is deploying the United States' first exascale computers: Frontier at ORNL, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore National Laboratory.


Read the original article on Scitech Daily.

