Tag: Supercomputing

  • Innovation in MOF Design with Generative AI and Supercomputing

    Credit: Unsplash.

    Metal-organic frameworks (MOFs) consist of inorganic nodes, organic nodes, and organic linkers, offering countless configuration possibilities. To expedite the discovery process, researchers from the U.S. Department of Energy’s Argonne National Laboratory, in collaboration with institutions including the University of Illinois Urbana-Champaign (UIUC), are leveraging generative artificial intelligence (AI), machine learning, high-throughput screening, and molecular dynamics simulations.

    AI-Driven MOF Exploration

    Using generative AI, the team generated more than 120,000 new MOF candidates in just 30 minutes. The computations were performed on the Polaris supercomputer at the Argonne Leadership Computing Facility (ALCF).

    Time-intensive molecular dynamics simulations were then performed on the Delta supercomputer at UIUC to assess candidate stability and carbon capture capacity.
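    The generate-screen-simulate workflow described above can be sketched in a few lines of Python. This is an illustrative sketch only: random numbers stand in for the trained generative model and the machine-learning property predictor, and the function names, `predicted_capacity` field, threshold, and simulation budget are all assumptions, not the team's actual code.

```python
import random

def generate_candidates(n, seed=0):
    """Stand-in for a generative model proposing MOF candidates.

    Each candidate carries a hypothetical predicted CO2 capacity
    (arbitrary units) in place of a real ML surrogate's prediction.
    """
    rng = random.Random(seed)
    return [{"id": i, "predicted_capacity": rng.uniform(0.0, 10.0)}
            for i in range(n)]

def screen(candidates, threshold):
    """High-throughput screening: keep candidates above a capacity threshold."""
    return [c for c in candidates if c["predicted_capacity"] >= threshold]

def rank_for_simulation(survivors, budget):
    """Select the top `budget` candidates for expensive MD simulation."""
    return sorted(survivors,
                  key=lambda c: c["predicted_capacity"],
                  reverse=True)[:budget]

candidates = generate_candidates(120_000)           # cheap generation step
survivors = screen(candidates, threshold=8.0)       # fast screening step
shortlist = rank_for_simulation(survivors, budget=100)  # costly MD step
print(len(candidates), len(survivors), len(shortlist))
```

    The design point is that each stage is orders of magnitude more expensive per candidate than the last, so cheap AI generation and screening drastically shrink the set that ever reaches molecular dynamics.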

    Pioneering MOF Design

    This interdisciplinary approach marks a paradigm shift in MOF materials design, rapidly identifying the most promising MOF candidates for synthesis. With upcoming advancements like the ALCF’s Aurora exascale supercomputer, researchers anticipate exploring billions of MOF candidates, unlocking novel structures with unprecedented capabilities.

    The team integrates chemical insights from various disciplines, enhancing MOF performance for carbon capture. By leveraging biophysics, physiology, and physical chemistry datasets, the algorithm refines MOF designs, promising transformative materials that are efficient, cost-effective, and scalable.

    Collaborative Endeavors for Future Progress

    This research underscores the potential of AI-driven approaches in molecular sciences. By fostering collaboration among institutions and harnessing the creativity of young scientists, this endeavor paves the way for innovative solutions to pressing environmental challenges.

    As the AI model evolves, its predictions will become increasingly precise, facilitating the experimental validation of newly designed MOFs. This interdisciplinary effort advances carbon capture technology and sets a precedent for AI applications in scientific research, with implications extending to biomolecular simulations and drug design.


    Read the original article on Nature.

    Read more: Understanding Carbon Capture and Storage: Can It Effectively Reduce Emissions?

  • Science Made Simple: What Is Exascale Computing?

    Exascale computing is the next milestone in the advancement of supercomputers. Capable of processing information much faster than today’s most powerful supercomputers, exascale computers will give researchers a powerful new tool for addressing some of the biggest challenges facing our world, from climate change to understanding cancer to designing new kinds of materials.

    Exascale computers are digital computers, broadly similar to today’s supercomputers and personal computers but with far more powerful hardware. This distinguishes them from quantum computers, which represent an entirely new approach to building a computer, suited to particular kinds of questions.

    How does exascale computing compare to other computers? Scientists measure computer performance in floating-point operations per second (FLOPS) — simple arithmetic such as addition and multiplication. A person can typically solve one addition problem with pencil and paper per second, a speed of 1 FLOP. Computers are vastly faster than people. Because their performance in FLOPS has so many zeros, researchers use metric prefixes instead. The prefix “giga,” for example, denotes a number with nine zeros. A modern personal computer processor runs in the gigaFLOPS range, at about 150,000,000,000 FLOPS, or 150 gigaFLOPS. “Tera” means 12 zeros. Computers hit the terascale milestone in 1996 with the Department of Energy’s (DOE) Intel ASCI Red supercomputer, whose peak performance was 1,340,000,000,000 FLOPS, or 1.34 teraFLOPS.
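    The prefix arithmetic above can be sketched in a few lines of Python (`to_flops` and `PREFIX_ZEROS` are illustrative helpers written for this example, not a real library API):

```python
# Each metric prefix stands for a number of zeros, as described above.
PREFIX_ZEROS = {"kilo": 3, "mega": 6, "giga": 9,
                "tera": 12, "peta": 15, "exa": 18}

def to_flops(value, prefix):
    """Convert a prefixed figure (e.g. 150 gigaFLOPS) to a raw FLOPS count."""
    return value * 10 ** PREFIX_ZEROS[prefix]

pc = to_flops(150, "giga")         # a modern PC processor: 150 gigaFLOPS
asci_red = to_flops(1.34, "tera")  # ASCI Red's 1996 peak: 1.34 teraFLOPS

print(f"{pc:,.0f} FLOPS")          # prints 150,000,000,000 FLOPS
print(f"ASCI Red vs. PC: {asci_red / pc:.1f}x")
```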

    Oak Ridge Frontier Supercomputer

    The Frontier supercomputer at Oak Ridge National Laboratory (ORNL) is expected to be the first exascale computer in the United States. Credit: Image courtesy of Oak Ridge National Laboratory

    Exascale computing is vastly faster than that. “Exa” denotes 18 zeros, meaning an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP. That is roughly 750,000 times faster than ASCI Red’s peak performance in 1996.

    Building a computer this powerful is not simple. When researchers began thinking seriously about exascale computers, they predicted that such machines could need as much energy as 50 homes use. Thanks to ongoing research with computer vendors, that figure has been slashed. Scientists also need ways to ensure exascale computers are reliable despite the enormous number of components they contain. In addition, they must find ways to move data between processors and storage quickly enough to prevent slowdowns.

    Why do we need exascale computers? The challenges facing our world and the most complex scientific questions demand ever more computing power to address. Exascale supercomputers will allow researchers to create more realistic Earth system and climate models. They will help scientists understand the nanoscience behind new materials. Exascale computers will help us design future fusion power plants. They will power new investigations of the universe, from particle physics to the formation of stars. They will also help ensure the safety and security of the United States by supporting tasks such as the maintenance of our nuclear deterrent.

    Fast Facts

    • Watch a video of an exascale-powered COVID simulation from NVIDIA.
    • Computer performance has increased steadily since the 1940s.
    • The world’s first electronic computer was Colossus, a vacuum tube machine. Built in Britain during the Second World War, Colossus performed at 500,000 FLOPS.
    • The first supercomputer to reach 3 megaFLOPS was the CDC 6600 in 1964.
    • The first supercomputer to exceed 1 gigaFLOP was the Cray-2 in 1985.
    • The first massively parallel computer to exceed 1 teraFLOP was ASCI Red in 1996.
    • The first supercomputer to reach 1 petaFLOP was Roadrunner in 2008.
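    The milestones listed above imply a remarkably steady growth rate. A short Python sketch makes that concrete; note the 2022 Frontier entry is an added assumption (its roughly 1-exaFLOP debut), not one of the listed facts.

```python
import math

# (year, peak FLOPS) milestones from the list above; the final entry
# for Frontier is an assumed ~1-exaFLOP figure for illustration.
milestones = [
    (1964, 3e6),      # CDC 6600, 3 megaFLOPS
    (1985, 1e9),      # Cray-2, 1 gigaFLOP
    (1996, 1.34e12),  # ASCI Red, 1.34 teraFLOPS
    (2008, 1e15),     # Roadrunner, 1 petaFLOP
    (2022, 1e18),     # Frontier, ~1 exaFLOP (assumption)
]

first_year, first_flops = milestones[0]
last_year, last_flops = milestones[-1]

doublings = math.log2(last_flops / first_flops)  # how many times performance doubled
years = last_year - first_year
print(f"~{doublings:.0f} doublings in {years} years "
      f"(one every {years / doublings:.1f} years)")
```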

    DOE Contributions to Exascale Computing

    The Department of Energy (DOE) Office of Science’s Advanced Scientific Computing Research program has worked for years with U.S. technology companies to build supercomputers that push the boundaries of scientific discovery. Lawrence Berkeley, Oak Ridge, and Argonne National Laboratories house DOE Office of Science user facilities for high-performance computing. These facilities grant scientists access to computing based on the potential benefits of their research. DOE’s Exascale Computing Initiative, co-led by the Office of Science and DOE’s National Nuclear Security Administration (NNSA), began in 2016 with the aim of accelerating the development of an exascale computing ecosystem. One element of the initiative is the seven-year Exascale Computing Project.

    The project aims to prepare researchers and computing facilities for exascale. It focuses on three broad areas:

    • Application Development: developing applications that take full advantage of exascale computers.
    • Software Technology: creating new tools for managing systems, handling huge volumes of data, and integrating future computers with existing computer systems.
    • Hardware and Integration: establishing partnerships to produce new components, new training, standards, and continuous testing so these new tools work at our other facilities and national laboratories.

    DOE is deploying the United States’ first exascale computers: Frontier at ORNL, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore National Laboratory.


    Read the original article on Scitech Daily.

    Related: “Inmarsat: IoT to Overtake Cloud Computing as Primary Industry 4.0 Technology”