Category: Physics

  • Uncovering the Secrets of the Big Bang With Machine Learning

A quark-gluon plasma after the collision of two heavy nuclei. Credit: TU Wien

    Can machine learning be used to reveal the secrets of the quark-gluon plasma?

Yes, it can, but only with advanced new methods.

It could hardly be more complicated: tiny particles whir around wildly at extremely high energy, and countless interactions occur in the dense tangle of quantum particles. The result is a state of matter called "quark-gluon plasma." Shortly after the Big Bang, the entire universe was in this state. Today it is recreated in high-energy collisions of atomic nuclei, for example at CERN.

Such processes can only be studied with high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Using artificial intelligence or machine learning for this purpose therefore seems an obvious idea. Ordinary machine-learning algorithms, however, are not well suited to the task: the mathematical properties of particle physics demand a very particular structure of neural network. At TU Wien (Vienna), researchers have now shown how neural networks can be used effectively for these difficult tasks in particle physics.

    Neural networks

"Simulating a quark-gluon plasma as realistically as possible requires an enormous amount of computing time," says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. "Even the largest supercomputers in the world are overwhelmed by this." It would therefore be desirable not to calculate every detail precisely, but to recognize and predict certain properties of the plasma with the help of artificial intelligence.

Neural networks are therefore used, similar to those employed for image recognition: artificial "neurons" are linked together on the computer much like neurons in the brain, producing a network that can recognize, for example, whether or not a cat appears in a given image.

However, there is a significant problem when applying this technique to the quark-gluon plasma: the quantum fields used to mathematically describe the particles and the forces between them can be represented in many different ways. "These are known as gauge symmetries," says Ipp. "The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example by using the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It is similar with quantum theories, except that there the allowed changes are mathematically far more complicated." Mathematical objects that look completely different at first glance may describe the same physical state.

Gauge symmetries built into the structure of the network

"If you do not take these gauge symmetries into account, you cannot meaningfully interpret the results of the computer simulations," says Dr. David I. Müller. "Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. It is better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account, so that different representations of the same physical state also produce the same signals in the neural network," says Müller. "That is exactly what we have now succeeded in doing: we have developed completely new network layers that automatically take gauge invariance into account." In several test applications, these networks were indeed shown to learn much better how to deal with the simulation data of the quark-gluon plasma.
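As a rough illustration of the idea (a toy sketch, not the TU Wien group's actual code or architecture), the example below feeds a tiny "network" only gauge-invariant plaquette variables of a U(1) lattice gauge field. Because only such invariant combinations enter, gauge-equivalent field configurations produce identical outputs; the lattice setup, function names, and network here are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8  # lattice size

# Link variables U_mu(x) = exp(i*theta) on the edges of a 2D periodic lattice;
# index 0 holds the x-direction links, index 1 the y-direction links.
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))
U = np.exp(1j * theta)

def plaquettes(U):
    """Gauge-invariant plaquettes P(x) = U_x(x) U_y(x+ex) U_x(x+ey)^* U_y(x)^*."""
    Ux, Uy = U
    return (Ux * np.roll(Uy, -1, axis=0)
            * np.conj(np.roll(Ux, -1, axis=1)) * np.conj(Uy))

def gauge_transform(U, g):
    """Apply a site-dependent U(1) gauge transformation g(x) to the links."""
    Ux, Uy = U
    return np.stack([g * Ux * np.conj(np.roll(g, -1, axis=0)),
                     g * Uy * np.conj(np.roll(g, -1, axis=1))])

def tiny_network(U, W):
    """Toy 'network': a single linear layer acting only on plaquette features."""
    P = plaquettes(U)
    features = np.concatenate([P.real.ravel(), P.imag.ravel()])
    return W @ features

W = rng.normal(size=(4, 2 * L * L))                       # random layer weights
g = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(L, L)))  # random gauge transformation

out_original = tiny_network(U, W)
out_transformed = tiny_network(gauge_transform(U, g), W)
print(np.allclose(out_original, out_transformed))  # True: same physics, same output
```

Because the plaquette is unchanged by any gauge transformation, the two outputs agree exactly; a network built from such layers cannot be fooled by two different mathematical representations of the same physical field.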

"With such neural networks, it becomes possible to make predictions about the system, for example to estimate what the quark-gluon plasma will look like at a later point in time without actually having to calculate every intermediate step in detail," says Andreas Ipp. "And at the same time, it is guaranteed that the system only produces results that do not contradict gauge symmetry, in other words results that make sense at least in principle."

It will be some time before it is possible to fully simulate atomic nucleus collisions at CERN with such methods. But the new type of neural network provides a promising and completely new tool for describing physical phenomena for which all other computational methods may never be powerful enough.


    Read the original article on Scitech Daily.

Related: "MIT Magnet Allows Path to Commercial Fusion Power"

  • Neutrinos Transform the Universe: Researchers Validate the Theory

Density distribution of neutrinos (left) and dark matter (right) in the cosmic large-scale structure. While the fast-moving neutrinos appear diffuse, the dark matter distribution forms a cosmic web of filamentary structures.

In an international first, a research team led by Kavli IPMU Principal Investigator Naoki Yoshida has successfully carried out a six-dimensional simulation of neutrinos traveling through the universe.

The effect of the nearly massless subatomic particles known as neutrinos on the formation of galaxies has long been a cosmic riddle, one that scientists have struggled to solve since the particles' discovery in 1956.

Now, an international research team including Naoki Yoshida, Director of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) and Professor of Physics at the University of Tokyo, has developed cosmological simulations that accurately capture the role of neutrinos in the evolution of the universe. Their findings were recently published in The Astrophysical Journal.

The study is a turning point for simulations of the structure of the universe, according to Dr. Shun Saito, a cosmologist and assistant professor of physics at Missouri University of Science and Technology (Missouri S&T) and a co-author of the study. Saito also serves as a visiting associate scientist at Kavli IPMU.

The team used a set of differential equations known as the Vlasov-Poisson equations to describe how neutrinos of varying mass move through space.
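In schematic form (written here without the cosmological expansion terms that the full simulation must include), the Vlasov-Poisson system evolves the neutrino phase-space distribution $f(\mathbf{x}, \mathbf{v}, t)$ under the gravitational potential $\phi$:

\[
\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f - \nabla_{\mathbf{x}}\phi\cdot\nabla_{\mathbf{v}} f = 0,
\qquad
\nabla^{2}\phi = 4\pi G\,\rho_{\mathrm{tot}},
\]

where $\rho_{\mathrm{tot}}$ includes the dark matter together with the neutrino mass density obtained by integrating $f$ over velocity. Solving this directly in the full phase space of three position and three velocity coordinates is what makes the simulation six-dimensional.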

Their findings show that neutrinos suppress the clustering of dark matter, the as-yet-unidentified mass in the universe, and, in turn, of galaxies. They also found that neutrino-rich regions are strongly correlated with massive galaxy clusters, and that the effective temperature of the neutrinos varies significantly depending on their mass.

"Overall, our findings agree with theoretical predictions and with previous simulation results," explains Dr. Kohji Yoshikawa, the study's lead author and director of the University of Tsukuba's Center for Computational Sciences. "It is reassuring that the results of two entirely different simulation approaches agree."

"Our simulations are important because they can constrain the as-yet-undetermined neutrino mass," explains Saito of Missouri S&T. "Neutrinos are the lightest known massive particles; that they have mass at all was only established recently, by the discovery recognized with the 2015 Nobel Prize in Physics."

That prize was awarded to two researchers, including Takaaki Kajita, a Kavli IPMU Principal Investigator and Director of the Institute for Cosmic Ray Research at the University of Tokyo, for their independent discoveries that one type of neutrino can transform into another, demonstrating that neutrinos have mass.

    “Our work could lead to a strong determination of the neutrino mass,” Saito argues.

Dr. Satoshi Tanaka, a postdoctoral scholar at Kyoto University's Yukawa Institute for Theoretical Physics, was the fourth researcher on the work, titled "Cosmological Vlasov-Poisson Simulations of Structure Formation with Relic Neutrinos: Nonlinear Clustering and the Neutrino Mass."

    The researchers’ Vlasov-Poisson simulation (left) predicts a smoother and less noisy density distribution of neutrinos compared to a traditional N-body particle simulation of Newtonian gravitational interaction (right).

    Originally published on Asia Research News. Read the original article.

  • Ultracold Quantum Fragments Break Timeless Symmetry

Symmetries in the dynamical evolution of many natural processes help scientists understand a system's underlying mechanism. In quantum physics, however, these symmetries are not always realized. For the first time, physicists from Heidelberg University's Center for Quantum Dynamics have demonstrated the theoretically predicted deviation from classical symmetry in laboratory experiments with ultracold lithium atoms. Their findings have been published in Science.

    An expanding cloud of quantum particles violates the scaling symmetry. Credit: Enss

In classical physics, the energy of an ideal gas rises in proportion to the applied pressure. This follows directly from scale symmetry, and the same relation holds in any scale-invariant system. In the quantum world, however, the interactions between quantum particles can become so strong that this classical scale symmetry no longer applies, explains Associate Professor Dr. Tilman Enss of the Institute for Theoretical Physics. His research group collaborated with the team led by Professor Dr. Selim Jochim at the Institute for Physics.

In their experiments, the scientists studied the behavior of an ultracold, superfluid gas of lithium atoms. When the gas is pushed out of its equilibrium state, it repeatedly expands and contracts in a "breathing" motion. Unlike classical particles, these quantum particles can bind into pairs, and as a result the superfluid becomes stiffer the more strongly it is compressed.
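For orientation, a textbook result (not a statement from this study): a classically scale-invariant two-dimensional gas held in a harmonic trap of frequency $\omega_{\mathrm{trap}}$ breathes at exactly twice the trap frequency, and the quantum anomaly appears as a small deviation from that value,

\[
\omega_B = 2\,\omega_{\mathrm{trap}} \quad \text{(scale-invariant case)}, \qquad
\omega_B \neq 2\,\omega_{\mathrm{trap}} \quad \text{(quantum scale anomaly)}.
\]

Measuring how far the breathing frequency departs from $2\,\omega_{\mathrm{trap}}$ is therefore a direct probe of the broken symmetry.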

Dr. Puneet Murthy and Dr. Nicolo Defenu, colleagues of Prof. Jochim and Dr. Enss, led the work that observed this deviation from classical scale symmetry, thereby directly confirming the quantum nature of the system. According to the researchers, the effect offers deeper insight into the behavior of systems with similar properties, such as graphene or superconductors, which lose all electrical resistance when cooled below a certain temperature.


    Originally published on Scitechdaily.com. Read the original article.

    Reference: Puneet A. Murthy et al, Quantum scale anomaly and spatial coherence in a 2D Fermi superfluid, Science (2019). DOI: 10.1126/science.aau4402

• Radiography is being used by scientists to better understand the development of liquid and solid microjets.

Scientists at Lawrence Livermore National Laboratory (LLNL) have experimentally confirmed the predictions of a 2020 study that computationally evaluated the influence of melting on shock-driven metal microjets. That earlier work found that melting the base material did not always produce a significant increase in jet mass.

The LLNL team, led by David Bober, confirmed the predicted microjet behavior with experiments on liquid and solid tin microjets. The work was published in the Journal of Applied Physics and was chosen as an editor's pick.

Bober said microjets are important to study because they are examples of the broader jetting and ejecta processes found throughout condensed-matter shock physics, with implications for everything from explosives to asteroid impacts.

Bober added that the team was motivated by a set of simulations performed by LLNL design physicist Kyle Mackay, a co-author of the present paper; Mackay's work is summarized below.

"Mackay's computations revealed an unexpected trend, and we wanted to see if it was true," Bober said. "Specifically, that work predicted that melting the base material would not always result in a dramatic increase in the mass of material ejected from a surface feature, which contradicts conventional wisdom about how these things are supposed to work."

The experiment was carried out by cutting a small groove into the top of a tin plate and then firing a fast-moving projectile at the bottom surface. As a result, a fluid-like jet of tin was ejected from the groove into the path of an intense X-ray beam.

"Ultimately, we used those X-rays and a number of high-speed cameras to capture a series of images of the flying tin jet, which allowed us to calculate things like the jet's mass and velocity," Bober explained. "We owe a great deal to many colleagues, particularly those at the Dynamic Compression Sector at Argonne National Laboratory's Advanced Photon Source."

Bober added that he is excited to explore how the results arise in nature and in simulations. The team has recently collected follow-up data probing the jets' local phase and is planning future shots to pin down the material parameters they believe drive the phenomenon.

The researchers still have work ahead of them to understand exactly what is going on in the experiments, Bober explained. He hopes that a better understanding of the physics of the melt transition will help improve models of ejecta variation.


    Originally published on Gamar Central. Read the original article.

    Reference: David B. Bober et al, Understanding the evolution of liquid and solid microjets from grooved Sn and Cu samples using radiography, Journal of Applied Physics (2021). DOI: 10.1063/5.0056245

  • A Brand-New Concept of Superconductivity

A team of researchers from the University of Tsukuba's Division of Quantum Condensed Matter Physics has devised a new theory of superconductivity. Based on the calculation of the "Berry connection," this model goes further than the current theory in explaining new experimental findings. The research could enable future electrical grids to transmit power without losses.

Superconductors are remarkable materials that appear ordinary at room temperature but, when cooled to extremely low temperatures, allow electric current to flow without resistance. Superconductivity has obvious applications, such as lossless power transmission, but the physics behind it is still not fully understood. The Bardeen-Cooper-Schrieffer (BCS) theory is the standard way of thinking about the transition from the normal to the superconducting state.

In this model, electrons can form "Cooper pairs" that move together and resist scattering as long as thermal excitations are low enough. The BCS model, however, does not fully explain all types of superconductors, which limits our ability to build more robust superconducting devices that work at ordinary temperatures.

A professor at the University of Tsukuba has now developed a new model of superconductivity that better captures the underlying physics. Instead of focusing on the pairing of charged particles, this new approach uses a mathematical tool called the "Berry connection," which quantifies a twisting of the space through which the electrons move. "In the standard BCS theory, electron pairing is the starting point for superconductivity. There, the supercurrent is characterized as the dissipationless flow of paired electrons, while single electrons still experience resistance," says author Professor Hiroyasu Koizumi.

Josephson junctions, for example, are formed when two superconducting layers are separated by a thin barrier of normal metal or an insulator. Although they are widely used in high-precision magnetic field detectors and in quantum computers, Josephson junctions do not fit neatly into the BCS theory. "According to the new theory, the role of electron pairing is to maintain the Berry connection, rather than being the cause of superconductivity by itself, and the supercurrent is the flow of single and paired electrons generated by the twisting of the space in which the electrons move, caused by the Berry connection," explains Professor Koizumi. This research could therefore lead to advances in quantum computing and energy conservation.
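For context, the standard textbook Josephson relations (which any microscopic theory of superconductivity must reproduce) connect the supercurrent through the junction and the voltage across it to the phase difference $\varphi$ between the two superconductors:

\[
I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar},
\]

where $I_c$ is the critical current of the junction and the factor $2e$ reflects the charge of an electron pair.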


    Originally published on Scitechdaily.com. Read the original article.

    Reference: Hiroyasu Koizumi, Superconductivity by Berry Connection from Many-body Wave Functions: Revisit to Andreev−Saint-James Reflection and Josephson Effect, Journal of Superconductivity and Novel Magnetism (2021). DOI: 10.1007/s10948-021-05905-y

  • Introduction to Particle Physics

Particle Zoo

For most of the first half of the twentieth century, physicists thought there were only three fundamental particles: the familiar proton, neutron, and electron. By the mid-1960s, however, that picture had changed. Improvements in particle accelerators and detector technology set the stage for the discovery of a seemingly endless number of new particles. The so-called "particle zoo" of the day lacked the simplicity and elegance that mark a good scientific theory, and researchers began looking for a simpler, unifying theory to explain the particles at a fundamental level.

    Incomplete, but Elegant

Over the following decades, a theory known as the Standard Model of Particle Physics emerged. The model explains the fundamental structure behind the particle zoo with remarkable accuracy. Today it is one of the best-supported scientific theories in history.

The theory distinguishes two types of particles: fermions, which make up everything around us, and bosons, which mediate how fermions interact with one another. Two typical examples are the electron (a fermion) and the photon (a boson), the particle of light that carries the electromagnetic force. Fermions are further divided into quarks, which make up protons and neutrons, and leptons, which include the electron along with muons, taus, and the elusive, nearly massless neutrinos.

The Standard Model predicts the properties of particles with remarkable precision.

    The particles of the Standard Model: fermions in red (quarks) and green (leptons), vector bosons in blue and the Higgs boson in yellow.

For a while, it did indeed appear to be the fundamental theory that physicists of the "particle zoo" days had sought so fervently. Yet one significant issue persisted: the theory could not explain why any particle has mass, much less predict the masses of individual particles.

    The Higgs and Beyond

Peter Higgs, François Englert, and others proposed an extension to the Standard Model to solve this problem. They predicted the existence of a field that exists everywhere, all the time, and gives mass to fundamental particles. They further proposed that an excitation of this field could be observed as a particle: the famous Higgs boson. In July 2012, almost fifty years after the Higgs boson was first theorized, CERN confirmed that both the CMS and ATLAS experiments had observed the elusive particle.

This first observation of the Higgs raised nearly as many questions as it answered. Physicists have so far learned relatively little about the boson's properties from experimental data; more data must be collected to determine how closely the observed particle matches the predicted one. And, despite its successes, the Standard Model has shortcomings. It cannot account for the majority of the mass in the universe, which is bound up in dark matter. Nor can it explain why the universe is filled with matter rather than equal parts matter and antimatter. And don't even think of including gravity in the picture! There remain many questions to explore about the universe and subatomic particles.


    Originally published on Stanford. Read the original article.

  • MIT Magnet Allows Path to Commercial Fusion Power

Commonwealth Fusion Systems (CFS) and MIT's Plasma Science and Fusion Center have announced the successful test of a high-temperature superconducting magnet. According to the MIT researchers and CFS, the 20-tesla field is the strongest of its kind ever created on Earth, paving the way for the construction of the first fusion power generator.

Magnet design is one of the most difficult challenges in producing the conditions required for fusion. According to the researchers, the magnet technology developed by the MIT-CFS team now makes it possible to create and confine a plasma that generates more energy than it consumes.

    MIT studies on fusion power

According to Dennis Whyte, director of MIT's Plasma Science and Fusion Center, the unique alliance between MIT and CFS allowed the team to be quick and efficient in designing, building, and testing the magnet. Speaking at a news briefing, Whyte said the partners could draw on and combine the strengths of each organization to deliver this technology on the short timeline demanded by the climate problem.

Fusion still presents substantial challenges, but if proven, the technology could become a carbon-free, virtually limitless source of energy. The demonstration is a major step toward resolving the most serious open questions about SPARC, a high-field fusion energy project at MIT.

The team wants SPARC to achieve a fusion gain, or Q factor, of at least 2, meaning that twice as much fusion power is produced as is required to sustain the reaction. A demonstration device is expected to be completed in 2025.
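In the standard definition used throughout the fusion literature, the gain compares fusion power out to the heating power put in:

\[
Q = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}},
\]

so $Q = 1$ is scientific breakeven and the SPARC target of $Q \geq 2$ means the plasma releases at least twice the power used to heat it.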

According to Maria Zuber, MIT's Vice President for Research, the goal is to build a power plant the size of a small school gym that generates as much power as a coal plant while emitting no carbon. The fuel is hydrogen, which is derived from water, of which we have plenty.

    Magnetic fields

Fusion is the process that powers the Sun. It occurs when two light nuclei combine to form a single heavier nucleus; energy is released if the mass of the resulting nucleus is less than the combined mass of the two original nuclei. The missing mass is converted into energy.
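The released energy follows Einstein's mass-energy relation; for the deuterium-tritium reaction most commonly considered for power plants (a standard textbook figure, not one specific to SPARC):

\[
E = \Delta m\, c^{2}, \qquad {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}.
\]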

A magnetic field holds the plasma, a gas of ions and electrons, together in an invisible cage; magnetic fields exert a strong influence on electrically charged particles. One of the best-known confinement concepts is a doughnut-shaped device known as a tokamak. More than 150 tokamaks have been built and tested, each demonstrating progress toward the fusion threshold. While most systems generate their magnetic fields with copper electromagnets, the France-based ITER project uses low-temperature superconductors.

The use of high-temperature superconductors, according to the researchers, is a critical step in the MIT-CFS fusion effort. These superconductors can sustain a much stronger magnetic field, which allows smaller tokamaks. This was accomplished using a new superconducting material, rare-earth barium copper oxide (ReBCO), which operates at 20 kelvin.

A ribbon-shaped form of ReBCO became commercially available only a few years ago. The new high-temperature superconducting magnets build on decades of tokamak research.

Four MIT researchers working on the fusion magnet project. (Source: MIT)

Magnet design

Three years of design, supply-chain, and manufacturing-process development were required to produce the magnet. According to the researchers, numerous models were created, using both CAD designs and a physical prototype.

The new magnet was charged gradually, in a series of steps, until it produced a magnetic field of 20 tesla, the highest field strength yet achieved by a high-temperature superconducting fusion magnet, according to fusion researchers. The magnet is built from 16 plates stacked on top of one another. The researchers noted that, to create such a strong magnetic field, the superconducting material must be encased in a strong metal structure.

The new magnet's size and performance are comparable to those of the non-superconducting magnet used in MIT's Alcator C-Mod experiment, which ended operation in 2016. According to Whyte, the difference in power consumption is striking: in that device, the confining magnetic field was created by an ordinary copper magnet consuming about 200 million watts of power.

    The magnet’s properties

According to Whyte, the new magnet used approximately 30 watts, meaning that the amount of power required to generate the confining magnetic field was reduced by a factor of roughly 10 million. He stated that switching to a high-field superconducting device could yield "net energy from fusion [because] we do not need to use as much power to provide the confining magnetic field."
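As a quick check of that figure:

\[
\frac{200\ \mathrm{MW}}{30\ \mathrm{W}} = \frac{2\times 10^{8}\ \mathrm{W}}{3\times 10^{1}\ \mathrm{W}} \approx 7\times 10^{6},
\]

which is indeed of the order of 10 million.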

Researchers prepare the high-field superconducting magnet. (Source: MIT)

The MIT fusion center experiment also demonstrated that a full-scale magnet can sustain a field of more than 20 tesla, the performance level required for the SPARC tokamak, which will be used to demonstrate net energy from fusion.

The test involved keeping the magnet cold enough for the superconductor to generate its field while consuming as little power as possible. The field, which took several days to ramp up, was held at a level the developers described as a stable state, in which the balance between energy use and temperature was maintained.

The next stage is to construct SPARC, using the successful magnet test as a foundation. Significant technical and economic challenges remain; nonetheless, scientists believe the path to fusion energy may at last be getting easier.


    Read the original article on EETIMES.

Want to read more about this topic? Read this post about "The Standard Model of Physics."

  • A Dive Into The Standard Model

    Particles of the Standard Model of particle physics. Credit: Daniel Dominguez/CERN

The theories and discoveries of thousands of physicists since the 1930s have produced remarkable insight into the fundamental structure of matter: everything in the universe is made from a few basic building blocks called fundamental particles, governed by four fundamental forces. Our best understanding of how these particles and three of the forces are related to one another is encapsulated in the Standard Model of particle physics.

Developed in the early 1970s, the theory has successfully explained almost all experimental results and precisely predicted a wide variety of phenomena. Over time and through many experiments, the Standard Model has become established as a well-tested physics theory.

    Matter particles

All matter around us is made of fundamental particles, the building blocks of matter. These particles occur in two basic types: quarks and leptons. Each group consists of six particles, which are related in pairs, or "generations." The lightest and most stable particles make up the first generation, whereas the heavier and less stable particles belong to the later generations.

All stable matter in the universe is made from particles of the first generation; heavier particles quickly decay to more stable ones. The six quarks are paired across the three generations: the "up quark" and "down quark" form the first generation, followed by the "charm quark" and "strange quark" in the second and the "top quark" and "bottom (or beauty) quark" in the third. Quarks also come in three different "colors," which must combine to form colorless objects.

The leptons are arranged similarly to the quarks: the "electron" and "electron neutrino" in the first generation, the "muon" and "muon neutrino" in the second, and the "tau" and "tau neutrino" in the third. The electron, muon, and tau all have an electric charge and a sizeable mass, whereas the neutrinos are electrically neutral and have very little mass.

    Forces

The universe is governed by four fundamental forces: the strong force, the weak force, the electromagnetic force, and the gravitational force. They act over different ranges and have different strengths.

Gravity, although the weakest force, has an infinite range. The electromagnetic force also has an infinite range, but it is many orders of magnitude stronger than gravity. The strong and weak forces, by contrast, have a very short range and dominate only at the level of subatomic particles.

Despite its name, the weak force is much stronger than gravity, but it is indeed the weakest of the other three. The strong force, as the name suggests, is the strongest of the four fundamental interactions.

Three of the fundamental forces result from the exchange of force-carrier particles, which belong to a broader group called "bosons." Particles of matter transfer discrete amounts of energy by exchanging bosons with one another. Each fundamental force has its own corresponding boson: the "gluon" carries the strong force, the "photon" carries the electromagnetic force, and the "W and Z bosons" carry the weak force.

    Forces and carrier particles

Although it has not yet been found, the "graviton" should be the corresponding force-carrier particle of gravity. The Standard Model includes the electromagnetic, strong, and weak forces and all their carrier particles, and describes well how these forces act on all of the matter particles.

However, gravity, the force most familiar in our daily lives, is not part of the Standard Model, because fitting it comfortably into this framework has proved to be a difficult challenge. The quantum theory used to describe the microscopic world and the general theory of relativity used to describe the macroscopic universe are difficult to reconcile.

No one has yet managed to make the two mathematically compatible within the Standard Model. Fortunately for particle physics, at the minuscule scale of particles the effect of gravity is so weak as to be negligible. Only when matter is present in bulk, on the scale of the human body or of planets, does the effect of gravity dominate. So, despite reluctantly leaving out one of the fundamental forces, the Standard Model still works well.

    However, so far, so good …

Physicists will not be out of work any time soon. Even though the Standard Model is currently the most complete description of the subatomic world available, it does not explain everything.

The theory describes only three of the four fundamental forces, leaving out gravity. It also fails to answer fundamental questions such as "What is dark matter?", "What happened to the antimatter after the Big Bang?", and "Why are there three generations of quarks and leptons with such wildly different mass scales?" Last but not least is the Higgs boson, an essential component of the Standard Model.

    Additional discoveries

The particle observed at CERN is consistent with the Higgs boson, but further study is needed to determine whether it is exactly the Higgs boson predicted by the Standard Model. The Higgs boson, as postulated in the Standard Model, is the simplest manifestation of the Brout-Englert-Higgs mechanism; other theories that go beyond the Standard Model predict other kinds of Higgs bosons.

On October 8, 2013, François Englert and Peter Higgs were awarded the Nobel Prize in Physics "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider."

Even though the Standard Model accurately describes the phenomena within its domain, it is still incomplete. It is most likely only part of a bigger picture that includes new physics hidden deep in the subatomic realm or in the far reaches of the cosmos. New data from experiments at the LHC will help us find more of these missing pieces.


    Read the original article on CERN.

    Want to read more about this topic? Read this post about “Introduction to Particle Physics.”

  • Uncovering Concealed Local States in a Quantum Material

Scientists have collected evidence of local symmetry breaking in a quantum material upon heating. They believe these local states are associated with electronic orbitals that serve as orbital degeneracy lifting (ODL) "precursors" to the titanium (Ti) dimers (pairs of bonded titanium atoms) formed when the material is cooled to low temperature. (Electron orbitals are considered degenerate when they have the same energy.) Understanding the role of these ODL precursors may offer scientists a path forward in designing materials with desired, technologically relevant properties, which typically emerge at low temperature. Credit: Ariana Tantillo

Quantum materials display exotic behaviors arising from quantum mechanics, the physics of matter at the small scale of atoms and subatomic particles. The technologically significant properties of quantum materials result from intricate interactions among electron charge, orbital, and spin degrees of freedom and their coupling to the material's crystal structure. For instance, in some materials electrons can move freely without any resistance; this phenomenon, called superconductivity, could be used to transmit power more efficiently. Such properties usually emerge at low temperature, where the crystal has low (broken) structural symmetry.

"Unsurprisingly, this low-temperature regime is well studied," said Emil Bozin, a physicist in the X-ray Scattering Group of the Condensed Matter Physics and Materials Science (CMPMS) Division at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory. "Meanwhile, the high-temperature regime remains largely unexplored because it is associated with relatively high symmetry, which is considered to be uninteresting."

But Bozin and colleagues have recently been discovering states of local symmetry breaking at high temperature. Because they are connected to electronic orbitals, the regions of an atom where electrons are most likely to be found, these local states act as orbital degeneracy lifting (ODL) "precursors" to what happens at low temperature. Orbitals are said to be degenerate when they have the same energy; lifting this degeneracy means that some orbitals end up with higher energy than others.

According to Bozin, "We think that such local states serve as enablers of the material properties of interest that emerge at much lower temperature."

The researchers first observed these local states in 2019, in a material (copper iridium sulfide) with a metal-insulator transition and in an iron-based superconductor. Now the team, with members from Brookhaven Lab, DOE's Oak Ridge National Laboratory, the University of Tennessee, Knoxville, and Columbia University, has discovered them in an insulator containing sodium, titanium, oxygen, and silicon. This insulating material is one of the minerals that make up the Earth's upper mantle. Beyond its geological interest, it is a candidate for quantum spin liquids (QSLs), an exotic state of matter in which electron spins remain fluid-like, constantly fluctuating, down to the lowest temperatures. QSLs could provide a material platform for quantum computing, spintronics (electronics based on electron spin rather than charge), superconductivity, and other technologies.

Weiguo Yin, a physicist in the CMPMS Division Condensed Matter Theory Group, said the findings "indicate that this ODL precursor behavior at high temperature may be quite common and needs to be taken into account in theoretical studies to fully understand the functionality of quantum materials."

To probe the material's atomic structure, the group examined how the material scattered neutrons and X-rays. Both probes are needed because of their different sensitivities to elements of different atomic weight; unlike X-rays, neutrons can detect light elements such as oxygen. From the neutron and X-ray scattering patterns, the local arrangement of atoms can be inferred through the atomic pair distribution function (PDF), which describes the distances between pairs of atoms in a sample. Using software, researchers can then find the structural model that best fits the experimental PDF.
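In its common form (a standard definition from total-scattering analysis, not something specific to this study), the PDF $G(r)$ is obtained by Fourier transforming the measured structure factor $S(Q)$:

\[
G(r) = \frac{2}{\pi}\int_{0}^{\infty} Q\,[S(Q)-1]\,\sin(Qr)\,dQ,
\]

so that peaks in $G(r)$ mark interatomic distances that occur frequently in the sample.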

Their analysis revealed signatures of local symmetry breaking well above the temperature at which the material undergoes a structural transition to form titanium dimers (pairs of bonded titanium atoms). When the material is heated, the dimers appear to vanish, but in fact they persist locally, evolving into a dual ODL state.

"The high-temperature, high-crystallographic-symmetry state implies the presence of orbital degeneracy, yet orbital degeneracy may not be energetically favorable," said Bozin. "As we see here, the dimers get replaced, and what remains is a locally distorted crystal structure. This distortion lifts the degeneracy of two orbitals and allows the system to settle into a lower-energy state."

Next, the group plans to tailor the orbital properties of this material, for instance by replacing titanium with ruthenium, which changes the electron count and is expected to yield a better QSL. They will also explore whether ODL precursors exist in other materials and how they are connected to phenomena of interest, such as superconductivity. In particular, they would like to study systems with different strengths of spin-orbit coupling, an alternative mechanism for ODL.

According to Simon Billinge, a physicist in the CMPMS Division X-ray Scattering Group and professor of materials science and engineering and of applied physics and applied mathematics at Columbia University, identifying these orbital precursors helps us better understand the competition between different low-temperature quantum states, and should make it possible to steer that competition toward materials with the desired low-temperature properties.


    Read the original article on Brookhaven National Laboratory.

    Reference: R. J. Koch et al, Dual Orbital Degeneracy Lifting in a Strongly Correlated Electron System, Physical Review Letters (2021). DOI: 10.1103/PhysRevLett.126.186402

  • Quantum Materials Cut Closer Than Ever

Crystals of the material hexagonal boron nitride can be etched so that the pattern you draw at the top transforms into a smaller, razor-sharp version at the bottom. These perforations can be used as a shadow mask to draw components and circuits in graphene. This process enables a precision that is impossible with even the best lithographic techniques today. To the right are images of triangular and square holes taken with an electron microscope. Credit: Peter Bøggild, Lene Gammelgaard, and Dorte Danielsen

Scientists from DTU and the Graphene Flagship have advanced the art of patterning nanomaterials. Precise patterning of 2D materials offers a route to computing and data storage that could deliver far better performance and much lower power consumption than today's technology.

Two-dimensional materials such as graphene are among the most important recent developments in physics and materials science. Compared with other materials, graphene is the lightest, smoothest, and strongest, and the best at conducting electricity and heat.

Perhaps their most unique feature is their programmability: by creating delicate patterns in these materials, their properties can be dramatically altered, potentially giving us exactly the properties we need.

For more than ten years, DTU researchers have worked to advance the state of the art in patterning 2D materials, using cutting-edge lithography equipment in the 1500 m² cleanroom facility. The work is carried out at DTU's Center for Nanostructured Graphene, which is funded by the Danish National Research Foundation and is a member of the Graphene Flagship.

The electron-beam lithography system at DTU Nanolab can etch details as small as 10 nanometers. Computer calculations can precisely predict the shape and size of patterns in graphene needed to create new kinds of electronics that exploit the electron's charge as well as quantum properties such as spin or valley degrees of freedom, enabling extremely fast calculations with very little power consumption. However, these calculations demand atomic precision, which is beyond the capability of even the best lithography equipment.

According to Peter Bøggild, professor and group leader at DTU Physics, researchers need to get below 10 nanometers, down toward the atomic scale, to unlock the treasure chest of future quantum electronics.

This is precisely what the scientists have now succeeded in doing.

Bøggild explains that in 2019 the team showed that semimetallic graphene can be turned into a kind of semiconductor by perforating it with circular holes spaced just 12 nanometers apart. Now they can make triangles and other shapes with nanometer-sharp corners. Such patterns can sort electrons according to their spin, which is essential for spintronic components, and the method also works with other 2D materials. With these ultra-small nanostructures, extremely compact, electrically tunable metalenses could be created for use in high-speed communication and biotechnology.

    Razor-sharp triangle

The research was led by postdoc Lene Gammelgaard, who graduated as an engineer from DTU in 2013 and has played an important part in the experimental exploration of 2D materials at DTU:

According to Gammelgaard, the trick is to place the nanomaterial hexagonal boron nitride on top of the material you want to pattern and then drill holes through it using a particular etching recipe.

She adds that the etching process developed over the past years shrinks patterns below the otherwise unbreakable limit of roughly 10 nanometers set by the electron-beam lithography system. If they make a circular hole 20 nanometers across, the resulting hole in the graphene can shrink down to 10 nanometers; if they make a triangular hole, the rounded holes produced by the lithography system shrink into a smaller triangle with self-sharpened corners. Normally, patterns become more imperfect as they are made smaller. Here it is the opposite, which allows the team to recreate the structures that theoretical predictions say are optimal.

For example, one can create flat electronic metalenses, a type of super-compact optical lens that can be controlled electrically at very high frequencies. According to Gammelgaard, these could become essential elements in future communication technology and biotechnology.

    Pushing the limits

The other key person is a young student, Dorte Danielsen. She became interested in nanophysics after a ninth-grade internship in 2012, reached the final of a national science competition for high school students in 2014, and went on to study Physics and Nanotechnology under DTU's honors program for elite students.

She explained that the mechanism behind the "super-resolution" structures is still not well understood. The team has possible explanations for the surprising etching behavior, but much remains to be understood. Still, it is an exciting and highly useful technique for them, and at the same time it is good news for the many researchers around the world pushing the limits of nanophotonics and 2D nanoelectronics.

Dorte Danielsen will continue her work on ultrasharp nanostructures within the METATUNE project, supported by Independent Research Fund Denmark. There, the technology she helped develop will be used to produce and study optical metalenses that can be tuned electrically.


    Read the original article on Scitech Daily.

    Reference: Materials provided by Technical University of Denmark. Original written by Tore Vind Jensen. Note: Content may be edited for style and length.