Tag: Machine Learning

  • Two Scientists Won the Nobel Prize in Physics for Discoveries Enabling Machine Learning


    John Hopfield and Geoffrey Hinton, two pioneers of artificial intelligence, were awarded the Nobel Prize in Physics on Tuesday for their contributions to the foundation of machine learning, which is transforming how we live and work, while also posing new risks to humanity.
    John Hopfield and Geoffrey Hinton, seen in the picture, are awarded this year’s Nobel Prize in Physics, announced at a press conference by Hans Ellergren, center, permanent secretary of the Royal Swedish Academy of Sciences, in Stockholm, Sweden, on Tuesday, Oct. 8, 2024. Credit: Christine Olsson/TT News Agency via AP


    Geoffrey Hinton, known as the godfather of artificial intelligence, is a dual citizen of Canada and Britain and works at the University of Toronto, while John Hopfield, an American, is based at Princeton.

    “These two gentlemen were the true pioneers,” said Nobel physics committee member Mark Pearce. According to Ellen Moons of the Nobel committee, the researchers’ work on artificial neural networks—computer systems modeled after the brain’s neurons—has become central to science, medicine, and everyday life.

    The Lasting Impact of Early AI Research and Its Future Potential

    Hopfield, whose 1982 research laid the foundation for Hinton’s work, told the Associated Press that he is continually amazed by its impact. Hinton, in a press call with the Royal Swedish Academy of Sciences, predicted AI will have a “huge influence” on civilization, improving productivity and health care, comparing its potential to the Industrial Revolution.

    “Instead of surpassing humans in physical strength, AI will surpass us in intellectual ability,” Hinton said, noting both the exciting possibilities and the need for caution regarding the potential risks, especially the threat of AI becoming uncontrollable.

    Artificial intelligence pioneer Geoffrey Hinton speaks at the Collision Conference in Toronto, Wednesday, June 19, 2024. Credit: Chris Young/The Canadian Press via AP, File

    Balancing AI’s Promise with Ethical Concerns

    The Nobel committee also acknowledged concerns about the potential downsides of AI. Ellen Moons noted that while AI offers “enormous benefits,” its rapid advancement has sparked worries about humanity’s future. She emphasized that it is humanity’s collective responsibility to use this technology safely and ethically for the greatest good.

    Geoffrey Hinton, who left his role at Google to speak more openly about the risks of AI, shares these concerns. “I worry that this could lead to systems more intelligent than us eventually taking control,” he said.

    John Hopfield, who signed early petitions urging strong regulation of AI, likened the risks and benefits of the technology to those of viruses and nuclear energy, both of which can benefit or harm society.

    Hopfield, who was staying with his wife at a cottage in Hampshire, England, said he was met with a flood of emails after grabbing a coffee and getting his flu shot.

    “I’ve never seen that many emails in my life,” he remarked. He mentioned that a bottle of champagne and a bowl of soup were ready, but doubted there were any other physicists in the area to celebrate with him.

    Computer scientist Geoffrey Hinton poses at Google’s Mountain View, Calif., headquarters on Wednesday, March 25, 2015. Credit: AP Photo/Noah Berger, File

    Hinton expressed surprise at receiving the honor.

    “I’m flabbergasted. I never expected this,” he said when contacted by the Nobel committee. He mentioned he was staying in a budget hotel without internet access.

    In the 1980s, Hinton, now 76, pioneered a technique called backpropagation, which is crucial in teaching machines to “learn” by fine-tuning errors until they disappear. The method resembles a student improving through repeated attempts, correcting mistakes until the solution aligns with the system’s version of reality.
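    In code, the idea looks roughly like the following sketch (our own minimal illustration in plain NumPy, not Hinton’s original formulation): a tiny network makes a guess, measures its error, and propagates that error backward to correct every weight.

    ```python
    # Minimal backpropagation sketch: a one-hidden-layer network learns XOR
    # by repeatedly nudging its weights in the direction that shrinks the error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # Forward pass: compute the network's current guesses.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error and correct every weight.
        d_out = out - y                     # output error (cross-entropy gradient)
        d_h = (d_out @ W2.T) * h * (1 - h)  # error passed back to the hidden layer
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]] as the errors shrink
    ```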

    The Unique Path of a Pioneering AI Scientist

    Nick Frosst, Hinton’s former protégé and first hire at Google’s AI division in Toronto, noted that Hinton had an unconventional background as a psychologist who also dabbled in carpentry and was deeply curious about the workings of the mind. Frosst said, “His playfulness and genuine curiosity in addressing fundamental questions are key to his success as a scientist.”

    Professor Anders Irbäck explains the work of John Hopfield and Geoffrey Hinton, winners of the 2024 Nobel Prize in Physics, at a press conference at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on Tuesday, Oct. 8, 2024. Credit: Christine Olsson/TT News Agency via AP

    Hinton didn’t stop with his pioneering 1980s work.

    “He’s always trying bold ideas—some succeed, some don’t—but all have advanced the field,” said Nick Frosst.

    A Pivotal Breakthrough in AI and the Legacy of Perseverance

    In 2012, Hinton’s team won the ImageNet competition with a neural network, sparking widespread imitation. Computer scientist Fei-Fei Li called it “a pivotal moment in AI history.” Hinton, along with Yoshua Bengio and Yann LeCun, received the Turing Award in 2019. Reflecting on early doubts about his work, Hinton advised young researchers, “Don’t be discouraged if others call your work silly.”

    Many of his students entered the tech industry, founding companies like Cohere and OpenAI. Hinton regularly uses AI tools and said, “I ask GPT-4 for answers—while it can hallucinate, it’s still a useful expert.”

    Hopfield, 91, created an associative memory that can store and reconstruct images and data patterns, as noted by the Nobel committee.
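    The flavor of such an associative memory can be captured in a few lines (a toy sketch assuming binary ±1 patterns and the classical Hebbian storage rule, not a reproduction of Hopfield’s paper): the network stores patterns in its weights and then settles from a corrupted cue back into the nearest stored memory.

    ```python
    # Toy Hopfield-style associative memory: store two +/-1 patterns with a
    # Hebbian rule, then recover one of them from a corrupted cue.
    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]

    W = (patterns.T @ patterns) / n   # Hebbian storage: co-active neurons couple
    np.fill_diagonal(W, 0)            # no self-connections

    state = patterns[0].copy()
    state[:3] *= -1                   # corrupt the first three bits of the memory

    for _ in range(10):               # settle into the nearest stored pattern
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print(np.array_equal(state, patterns[0]))  # True: the pattern is restored
    ```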

    “What fascinates me most is how mind arises from machine,” Hopfield said in a 2019 video after receiving a physics prize.

    Hinton expanded on Hopfield’s network with the Boltzmann machine, which the committee stated can learn to identify key features in data.

    AI’s Recognition in Traditional Science and Interdisciplinary Innovation

    Although there’s no Nobel for computer science, Fei-Fei Li pointed out that awarding a traditional science prize to AI pioneers shows the merging of disciplines. Bengio, who was mentored by Hinton and influenced by Hopfield, remarked that both winners “saw a significant, non-obvious connection between physics and learning in neural networks, forming the basis of modern AI.”

    Not all of Hinton’s colleagues agree with his views on the risks of the technology he helped create.

    Frosst has had many “spirited debates” with Hinton about AI risks and disagrees with some of his concerns, though he values Hinton’s openness. “We mainly differ on the timeline and specific technologies,” Frosst said. “I don’t think neural networks and language models currently pose an existential threat.”

    Bengio, who has been vocal about AI risks, shares concerns with Hinton about the “loss of human control” and the morality of AI systems surpassing human intelligence. “We don’t have answers to these questions,” he said, “and we should ensure we do before building these machines.”

    When asked if the Nobel committee considered Hinton’s warnings, Bengio dismissed the notion, saying, “We’re discussing early work when we thought everything would be fine.”

    The Nobel announcements began Monday, with Victor Ambros and Gary Ruvkun winning the medicine prize. The chemistry prize will be announced Wednesday, literature on Thursday, and the Nobel Peace Prize on Friday, with the economics award following on October 14.

    The prize includes 11 million Swedish kronor (about $1 million) from a bequest by Alfred Nobel. The laureates will receive their awards on December 10, the anniversary of Nobel’s death.

    Watch the 2024 Nobel Prize announcement

    Read the original article on Phys Org.

    Read more: 3 Scientists Share Nobel Prize In Physics For Work In Quantum Mechanics

  • Enabling Machine Learning to Inquire Can Enhance its Intelligence


    Credit: Pixabay

    Researchers in Duke University’s biomedical engineering department have demonstrated a new approach that significantly improves the performance of machine learning models searching for new molecular therapeutics, even when using only a small portion of the available data. By employing an algorithm that actively identifies gaps in the dataset, the researchers more than doubled model accuracy in certain instances.

    This innovative approach has the potential to simplify the identification and classification of molecules with valuable characteristics for the development of new drugs and materials. The research was published in the journal Digital Discovery by the Royal Society of Chemistry on June 23.

    Challenges of Machine Learning Algorithms in Predicting Molecular Properties

    Machine learning algorithms play an increasingly crucial role in predicting the properties of small molecules, including drug candidates and compounds. However, their effectiveness is currently limited by imperfect datasets used for training, particularly due to data bias.

    This bias arises when certain properties of molecules are overrepresented compared to others in the dataset, leading the algorithm to prioritize the overrepresented property and overlook other important features.

    Daniel Reker, an assistant professor of biomedical engineering at Duke University, compared this bias issue to training an algorithm to differentiate between pictures of dogs and cats while providing it with an overwhelming number of dog pictures and only a few cat pictures. As a result, the algorithm becomes extremely good at identifying dogs while overlooking the cats and other important distinctions.
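    The effect is easy to reproduce (a hedged sketch on synthetic data with scikit-learn; the study’s own models and datasets are not used here): on a 95/5 split, headline accuracy looks excellent even while the rare class is largely missed.

    ```python
    # Class-imbalance sketch: with 95% "dogs" and 5% "cats", overall accuracy
    # stays high even when the model recognizes few of the rare "cats".
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("overall accuracy:", clf.score(X_te, y_te))                 # looks great
    print("minority recall:", recall_score(y_te, clf.predict(X_te)))  # typically far lower
    ```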

    Data Bias and Its Impact on Drug Discovery

    This bias poses significant challenges in drug discovery, where datasets often consist of a vast majority of “ineffective” compounds, with only a small fraction showing potential usefulness. To address this, researchers resort to data subsampling, where the algorithm learns from a smaller but hopefully representative subset of the data. However, this process can lead to the loss of crucial information, impacting the accuracy of the algorithm.

    The new method proposed by the Duke University biomedical engineers addresses this limitation by employing an algorithm that actively identifies gaps in datasets. By doing so, the researchers can enhance the accuracy of machine learning models, sometimes achieving more than double their original accuracy when using only a fraction of the available data. This breakthrough could greatly facilitate the identification and classification of molecules with desirable properties for drug development and other material applications.

    Reker and his team set out to investigate whether active machine learning could address the longstanding issue mentioned earlier.

    An Interactive Approach

    In active machine learning, the algorithm can ask questions or request more information when it is confused or detects a gap in the data, which makes it far more data-efficient. While active learning algorithms are usually used to generate new data, the team wanted to explore their application to existing datasets in molecular biology and drug development.
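    A common realization of this idea is uncertainty sampling, sketched below on synthetic data (an illustration of the general technique, not the Duke group’s specific algorithm): the model starts from a tiny labeled subsample and repeatedly “asks” for the pool example it is least sure about.

    ```python
    # Active subsampling via uncertainty sampling: query the most ambiguous
    # point from an already-collected pool instead of sampling at random.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, random_state=0)
    labeled = list(range(10))                      # small initial subsample
    pool = list(range(10, len(X)))                 # the rest of the dataset

    model = LogisticRegression(max_iter=1000)
    for _ in range(40):                            # 40 queries ~ 5% of the data
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[pool])[:, 1]
        query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # least confident point
        labeled.append(query)                      # "ask" for its existing label
        pool.remove(query)

    print("final model trained on", len(labeled), "of", len(X), "examples")
    ```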

    To assess the effectiveness of their active subsampling approach, the team compiled datasets containing molecules with various characteristics, such as those crossing the blood-brain barrier, inhibiting a protein linked to Alzheimer’s disease, and compounds inhibiting HIV replication. They compared their active-learning algorithm with models that learned from the complete dataset and 16 state-of-the-art subsampling strategies.

    The results showed that active subsampling outperformed each of the standard subsampling strategies in identifying and predicting molecular characteristics. Moreover, it was up to 139 percent more effective than the algorithm trained on the full dataset in some cases. The model also demonstrated its ability to adapt to mistakes in the data, proving especially valuable for low-quality datasets.

    Surprising Discoveries

    Interestingly, the team found that the ideal amount of data needed was much lower than expected, sometimes requiring only 10% of the available data. The active-subsampling model reached a point where additional data became detrimental to performance, even within the subsample.

    While the team intends to explore this inflection point further in future research, they also plan to utilize this new approach to identify potential therapeutic target molecules. They believe their work will enhance understanding of active machine learning and its resilience to data errors in various research fields.

    Besides boosting machine learning performance, this approach can reduce data storage needs and costs since it works with a more refined dataset, making machine learning more accessible, reproducible, and powerful for all researchers.


    Read the original article on Tech Xplore.

    Read more: Neuralink, Mind Control or Advanced Technology.

  • A Look at Data Mining and Machine Learning


    Python is a popular programming language used for data mining, among many other applications. Data mining involves the extraction of useful patterns and insights from large datasets, and Python provides a wide range of tools and libraries that make it well suited for this task. Popular Python libraries used for data mining include NumPy, pandas, scikit-learn, TensorFlow, and PyTorch, which provide functions and tools for data analysis, machine learning, and deep learning, all important components of data mining. So while Python is not exclusively a data mining language, it is a very capable and widely used language for this purpose. Credit: Pexels and ChatGPT

    Data mining is the process of discovering patterns, trends, and insights in large datasets. It applies statistical and machine learning techniques to extract knowledge from data and to solve problems across various industries.

    The steps of the data mining process

    The process of data mining typically involves the following steps; a minimal end-to-end sketch in Python follows the list:

    Data collection: Data is gathered from different sources, such as databases, websites, and sensors.

    Data preprocessing: This step involves cleaning and transforming the data to ensure that it is suitable for analysis. This may involve removing outliers, filling in missing values, and normalizing the data.

    Data exploration: This step involves exploring the data to identify patterns, trends, and relationships between variables. This may involve visualizations, such as scatter plots and histograms, or statistical tests to identify correlations and associations.

    Model building: This step involves building models using machine learning algorithms to predict outcomes or identify patterns in the data. This may involve techniques such as clustering, classification, and regression.

    Model evaluation: This step involves evaluating the performance of the models to ensure that they are accurate and reliable. This may involve cross-validation, hypothesis testing, and other techniques.

    Model deployment: This step involves deploying the models to make predictions or provide insights to stakeholders.
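    Putting the steps together (a minimal sketch using scikit-learn, one of the libraries named above, with a bundled dataset standing in for a real data source):

    ```python
    # End-to-end mini data-mining workflow with scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # 1. Collection: a bundled dataset stands in for databases or sensors.
    data = load_breast_cancer(as_frame=True)
    X, y = data.data, data.target

    # 2-3. Preprocessing and exploration: scaling is folded into the pipeline
    # below; summary statistics give a first look at the variables.
    print(X.describe().T[["mean", "std"]].head())

    # 4. Model building: scaler + classifier in one pipeline.
    model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

    # 5. Evaluation: 5-fold cross-validation.
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))

    # 6. Deployment: fit on all data, then serve predictions to stakeholders.
    model.fit(X, y)
    print(model.predict(X.iloc[:3]))
    ```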

    Data mining can have many applications, including fraud detection, customer segmentation, market basket analysis, and predictive maintenance. It can help businesses make more informed decisions, identify new opportunities, and improve their operations.

    Machine Learning

    Machine learning is a branch of AI that develops algorithms and models enabling computers to learn from data and to make predictions or decisions based on experience. Its aim is to create systems that automatically improve their performance over time.

    There are three main types of machine learning; a small code illustration of the first two follows the list:

    Supervised learning: This involves training a model on a labeled dataset, where each data point is associated with a target variable. The goal of supervised learning is to learn a mapping between input features and the target variable, so that the model can make accurate predictions on new, unseen data.

    Unsupervised learning: This involves training a model on an unlabeled dataset, where the goal is to identify patterns or structure in the data. Unsupervised learning can be used for tasks such as clustering, anomaly detection, and dimensionality reduction.

    Reinforcement learning: This involves training a model to make decisions based on feedback from the environment. The model learns by receiving rewards or punishments for its actions, and the goal is to learn a policy that maximizes the cumulative reward over time.
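    The contrast between the first two paradigms in code (a small sketch on synthetic data; reinforcement learning is omitted because it requires an interactive environment):

    ```python
    # Supervised vs. unsupervised learning on the same synthetic data.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression

    X, y = make_blobs(n_samples=300, centers=3, random_state=0)

    # Supervised: labels y are given; learn the mapping from inputs to labels.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised accuracy:", clf.score(X, y))

    # Unsupervised: same X, no labels; the model finds the structure itself.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("clusters found:", sorted(set(clusters)))
    ```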

    Machine learning algorithms can be applied to a wide array of applications, including image and speech recognition, natural language processing, recommendation systems, and autonomous vehicles. Some of the most commonly used machine learning algorithms include linear regression, logistic regression, decision trees, random forests, support vector machines, and neural networks.

    To apply machine learning, a typical workflow might include data collection, preprocessing, feature engineering, model selection and training, and evaluation. Machine learning requires a combination of statistical and programming skills, as well as a deep understanding of the problem domain and the data.
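    The model-selection step in that workflow is often automated with cross-validated grid search (a short sketch; the dataset and parameter grid here are arbitrary choices for illustration):

    ```python
    # Model selection: grid search with cross-validation instead of hand-tuning.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
    search = GridSearchCV(SVC(), grid, cv=5).fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))
    ```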


    Read more: The Amazing Of Data Analysis.

  • Uncovering the Secrets of the Big Bang With Machine Learning


    A quark gluon plasma after the collision of two heavy nuclei. Credit: TU Wien

    Can machine learning be used to reveal the secrets of the quark-gluon plasma?

    Yes, it can, but only with advanced new methods.

    It could hardly be more complicated: tiny particles whir around wildly at extremely high energy, and countless interactions occur in a dense tangle of quantum particles. This leads to a state of matter called quark-gluon plasma. Immediately after the Big Bang, the entire universe was in this state. Today, it is recreated in high-energy collisions of atomic nuclei, for example at CERN.

    Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Using artificial intelligence or machine learning for this purpose therefore seems an obvious idea. Ordinary machine-learning algorithms, however, are not suited to the task: the mathematical properties of particle physics demand a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be used successfully for these difficult tasks in particle physics.

    Neural networks

    “Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time,” says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. “Even the largest supercomputers in the world are overwhelmed by this.” It would therefore be desirable not to calculate every detail precisely but to identify and predict specific properties of the plasma using artificial intelligence.

    Neural networks are therefore used, similar to those employed for image recognition: artificial “neurons” are linked together on the computer much like neurons in the brain, producing a network that can recognize, for example, whether or not a cat appears in a given image.

    However, there is a significant issue in applying this technique to the quark-gluon plasma. The quantum fields used to mathematically describe the particles and the forces between them can be represented in several different ways. “These are known as gauge symmetries,” says Ipp. “The basic principle behind this is familiar: if I calibrate a measuring device differently, for example by using the Kelvin scale instead of the Celsius scale on my thermometer, I get completely different numbers even though I am describing the same physical state. It is similar with quantum theories, except that there the permitted changes are mathematically far more complex.” Mathematical objects that look completely different at first glance may describe the very same physical state.

    Gauge symmetries built into the structure of the network

    “If you do not take these gauge symmetries into account, you cannot meaningfully interpret the results of the computer simulations,” says Dr. David I. Müller. “Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. It is better to start by designing the structure of the neural network so that the gauge symmetry is automatically taken into account. This ensures that different representations of the same physical state also produce the same signals in the neural network,” says Müller. “That is exactly what we have now succeeded in doing: we have developed completely new network layers that automatically take gauge invariance into account.” In initial test applications, these networks indeed proved much better at learning how to handle the simulation data of the quark-gluon plasma.
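    The principle can be illustrated with a deliberately simplified toy (U(1) phases on a ring, our own analogy, not the lattice layers developed at TU Wien): if a layer consumes only gauge-invariant combinations of the link variables, then two different representations of the same physical state produce identical activations.

    ```python
    # Gauge-invariance toy: a "layer" built from the invariant loop product
    # gives the same output for any gauge representation of the same state.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    links = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # U(1) link variables

    def gauge_transform(U, g):
        # U_i -> g_i * U_i * conj(g_{i+1}): different description, same physics.
        return g * U * np.conj(np.roll(g, -1))

    def invariant_layer(U):
        # Only the loop product (a gauge-invariant quantity) feeds the layer.
        loop = np.prod(U)
        return np.tanh(np.array([loop.real, loop.imag]))

    g = np.exp(1j * rng.uniform(0, 2 * np.pi, n))       # random gauge change
    print(np.allclose(invariant_layer(links),
                      invariant_layer(gauge_transform(links, g))))  # True
    ```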

    “With such neural networks, it becomes possible to make predictions about the system, for example to estimate what the quark-gluon plasma will look like at a later point in time without having to calculate every intermediate time step in detail,” says Andreas Ipp. “At the same time, it is guaranteed that the network only produces results that do not contradict gauge symmetry, in other words, results that at least make sense in principle.”

    It will be a long time before collisions of atomic nuclei at CERN can be fully simulated with such methods. But the new type of neural network provides a promising and completely new tool for describing physical phenomena for which all other computational methods may never be powerful enough.


    Read the original article on Scitech Daily.

    Read more: MIT Magnet Allows Path to Commercial Fusion Power