Bad Actor AI Expected to Daily Threaten Democracies by Mid-2024

A study is predicting that malicious AI activity will be daily by mid-2024
Depositphotos

A recent study forecasts that the deliberate use of AI by ‘bad actors’ to propagate online harm, particularly through disinformation, will occur on a daily basis by mid-2024. The results raise concerns, especially with over 50 countries, including the US, scheduled to hold national elections this year, with potential global consequences based on the outcomes.

Prior to the launch of the latest Generative Pretrained Transformer (GPT) systems, AI experts predicted that by 2026, 90% of online content would be computer-generated without human involvement, fueling the proliferation of misinformation and disinformation.

Major social media platforms, given their large user bases, are rightly assumed to require regulation to mitigate risks, an assumption that has driven legislative efforts such as the EU’s Digital Services Act and AI Act. However, smaller ‘bad actors,’ including individuals, groups, and countries that intentionally engage in harmful behavior, also exploit AI.

A recent study, led by researchers at George Washington University (GW), is the first quantitative scientific analysis of how these bad actors may misuse AI and GPT systems to propagate harm worldwide across social media platforms, and it explores potential solutions.

“Despite widespread discussions on the risks of AI, our study is the first to provide scientific backing to these concerns,” said Neil Johnson, the lead author of the study. “Understanding the battlefield is crucial in winning any battle.”

Mapping the Global Online Landscape

The researchers initiated their work by charting the dynamic network of interconnected social media communities that constitute the global online population. Users, ranging from a few to several million per community, join these communities based on shared interests, including potentially harmful ones. The study focused on extreme ‘anti-X’ communities, defined as those featuring hate speech, extreme nationalism, and/or racism in at least two of their 20 most recent posts. Examples of such anti-X communities include those against the US, women, or abortion, or with anti-Semitic sentiments. Over time, links between these communities form clusters within and across various social media platforms.

“Any community A can establish a hyperlink to any community B if B’s content is of interest to A’s members,” explained the researchers. “A may hold either agreement or disagreement with B. This hyperlink directs A’s members’ attention to B, enabling them to add comments without B’s members being aware of the link. As a result, community B’s members are exposed to and potentially influenced by community A’s members.”
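To make the mapping concrete, here is a minimal sketch of how such a community network could be represented as a directed graph. The community names, member counts, links, and the use of the `networkx` library are illustrative assumptions for this article, not the study’s actual dataset or code.

```python
# Illustrative sketch: communities as nodes, hyperlinks as directed edges.
# All community names, sizes, and links below are hypothetical examples.
import networkx as nx

G = nx.DiGraph()

# Each node is a community with an estimated member count and a flag for
# whether it meets the study-style 'anti-X' criterion (hate speech, extreme
# nationalism, and/or racism in >= 2 of its last 20 posts).
G.add_node("community_A", members=120_000, anti_x=True)
G.add_node("community_B", members=2_500_000, anti_x=False)

# A directed edge A -> B means community A linked to community B,
# exposing B's members to A's content without B's awareness.
G.add_edge("community_A", "community_B")

# Rough size of the vulnerable ecosystem: members of anti-X communities
# plus members of the mainstream communities they link to.
exposed = set()
for src, dst in G.edges():
    if G.nodes[src]["anti_x"]:
        exposed.update([src, dst])
total_members = sum(G.nodes[c]["members"] for c in exposed)
print(f"Estimated users in the vulnerable ecosystem: {total_members:,}")
```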

Analyzing Malicious AI Behavior

Utilizing a mathematical model, the researchers determined when and why malicious actors are most likely to deploy AI. Specifically, they found that the most basic GPT systems, such as GPT-2, suffice and are likely more attractive to malicious actors than advanced versions such as GPT-3 or GPT-4. This is because GPT-2 can effortlessly replicate the human style and content found in extreme online communities, allowing bad actors to generate more provocative content by subtly altering the structure of an online query without changing its meaning. In contrast, GPT-3 and GPT-4 include a filter that restricts responses to potentially contentious prompts, preventing the generation of such output.
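The filtering contrast can be illustrated with a toy gate that screens prompts before generation. The keyword list and the `is_contentious` and `generate` helpers below are hypothetical simplifications; production systems such as GPT-3 and GPT-4 rely on far more sophisticated learned moderation models.

```python
# Toy sketch of the kind of prompt filter the study credits with making
# newer GPT systems less useful to bad actors. Everything here is a
# simplified, hypothetical stand-in for real learned moderation models.

CONTENTIOUS_MARKERS = {"hate", "violence", "extremist"}  # hypothetical list

def is_contentious(prompt: str) -> bool:
    """Crude keyword check standing in for a learned moderation classifier."""
    words = set(prompt.lower().split())
    return bool(words & CONTENTIOUS_MARKERS)

def generate(prompt: str) -> str:
    """Stub for a text-generation call; a filtered system refuses first."""
    if is_contentious(prompt):
        return "[refused: prompt flagged by moderation filter]"
    return f"[generated text for: {prompt!r}]"

# An unfiltered system (GPT-2-style) would skip the check entirely,
# which is why the study argues basic models appeal more to bad actors.
print(generate("write a post about gardening"))
print(generate("write an extremist rant"))
```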

Bad-actor communities plus the vulnerable mainstream communities they connect to amount to more than one billion users
Depositphotos

Researchers warn that the online environment conducive to bad-actor-AI activity involves both directly engaged communities and the mainstream ones they connect with, creating a vulnerable online ecosystem of more than one billion individuals. Real-world instances of non-AI-generated hate and extremism, linked to events such as COVID-19, the Russia-Ukraine conflict, and the Israel-Hamas war, illustrate their concerns.

The forecast is that bad-actor-AI activity will become a daily occurrence by mid-2024. The researchers based this prediction on proxy data from historical incidents involving the manipulation of online electronic systems, such as the 2008 automated algorithm attacks on US financial markets and the 2013 Chinese cyber-attacks on US infrastructure. Analyzing these datasets, they extrapolated the frequency of attacks, considering current technological advancements in AI.
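As a rough illustration of this kind of extrapolation, the sketch below fits an exponential trend to a series of yearly incident counts and solves for the year the implied rate crosses one event per day. The incident numbers are invented for illustration and are not the study’s proxy data or model.

```python
# Illustrative extrapolation: fit an exponential growth curve to yearly
# incident counts and find when the rate reaches ~365 events/year (daily).
# The counts below are invented placeholders, not the study's proxy data.
import numpy as np

years = np.array([2008, 2013, 2016, 2019, 2022])
incidents = np.array([2, 5, 14, 40, 110])  # hypothetical yearly counts

# Exponential growth is linear in log-space: log(y) = a * year + b.
a, b = np.polyfit(years, np.log(incidents), deg=1)

# Solve a * year + b = log(365) for the year the rate becomes daily.
year_daily = (np.log(365) - b) / a
print(f"Fitted annual growth factor: {np.exp(a):.2f}x")
print(f"Rate reaches one incident/day around: {year_daily:.1f}")
```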

Election Year Risks

Given that 2024 is a significant election year, the researchers highlight the global impact of elections in over 50 countries, including the US. They emphasize the potential for bad actors to exploit AI to disseminate and amplify disinformation during these elections, posing substantial risks to human rights, economies, international relations, and world peace.

In response to this threat, the researchers recommend that social media companies adopt strategies to contain disinformation rather than attempting to remove every piece of content generated by bad actors.

Given the dynamic nature of the AI landscape, the researchers attach a caveat to their study’s conclusions. Even so, the study underscores the substantial challenges posed by bad actors utilizing AI.

“While the rapid pace of technology and the evolving online landscape make it challenging to predict the exact course of future bad-actor-AI, the predictions in this article are, strictly speaking, speculative,” noted the researchers. “Nevertheless, they are both quantitative and testable, as well as generalizable, serving as a tangible foundation for advancing discussions on policy measures related to bad-actor-AI.”


Read the original article on: New Atlas

Read more: Using Generative AI to Uncover Human Memory and Imagination
