AI Fosters Online Civility in Divisive Discussions.

Researchers have used AI to moderate polarizing online conversations. Credit: Pixabay

Researchers have developed an AI-based method to improve the quality and civility of online conversations about contentious subjects. They argue that, used well, AI could help create a more empathetic and safer online environment.

Online discussions have become a pivotal aspect of public discourse. However, comment sections on social media platforms and digital news outlets are teeming with conversations that have descended into disputes, threats, and derogatory language, especially when focused on divisive topics.

Researchers at Brigham Young University (BYU) and Duke University have introduced AI technology designed to moderate online discussions, with the aim of enhancing their quality and encouraging more civil interactions.

Evaluating System Performance through a Field Experiment

To test the system’s effectiveness, the researchers ran a field experiment with 1,574 participants. Each was asked to hold an online discussion about gun regulation in the United States, a topic that reliably sparks heated political debate, and was paired with someone holding the opposing view on gun policy.

In this experiment, the conversation pairs were randomly divided into a treatment group and a control group. Before sending a message, participants in the treatment group could receive three suggested rephrasings of it from GPT-3. They could then send one of the AI-generated alternatives, send their original message, or edit it themselves.

The AI-suggested rephrasings didn’t alter the content of a message but gave the user options for phrasing it more politely. Credit: Vin Howe/BYU
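The study’s exact prompts and pipeline aren’t published in the article, but a minimal sketch of how such a rephrasing step could be wired up with a modern LLM API might look like the following. The model name, prompt wording, and function are illustrative assumptions, not the researchers’ implementation:

```python
# Minimal sketch: request three candidate rephrasings of a draft message.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
# Model choice and prompt are stand-ins; the study used GPT-3, not this exact setup.
from openai import OpenAI

client = OpenAI()

def suggest_rephrasings(draft: str, n: int = 3) -> list[str]:
    """Return n alternative phrasings that keep the draft's point but soften its tone."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative stand-in for GPT-3
        n=n,                     # request n independent completions
        temperature=0.9,         # encourage varied candidates
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's message so it is more polite and "
                    "validating without changing its substantive point."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return [choice.message.content for choice in response.choices]

if __name__ == "__main__":
    draft = "You're completely wrong about gun laws and you know it."
    for i, option in enumerate(suggest_rephrasings(draft), 1):
        print(f"Option {i}: {option}")
    # As in the experiment, the sender would then pick one of the options,
    # keep the original message, or edit it by hand before sending.
```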

On average, each conversation involved 12 messages, and the AI offered a total of 2,742 suggested rephrasings. Participants accepted these suggestions roughly two-thirds (66%) of the time. Chat partners of individuals who accepted one or more AI-recommended rephrasings reported significantly better conversation quality and showed greater willingness to consider the viewpoints of their political opponents.

“We observed that the more frequently participants used the suggested rephrasings, the more they felt the conversation was not divisive and that they felt acknowledged and understood,” explained David Wingate, one of the study’s co-authors.

AI as a Scalable Solution for Combating Online Toxicity

The researchers contend that their findings indicate a scalable solution to combat the toxic culture prevalent on the internet. They argue that implementing AI interventions would be more practical than traditional approaches like professional training sessions on online civility, which are often limited in reach and availability. AI, in contrast, can be widely deployed across various digital platforms.

Ultimately, the research underscores that when used appropriately, AI can play a pivotal role in cultivating a more positive online environment, fostering discussions that prioritize empathy and respect.

To conclude, Wingate expressed his hope for more BYU students to develop pro-social applications like this, positioning BYU as a leader in demonstrating ethical applications of machine learning. He stated, “In a world driven by information, we need students who can harness this information in ways that are constructive and socially beneficial.”


Read the original article on: New Atlas

