OpenAI’s GPT-4 Shakes Up Content Moderation Landscape: Propels Meta And X To Embrace AI Solutions

Illustration: GPT-4, 22 July 2023, Suqian, Jiangsu Province, China. OpenAI unveiled that it has been using GPT-4 to oversee content moderation and extended the suggestion that other platforms can also adopt this cutting-edge approach. CFOTO/FUTURE PUBLISHING VIA GETTY IMAGES.


By Ananya Gairola

ChatGPT-parent OpenAI has been using GPT-4, its latest publicly available large language model, to moderate content, and has suggested that social media platforms like Meta Platforms Inc.’s (NASDAQ:META) Facebook and Instagram and Elon Musk’s X, formerly known as Twitter, explore new avenues of maintaining online integrity.

What Happened: On Tuesday, OpenAI unveiled that it has been using GPT-4 to oversee content moderation and suggested that other platforms can also adopt this cutting-edge approach by using the startup’s API.

In a blog post, the company explained that this responsibility has traditionally rested with human moderators, who sift through large volumes of content to identify and remove harmful material, a process that is not only time-consuming but also psychologically taxing for these individuals.


“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of many human moderators,” said the company. 
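OpenAI’s post frames the approach as prompting GPT-4 with a platform’s written policy and asking it to label individual pieces of content. Below is a minimal sketch of what such a call might look like through OpenAI’s API; the policy text, label set, and moderate helper are illustrative assumptions, not OpenAI’s actual implementation.

```python
# Sketch of policy-prompt moderation with GPT-4 via the OpenAI API.
# The POLICY string and the ALLOW/REVIEW/REMOVE labels are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label the content as one of: ALLOW, REVIEW, REMOVE.
REMOVE: credible threats, hate speech, or sexual content involving minors.
REVIEW: borderline harassment or coded language that needs human judgment.
ALLOW: everything else."""

def moderate(content: str) -> str:
    """Ask GPT-4 to apply the platform policy to one piece of content."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Content to moderate:\n{content}"},
        ],
        temperature=0,  # keep labels as consistent as possible across runs
    )
    return response.choices[0].message.content.strip()

print(moderate("Example post text goes here."))
```

In this setup, changing the policy text is all it takes to retarget the model to a different platform’s rules, which is the flexibility OpenAI highlights in its post.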

Casey Newton of Platformer highlighted insights from Yoel Roth, Twitter’s former head of trust and safety, who raised crucial questions about LLM-based moderation and ChatGPT integration:

  • How aligned are GPT-4’s conclusions with specific policies beyond vague terms like “hate”?
  • Are LLM decisions consistent, especially for Digital Services Act compliance?
  • Can ChatGPT handle coded speech and diverse languages/cultures?

“While effective, GPT-4’s content moderation incurs higher costs than alternative tools. However, it could reduce the necessity of humans to manually review content that can be termed as disturbing and lead to PTSD or worse,” Newton added.

Why It’s Important: Earlier this year, researchers reportedly discovered that Instagram plays an active role in facilitating connections between pedophiles and content sellers.


The research team from Stanford University also revealed that Instagram had over three times the number of accounts dedicated to selling child sex abuse material compared to Twitter, now rebranded as X. 

This is despite the fact that the Mark Zuckerberg-led company has been actively working to develop advanced AI systems capable of detecting and categorizing such harmful content. 

On the other hand, Musk’s X Community Notes feature, aimed at promoting transparency and collaborative context-sharing to combat potentially misleading information, has received mixed reactions.

While Zuckerberg has applauded the feature, Community Notes has also faced heavy criticism, as the notes users add to tweets can themselves be misleading.

© 2023 Zenger News.com. Zenger News does not provide investment advice. All rights reserved.

Produced in association with Benzinga

Edited by Judy J. Rotich and Newsdesk Manager