High-performing content moderation is in demand because user-generated content (UGC) is rife with plagiarism, scams and more. Content moderation helps detect irrelevant, illegal, obscene, harmful or insulting UGC.
OpenAI, the creator of ChatGPT, is working on a new-age content moderator that could set milestones. The company is developing its GPT-4 large language model (LLM) to automate content moderation across digital platforms, especially social media.
The ChatGPT creator is exploring how it can use GPT-4’s ability to detect nuances and interpret rules in long content policy documents. OpenAI is not only keen to do this but is also testing the model’s capability to adapt immediately to policy updates.
OpenAI said, “We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators.”
ChatGPT’s Custom Moderation vs Manual Content Moderation
According to OpenAI, its GPT-4 large language model can create custom content policies in a matter of hours, far faster than the current manual, time-consuming process of content moderation.
To achieve this goal, data scientists and engineers can refer to a policy guideline developed by experts and use datasets containing real-world instances of policy breaches to label the data.
Humans will assist in testing AI content moderation
Data scientists and engineers may need to repeat these steps before the large language model produces satisfactory results. This iterative procedure improves the content policies, which are then converted into classifiers so that policy and content moderation can be deployed at a larger scale.

The company said, “Then, GPT-4 reads the policy and assigns labels to the same dataset, without seeing the answers. By examining the discrepancies between GPT-4’s judgments and those of a human, the policy experts can ask GPT-4 to come up with reasoning behind its labels, analyze the ambiguity in policy definitions, resolve confusion and provide further clarification in the policy accordingly.”
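The label-and-compare loop OpenAI describes can be sketched in plain Python. This is only an illustrative sketch, not OpenAI's implementation: `model_label` is a hypothetical stand-in for a real GPT-4 call, stubbed with a keyword check so the example runs offline, and `find_discrepancies` shows how model labels would be checked against expert labels on a golden dataset.

```python
# Illustrative sketch of the policy-refinement loop described above.
# model_label is a hypothetical stand-in for a GPT-4 call: a real
# version would send the policy text and the content item to the model
# and parse the label it returns. Here it is stubbed for illustration.

def model_label(policy: str, item: str) -> str:
    return "violation" if "scam" in item.lower() else "allowed"

def find_discrepancies(policy, dataset):
    """Compare model labels with expert labels on a golden dataset.

    dataset: list of (content, expert_label) pairs.
    Returns the items where the model and the experts disagree;
    policy experts would review these, ask the model for its
    reasoning, and clarify ambiguous policy wording accordingly.
    """
    disagreements = []
    for content, expert_label in dataset:
        predicted = model_label(policy, content)
        if predicted != expert_label:
            disagreements.append((content, expert_label, predicted))
    return disagreements

golden = [
    ("Buy now, guaranteed scam-free returns!", "violation"),
    ("Photos from my weekend hike", "allowed"),
    ("Limited-time investment, 500% returns", "violation"),
]
report = find_discrepancies("No scams or fraud.", golden)
# The third item slips past the crude stub, so it appears in the
# report; in the real workflow such gaps drive the next policy revision.
```

Each pass through this loop tightens the policy text, and once the disagreement rate is acceptable the labeled data can train the production classifier mentioned above.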