The tool could help teachers spot plagiarism or help social media platforms fight disinformation bots.
By Melissa Heikkilä
January 27, 2023

Hidden patterns buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we’re reading are written by a human or not. These “watermarks” are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.
For example, since OpenAI’s chatbot ChatGPT launched in November, students have already started using it to cheat by having it write essays for them. The news website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. But there is a promising way to spot AI text: embedding hidden, identifiable patterns into these systems before they’re released.

In studies, such watermarks have already been shown to identify AI-generated text with near certainty. One, developed by a team at the University of Maryland, was able to spot text created by Meta’s open-source language model OPT-6.7B using a detection algorithm the researchers built. The work is described in a paper that has yet to be peer reviewed, and the code will be available for free around February 15.
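Roughly speaking, watermarks of this kind bias the model toward a pseudorandom “green” subset of the vocabulary at each step of generation, so watermarked text ends up containing far more green tokens than chance would predict, and a detector only has to count them. The following Python sketch illustrates that kind of statistical check; the hash-based green-list rule, the GAMMA fraction, and the whitespace tokenization are assumptions made for the example, not the researchers’ actual code.

import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to a green list seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the chance baseline GAMMA."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GAMMA * n
    return (greens - expected) / math.sqrt(n * GAMMA * (1 - GAMMA))

if __name__ == "__main__":
    # Plain human-written text should score near zero; text generated to favor
    # green tokens would produce a large positive z-score.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z = {watermark_z_score(sample):.2f}")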
Continue reading: A watermark for chatbots can spot text written by an AI | MIT Technology Review