In the race for generative AI, this has become a pressing question for the tech industry. The emergence of ChatGPT, GPT-4, Google Bard, and other AI services has made it possible to produce convincing, useful written content at scale. But like any technology, it can be put to both good and bad uses: the same models that make writing software code faster and easier can also generate inaccuracies and falsehoods. Reliable methods for distinguishing AI-generated text from human writing are therefore essential.

OpenAI, the creator of ChatGPT and GPT-4, recognized this need early on. In January it introduced a "classifier that distinguishes between human-written and AI-written text from various providers." The company acknowledged that reliably detecting all AI-generated text is difficult, but argued that effective classifiers could help address several problematic scenarios: AI-generated text being passed off as human-written, automated disinformation campaigns, and academic cheating with AI tools.

Less than seven months later, the project was discontinued. "As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy," OpenAI wrote in a recent blog post. "We are actively gathering feedback and exploring more effective techniques for determining text provenance."

If OpenAI struggles to recognize AI writing, how can anyone else? Various players, including the startup GPTZero, are working toward a solution.
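Detectors such as GPTZero lean heavily on statistical cues, most famously perplexity: how predictable a passage looks to a language model, with machine-generated text tending to score lower than human writing. Below is a minimal sketch of that heuristic, not OpenAI's discontinued classifier, assuming the Hugging Face transformers library, the small open GPT-2 model, and an arbitrary threshold chosen purely for illustration.

```python
# Rough perplexity-based detection heuristic (illustrative only, not OpenAI's method).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the token sequence
    return torch.exp(loss).item()

sample = "Artificial intelligence is transforming the way we live and work."
score = perplexity(sample)
# The threshold of 30 is a placeholder; real tools calibrate on labeled examples.
print(f"perplexity = {score:.1f} -> {'possibly AI-generated' if score < 30 else 'more likely human'}")
```

In practice this signal is weak on its own, which is part of why dedicated detectors produce false positives and why OpenAI's own attempt fell short.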
Nevertheless, OpenAI, backed by Microsoft, is considered a leading authority in the field. Once the line between AI- and human-generated text blurs, the problems surrounding online information become even more serious. Spam websites are already using AI models to churn out automated content, often laced with false information. Bloomberg reported cases in which such sites earned ad revenue while spreading lies such as "Biden dead. Harris acting President, address 9 a.m." It is a reminder of the dangers of unchecked information flows.

For the AI industry, though, there is a deeper concern. If tech companies unintentionally train new models on AI-generated data, some researchers fear those models will deteriorate, an outcome they call AI "Model Collapse." A group of researchers from universities including Oxford, Cambridge, and Toronto examined what happens when text produced by AI models like GPT-4 becomes the primary training dataset for the models that follow. "We found that the utilization of model-generated content in training leads to irreversible defects in the resulting models," their recent paper concludes. Ilia Shumailov, one of the researchers, put it plainly on Twitter: "it needs to be taken seriously if we want to maintain the benefits of training from large-scale data scraped from the web." The researchers also stress that data reflecting genuine human interaction will become increasingly valuable as a counterweight to content generated by large language models (LLMs) and scraped from the Internet.

Solving this looming crisis therefore depends on being able to tell human-authored from machine-authored content online. I emailed OpenAI to ask about its unsuccessful AI text classifier and what its failure means, particularly with regard to Model Collapse. A spokesperson replied only, "We have nothing additional to share beyond the update provided in our blog post." Curious whether the spokesperson was human, I wrote back to check, and they cheerfully responded, "Hahaha, yes, I am indeed human. Thank you for checking!"
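To see why recursive training degrades a model, consider a toy illustration (my own sketch, not the researchers' code): fit a simple Gaussian "model" to some data, sample a new "training set" from that fit, refit, and repeat. Because each generation sees only a finite sample of the previous one, the rare tail values gradually disappear and the distribution narrows, which is the same qualitative failure the paper describes for language models trained on their own output. The sample size and number of generations below are arbitrary choices.

```python
# Toy model-collapse simulation: each generation is "trained" only on data
# sampled from the previous generation's fitted distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 50                    # small finite "dataset" available at each generation
mu, sigma = 0.0, 1.0      # generation 0: the original human-produced data

for gen in range(201):
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    data = rng.normal(mu, sigma, n)        # "publish" content from the current model
    mu, sigma = data.mean(), data.std()    # fit the next model only on that content
```

Run it and the estimated spread tends to shrink generation after generation, even though nothing in the setup is adversarial. The effect compounds quietly, which is why the researchers argue that knowing whether a given piece of text came from a human matters so much.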