July 29, 2023, 4 a.m.

Over the past few months, I've been studying AI glossaries to familiarize myself with the vocabulary of generative AI, a fascinating field. Although I've gone deep enough into the topic that I probably know more about AI than the average American, I assumed that the fundamental concepts of generative AI were widely understood. A recent conversation with a journalism professor proved that assumption wrong. When I mentioned how AI tools sometimes tend to "hallucinate," they stared at me blankly and asked for clarification.

"Hallucinate" is a term everyone should know when discussing genAI. In simple terms, it refers to the tendency of AI engines, such as OpenAI's ChatGPT, to generate content that sounds true but is not grounded in reality. The US Federal Trade Commission recently opened an investigation into OpenAI over concerns that its chatbot may have made false statements about individuals. The term entered the AI lexicon in 2018, when Google DeepMind researchers used it to describe how neural machine translation (NMT) systems were prone to producing highly aberrant translations that deviated significantly from the source material, translations completely untethered from reality. Even without an engineering background, it's apparent that such systems exhibited serious flaws.

Most glossaries offer a more straightforward definition of AI "hallucination." NBC News defines it as unexpected, incorrect responses from AI programs that occur for reasons not yet fully understood. Hallucinations can take many forms: an AI model providing fruit salad recipes instead of information about planting fruit trees, generating false scholarly citations, fabricating data analysis, or inventing facts about events not covered in its training data.
The underlying causes of this phenomenon are not fully understood, but they can be attributed to sparse data, information gaps, and misclassification.

In fairness, I asked ChatGPT itself what "hallucinate" means in the AI context. Here is its response:

In the context of AI, "hallucinate" is a metaphorical term used to describe the generation of content or information that is not based on direct observation or real-world data. When an AI system "hallucinates," it produces outputs that may mimic patterns or features found in the training data but are not grounded in reality. The term "hallucinate" is not exclusive to AI; it is borrowed from psychology, where hallucination refers to sensory experiences or perceptions that occur without any external stimulus. Those perceptions are not based on reality but are perceived as real by the people experiencing them. In the AI context, the term is used metaphorically to describe situations where generated outputs deviate from reality or introduce elements absent from the original data. In image generation, for instance, an AI system may produce images with surreal or fantastical elements unrelated to objects or scenes observed in the training data. The term "hallucinate" highlights the potential for AI systems to create outputs that go beyond direct observation or explicit instruction. It reflects AI algorithms' capacity to extrapolate, combine, or invent new patterns, resulting in unexpected or imaginative results.

This lengthy, convoluted explanation from a conversational AI could be read as its way of telling you that it makes things up. In my opinion, it borders on defensiveness.

Now let's explore other noteworthy developments in AI. Both the Pew Research Center and McKinsey released reports this week on how AI could affect jobs and workers. While many questions remain unanswered, the Pew study finds that US workers appear more hopeful than concerned about AI's impact on their jobs. The study aimed to identify the industries and workers most exposed to AI, characterizing high-exposure jobs as those where AI can fully perform, or assist with, their most important activities. Information technology workers, for example, are more likely to believe AI will help rather than harm them personally. The risk of job losses remains uncertain because AI can either replace or complement human work, a decision ultimately made by human managers.

Bing, Microsoft's search engine, offers an AI-enhanced version powered by OpenAI's technology. Microsoft CEO Satya Nadella highlighted the collaboration, noting that Bing users have engaged in numerous chats and created a substantial volume of images through Bing Image Creator. Google CEO Sundar Pichai emphasized how AI is transforming Google Search, pointing to its positive impact on user queries and its ability to answer new kinds of questions about a topic.

Meanwhile, Stanford University researchers found generative AI detectors to be ineffective at distinguishing AI-generated from human-written content. OpenAI's intention to develop a detection tool was encouraging, but the company recently discontinued its AI Classifier because of its low accuracy rate. OpenAI says it is researching more effective techniques for determining the provenance of text and remains committed to features that let users identify AI-generated audio and visual content.
Senate Majority Leader Chuck Schumer continues to hold sessions to inform the Senate about AI's opportunities and risks. There is bipartisan interest in AI legislation that encourages innovation while building in safeguards against potential harms; concerns were raised about AI's possible use in biological attacks.

In the entertainment industry, even as actors and writers strike, some companies are posting job openings for AI specialists. Studios' interest in AI spans content creation, customer service, and data analysis. The demand for this expertise suggests AI will undoubtedly affect jobs, but decisions about compensation and rights will come down to human choices. Actor Joseph Gordon-Levitt emphasized in an op-ed the importance of acknowledging and compensating the creators whose work AI is trained on.

It's worth noting that CNET is using an AI engine to assist in story creation. This post provides more details on how it is being used.

