Initially, when ChatGPT was released by OpenAI, it appeared to me as an all-knowing entity. Trained on vast datasets representing a collective sum of the human knowledge and interests available online, this statistical prediction machine seemed to hold the potential to serve as a singular source of truth. Walter Cronkite, with his nightly sign-off of "That's the way it is," had once filled that role for the American public and was widely trusted. In our current era of polarization, misinformation, and dwindling trust in society, a reliable source of truth would be invaluable.

However, these hopes were quickly shattered as the flaws of the technology became evident. One major drawback was its tendency to generate answers that were mere fabrications. It soon became apparent that, while the output appeared impressive, it was based solely on patterns in the training data rather than on any objective truth. But that was not the only problem.

Soon after ChatGPT's release, a flood of other chatbots emerged from tech giants including Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability Labs, Meta, and others. One notable example was Microsoft's Bing chatbot, internally codenamed Sydney. What's more, these various chatbots yielded drastically different results when presented with the same query. The discrepancies stem from differences in the models, their training data, and whatever guiding principles were provided to them. These principles are meant to keep the systems from perpetuating biases or generating disinformation, hate speech, and other toxic content.

Unfortunately, it became clear soon after ChatGPT's launch that not everyone agreed with the guiding principles set by OpenAI. Conservatives, for instance, complained that the bot's answers displayed a distinct liberal bias. In response, Elon Musk declared his intention to build a chatbot that is less constrained and less politically correct than ChatGPT. With his recent announcement of xAI, he appears likely to do exactly that.

Anthropic took a different approach, implementing a "constitution" for its Claude chatbots (and now Claude 2). As reported by VentureBeat, the constitution lays out a set of values and principles that Claude must follow when interacting with users, emphasizing helpfulness, harmlessness, and honesty. According to a blog post from the company, Claude's constitution draws inspiration from the U.N. Declaration of Human Rights while also incorporating non-western perspectives. It is safe to say that most people would find these principles agreeable.

Meta, too, recently released its LLaMA 2 large language model (LLM), which stands out not only for its capabilities but also for being open source, meaning anyone can freely download and use it for their own purposes. Other open-source generative AI models with minimal restrictions are available as well. Using one of these models makes the idea of guiding principles and constitutions seem somewhat quaint.

However, troubling research reported by The New York Times revealed a prompting technique that effectively undermines the guardrails of any of these models, whether open source or closed source. Fortune reported that this method had a near 100% success rate against Vicuna, an open-source chatbot built on Meta's original LLaMA.
Consequently, anyone seeking detailed instructions on how to create bioweapons or defraud consumers could obtain that information from various LLMs. While developers may be able to address some of these security threats, the researchers maintain that preventing all such attacks is currently impossible. Beyond the safety concerns raised by this research, there is a growing divergence in the results produced by different models, even when they are presented with the same query.
Similar to our fragmented social media and news landscape, this fragmented AI ecosystem poses a threat to truth and undermines trust. We are heading towards a future inundated with chatbots that will only add to the noise and chaos. The fragmentation of truth does not stop at text-based information; it also extends to the rapidly evolving realm of digital human representations.

Presently, chatbots powered by LLMs impart information via text. As these models become increasingly multimodal, however, capable of generating images, video, and audio, their applications and effectiveness will only expand.

One potential application of multimodal AI can be seen in "digital humans," synthetic creations that convincingly replicate human appearance. A recent article in Harvard Business Review outlined the technologies that have made digital humans possible, combining computer graphics with artificial intelligence. These digital beings possess high-end features that accurately mimic the appearance of real humans.

Kuk Jiang, co-founder of Series D startup ZEGOCLOUD, describes digital humans as highly detailed, realistic models that go beyond the traditional limits of realism and sophistication. He also notes their ability to interact with real people in natural and intuitive ways, efficiently assisting in virtual customer service, healthcare, and remote education settings.

Another emerging use case is digital humans as newscasters. Early deployments are already underway. Kuwait News has introduced a digital newscaster named "Fedha," who quickly gained popularity. Fedha opens interactions by asking the audience for their preferred news topics, hinting at the possibility of personalized news feeds. China's People's Daily is also exploring AI-powered newscasters.

Meanwhile, startup Channel 1 plans to use gen AI to create a new type of video news channel, aptly described by The Hollywood Reporter as an AI-generated CNN. According to reports, Channel 1 will debut this year with a 30-minute weekly show scripted using LLMs, with the ultimate aim of producing news broadcasts tailored to individual users. The article notes that there will be both liberal and conservative hosts, allowing the news to be delivered through a specific ideological lens.

Channel 1 co-founder Scott Zabielski acknowledges that today's digital human newscasters are still distinguishable from real people. He predicts, however, that the technology will advance to the point where telling an AI presenter from a human one becomes increasingly difficult, and perhaps impossible.

This raises concerns highlighted by a study reported in Scientific American, which found that synthetic faces are not only highly realistic but are also judged more trustworthy than real faces. Hany Farid, co-author of the study and a professor at the University of California, Berkeley, worries that such faces could be effectively wielded for malicious purposes.

While there is no indication that Channel 1 intends to misuse the persuasive power of personalized news videos and synthetic faces, advances in the technology may enable unscrupulous actors to exploit these tools. As a society, we are already grappling with the fear that what we read could be disinformation, that phone calls could employ cloned voices, and that the images we see could be manipulated.
In the near future, even video content, including purportedly credible news broadcasts, may contain subtly crafted messaging designed to manipulate opinions rather than inform or educate. The assault on truth and trust has been ongoing, and this development suggests that the trend will persist. We are a far cry from the days of Walter Cronkite delivering the evening news.

Gary Grossman is the SVP of the technology practice at Edelman and serves as the global lead of the Edelman AI Center of Excellence.