Meta's AI research group, FAIR, has announced a set of new releases that support the company's aim of achieving advanced machine intelligence while promoting open science and reproducibility. The newly introduced models include an updated Segment Anything Model 2 for images and videos, along with Meta Spirit LM, Layer Skip, SALSA, Meta Lingua, OMat24, MEXMA, and the Self Taught Evaluator.

**Self Taught Evaluator**

Meta describes the Self Taught Evaluator as a "strong generative reward model with synthetic data" that can validate the outputs of other AI models. It introduces a method for generating the preference data used to train reward models without any human annotations. As detailed in a blog post, the approach generates contrasting outputs from models and uses a large language model as a judge to produce reasoning traces and final decisions, then refines the judge through an iterative self-improvement loop. In essence, the Self Taught Evaluator produces its own training data, with no human labeling required. Meta reports that the resulting model's performance surpasses that of reward models trained on human-labeled data, including judges such as GPT-4.

**Meta Spirit LM**

Spirit LM is an open-source language model designed for the seamless integration of speech and text.
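Before looking at Spirit LM in detail, the Self Taught Evaluator's label-free loop described above can be sketched in miniature. The snippet below is a toy illustration only, not Meta's implementation: every model call is replaced by a deterministic stub, and all function names are invented.

```python
# Toy sketch of a self-taught-evaluator style loop: synthesize contrasting
# answers, have a "judge" produce a reasoning trace and verdict, and keep
# the resulting preference pairs as training data. All stubs are invented.

def generate_contrasting_outputs(prompt):
    """Stub: produce an intended-good ('chosen') and a degraded ('rejected') answer.
    In practice both would come from an LLM prompted in different ways."""
    good = f"Detailed answer to: {prompt}"
    bad = f"Vague answer to: {prompt}"
    return good, bad

def judge(prompt, answer_a, answer_b):
    """Stub judge: emit a reasoning trace and a verdict.
    Here 'longer answer wins' stands in for a real LLM-as-judge call."""
    trace = f"Comparing answers for '{prompt}' on completeness."
    verdict = "A" if len(answer_a) > len(answer_b) else "B"
    return trace, verdict

def build_preference_data(prompts):
    """One iteration: synthesize preference pairs with no human labels.
    Pairs where the judge picks the intended winner would train the next,
    stronger judge, and the loop repeats."""
    data = []
    for p in prompts:
        chosen, rejected = generate_contrasting_outputs(p)
        trace, verdict = judge(p, chosen, rejected)
        if verdict == "A":
            data.append({"prompt": p, "chosen": chosen,
                         "rejected": rejected, "trace": trace})
    return data

pairs = build_preference_data(["What is a reward model?"])
print(len(pairs))  # prints 1
```

The point of the sketch is the data flow, not the stubs: no human label ever enters the loop, yet each pass yields preference pairs plus a reasoning trace that the next round can train on.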
Typically, large language models are used to build systems that convert speech to text and back, but this pipeline often loses the natural expressiveness of the original speech. To address this, Meta created Spirit LM, its first open-source model aimed at richer interaction between text and speech. Meta pointed out in a tweet that many current AI voice systems run automatic speech recognition (ASR) on the audio before passing the transcript to a large language model for text generation, which strips away the expressive qualities of speech. By using phonetic, pitch, and tone tokens for both input and output, Spirit LM avoids this loss, producing more natural-sounding speech while also adapting to new tasks across ASR, text-to-speech (TTS), and speech classification.

Meta Spirit LM is trained on both speech and text data, enabling fluid transitions between the two modes. The company offers two versions: Spirit LM Base, which focuses on speech sounds, and the full Spirit LM version, which also captures nuances such as tone and emotion (for example, anger and excitement) to enhance realism. Meta claims the model generates more natural-sounding speech and can learn tasks such as speech recognition, text-to-speech conversion, and the classification of different speech types.
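The mixed speech-and-text training described above can be pictured as a single token stream in which spans of text tokens and spans of speech tokens alternate, so one language model can continue in either modality. The sketch below is purely illustrative: the modality markers and token names are invented, whereas the real model uses learned phonetic, pitch, and tone token vocabularies.

```python
# Toy illustration of an interleaved text/speech token stream.
# Marker strings and token names are invented for this sketch.

TEXT = "[TEXT]"
SPEECH = "[SPEECH]"

def interleave(segments):
    """Flatten (modality, tokens) segments into one sequence, prefixing each
    segment with a modality marker so a single model can switch between them."""
    stream = []
    for modality, tokens in segments:
        stream.append(TEXT if modality == "text" else SPEECH)
        stream.extend(tokens)
    return stream

seq = interleave([
    ("text", ["the", "cat"]),
    ("speech", ["ph_42", "pitch_3", "tone_1"]),  # hypothetical unit tokens
])
print(seq)
# prints ['[TEXT]', 'the', 'cat', '[SPEECH]', 'ph_42', 'pitch_3', 'tone_1']
```

Because expressive cues such as pitch and tone survive as tokens in the stream rather than being collapsed to plain text, a model trained on sequences like this can carry them through to its output.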
Meta Unveils Advanced AI Models: Revolutionizing Machine Intelligence