June 19, 2024, 1:41 a.m.

Meta's Fundamental AI Research (FAIR) team announced the release of five new artificial intelligence (AI) research models, spanning text and image generation, AI-generated speech detection, code completion, and music generation. One of the models, Chameleon, can understand and generate both images and text: it accepts input that mixes text and images and produces a mixed text-and-image output. Meta noted that this capability could be used to generate captions for images or to create new scenes from text prompts and images. The release also includes pretrained code-completion models trained with Meta's multi-token prediction approach, in which large language models (LLMs) are trained to predict several future words simultaneously rather than one word at a time. Another model, JASCO, offers finer control over AI music generation.
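To illustrate the difference between standard next-token prediction and the multi-token prediction approach described above, here is a minimal sketch of how the training targets differ. The function names and the four-token horizon are assumptions for illustration, not Meta's actual implementation:

```python
def next_token_targets(tokens):
    """Standard LM objective: at position i, predict only token i+1."""
    return [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]


def multi_token_targets(tokens, horizon=4):
    """Multi-token prediction: at position i, predict tokens
    i+1 through i+horizon simultaneously (conceptually, one
    output head per future offset)."""
    pairs = []
    for i in range(len(tokens) - horizon):
        pairs.append((tokens[i], tuple(tokens[i + 1:i + 1 + horizon])))
    return pairs


toks = ["the", "cat", "sat", "on", "the", "mat"]
print(next_token_targets(toks)[0])   # ('the', 'cat')
print(multi_token_targets(toks)[0])  # ('the', ('cat', 'sat', 'on', 'the'))
```

Each training position thus supervises several future tokens at once instead of one, which is the core idea behind the approach.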

Instead of relying solely on text inputs, JASCO can condition on inputs such as chords or beats, combining symbolic and audio conditioning in a single text-to-music generation model. Another model, AudioSeal, introduces an audio watermarking technique that enables localized detection of AI-generated speech: it can pinpoint AI-generated segments within longer audio clips and detects AI-generated speech up to 485 times faster than previous methods. The fifth research model aims to improve geographic and cultural diversity in text-to-image generation systems; Meta has released geographic-disparities evaluation code and annotations to support better evaluation of text-to-image models. In an earnings report, Meta said it plans to invest $35 billion to $40 billion in AI and metaverse development by the end of 2024, and CEO Mark Zuckerberg highlighted the company's various AI services, including AI assistants, augmented reality apps, and business AIs.
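The "localized detection" idea behind AudioSeal can be sketched as a detector that scores each short audio frame, so AI-generated spans can be pinpointed inside a longer clip. The per-frame scores and threshold below are invented for illustration; this is not AudioSeal's actual detector:

```python
def localize_ai_segments(frame_scores, threshold=0.5):
    """Return (start, end) frame-index ranges where the per-frame
    watermark score exceeds the threshold, i.e. the spans flagged
    as AI-generated within a longer clip."""
    segments = []
    start = None
    for i, score in enumerate(frame_scores):
        if score > threshold and start is None:
            start = i                       # segment begins
        elif score <= threshold and start is not None:
            segments.append((start, i))     # segment ends
            start = None
    if start is not None:                   # clip ends inside a segment
        segments.append((start, len(frame_scores)))
    return segments


# Hypothetical per-frame detector scores for a 10-frame clip.
scores = [0.1, 0.2, 0.9, 0.95, 0.8, 0.3, 0.1, 0.7, 0.9, 0.2]
print(localize_ai_segments(scores))  # [(2, 5), (7, 9)]
```

Scoring per frame rather than per clip is what makes the detection "localized": the output names which segments are synthetic, not just whether the clip contains synthetic speech.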



