Latest open source AI models ‘world’s largest and most capable’, says Meta

Meta has unveiled what it believes to be the “world’s largest and most capable openly available (AI) foundation model” to take on the likes of ChatGPT and Google’s Gemini.

The parent company of Facebook, Instagram and WhatsApp says its new Llama 3.1 model has “state-of-the-art capabilities” in a range of topics including general knowledge, mathematics and “multilingual” translation.

As a result, Meta says this update to Llama sees the company “ushering in a new era” by launching a model it says can match or exceed its closed source large language model (LLM) rivals, such as OpenAI’s ChatGPT.

“Until today, open source large language models have mostly trailed behind their closed counterparts when it comes to capabilities and performance,” Meta said in a blog post on its latest update.

“Now, we’re ushering in a new era with open source leading the way. We’re publicly releasing Llama 3.1 405B, which we believe is the world’s largest and most capable openly available foundation model.

“With more than 300 million total downloads of all Llama versions to date, we’re just getting started.”

The technology and social media giant confirmed the new models would be available to download starting on Tuesday.

Meta said its new generation of Llama models were “competitive” in testing against a number of flagship rival models, including OpenAI’s GPT-4 and GPT-4o, as well as Anthropic’s Claude 3.5 Sonnet.

Generative AI has become the key technology battleground since OpenAI first introduced ChatGPT, its virtual assistant and chatbot, in late 2022.

Since then, every major tech firm has announced its own move into the space, whether by building the foundation models that power the technology, creating its own AI products, or doing both.

For its own latest launch, Meta said Llama 3.1 can also carry out synthetic data generation, meaning it can create data that is then used to train and improve other, smaller AI models, a capability it said had never before been achieved at this scale in an open source model.
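To illustrate the general idea, the sketch below shows how a large “teacher” model can be prompted to produce question-and-answer pairs that are collected into a dataset for fine-tuning a smaller “student” model. This is a hypothetical example, not Meta’s pipeline: the `call_teacher_model` function is a stand-in for a real API call to a large model such as Llama 3.1 405B.

```python
# Hypothetical sketch of synthetic data generation:
# a large "teacher" model writes training examples for a smaller "student" model.
# call_teacher_model is a placeholder for a real API call (e.g. to Llama 3.1 405B).

import json

def call_teacher_model(prompt: str) -> str:
    # Placeholder: in practice this would query the large model.
    return f"[model-generated answer to: {prompt}]"

SEED_QUESTIONS = [
    "Explain photosynthesis in one sentence.",
    "Translate 'good morning' into French.",
    "What is 17 multiplied by 24?",
]

def build_synthetic_dataset(questions):
    """Collect (instruction, response) pairs for later fine-tuning of a smaller model."""
    examples = []
    for q in questions:
        answer = call_teacher_model(q)
        examples.append({"instruction": q, "response": answer})
    return examples

if __name__ == "__main__":
    dataset = build_synthetic_dataset(SEED_QUESTIONS)
    # Write one JSON object per line, a common format for fine-tuning data.
    with open("synthetic_data.jsonl", "w") as f:
        for example in dataset:
            f.write(json.dumps(example) + "\n")
```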
