As the world races to deploy AI models that are effective and safe, demand for open large language models (LLMs) has exploded. The massive adoption of both open and closed AI models means that AI capabilities have outpaced our ability to understand how those models are created. Releasing the OLMo framework gives the industry an opportunity to understand what is going on inside AI models.
Today, The Allen Institute for AI (AI2) has released OLMo 7B, a truly open, state-of-the-art large language model released alongside its pre-training data and training code, something no open model of this scale offers today. This empowers researchers and developers to use the best open models to collectively advance the science of language models.
"Open foundation models have been critical in driving a burst of innovation and development around generative AI," said Yann LeCun, Chief AI Scientist at Meta. "The vibrant community that comes from open source is the fastest and most effective way to build the future of AI."
OLMo and its framework are designed to aid researchers in training and experimenting with large language models. They are available for direct download on Hugging Face and on GitHub. This work was made possible, in part, via a collaboration with the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University and partners including AMD, CSC (Lumi Supercomputer), the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and Databricks.
The framework features a suite of completely open AI development tools, including:
- Full pretraining data: The model is built on AI2's Dolma dataset, an open corpus of three trillion tokens for language model pretraining, including the code that produces the training data.
- Training code and model weights: The OLMo framework includes full model weights for four model variants at the 7B scale, each trained to at least 2T tokens. Inference code, training metrics, and training logs are all provided.
- Evaluation: We've released the evaluation suite used in development, including evaluation code under the umbrella of the Catwalk project and 500+ checkpoints per model, taken every 1,000 steps during training.
"I'm enthusiastic about getting OLMo into the hands of AI researchers," said Eric Horvitz, Microsoft's Chief Scientific Officer and a founding member of the AI2 Scientific Advisory Board. "The new offering continues Allen AI's tradition of providing valuable open models, tools, and data, which have spurred numerous advancements in AI across the global community."
A truly open model
By making OLMo and its training data fully available to the public, AI2 has taken a big step towards collaboratively building the best open language model in the world. In the coming months, AI2 will continue to iterate on OLMo and will bring different model sizes, modalities, datasets, and capabilities into the OLMo family.
"Many language models today are published with limited transparency. Without having access to training data, researchers cannot scientifically understand how a model is working. It's the equivalent of drug discovery without clinical trials or studying the solar system without a telescope," said Hanna Hajishirzi, OLMo project lead, a senior director of NLP Research at AI2, and a professor in the UW's Allen School. "With our new framework, researchers will finally be able to study the science of LLMs, which is critical to building the next generation of safe and trustworthy AI."
With OLMo, AI researchers and developers will experience:
- More precision: With full insight into the training data behind the model, researchers can work faster and test model behavior scientifically rather than relying on qualitative assumptions about how the model is performing.
- Less carbon: A single training run currently produces emissions equivalent to those of nine US homes over one year. Opening the full training and evaluation ecosystem radically reduces redundant development, which is critical to decarbonizing AI.
- Lasting results: Keeping models and their datasets in the open and not behind APIs enables researchers to learn and build from previous models and work.
"With OLMo, open actually means 'open' and everyone in the AI research community will have access to all aspects of model creation, including training code, evaluation methods, data, and so on," said Noah Smith, OLMo project lead, a senior director of NLP Research at AI2, and a professor in the UW's Allen School. "AI was once an open field centered on an active research community, but as models grew, became more expensive, and started turning into commercial products, AI work started to happen behind closed doors. With OLMo we hope to work against this trend and empower the research community to come together to better understand and engage with language models in a scientific way, leading to more responsible AI technology that benefits everyone."
"With AI2's deep expertise in natural language processing combined with AMD high-performance computing engines, the OLMo models developed on the LUMI Supercomputer powered by AMD EPYC™ CPUs and AMD Instinct™ accelerators offer a unique opportunity to truly expand AI experimentation and innovation and advance the industry like never before. This new open framework will provide the AI research community across the world with trusted resources and a platform to contribute to and work directly on language models." — Ian Ferreira, Senior Director, AI Solutions, AMD
"We are happy that we can contribute to this important initiative by providing the computing capacity from the LUMI supercomputer along with our expertise. Public supercomputers like LUMI play a vital role in the infrastructure for open and transparent AI." — Dr. Pekka Manninen, Director of Science and Technology, CSC
The LUMI supercomputer in Finland is hosted by CSC and owned by the EuroHPC Joint Undertaking and 10 European countries. LUMI, the fastest supercomputer in Europe, is known for its entirely carbon-free operations and was critical in supporting the pre-training work necessary to develop OLMo.
"Databricks is excited to be collaborating with the Allen Institute for AI on the release of their OLMo open source model and framework. OLMo sets the standard for what it means to be open. Everyone in academia, industry, and the broader community will benefit enormously from access to not only the model but all of the training details, including the data, code, and intermediate checkpoints. I am especially proud that this model was developed on the Mosaic AI model training platform from Databricks. As with all great open source releases, the best is yet to come now that these artifacts and tools are in the hands of the community." — Jonathan Frankle, Chief Scientist (Neural Networks), Databricks.
Learn more
Getting started with OLMo technical blog
For more information on the OLMo framework and The Allen Institute for AI visit here.