Transformers are the cornerstone of the modern generative AI era, but they are not the only way to build a model.
AI21 is out today with new versions of its Jamba model, which combines transformers with a Structured State Space model (SSM) approach. The new Jamba 1.5 Mini and Jamba 1.5 Large build on the innovations the company debuted with the release of Jamba 1.0 in March.

Jamba uses an SSM approach known as Mamba, and its goal is to bring the best attributes of transformers and SSMs together. The name Jamba is an acronym for Joint Attention and Mamba architecture. The promise of the combined SSM-transformer architecture is better performance and accuracy than either approach can provide on its own.
“We got amazing feedback from the community, because this was basically the first and still is one of the only Mamba-based production-scale models that we got,” Or Dagan, VP of product at AI21, told VentureBeat. “It’s a novel architecture that I think started some debates about the future of architecture in LLMs and whether transformers are here to stay or we need something else.”
With the Jamba 1.5 series, AI21 is adding more capabilities to the model, including function calling, JSON mode, structured document objects and citation mode. The company hopes these additions make the two models ideal for building agentic AI systems. Both models are Mixture-of-Experts (MoE) models with a large context window of 256K tokens. Jamba 1.5 Mini has 52 billion total parameters and 12 billion active parameters; Jamba 1.5 Large has 398 billion total parameters and 94 billion active parameters.
Both Jamba 1.5 models are available under an open license. AI21 also provides commercial support and services for the models, and the company has partnerships with AWS, Google Cloud, Microsoft Azure, Snowflake, Databricks and Nvidia.
What’s new in Jamba 1.5 and how it will accelerate agentic AI
Jamba 1.5 Mini and Large introduce a number of new features designed to meet the evolving needs of AI developers:
- JSON mode for structured data handling
- Citations for enhanced accountability
- Document API for improved context management
- Function calling capabilities
According to Dagan, these additions are particularly crucial for developers working on agentic AI systems. Developers widely use JSON (JavaScript Object Notation) to pass structured data between the steps of application workflows.
Dagan explained that adding JSON support enables developers to more easily build structured input/output relationships between different parts of a workflow. He noted that JSON support is crucial for more complex AI systems that go beyond using the language model on its own. The citation feature, on the other hand, works in conjunction with the new document API.
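To illustrate the structured input/output pattern Dagan describes, here is a minimal sketch of a JSON-mode chat request. The model name and field names are assumptions modeled on common chat-completion APIs, not AI21’s documented schema; consult AI21’s API reference for the exact shape.

```python
import json

# Hypothetical request body for a chat endpoint that supports a JSON
# output mode. Field names are illustrative assumptions.
request = {
    "model": "jamba-1.5-mini",
    "messages": [
        {
            "role": "user",
            "content": "Extract the invoice number and total from this email as JSON.",
        }
    ],
    # Constraining output to valid JSON lets the next workflow step call
    # json.loads() on the response instead of scraping free-form text.
    "response_format": {"type": "json_object"},
}

body = json.dumps(request)  # wire payload for an HTTP POST
```

The point of the pattern is that each step of an agentic workflow can parse the previous step’s output deterministically rather than relying on brittle text scraping.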
“We can teach the model that when you generate something and you have documents in your input, please attribute the relevant parts to the documents,” Dagan said.
How citation mode differs from RAG, providing an integrated approach for agentic AI
Users should not confuse citation mode with Retrieval Augmented Generation (RAG), though both approaches ground responses in data to improve accuracy.
Dagan explained that the citation mode in Jamba 1.5 is designed to work in conjunction with the model’s document API, providing a more integrated approach than traditional RAG workflows. In a typical RAG setup, developers connect the language model to a vector database that retrieves relevant documents for a given query or task. The model then has to learn to effectively incorporate that retrieved information into its generation.
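The retrieval step in the typical RAG setup described above can be sketched in a few lines. This toy version uses a bag-of-words similarity in place of a real embedding model and vector database, purely to show the shape of the workflow.

```python
# Minimal RAG retrieval sketch: score stored chunks against the query,
# then prepend the best match to the prompt. The embed() here is a
# bag-of-words stand-in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Jamba combines transformer attention with Mamba state-space layers.",
    "The office cafeteria serves lunch from noon until two.",
]

query = "What architecture does Jamba combine with transformers?"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
prompt = f"Context: {ranked[0]}\n\nQuestion: {query}"
```

In a production pipeline the retrieval and the generation are separate systems glued together by the developer, which is exactly the seam that Jamba’s integrated citation mode is meant to close.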
In contrast, the citation mode in Jamba 1.5 is more tightly integrated with the model itself. This means the model is trained to not only retrieve and incorporate relevant documents, but also to explicitly cite the sources of the information it uses in its output. This provides more transparency and traceability compared to a traditional LLM workflow, where the model’s reasoning may be more opaque.
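A hedged sketch of what the document-object pattern might look like from the caller’s side: documents are passed as structured objects alongside the chat messages so the model can ground and attribute its answer. All field names and document contents here are illustrative assumptions, not AI21’s exact schema.

```python
import json

# Hypothetical request combining the document API with citation mode.
# The "documents" field and its sub-fields are assumed for illustration.
request = {
    "model": "jamba-1.5-large",
    "messages": [
        {"role": "user", "content": "Summarize our Q2 refund policy changes."}
    ],
    "documents": [
        {
            "id": "doc-1",
            "content": "The refund window was extended from 30 to 60 days in Q2.",
            "metadata": {"source": "policy_update_q2.md"},
        },
        {
            "id": "doc-2",
            "content": "Shipping rates were left unchanged for Q2.",
            "metadata": {"source": "shipping_notes_q2.md"},
        },
    ],
}

doc_ids = [d["id"] for d in request["documents"]]
body = json.dumps(request)
```

A citation-aware model can then attribute each claim in its answer to a document id, rather than leaving the caller to reverse-engineer which source supported which statement.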
AI21 does support RAG as well. Dagan noted that his company offers its own end-to-end RAG solution as a managed service that includes the document retrieval, indexing, and other required components.
Looking forward, Dagan said that AI21 will continue to work on advancing its models to serve customer needs. There will also be a continued focus on enabling agentic AI.
“We also understand that we need to operate and push the envelope with agentic AI systems and how planning and execution is handled in that domain,” Dagan said.