French AI lab Mistral is stepping up its game with Magistral, its first family of reasoning models. Like OpenAI’s o3 and Google’s Gemini 2.5 Pro, Magistral works through problems step by step, aiming for more consistent and reliable answers to math and physics questions.
Magistral comes in two versions: Magistral Small, a 24-billion-parameter model available for download on the AI dev platform Hugging Face, and Magistral Medium, a more capable model currently in preview on Mistral’s Le Chat chatbot platform and API, as well as on third-party partner clouds.
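For developers who want to experiment, Magistral Medium is reachable through Mistral’s API via the company’s official Python client. The snippet below is a minimal sketch, assuming the mistralai package is installed and that the preview model is exposed under the identifier "magistral-medium-latest" (the exact name may differ).

```python
# Minimal sketch: calling a Magistral model through Mistral's API.
# Assumptions: the official "mistralai" Python client is installed and the
# preview model is exposed as "magistral-medium-latest" (name may differ).
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="magistral-medium-latest",  # assumed preview identifier
    messages=[
        {
            "role": "user",
            "content": "A train covers 120 km in 1.5 hours. What is its average speed?",
        }
    ],
)

# Reasoning models are pitched as exposing a traceable thought process
# before the final answer; here we simply print the returned message.
print(response.choices[0].message.content)
```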
Mistral pitches Magistral for a range of enterprise use cases, touting enhanced interpretability and a thought process that can be traced in the user’s own language. Although the company is a relative latecomer to reasoning models, it positions Magistral as well suited to structured calculations, decision trees, and rule-based systems.
To underline Magistral’s strengths, Mistral points to its speed and multilingual support, claiming that in Le Chat the model delivers answers at 10 times the speed of competitors. The company envisions Magistral being used for research, strategic planning, and data-driven decision making, in scenarios ranging from risk assessment to operational optimization.
The release of Magistral follows Mistral’s recent launch of Mistral Code, a “vibe coding” client, along with several coding-focused models and Le Chat Enterprise, a corporate chatbot service that integrates tools such as an AI agent builder. The venture-backed company continues to expand its lineup of AI-powered products and services.
