Microsoft has launched a lightweight artificial intelligence (AI) model called Phi-3-mini, designed to appeal to a broader customer base with more limited computing resources.
This lightweight AI offers a more cost-effective option for Azure customers, according to Silicon.
Misha Bilenko, a corporate vice president at Microsoft GenAI, introduced Phi-3 in a blog post, describing it as “a family of open AI models developed by Microsoft.”
Phi-3 models “are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks,” Bilenko noted.
Many people are already familiar with large language models (LLMs), the conventional AI services used to handle complex tasks. However, their size means LLMs can require substantial computing resources to operate.
Microsoft has therefore developed a series of small language models (SLMs) that offer many of the same capabilities as LLMs but are smaller in size and trained on smaller amounts of data.
Microsoft is set to release three SLMs, the first being Phi-3-mini.
The company claims that Phi-3-mini measures 3.8 billion parameters and performs better than models twice its size.
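For developers who want to try such a model themselves, a network of this size is small enough to run with standard open-source tooling. The following is a minimal sketch using the Hugging Face transformers library; the model ID "microsoft/Phi-3-mini-4k-instruct" and the example prompt are assumptions for illustration rather than details taken from Microsoft's announcement.

```python
# Minimal sketch: running Phi-3-mini locally with the Hugging Face transformers
# library. The model ID below is an assumption; check the Azure AI model
# catalogue or Hugging Face for the official release name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # may be needed if the model ships custom code
)

# At 3.8 billion parameters, the model is small enough for a single consumer
# GPU or, once quantised, a laptop CPU.
prompt = "Summarise the difference between an SLM and an LLM in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same model can also be consumed as a hosted endpoint through the Azure AI model catalogue rather than run locally.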
Additional models will be added to the Phi-3 family in the coming weeks, with Phi-3-small and Phi-3-medium set to appear in the Azure AI model catalogue, according to Bilenko.
“Phi-3 models significantly outperform language models of the same and larger sizes on key benchmarks,” wrote Bilenko. “Phi-3-mini does better than models twice its size, and Phi-3-small and Phi-3-medium outperform much larger models, including GPT-3.5T.”