Apple has released OpenELM (Open-source Efficient Language Models), a family of open-source large language models (LLMs) intended to power on-device AI capabilities. Rather than relying on cloud services, the Cupertino-headquartered tech giant plans to run these models natively on its devices, according to Money Control.
The OpenELM models are available on Hugging Face, a platform for sharing AI models and code.
Introducing the models, Apple stated: "We introduce OpenELM, a family of Open-source Efficient Language Models. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy."
Apple claims that OpenELM is a cutting-edge language model that efficiently distributes parameters throughout the layers of the transformer model, increasing accuracy.
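To illustrate the general idea behind layer-wise scaling, the sketch below varies each transformer layer's feed-forward width across depth instead of keeping it uniform. This is a simplified illustration of the concept, not Apple's actual implementation; the function name, the linear schedule, and the multiplier range are assumptions chosen for clarity.

```python
def layerwise_widths(num_layers, d_model, min_mult=0.5, max_mult=4.0):
    """Illustrative layer-wise scaling: return a feed-forward hidden size
    per layer, with the width multiplier interpolated linearly from
    min_mult (earliest layer) to max_mult (deepest layer).

    NOTE: hypothetical sketch of the concept, not OpenELM's exact schedule.
    """
    widths = []
    for i in range(num_layers):
        # Fraction of depth, from 0.0 at the first layer to 1.0 at the last.
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        mult = min_mult + t * (max_mult - min_mult)
        widths.append(int(round(mult * d_model)))
    return widths

# Early layers get narrower blocks, later layers get wider ones,
# so the same parameter budget is spent non-uniformly across depth.
print(layerwise_widths(4, 512))  # → [256, 853, 1451, 2048]
```

A uniform-width model with the same total parameter count would instead give every layer the average width; the claim behind layer-wise scaling is that redistributing that budget across depth yields better accuracy per parameter.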
OpenELM is said to comprise eight models across four parameter sizes: 270M, 450M, 1.1B, and 3B. All of the models were trained on publicly available datasets.
Separately, Apple is reported to be developing its own large language model (LLM) to power on-device generative artificial intelligence (AI) features for its upcoming iPhone series.
According to Bloomberg, Apple's AI model may run entirely on-device, which would mean the company's first generative AI features could work offline, without requiring an internet connection.