Apple’s Ferret-UI Lite is a 3B-parameter model optimized for mobile and desktop screens, designed to interpret screen images, understand UI elements such as icons and text, and interact with apps to perform tasks such as reading messages or checking health data.

The study centers on building compact, on-device GUI agents capable of directly interacting with graphical user interfaces (GUIs) across platforms, including mobile, web, and desktop.

In the related paper, the researchers observe that “the majority of existing methods on GUI agents, contrarily, focus on large foundation models”, such as GPT and Gemini, granting these agents “impressive capabilities in diverse GUI navigation tasks”. However, this comes at the cost of “modeling complexity, compute budget requirements, and inference time”, as well as higher latency, reduced privacy guarantees, and dependency on network connectivity. This motivated the authors to investigate building competitive, small, on-device end-to-end agents, a goal that remains challenging.

As the paper states: “Utilizing techniques optimized for developing small models, we build our 3B Ferret-UI Lite agent through curating a diverse GUI data mixture from real and synthetic sources, strengthening inference-time performance through chain-of-thought reasoning and visual tool-use, and reinforcement learning with designed rewards.”

Ferret-UI Lite, the researchers explain, uses screen image cropping and chain-of-thought prompting to improve accuracy in understanding complex layouts with small UI elements. This strategy brings “competitive, or in some cases superior, performance compared to larger models”. On GUI grounding tasks, which involve locating and identifying specific UI elements based on natural-language instructions, the model achieved 91.6% on ScreenSpot-V2, 53.3% on ScreenSpot-Pro, and 61.2% on OSWorld-G. On GUI navigation tasks, it achieved success rates of 28.0% on AndroidWorld and 19.8% on OSWorld.
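To illustrate the idea behind cropping-based “zoom-in”, the following is a minimal sketch: the agent makes a coarse prediction, the screenshot is cropped around that region so small elements appear larger, and a refined prediction made inside the crop is mapped back to full-screen coordinates. The function names, the 2x zoom factor, and the coordinate conventions are illustrative assumptions, not the paper’s actual implementation.

```python
def crop_region(width, height, cx, cy, zoom=2.0):
    """Return a (left, top, right, bottom) crop box centered on (cx, cy),
    clamped to stay inside the screen. zoom=2.0 halves each dimension."""
    crop_w, crop_h = width / zoom, height / zoom
    left = min(max(cx - crop_w / 2, 0), width - crop_w)
    top = min(max(cy - crop_h / 2, 0), height - crop_h)
    return left, top, left + crop_w, top + crop_h

def to_full_coords(box, x_in_crop, y_in_crop):
    """Map a point predicted inside the crop back to full-screen coordinates."""
    left, top, _, _ = box
    return left + x_in_crop, top + y_in_crop

# Coarse prediction near a small icon in the top-left of a 1920x1080 screen:
box = crop_region(1920, 1080, 100, 50)          # (0, 0, 960.0, 540.0)
x, y = to_full_coords(box, 100, 50)             # refined point, full-screen coords
```

The clamping ensures the crop never extends past the screen edge, so a second-pass prediction on the enlarged crop can always be translated back losslessly.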

For training, the researchers employed a two-stage pipeline. The first stage leveraged supervised fine-tuning (SFT) on a diverse mixture of real and synthetic GUI interaction data. In the second stage, they applied reinforcement learning with verifiable rewards (RLVR) to optimize for task success rather than strict imitation. Additionally, they standardized action formats and included inference-time techniques such as “zoom-in” and chain-of-thought reasoning to enhance the model’s perceptual accuracy.
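A verifiable reward for GUI grounding can be checked programmatically rather than judged by a model, which is what makes RLVR applicable here. The sketch below assigns reward 1.0 when a predicted click lands inside the target element’s ground-truth bounding box, and 0.0 otherwise, including for malformed actions. The JSON action schema and field names are illustrative assumptions, not the paper’s actual standardized format.

```python
import json

def grounding_reward(action_json: str, target_box: tuple) -> float:
    """target_box is (left, top, right, bottom) in screen pixels.
    Returns 1.0 for a click inside the box, else 0.0."""
    try:
        action = json.loads(action_json)
        x, y = action["x"], action["y"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return 0.0  # malformed or incomplete actions earn zero reward
    left, top, right, bottom = target_box
    return 1.0 if left <= x <= right and top <= y <= bottom else 0.0

# A click action in the (assumed) standardized JSON format:
hit = grounding_reward('{"type": "click", "x": 120, "y": 48}', (100, 30, 160, 60))
miss = grounding_reward('{"type": "click", "x": 500, "y": 500}', (100, 30, 160, 60))
```

Because the reward is a deterministic function of the action and ground truth, it sidesteps reward-model noise, though, as the authors note, small models remain sensitive to how such rewards are designed.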

The researchers conclude that GUI grounding and navigation data can complement each other, and that the curation of synthetic data from diverse sources significantly improves performance in both tasks. Furthermore, while chain-of-thought reasoning and visual tools bring improvements, their benefit is limited. On the downside, small models continue to struggle with long-horizon, multi-step tasks and are sensitive to reward design.

The researchers suggest that Ferret-UI Lite could function as an on-device “intelligent” agent, enabling Apple to reduce dependence on Google Cloud for Siri while offering a “privacy shield”.