AutoDroid-V2: Boosting SLM-based GUI Agents via Code Generation

Researchers at Tsinghua University present AutoDroid-V2, a mobile UI agent that uses small language models (SLMs) to automate tasks on-device, addressing the privacy and cost concerns of cloud-hosted large language models (LLMs). The approach reframes UI task automation as a code generation problem: a compact, domain-specific SLM generates automation scripts that an on-device code interpreter executes. A document-centered strategy automatically constructs fine-grained API documentation for each app and synthesizes diverse task samples from it, teaching the agent to generate accurate and efficient scripts for unseen tasks. Compared with state-of-the-art mobile UI agents, the method achieves significantly higher success rates with lower latency and token consumption, and the code will be open-sourced.

Detailed introduction

Large language models (LLMs) have brought exciting new advances to mobile UI agents, a long-standing research field that aims to complete arbitrary natural language tasks through mobile UI interactions. However, existing UI agents usually demand the high reasoning capabilities of powerful large models that are difficult to deploy locally on end-users' devices, raising serious concerns about user privacy and centralized serving cost. One way to reduce the required model size is to customize a smaller domain-specific model with high-quality training data, e.g., large-scale human demonstrations across diverse apps and tasks, but such datasets are extremely difficult to obtain. Inspired by the remarkable coding abilities of recent small language models (SLMs), we propose to convert the UI task automation problem into a code generation problem, which can be effectively solved by an on-device SLM and efficiently executed with an on-device code interpreter. Unlike normal coding tasks that can be extensively pretrained with public datasets, generating UI automation code is challenging due to the diversity, complexity, and variability of target apps. Therefore, we adopt a document-centered approach that automatically builds fine-grained API documentation for each app and generates diverse task samples based on this documentation. By guiding the agent with the synthetic documents and task samples, it learns to generate precise and efficient scripts to complete unseen tasks. Based on detailed comparisons with state-of-the-art mobile UI agents, our approach effectively improves mobile task automation with significantly higher success rates and lower latency/token consumption. Code will be open-sourced.
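To make the "UI automation as code generation" idea concrete, here is a minimal hypothetical sketch: an SLM, guided by an app's API documentation, would emit a short script like `run_task()` below, which an on-device interpreter then executes step by step. All element identifiers, helper methods, and the `MockDevice` runtime are invented for illustration and are not AutoDroid-V2's actual API.

```python
class MockDevice:
    """Minimal stand-in for an on-device script interpreter's runtime.

    A real interpreter would dispatch these calls to the phone's
    accessibility/UI layer; here we just record the actions.
    """
    def __init__(self):
        self.log = []

    def tap(self, element_id):
        self.log.append(f"tap:{element_id}")

    def set_text(self, element_id, text):
        self.log.append(f"set_text:{element_id}:{text}")

# A script an SLM might generate for the natural language task
# "send the message 'hello' to Alice" (all identifiers hypothetical):
def run_task(device):
    device.tap("contact_search")             # open contact search
    device.set_text("search_box", "Alice")   # type the contact name
    device.tap("contact_Alice")              # select the contact
    device.set_text("message_input", "hello")
    device.tap("send_button")

device = MockDevice()
run_task(device)
print(device.log)
```

Executing a whole script in one pass, rather than querying a model for each UI step, is what enables the lower latency and token consumption the paper reports.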

Keywords

Mobile UI agents · Large language models (LLMs) · Small language models (SLMs) · On-device task automation · Code generation · Domain-specific model · High-quality training data · User privacy · Centralized serving cost · API documentation · Task samples · Script generation · Mobile task automation · Success rates · Latency · Token consumption · Open-source code