Large language models (LLMs) have brought exciting new advances to mobile UI agents, a long-standing research field that aims to complete arbitrary natural-language tasks through mobile UI interactions. However, existing UI agents usually demand the high reasoning capabilities of powerful large models that are difficult to deploy locally on end users' devices, which raises serious concerns about user privacy and centralized serving cost. One way to reduce the required model size is to customize a smaller domain-specific model with high-quality training data, e.g., large-scale human demonstrations across diverse apps and tasks, but such datasets are extremely difficult to obtain. Inspired by the remarkable coding abilities of recent small language models (SLMs), we propose to convert the UI task automation problem into a code generation problem, which can be effectively solved by an on-device SLM and efficiently executed with an on-device code interpreter. Unlike normal coding tasks that can be extensively pretrained on public datasets, generating UI automation code is challenging due to the diversity, complexity, and variability of target apps. Therefore, we adopt a document-centered approach that automatically builds fine-grained API documentation for each app and generates diverse task samples based on this documentation. By guiding the agent with the synthetic documents and task samples, it learns to generate precise and efficient scripts to complete unseen tasks. In detailed comparisons with state-of-the-art mobile UI agents, our approach effectively improves mobile task automation with significantly higher success rates and lower latency and token consumption. Code will be open-sourced.
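To make the code-generation framing concrete, here is a minimal sketch. The UI API names (launch_app, tap, set_text), the package name, and the example task are hypothetical placeholders rather than the paper's actual per-app documentation; the point is only that a natural-language task can be expressed as a short script over documented UI primitives and run by an on-device interpreter.

```python
# Hypothetical, simplified on-device UI API. The real function set is
# derived automatically from each app's generated documentation and will differ.

def launch_app(package: str) -> None:
    """Bring the target app to the foreground (placeholder action)."""
    print(f"[ui] launch {package}")

def tap(element_id: str) -> None:
    """Tap a UI element named in the app's API documentation."""
    print(f"[ui] tap {element_id}")

def set_text(element_id: str, text: str) -> None:
    """Type text into an input field."""
    print(f"[ui] type '{text}' into {element_id}")

# A script of the kind an SLM might generate for the task:
# "Send the message 'running late' to Alice in the messaging app."
def send_message(contact: str, message: str) -> None:
    launch_app("com.example.messenger")   # hypothetical package name
    tap("search_contacts")
    set_text("search_box", contact)
    tap("first_search_result")
    set_text("message_input", message)
    tap("send_button")

if __name__ == "__main__":
    send_message("Alice", "running late")
```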
The MentalManip dataset, introduced by Wang et al. (2024b), is a dialogue dataset built specifically for detecting and classifying mental manipulation. It contains 4,000 multi-turn fictional dialogues sourced from online movie scripts, annotated at multiple levels, including the presence of manipulation, the manipulation technique used, and the victim vulnerability targeted. The dataset was constructed with high-quality annotations to ensure consistency and accuracy, supporting research on mental manipulation detection.
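A minimal sketch of how one MentalManip-style record with its multi-level labels could be represented is shown below. The field names, label strings, and example dialogue are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ManipulationRecord:
    dialogue: List[str]                       # multi-turn dialogue, one string per turn
    is_manipulative: bool                     # presence of manipulation
    techniques: List[str] = field(default_factory=list)              # e.g. "gaslighting" (hypothetical label)
    victim_vulnerabilities: List[str] = field(default_factory=list)  # targeted vulnerability labels

example = ManipulationRecord(
    dialogue=[
        "A: You never remember anything I tell you.",
        "B: I'm sure you didn't mention the dinner.",
        "A: I definitely did; you're just losing it again.",
    ],
    is_manipulative=True,
    techniques=["gaslighting"],
    victim_vulnerabilities=["self-doubt"],
)

def manipulative_only(records: List[ManipulationRecord]) -> List[ManipulationRecord]:
    """Keep only dialogues labeled as containing manipulation."""
    return [r for r in records if r.is_manipulative]

print(len(manipulative_only([example])))  # -> 1
```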
The SimpleToM dataset is designed to evaluate models' ability to reason about characters' beliefs and the actions that follow from them. Each item presents a short everyday scenario with multiple-choice questions and answers, covering settings such as food items, personal belongings, and service industries.
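As a rough illustration, a SimpleToM-style multiple-choice item and a trivial accuracy computation might look like the sketch below. The field names, scenario text, and options are illustrative assumptions in the spirit of the dataset, not actual items from it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ToMItem:
    scenario: str          # short everyday situation with information hidden from a character
    question: str          # probes the character's belief or likely action
    options: List[str]
    answer_index: int      # index of the gold option

item = ToMItem(
    scenario=("Mary buys a sealed bag of chips at the store. Unknown to her, "
              "the chips inside are moldy."),
    question="What does Mary most likely believe about the chips?",
    options=["They are fine to eat.", "They are moldy."],
    answer_index=0,
)

def accuracy(items: List[ToMItem], predictions: List[int]) -> float:
    """Fraction of items where the predicted option index matches the gold index."""
    correct = sum(int(p == it.answer_index) for it, p in zip(items, predictions))
    return correct / len(items)

print(accuracy([item], [0]))  # -> 1.0
```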
Therachat is a digital companion designed to enhance mental health practices. It offers clinical tools, activities, and HIPAA/PIPEDA compliance to support therapists and clients.