OS-ATLAS: A Foundation Action Model for Generalist GUI Agents

October 30, 2024
Authors: Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao
cs.AI

Abstract

Existing efforts in building GUI agents heavily rely on the availability of robust commercial Vision-Language Models (VLMs) such as GPT-4o and GeminiProVision. Practitioners are often reluctant to use open-source VLMs due to their significant performance lag compared to their closed-source counterparts, particularly in GUI grounding and Out-Of-Distribution (OOD) scenarios. To facilitate future research in this area, we developed OS-Atlas - a foundational GUI action model that excels at GUI grounding and OOD agentic tasks through innovations in both data and modeling. We have invested significant engineering effort in developing an open-source toolkit for synthesizing GUI grounding data across multiple platforms, including Windows, Linux, MacOS, Android, and the web. Leveraging this toolkit, we are releasing the largest open-source cross-platform GUI grounding corpus to date, which contains over 13 million GUI elements. This dataset, combined with innovations in model training, provides a solid foundation for OS-Atlas to understand GUI screenshots and generalize to unseen interfaces. Through extensive evaluation across six benchmarks spanning three different platforms (mobile, desktop, and web), OS-Atlas demonstrates significant performance improvements over previous state-of-the-art models. Our evaluation also uncovers valuable insights into continuously improving and scaling the agentic capabilities of open-source VLMs.

AI-Generated Summary

Paper Overview

OS-Atlas is a foundation GUI action model that excels at GUI grounding and OOD agentic tasks. It introduces a toolkit for synthesizing GUI grounding data, yielding the largest open-source cross-platform GUI grounding corpus to date. The model operates in three distinct modes (grounding, action, and agent) and outperforms existing models across mobile, desktop, and web platforms.

Core Contribution

OS-Atlas addresses the limitations of existing VLM-based GUI action models through an open-source, multi-platform GUI grounding data-synthesis toolkit and a GUI grounding corpus of over 13 million elements.

Research Context

The research fills a gap in the field by enhancing GUI grounding and OOD performance, crucial for real-world applicability of GUI agent models. It significantly advances the benchmarking and evaluation of GUI agents.

Keywords

GUI grounding, OOD tasks, VLM-based models, multi-platform data synthesis, action modeling

Background

The paper focuses on developing OS-Atlas to overcome the shortcomings of existing VLM-based GUI action models. The research aims to enhance GUI grounding and OOD performance, critical for practical GUI agent applications.

Research Gap

Open-source VLM-based models lag significantly behind their closed-source counterparts in GUI grounding and OOD scenarios, limiting their usability in real-world applications.

Technical Challenges

Capturing desktop and mobile screenshots, simulating human interactions for data collection, and developing platform-specific data infrastructures posed technical challenges.

Prior Approaches

Previous models have been criticized for poor GUI grounding and OOD performance, necessitating the development of OS-Atlas with a focus on multi-platform data synthesis.

Methodology

The methodology of OS-Atlas involves GUI grounding pre-training and action fine-tuning phases, utilizing diverse data collection methods across platforms to enhance model performance.
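
To make the grounding pre-training phase concrete, the sketch below shows one plausible way a collected GUI element could be serialized into an instruction-tuning pair. The prompt template, the 0-1000 coordinate normalization, and the record fields are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical record produced by the data-collection toolkit:
#   {"screenshot": "page_001.png", "text": "Sign in",
#    "bbox": [x1, y1, x2, y2]}   (pixel coordinates)
def to_grounding_example(rec: dict, img_w: int, img_h: int) -> dict:
    """Turn one (screenshot, element) pair into a grounding training example."""
    x1, y1, x2, y2 = rec["bbox"]
    # Normalize the element's center point to a 0-1000 grid so the target
    # is resolution-independent (an assumed convention, not the paper's).
    cx = round((x1 + x2) / 2 / img_w * 1000)
    cy = round((y1 + y2) / 2 / img_h * 1000)
    prompt = f'Locate the element "{rec["text"]}" in the screenshot.'
    return {"image": rec["screenshot"], "prompt": prompt, "target": f"({cx},{cy})"}
```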

Theoretical Foundation

OS-Atlas integrates GUI grounding pre-training, which teaches the model to map natural-language references to on-screen coordinates, with action fine-tuning, which teaches it to execute agent tasks, improving both screenshot understanding and action execution.

Technical Architecture

The model's technical architecture includes a multi-platform data collection approach, rule-based data filtering, and simulation environments for data synthesis.
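
As an illustration of what rule-based data filtering might look like in such a pipeline, the sketch below drops degenerate, oversized, unlabeled, or duplicated element annotations. The thresholds and record fields are assumptions chosen for illustration, not the paper's actual rules.

```python
def keep_element(rec: dict, img_w: int, img_h: int, seen: set) -> bool:
    """Hypothetical rule-based filter for raw GUI element annotations."""
    x1, y1, x2, y2 = rec["bbox"]
    w, h = x2 - x1, y2 - y1
    if w <= 2 or h <= 2:                        # degenerate or invisible boxes
        return False
    if w * h > 0.9 * img_w * img_h:             # near full-screen containers
        return False
    if not rec.get("text", "").strip():         # no referable label
        return False
    key = (rec["text"], round(x1), round(y1))   # drop exact duplicates
    if key in seen:
        return False
    seen.add(key)
    return True
```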

Implementation Details

Various methods and tools were employed for data collection on different platforms, with a focus on GUI grounding and action execution for effective agent performance.

Innovation Points

OS-Atlas introduces a novel GUI grounding data-synthesis toolkit, a large-scale GUI grounding corpus, and multiple operating modes that improve agent performance.

Experimental Validation

The experimental validation of OS-Atlas involved rigorous testing across different platforms and datasets to evaluate its performance in GUI grounding and agent tasks.

Setup

Data collection involved crawling web pages, extracting elements, and segmenting screenshots, resulting in a diverse dataset of over 13 million GUI grounding instances.
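
A minimal sketch of the web-side collection step is shown below, assuming a browser-automation library such as Playwright. The CSS selector set, viewport size, and output schema are illustrative and not taken from the paper's toolkit.

```python
from playwright.sync_api import sync_playwright

INTERACTABLE = "a, button, input, select, textarea, [role=button], [onclick]"

def collect_page(url: str, out_png: str) -> list[dict]:
    """Capture a screenshot and the visible interactable elements of one page."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=out_png)            # the screenshot the model sees
        records = []
        for el in page.query_selector_all(INTERACTABLE):
            box = el.bounding_box()              # {"x", "y", "width", "height"} or None
            if box is None or box["width"] == 0 or box["height"] == 0:
                continue                         # skip invisible elements
            records.append({
                "text": el.inner_text().strip(),
                "bbox": [box["x"], box["y"],
                         box["x"] + box["width"], box["y"] + box["height"]],
            })
        browser.close()
        return records   # (screenshot, element text, bbox) triples for grounding
```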

Metrics

Evaluation metrics included action-type prediction accuracy, coordinate (grounding) prediction accuracy, and step success rate across the benchmark scenarios.
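
The snippet below is a minimal sketch of how these three metrics could be computed over paired predicted and reference steps. The step schema and the rule that a coordinate prediction counts as correct when it falls inside the ground-truth element's bounding box are assumptions, not the benchmarks' exact definitions.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Step:
    """One predicted or reference agent step (hypothetical schema)."""
    action_type: str                                   # e.g. "click", "type", "scroll"
    click_xy: tuple[float, float] | None               # predicted click point, if any
    gt_bbox: tuple[float, float, float, float] | None  # reference element box (x1, y1, x2, y2)

def in_bbox(xy, bbox) -> bool:
    x, y = xy
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2

def evaluate(preds: list[Step], refs: list[Step]) -> dict:
    """Action-type accuracy, coordinate accuracy, and step success rate."""
    type_ok = coord_ok = step_ok = 0
    for p, r in zip(preds, refs):
        t = p.action_type == r.action_type
        # Assumed rule: a coordinate prediction is correct when the predicted
        # point falls inside the reference element's bounding box, or when the
        # reference action needs no coordinates at all.
        c = r.gt_bbox is None or (p.click_xy is not None and in_bbox(p.click_xy, r.gt_bbox))
        type_ok += t
        coord_ok += c
        step_ok += t and c
    n = len(refs)
    return {"action_type_acc": type_ok / n,
            "coord_acc": coord_ok / n,
            "step_success_rate": step_ok / n}
```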

Results

OS-Atlas outperformed previous models in GUI grounding and agent tasks across different platforms, demonstrating superior performance in zero-shot OOD and supervised fine-tuning settings.

Comparative Analysis

Detailed comparisons with existing models and benchmarks highlighted OS-Atlas's significant improvements in GUI grounding and action execution, showcasing its potential for real-world applications.

Impact and Implications

The research on OS-Atlas has far-reaching implications for GUI agent development, benchmarking, and evaluation, offering a promising open-source alternative to commercial VLMs.

Key Findings

OS-Atlas demonstrated superior performance in addressing unseen tasks, zero-shot OOD scenarios, and multitask fine-tuning, showcasing its potential for diverse applications.

Limitations

While OS-Atlas shows significant improvements, challenges remain in scaling data synthesis and fine-tuning processes for optimal performance.

Future Directions

Future research opportunities include enhancing data scalability, improving fine-tuning mechanisms, and exploring broader applications of OS-Atlas in GUI agent development.

Practical Significance

OS-Atlas's advancements in GUI grounding and action modeling have practical implications for developing efficient and versatile GUI agents across various platforms, enhancing user interaction experiences.
