Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level
November 5, 2024
Authors: Antoine Grosnit, Alexandre Maraval, James Doran, Giuseppe Paolo, Albert Thomas, Refinath Shahul Hameed Nabeezath Beevi, Jonas Gonzalez, Khyati Khandelwal, Ignacio Iacobacci, Abdelhakim Benechehab, Hamza Cherkaoui, Youssef Attia El-Hili, Kun Shao, Jianye Hao, Jun Yao, Balazs Kegl, Haitham Bou-Ammar, Jun Wang
cs.AI
Abstract
We introduce Agent K v1.0, an end-to-end autonomous data science agent
designed to automate, optimise, and generalise across diverse data science
tasks. Fully automated, Agent K v1.0 manages the entire data science life cycle
by learning from experience. It leverages a highly flexible structured
reasoning framework to enable it to dynamically process memory in a nested
structure, effectively learning from accumulated experience stored to handle
complex reasoning tasks. It optimises long- and short-term memory by
selectively storing and retrieving key information, guiding future decisions
based on environmental rewards. This iterative approach allows it to refine
decisions without fine-tuning or backpropagation, achieving continuous
improvement through experiential learning. We evaluate our agent's capabilities
using Kaggle competitions as a case study. Following a fully automated
protocol, Agent K v1.0 systematically addresses complex and multimodal data
science tasks, employing Bayesian optimisation for hyperparameter tuning and
feature engineering. Our new evaluation framework rigorously assesses Agent K
v1.0's end-to-end capabilities to generate and send submissions starting from a
Kaggle competition URL. Results demonstrate that Agent K v1.0 achieves a 92.5%
success rate across tasks, spanning tabular, computer vision, NLP, and
multimodal domains. When benchmarking against 5,856 human Kaggle competitors by
calculating Elo-MMR scores for each, Agent K v1.0 ranks in the top 38%,
demonstrating an overall skill level comparable to Expert-level users. Notably,
its Elo-MMR score falls between the first and third quartiles of scores
achieved by human Grandmasters. Furthermore, our results indicate that Agent K
v1.0 has reached a performance level equivalent to Kaggle Grandmaster, with a
record of 6 gold, 3 silver, and 7 bronze medals, as defined by Kaggle's
progression system.
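
The memory mechanism described above can be pictured with a minimal sketch: the agent selectively stores each decision together with the environmental reward it earned, retrieves its highest-reward past experiences for a task, and refines its next decision from them, without fine-tuning or backpropagation. The names `MemoryEntry`, `AgentMemory`, and `refine_decision` below are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of a reward-guided long-/short-term memory loop, as described
# in the abstract. All class and function names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    task: str          # e.g. a Kaggle competition identifier
    decision: str      # the action or plan the agent committed to
    reward: float      # environmental reward observed after acting


@dataclass
class AgentMemory:
    short_term: list = field(default_factory=list)   # current-task context
    long_term: list = field(default_factory=list)    # cross-task experience

    def store(self, entry: MemoryEntry, reward_threshold: float = 0.0) -> None:
        """Keep every entry in short-term memory; promote only useful ones."""
        self.short_term.append(entry)
        if entry.reward >= reward_threshold:          # selective storage
            self.long_term.append(entry)

    def retrieve(self, task: str, k: int = 3) -> list:
        """Return the k highest-reward past decisions relevant to this task."""
        relevant = [e for e in self.long_term if e.task == task]
        return sorted(relevant, key=lambda e: e.reward, reverse=True)[:k]


def refine_decision(memory: AgentMemory, task: str, propose, evaluate,
                    rounds: int = 5) -> str:
    """Iteratively refine a decision from retrieved experience alone:
    only store/retrieve and rewards, no fine-tuning or backpropagation."""
    best_decision, best_reward = None, float("-inf")
    for _ in range(rounds):
        context = memory.retrieve(task)      # guide the next proposal
        decision = propose(task, context)    # e.g. an LLM call
        reward = evaluate(decision)          # environmental feedback
        memory.store(MemoryEntry(task, decision, reward))
        if reward > best_reward:
            best_decision, best_reward = decision, reward
    return best_decision
```

Here `propose` and `evaluate` stand in for the LLM-driven planning step and the competition environment, respectively; the point of the sketch is only that improvement comes from storing and retrieving rewarded experience, not from updating model weights.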
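
The abstract also mentions Bayesian optimisation for hyperparameter tuning. The sketch below shows what such a step could look like on a tabular task; the paper does not specify the optimiser or model, so scikit-optimize's `gp_minimize`, the gradient-boosting classifier, the dataset, and the search space are all assumptions.

```python
# Illustrative Bayesian-optimisation sketch for hyperparameter tuning on a
# tabular task. Library, model, dataset, and search space are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Real

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a Kaggle dataset

search_space = [
    Real(1e-3, 0.3, prior="log-uniform", name="learning_rate"),
    Integer(50, 500, name="n_estimators"),
    Integer(2, 8, name="max_depth"),
]

def objective(params):
    """Negative CV accuracy: gp_minimize minimises, so the sign is flipped."""
    learning_rate, n_estimators, max_depth = params
    model = GradientBoostingClassifier(
        learning_rate=learning_rate,
        n_estimators=n_estimators,
        max_depth=max_depth,
        random_state=0,
    )
    return -cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

# Gaussian-process surrogate with an acquisition function, 25 evaluations.
result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best params:", result.x, "best CV accuracy:", -result.fun)
```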