Craw4LLM: Efficient Web Crawling for LLM Pretraining

February 19, 2025
Authors: Shi Yu, Zhiyuan Liu, Chenyan Xiong
cs.AI

Abstract

Web crawls are a primary source of pretraining data for large language models (LLMs), but the majority of crawled web pages are discarded during pretraining due to low data quality. This paper presents Crawl4LLM, an efficient web crawling method that explores the web graph based on the preferences of LLM pretraining. Specifically, it uses a webpage's influence on LLM pretraining as the priority score in the web crawler's scheduler, replacing the standard graph-connectivity-based priority. Our experiments on a web graph containing 900 million webpages from a commercial search engine's index demonstrate the efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just 21% of URLs crawled, LLMs pretrained on Crawl4LLM data reach the same downstream performance as previous crawls, significantly reducing crawling waste and alleviating the burden on websites. Our code is publicly available at https://github.com/cxcscmu/Crawl4LLM.
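The scheduling idea is straightforward to sketch. Below is a minimal, illustrative best-first crawler in Python: discovered URLs sit in a priority queue ordered by an estimated pretraining-influence score instead of indegree or PageRank. The names `crawl`, `score_fn`, `fetch_fn`, and `budget` are hypothetical, not the Crawl4LLM API; in the paper's simulation setting on a pre-indexed web graph, a document's content would be available to the scorer before the page is selected, which is what would let `score_fn` rank uncrawled URLs.

```python
import heapq
from typing import Callable, Iterable, List, Set, Tuple


def crawl(
    seeds: Iterable[str],
    score_fn: Callable[[str], float],           # hypothetical: estimated pretraining influence
    fetch_fn: Callable[[str], Tuple[str, List[str]]],  # hypothetical: returns (text, outlinks)
    budget: int,
) -> List[str]:
    """Best-first crawl whose frontier is ordered by an estimated
    pretraining-influence score rather than graph connectivity."""
    # heapq implements a min-heap, so scores are negated to pop the
    # highest-scoring URL first.
    frontier = [(-score_fn(url), url) for url in seeds]
    heapq.heapify(frontier)
    visited: Set[str] = {url for _, url in frontier}
    crawled: List[str] = []

    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        _text, outlinks = fetch_fn(url)  # download the page, extract its links
        crawled.append(url)
        for link in outlinks:
            if link not in visited:
                visited.add(link)
                # Priority = estimated influence of this page on LLM
                # pretraining (e.g., a quality-classifier score), not
                # indegree or PageRank.
                heapq.heappush(frontier, (-score_fn(link), link))
    return crawled
```

In a live crawler, the score of an undownloaded URL would have to be approximated (for example, propagated from the pages linking to it), so this sketch reflects the paper's simulation setting rather than open-web deployment.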
