
How far can we go with ImageNet for Text-to-Image generation?

February 28, 2025
Authors: L. Degeorge, A. Ghosh, N. Dufour, D. Picard, V. Kalogeiton
cs.AI

Abstract

Recent text-to-image (T2I) generation models have achieved remarkable results by training on billion-scale datasets, following a "bigger is better" paradigm that prioritizes data quantity over quality. We challenge this established paradigm by demonstrating that strategic data augmentation of small, well-curated datasets can match or outperform models trained on massive web-scraped collections. Using only ImageNet enhanced with well-designed text and image augmentations, we achieve a +2 overall score over SD-XL on GenEval and +5 on DPGBench while using just 1/10th the parameters and 1/1000th the training images. Our results suggest that strategic data augmentation, rather than massive datasets, could offer a more sustainable path forward for T2I generation.
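The abstract does not spell out which text and image augmentations are used, so the sketch below is only an assumption of what such a pipeline could look like: standard torchvision image transforms paired with a hypothetical caption-templating step that expands bare ImageNet class labels into varied text prompts. The transform parameters and caption templates are illustrative, not the authors' recipe.

```python
# Illustrative sketch only: the paper's actual augmentations are not detailed in the
# abstract; the transforms and caption templates below are assumptions.
import random
from torchvision import transforms

# Hypothetical image augmentation pipeline for ImageNet training images.
image_aug = transforms.Compose([
    transforms.RandomResizedCrop(256, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Hypothetical text augmentation: turn a bare class label into varied captions.
CAPTION_TEMPLATES = [
    "a photo of a {}",
    "a close-up picture of a {}",
    "an image showing a {} in its natural setting",
]

def augment_caption(class_name: str) -> str:
    """Sample one caption template for the given class name (illustrative only)."""
    return random.choice(CAPTION_TEMPLATES).format(class_name)

# Example usage:
# img_tensor = image_aug(pil_image)
# caption = augment_caption("golden retriever")
```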
