Understanding data storage and ingestion for large-scale deep recommendation model training: Industrial product

Mark Zhao (Stanford), Niket Agarwal, Aarti Basant, Buğra Gedik, Satadru Pan, Mustafa Ozdal, Rakesh Komuravelli, Jerry Pan, Tianshu Bao, Haowei Lu, Sundaram Narayanan, Jack Langman, Kevin Wilfong, Harsha Rastogi, Carole-Jean Wu, Christos Kozyrakis (Stanford), Parik Pol

International Symposium on Computer Architecture (ISCA), 2022


Abstract

Datacenter-scale AI training clusters consisting of thousands of domain-specific accelerators (DSAs) are used to train increasingly complex deep learning models. These clusters rely on a data storage and ingestion (DSI) pipeline, responsible for storing exabytes of training data and serving it at tens of terabytes per second. As DSAs continue to push training efficiency and throughput, the DSI pipeline is becoming the dominant factor that constrains overall training performance and capacity. Innovations that improve the efficiency and performance of DSI systems and hardware are urgent, demanding a deep understanding of DSI characteristics and infrastructure at scale. This paper presents Meta’s end-to-end DSI pipeline, composed of a central data warehouse built on distributed storage and a Data PreProcessing Service that scales to eliminate data stalls. We characterize how hundreds of models are …