cedar: Optimized and Unified Machine Learning Input Data Pipelines (Journal Article)

Overview

abstract

  • The input data pipeline is an essential component of each machine learning (ML) training job. It is responsible for reading massive amounts of training data, processing batches of samples using complex transformations, and loading them onto training nodes at low latency and high throughput. Performant input data systems are becoming increasingly critical due to skyrocketing data volumes and training throughput demands. Unfortunately, current input data systems cannot fully leverage key performance optimizations, resulting in hugely inefficient infrastructures that require significant resources, or worse, underutilize expensive accelerators. To address these demands, we present cedar, an optimized and unified programming framework for ML input data pipelines. cedar allows users to define a training job's data pipeline using composable operators that support arbitrary ML frameworks and libraries. cedar's extensible optimizer systematically combines and applies performance optimizations to the pipeline. cedar then orchestrates pipeline processing across configurable local and distributed compute resources to efficiently meet the training job's data throughput demands. Across eight pipelines, cedar improves performance by 1.87× to 10.65× compared to state-of-the-art input data systems.
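The abstract describes defining an input pipeline from composable operators that an optimizer can then rearrange. As a rough illustration of that idea only, here is a minimal sketch of operator composition over a sample stream; the names (`Operator`, `Map`, `Batch`, `Pipeline`) are hypothetical and are not cedar's actual API.

```python
# Hypothetical sketch of a composable input-data pipeline, in the spirit of
# the abstract. These class names are illustrative, not cedar's real API.
from typing import Any, Callable, Iterable, Iterator


class Operator:
    """A pipeline stage: transforms one sample stream into another."""

    def apply(self, stream: Iterator[Any]) -> Iterator[Any]:
        raise NotImplementedError


class Map(Operator):
    """Applies a per-sample transformation (e.g., decode, augment)."""

    def __init__(self, fn: Callable[[Any], Any]):
        self.fn = fn

    def apply(self, stream: Iterator[Any]) -> Iterator[Any]:
        return (self.fn(x) for x in stream)


class Batch(Operator):
    """Groups consecutive samples into fixed-size batches."""

    def __init__(self, size: int):
        self.size = size

    def apply(self, stream: Iterator[Any]) -> Iterator[Any]:
        batch = []
        for x in stream:
            batch.append(x)
            if len(batch) == self.size:
                yield batch
                batch = []
        if batch:  # emit the final, possibly short, batch
            yield batch


class Pipeline:
    """Chains operators over a source; an optimizer could reorder,
    fuse, or offload the ops list before iteration begins."""

    def __init__(self, source: Iterable[Any], ops: list[Operator]):
        self.source = source
        self.ops = ops

    def __iter__(self) -> Iterator[Any]:
        stream: Iterator[Any] = iter(self.source)
        for op in self.ops:
            stream = op.apply(stream)
        return stream


# Usage: a toy "decode" step followed by batching.
pipe = Pipeline(range(10), [Map(lambda x: x * 2), Batch(4)])
print(list(pipe))  # → [[0, 2, 4, 6], [8, 10, 12, 14], [16, 18]]
```

Because each stage only sees an iterator, an optimizer that inspects the `ops` list can, for example, fuse adjacent `Map` stages or move work onto remote workers without changing the pipeline the user wrote.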

publication date

  • October 1, 2024

Full Author List

  • Zhao M; Adamiak E; Kozyrakis C

author count

  • 3

International Standard Serial Number (ISSN)

  • 2150-8097

Additional Document Info

start page

  • 488

end page

  • 502

volume

  • 18

issue

  • 2