Dataset Distillation as Data Compression: A Rate-Utility Perspective

1Harbin Institute of Technology, Shenzhen 2Peng Cheng Laboratory 3City University of Hong Kong

Visualization of our synthetic samples

Abstract

Driven by the “scale-is-everything” paradigm, modern machine learning increasingly demands ever-larger datasets and models, yielding prohibitive computational and storage requirements. Dataset distillation mitigates this by compressing the original dataset into a small set of synthetic samples while preserving its utility. Yet existing methods either maximize performance under a fixed storage budget or seek suitable synthetic data representations for redundancy removal, without jointly optimizing both objectives. In this work, we propose a joint rate-utility optimization method for dataset distillation. We parameterize synthetic samples as optimizable latent codes decoded by extremely lightweight networks. We estimate the Shannon entropy of the quantized latents as the rate measure and plug in any existing distillation loss as the utility measure, trading them off via a Lagrange multiplier. To enable fair, cross-method comparisons, we introduce bits per class (bpc), a precise storage metric that accounts for sample, label, and decoder parameter costs. On CIFAR-10, CIFAR-100, and ImageNet-128, our method achieves up to 170× greater compression than standard distillation at comparable accuracy. Across diverse bpc budgets, distillation losses, and backbone architectures, our approach consistently establishes better rate-utility trade-offs.
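
To make the joint objective concrete, below is a minimal PyTorch sketch of the rate-utility trade-off described in the abstract. It is an illustrative assumption rather than the paper's actual implementation: the TinyDecoder architecture, the discretized Gaussian prior used in rate_bits, the straight-through rounding, the Lagrange multiplier value lam, and the placeholder utility_loss are all stand-ins; any existing distillation loss (e.g., gradient or trajectory matching) would take the place of utility_loss.

import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Extremely lightweight decoder that maps latent codes to synthetic images."""
    def __init__(self, latent_ch=8, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def rate_bits(z_hat, mu, log_scale):
    """Shannon rate estimate (in bits) of integer-quantized latents under a
    discretized Gaussian prior: -log2 P(z_hat)."""
    scale = log_scale.exp()
    cdf = lambda x: 0.5 * (1 + torch.erf((x - mu) / (scale * 2 ** 0.5)))
    p = (cdf(z_hat + 0.5) - cdf(z_hat - 0.5)).clamp_min(1e-9)
    return -torch.log2(p).sum()

def utility_loss(images):
    """Placeholder: substitute any existing distillation loss here
    (gradient matching, trajectory matching, distribution matching, ...)."""
    return images.pow(2).mean()

# Optimizable latent codes (one per synthetic sample) plus prior parameters.
latents = nn.Parameter(torch.randn(10, 8, 8, 8))  # 10 synthetic samples, 8x8x8 latents
mu = nn.Parameter(torch.zeros(()))
log_scale = nn.Parameter(torch.zeros(()))
decoder = TinyDecoder()
lam = 0.01  # Lagrange multiplier trading rate against utility

opt = torch.optim.Adam([latents, mu, log_scale, *decoder.parameters()], lr=1e-3)

for step in range(1000):
    # Straight-through rounding: quantize in the forward pass, keep gradients.
    z_hat = latents + (latents.round() - latents).detach()
    images = decoder(z_hat)  # decoded synthetic samples, here 3x32x32
    loss = utility_loss(images) + lam * rate_bits(z_hat, mu, log_scale)
    opt.zero_grad()
    loss.backward()
    opt.step()

Under the bits-per-class (bpc) metric introduced above, the stored payload would then be tallied as the entropy-coded quantized latents plus the label and decoder-parameter bits, summed and divided by the number of classes.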

System Overview

Overview of our proposed method

Visualization of synthetic samples

Rate-Utility Curves

Comparison of the rate-utility curves on the Nette subset of ImageNet, CIFAR-10, and CIFAR-100, with the rate axis displayed on a logarithmic scale.


BibTeX

@inproceedings{bao2025ruo,
  author    = {Youneng Bao and Yiping Liu and Zhuo Chen and Yongsheng Liang and Mu Li and Kede Ma},
  title     = {Dataset Distillation as Data Compression: A Rate-Utility Perspective},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2025},
}