POET: Training Neural Networks on Tiny Devices
with Integrated Rematerialization and Paging

Video (5 min) Paper GitHub (Coming soon)

Train BERT and other large models on smartphones

POET system overview: POET optimizes state-of-the-art ML models for training on edge devices. Operators of the ML model are profiled on the target edge device to obtain fine-grained compute and memory profiles. POET then uses integrated rematerialization and paging to produce an energy-optimal training schedule.


Overview

  • There is a growing trend to fine-tune models on edge devices. Fine-tuning on the edge satisfies privacy constraints and enables offline operation.
  • Challenge: Limited memory on edge devices makes training modern deep learning models infeasible.
  • Given a memory budget and a run-time constraint for ML training, POET (Private Optimal Energy Training) finds a provably energy-optimal plan for scheduling nodes of the training graph.
  • With POET, we are the first to demonstrate how to train memory-hungry SOTA ML models such as BERT and ResNets on smartphones and tiny ARM Cortex-M devices!
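Rematerialization, one of the two techniques POET schedules, trades compute for memory by discarding intermediate activations and recomputing them from saved checkpoints during the backward pass. The sketch below is a toy illustration of that idea in plain Python (the 4-layer chain and function names are invented for this example, not POET's implementation):

```python
# Toy sketch of rematerialization (gradient checkpointing).
# The layer chain and helper names here are illustrative only.

def forward(x, layers, checkpoints):
    """Run a chain of layers, storing activations only at checkpoint indices."""
    saved = {0: x}  # the input is always kept
    for i, f in enumerate(layers):
        x = f(x)
        if (i + 1) in checkpoints:
            saved[i + 1] = x
    return x, saved

def activation_at(i, layers, saved):
    """Recompute the activation entering layer i from the nearest checkpoint."""
    j = max(k for k in saved if k <= i)
    x = saved[j]
    for k in range(j, i):
        x = layers[k](x)  # replay the forward pass from the checkpoint
    return x

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]

# Keep only the activation after layer 2; recompute the rest on demand.
out, saved = forward(5, layers, checkpoints={2})
# The forward result is unchanged: ((5 + 1) * 2 - 3) ** 2 == 81.

# The backward pass would ask for the input to layer 3:
x3 = activation_at(3, layers, saved)  # recomputed from the checkpoint, == 9
```

Only 2 of 5 activations are resident at once, at the cost of replaying part of the forward pass; POET's contribution is deciding, per tensor, whether this recomputation or paging to secondary storage is cheaper in energy.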


Abstract

Fine-tuning models on edge devices like mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures because training is both memory and energy intensive. We present POET, an algorithm to enable training large neural networks on memory-scarce battery-operated edge devices. POET jointly optimizes the integrated search spaces of rematerialization and paging, two algorithms to reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we formulate a mixed-integer linear program (MILP) for energy-optimal training. Our approach enables training significantly larger models on embedded devices while reducing energy consumption, without modifying the mathematical correctness of backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints of a Cortex-M class embedded device while outperforming current edge training methods in energy efficiency. POET is an open-source project available at https://github.com/ShishirPatil/poet (Soon..!)
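To make the optimization concrete, here is a deliberately tiny brute-force version of the scheduling decision the MILP solves: for each activation, keep it in RAM, rematerialize it, or page it out, and pick the cheapest-energy combination that respects the memory budget and the run-time deadline. All costs below are made-up illustrative numbers, not measured device profiles, and the real POET formulation is a per-timestep MILP rather than this per-tensor enumeration:

```python
# Toy brute-force analogue of POET's energy-optimal scheduling problem.
# Per-tensor costs are invented for illustration; POET uses profiled values.
from itertools import product

# mem: RAM held if kept; remat_*: recompute energy/time; page_*: transfer energy/time
tensors = [
    dict(mem=4, remat_e=5, remat_t=2, page_e=2, page_t=6),
    dict(mem=8, remat_e=9, remat_t=4, page_e=3, page_t=7),
    dict(mem=6, remat_e=2, remat_t=1, page_e=4, page_t=8),
]
MEM_BUDGET, TIME_BUDGET = 10, 12

def cost(plan):
    """Return (peak memory, extra energy, extra time) for a plan."""
    mem = e = t = 0
    for choice, ten in zip(plan, tensors):
        if choice == "keep":
            mem += ten["mem"]
        elif choice == "remat":
            e += ten["remat_e"]; t += ten["remat_t"]
        else:  # "page" to secondary storage (e.g., flash)
            e += ten["page_e"]; t += ten["page_t"]
    return mem, e, t

# Minimize energy subject to the memory budget and run-time constraint.
best = min(
    (p for p in product(["keep", "remat", "page"], repeat=len(tensors))
     if cost(p)[0] <= MEM_BUDGET and cost(p)[2] <= TIME_BUDGET),
    key=lambda p: cost(p)[1],
)
# best == ("keep", "page", "keep"): keep tensors 0 and 2 in RAM, page tensor 1.
```

Brute force is exponential in the number of tensors; formulating the same feasibility-and-cost structure as an MILP is what lets POET solve it at the scale of real training graphs.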


Citation

@inproceedings{patil2022poet,
  title={POET: Training Neural Networks on Tiny Devices with
  Integrated Rematerialization and Paging},
  author={Patil, Shishir G and Jain, Paras and Dutta, Prabal and Stoica, Ion
  and Gonzalez, Joseph},
  booktitle={International Conference on Machine Learning},
  pages={17573--17583},
  year={2022},
  organization={PMLR}
}