Fine-tuning models on edge devices like mobile phones would enable privacy-preserving personalization over sensitive data.
However, edge training has historically been limited to relatively small models with simple architectures because training
is both memory and energy intensive. We present POET, an algorithm to enable training large neural networks on memory-scarce
battery-operated edge devices. POET jointly optimizes the integrated search space of rematerialization and paging,
two algorithms to reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we
formulate a mixed-integer linear program (MILP) for energy-optimal training. Our approach enables training significantly
larger models on embedded devices while reducing energy consumption, all without altering the mathematical correctness of
backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints
of a Cortex-M class embedded device while outperforming current edge training methods in energy efficiency. POET is an
open-source project available at https://github.com/ShishirPatil/poet.
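To make the MILP concrete, below is a minimal, self-contained toy sketch, not POET's actual formulation (which schedules individual operators across forward and backward time steps using per-operator costs profiled on the target hardware). It picks, per layer, whether to keep, rematerialize, or page an activation so that extra energy is minimized subject to memory and runtime budgets. The layer costs, variable names, and the use of the PuLP solver are all illustrative assumptions, not part of POET's codebase.

```python
# Toy MILP in the spirit of POET's formulation (hypothetical costs and names;
# POET's real program schedules operators per execution time step).
import pulp

# Per-layer activation memory (KB) and the extra time (ms) / energy (mJ)
# incurred if that activation is rematerialized or paged instead of kept.
mem_kb   = [120, 200, 160, 80]
remat_ms = [4.0, 7.0, 6.0, 2.0]
remat_mj = [3.0, 5.5, 4.5, 1.5]
page_ms  = [2.0, 3.5, 3.0, 1.0]
page_mj  = [4.0, 6.0, 5.0, 2.0]

MEM_BUDGET_KB = 300     # SRAM available for resident activations
TIME_BUDGET_MS = 10.0   # extra runtime allowed on top of baseline training
layers = range(len(mem_kb))

prob = pulp.LpProblem("poet_toy", pulp.LpMinimize)

# Binary choice per layer: keep in memory, rematerialize, or page out.
keep  = [pulp.LpVariable(f"keep_{i}",  cat="Binary") for i in layers]
remat = [pulp.LpVariable(f"remat_{i}", cat="Binary") for i in layers]
page  = [pulp.LpVariable(f"page_{i}",  cat="Binary") for i in layers]

# Objective: minimize total extra energy spent on recomputation and paging.
prob += pulp.lpSum(remat_mj[i] * remat[i] + page_mj[i] * page[i] for i in layers)

for i in layers:
    # Exactly one strategy per layer.
    prob += keep[i] + remat[i] + page[i] == 1

# Resident activations must fit in the memory budget (simplified: no
# time-varying peak-memory model as in the full formulation).
prob += pulp.lpSum(mem_kb[i] * keep[i] for i in layers) <= MEM_BUDGET_KB

# Extra recompute/transfer time must respect the runtime constraint.
prob += pulp.lpSum(remat_ms[i] * remat[i] + page_ms[i] * page[i]
                   for i in layers) <= TIME_BUDGET_MS

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in layers:
    choice = "keep" if keep[i].value() else ("remat" if remat[i].value() else "page")
    print(f"layer {i}: {choice}")
```

The structural point carries over to POET: the objective is linear in binary decision variables and the memory and runtime budgets enter as linear constraints, which is what lets an off-the-shelf MILP solver find an energy-optimal rematerialization-plus-paging schedule.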
@inproceedings{patil2022poet,
  title={POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging},
  author={Patil, Shishir G and Jain, Paras and Dutta, Prabal and Stoica, Ion and Gonzalez, Joseph},
  booktitle={International Conference on Machine Learning},
  pages={17573--17583},
  year={2022},
  organization={PMLR}
}