Evaluating Streaming Continual Learning on TinyML Hardware Under Concept Drift
MSc Thesis Proposal
Alba Huti
JKU – Streaming AI Initiative
PROBLEM STATEMENT
Static TinyML Models
Most deployed TinyML models are trained offline and cannot adapt after deployment. When input distributions drift, accuracy collapses.
Concept Drift in Industrial IoT
Tiny embedded devices (MCUs) sense changing environments (e.g. worker activity, machine wear), but static models fail to handle these shifts.
Costly Cloud Retraining
The current workaround, expensive cloud-based retraining, contradicts the goals of sustainability, privacy, and edge autonomy.
No Systematic Benchmark Exists
No existing work comprehensively evaluates multiple lightweight online-learning methods on real MCUs under concept drift while simultaneously measuring accuracy, latency, and power.
Research Objective
Design and benchmark lightweight online continual learning methods that run entirely on microcontroller-class hardware — enabling on-device adaptation to concept drift without cloud retraining.
01
Algorithm Implementation
SGD Perceptron, Experience Replay & EWC-lite deployed on an Arm Cortex-M4 MCU
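As a sketch of the simplest of the three learners, a multi-class SGD perceptron fits in a few hundred bytes of state, well inside a microcontroller RAM budget. The feature count, class count, and learning rate below are illustrative placeholders (e.g. 6 IMU channels, 4 activity classes), not the thesis configuration:

```c
#define N_FEATURES 6   /* hypothetical: 6 IMU feature channels */
#define N_CLASSES  4   /* hypothetical: 4 activity classes */

/* Multi-class perceptron state; statically allocated, zero-initialized. */
static float w[N_CLASSES][N_FEATURES];
static float b[N_CLASSES];

/* Predict the class with the highest linear score. */
int perceptron_predict(const float *x) {
    int best = 0;
    float best_score = -1e30f;
    for (int c = 0; c < N_CLASSES; c++) {
        float s = b[c];
        for (int j = 0; j < N_FEATURES; j++) s += w[c][j] * x[j];
        if (s > best_score) { best_score = s; best = c; }
    }
    return best;
}

/* One online SGD step: on a mistake, pull the true class's weights
 * toward x and push the wrongly predicted class's weights away. */
void perceptron_update(const float *x, int y, float lr) {
    int pred = perceptron_predict(x);
    if (pred == y) return;
    for (int j = 0; j < N_FEATURES; j++) {
        w[y][j]    += lr * x[j];
        w[pred][j] -= lr * x[j];
    }
    b[y]    += lr;
    b[pred] -= lr;
}
```

Because each sample is processed once and discarded, this update loop is the baseline on top of which replay and EWC-style regularization would be layered.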
02
Real-World Evaluation
UCI HAR sensor data streamed sequentially with controlled distribution shifts
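One way such controlled shifts can be injected is to pass each raw feature vector through a drift transform once the stream crosses a chosen sample index. The scale and offset values below are arbitrary illustrations of an abrupt covariate shift (e.g. a sensor calibration change), not the actual evaluation protocol:

```c
#include <stddef.h>

/* Hypothetical drift simulator: after `drift_at` samples, rescale and
 * offset the features in place to mimic a sensor calibration shift. */
void apply_drift(float *x, size_t n_features, size_t t, size_t drift_at) {
    if (t < drift_at) return;          /* pre-drift: pass through unchanged */
    for (size_t j = 0; j < n_features; j++)
        x[j] = 1.5f * x[j] + 0.2f;     /* abrupt covariate shift */
}
```

Gradual drift can be simulated the same way by interpolating the transform strength over a window of samples instead of switching it on at a single point.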
03
Reproducible Framework
Trade-off curves across accuracy · memory · latency · energy
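The accuracy axis of such curves is commonly measured prequentially (test-then-train), optionally with a fading factor so that post-drift recovery becomes visible quickly. A minimal sketch, with the fading factor alpha left as a free parameter (alpha = 1 gives plain running accuracy):

```c
/* Prequential (test-then-train) accuracy with an exponential fading
 * factor: recent samples weigh more, so drift recovery shows up fast. */
typedef struct {
    float correct;  /* faded count of correct predictions */
    float seen;     /* faded count of all predictions */
    float alpha;    /* fading factor in (0, 1]; 1 = no fading */
} preq_t;

void preq_init(preq_t *p, float alpha) {
    p->correct = 0.0f;
    p->seen = 0.0f;
    p->alpha = alpha;
}

/* Record one prediction/label pair; returns the current faded accuracy. */
float preq_update(preq_t *p, int pred, int label) {
    p->correct = p->alpha * p->correct + (pred == label ? 1.0f : 0.0f);
    p->seen    = p->alpha * p->seen + 1.0f;
    return p->correct / p->seen;
}
```

Latency and energy points for the same curves would come from on-device measurement (e.g. a cycle counter and a power monitor), which is hardware-specific and not sketched here.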
EXPECTED OUTCOME
70% accuracy within a 10 KB RAM budget
85% recovery of pre-drift accuracy
Research Questions
Which lightweight continual-learning methods are most effective at maintaining predictive performance under concept drift on microcontroller-class TinyML hardware?
What trade-offs arise between predictive performance, memory usage, latency, and energy consumption when implementing streaming learning on resource-constrained MCUs?
How much additional computational and memory overhead is required for continual-learning methods to recover from concept drift compared with static TinyML models?
04
Methodology