Continual Learning on Edge Microcontrollers: TinyML Thesis
A research proposal evaluating lightweight streaming-learning methods for concept-drift recovery on Arm Cortex-M4 microcontrollers using TinyML.
MSc Thesis Proposal — Alba Huti
Research Objective
Research Questions:
Which lightweight continual-learning methods are most effective at maintaining predictive performance under concept drift on microcontroller-class TinyML hardware?
What trade-offs arise between accuracy, memory, latency, and energy in streaming learning on MCUs?
How much computational overhead do continual-learning methods incur, relative to static TinyML models, in order to recover from concept drift?
JKU Streaming AI Initiative | Arm Cortex-M4 | TinyML | Concept Drift
Proposed Analysis Tools
Online SGD (Perceptron)
A single-layer linear model updated per sample via stochastic gradient descent. No additional memory beyond the model weights; pure online learning (a minimal update sketch follows below).
Memory: ~10 KB RAM
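A minimal sketch of how the per-sample update could look on the MCU, assuming a one-vs-rest linear classifier over a fixed-length feature vector; `FEATURE_DIM`, `NUM_CLASSES`, and the function name are illustrative placeholders, not part of the proposal. The entire state is the weight matrix itself, which is what keeps the RAM footprint well inside the ~10 KB budget.

```c
#define FEATURE_DIM 64   /* placeholder feature count per window */
#define NUM_CLASSES 6    /* UCI HAR distinguishes six activities */

/* One weight vector plus bias per class, held in static RAM (~1.5 KB here). */
static float weights[NUM_CLASSES][FEATURE_DIM];
static float bias[NUM_CLASSES];

/* Pure online learning: one-vs-rest perceptron step on a single sample. */
void sgd_update(const float *x, int label, float lr)
{
    for (int c = 0; c < NUM_CLASSES; ++c) {
        float score = bias[c];
        for (int i = 0; i < FEATURE_DIM; ++i)
            score += weights[c][i] * x[i];

        float target = (c == label) ? 1.0f : -1.0f;
        if (score * target <= 0.0f) {        /* update only on mistakes */
            for (int i = 0; i < FEATURE_DIM; ++i)
                weights[c][i] += lr * target * x[i];
            bias[c] += lr * target;
        }
    }
}
```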
Experience Replay
A fixed-size ring buffer storing 50–100 labeled samples, with periodic mini-batch gradient updates drawn from the stored data (sketch below).
Memory: +100 KB RAM
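A possible shape for the replay buffer, assuming raw 3-axis, 128-sample windows are stored (about 1.5 KB each, so 64 slots come to roughly 100 KB, in line with the budget above); the slot count, `NUM_AXES`, and `model_train_step` are assumptions for illustration.

```c
#include <string.h>

#define WINDOW_LEN   128                     /* samples per window (2.56 s @ 50 Hz) */
#define NUM_AXES     3                       /* assumed 3-axis accelerometer input  */
#define SAMPLE_DIM   (WINDOW_LEN * NUM_AXES)
#define REPLAY_SLOTS 64                      /* 64 x ~1.5 KB ~= 100 KB of RAM       */

typedef struct {
    float window[SAMPLE_DIM];
    int   label;
} replay_sample_t;

/* Whatever per-sample training step the deployed learner exposes (placeholder). */
void model_train_step(const float *x, int label, float lr);

static replay_sample_t replay_buf[REPLAY_SLOTS];
static int replay_head  = 0;
static int replay_count = 0;

/* Store a labeled window, overwriting the oldest entry once the buffer is full. */
void replay_store(const float *window, int label)
{
    memcpy(replay_buf[replay_head].window, window, sizeof(float) * SAMPLE_DIM);
    replay_buf[replay_head].label = label;
    replay_head = (replay_head + 1) % REPLAY_SLOTS;
    if (replay_count < REPLAY_SLOTS) replay_count++;
}

/* Periodic mini-batch pass: replay every stored window through the online update. */
void replay_update(float lr)
{
    for (int i = 0; i < replay_count; ++i)
        model_train_step(replay_buf[i].window, replay_buf[i].label, lr);
}
```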
EWC-Lite (Optional)
Elastic Weight Consolidation: penalizes changes to weights that were important for earlier data, adding only a small regularization overhead (sketch below).
Memory: small overhead
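One way EWC-Lite could be realized is a diagonal-Fisher penalty folded into the SGD step: the gradient of the penalty term lambda * F_i * (w_i - w*_i) is added to each weight's task gradient. The parameter count, array names, and lambda handling below are illustrative assumptions; the extra state is just two float arrays (about 4 KB here), which is why the overhead stays small.

```c
#define NUM_PARAMS 512   /* placeholder total parameter count */

/* Anchor weights captured before new data arrives, plus per-weight importance
 * (a diagonal Fisher information estimate built from recent gradients).       */
static float anchor_w[NUM_PARAMS];
static float fisher[NUM_PARAMS];

/* SGD step with the EWC penalty gradient  lambda * F_i * (w_i - w*_i)  added. */
void ewc_step(float *w, const float *grad, float lr, float lambda)
{
    for (int i = 0; i < NUM_PARAMS; ++i) {
        float penalty = lambda * fisher[i] * (w[i] - anchor_w[i]);
        w[i] -= lr * (grad[i] + penalty);
    }
}
```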
Static Baseline
Offline-trained TinyML model. No parameter updates during streaming. Conventional deployment paradigm.
No adaptation
Evaluation Dataset: UCI HAR
Accelerometer time series sampled at 50 Hz, segmented into 128-sample windows (2.56 s), streamed sequentially with controlled concept drift (windowing sketch below).
Sequential streaming
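A sketch of the on-device windowing that the streaming protocol implies, assuming a 3-axis accelerometer and the 50% window overlap used by the original UCI HAR segmentation; the buffer layout and function name are illustrative.

```c
#include <stdbool.h>
#include <string.h>

#define WINDOW_LEN 128   /* 2.56 s at 50 Hz              */
#define NUM_AXES   3     /* assumed 3-axis accelerometer */

static float window_buf[NUM_AXES][WINDOW_LEN];
static int   window_fill = 0;

/* Feed one 50 Hz reading; returns true whenever a full 128-sample window is ready. */
bool stream_push(const float axes[NUM_AXES])
{
    for (int a = 0; a < NUM_AXES; ++a)
        window_buf[a][window_fill] = axes[a];

    if (++window_fill < WINDOW_LEN)
        return false;

    /* 50% overlap as in UCI HAR: keep the most recent half for the next window. */
    for (int a = 0; a < NUM_AXES; ++a)
        memmove(window_buf[a], window_buf[a] + WINDOW_LEN / 2,
                sizeof(float) * (WINDOW_LEN / 2));
    window_fill = WINDOW_LEN / 2;
    return true;
}
```

The controlled drift itself would most naturally be injected on the host side of the stream (for example by switching subjects or perturbing sensor orientation at fixed points), so no extra on-device logic is sketched here.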
Target Hardware: Arm Cortex-M4
Arduino Nano 33 BLE Sense. 256 KB SRAM, 1 MB Flash, 64 MHz, FPU. Bare-metal C/C++ or TF Lite Micro.
TinyML Platform
Metrics: Prequential Accuracy · Drift Recovery Speed · Inference Latency · RAM/Flash · Energy per Update
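The primary metric, prequential accuracy, follows the standard test-then-train protocol: each labeled window first scores the current model and only then updates it. A minimal sketch, with `model_predict` and `model_train_step` standing in as placeholders for the learner under evaluation:

```c
/* Placeholders for the learner under evaluation. */
int  model_predict(const float *x);
void model_train_step(const float *x, int label, float lr);

static unsigned long preq_seen    = 0;
static unsigned long preq_correct = 0;

/* Test-then-train on one labeled window. */
void prequential_step(const float *x, int label, float lr)
{
    if (model_predict(x) == label)   /* test first ...               */
        preq_correct++;
    preq_seen++;

    model_train_step(x, label, lr);  /* ... then train on the sample */
}

/* Running prequential accuracy over everything seen so far. */
float prequential_accuracy(void)
{
    return preq_seen ? (float)preq_correct / (float)preq_seen : 0.0f;
}
```

Drift recovery speed could then be derived from the same stream, e.g. as the number of samples a sliding-window version of this accuracy needs to return to its pre-drift level.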
Calendar of Activities
Literature Review & Gap Analysis
Algorithm Selection & Design
Dataset Preparation & Streaming Protocol
Implementation (C/C++ on MCU)
Experiment Execution & Data Collection
Analysis & Visualization
Writing & Thesis Draft
Revision & Submission
(Gantt-style timeline scheduling these activities across Months 1–6.)
Expected Contribution
📦 Benchmark Framework
A publicly available suite (code + instructions) for evaluating TinyML continual learning on microcontrollers. Consistent hardware setup, streaming protocol, and evaluation metrics ensure reproducibility.
📊 Empirical Trade-off Data
Concrete accuracy-versus-resource measurements for each algorithm. Illustrative expectation: Perceptron → ~70% accuracy at <10 KB RAM; replay buffer → ~85% accuracy at +100 KB RAM and ~5× update time.
💡 Practical Design Guidelines
Developer-ready recommendations such as: 'With 50 KB of free RAM, use Online SGD for the fastest recovery; with 150 KB, a replay buffer yields the best final accuracy.' Direct support for real-world, hardware-constrained deployments.
🌱 Advancing Streaming AI
Demonstrates that on-device incremental learning reduces dependence on cloud retraining, lowering the CO₂ footprint and the volume of transmitted data. Provides evidence that distributed few-shot learning is feasible on real MCUs.
Supervisor: JKU Streaming AI Initiative | Platform: Arm Cortex-M4 | Dataset: UCI HAR
- tinyml
- continual-learning
- microcontrollers
- edge-ai
- concept-drift
- streaming-ai
- embedded-systems