Made by Bobr AI

Continual Learning on TinyML Hardware: Benchmarking Study

Explore a benchmark study on lightweight online learning for microcontrollers, focusing on concept drift adaptation in Industrial IoT and edge devices.

#tinyml #continual-learning #microcontrollers #edge-ai #iot #machine-learning #concept-drift #benchmarking

Evaluating Streaming Continual Learning on TinyML Hardware Under Concept Drift

MSc Thesis Proposal

Alba Huti

JKU – Streaming AI Initiative


PROBLEM STATEMENT

Static TinyML Models

Most deployed TinyML models are trained offline and cannot adapt after deployment. When input distributions drift, accuracy collapses.

Concept Drift in Industrial IoT

Tiny embedded devices (MCUs) sense changing environments (e.g. worker activity, machine wear), but static models fail to handle these shifts.

Costly Cloud Retraining

The current workaround is expensive cloud-based retraining, which contradicts the goals of sustainability, privacy, and edge autonomy.

No Systematic Benchmark Exists

No existing work has comprehensively evaluated multiple lightweight online-learning methods on real MCUs under drift while simultaneously measuring accuracy, latency, and power.

Research Objective

Design and benchmark lightweight online continual learning methods that run entirely on microcontroller-class hardware — enabling on-device adaptation to concept drift without cloud retraining.

01
Algorithm Implementation
SGD Perceptron, Experience Replay & EWC-lite deployed on Arm Cortex-M4 MCU
02
Real-World Evaluation
UCI HAR sensor data streamed sequentially with controlled distribution shifts
03
Reproducible Framework
Trade-off curves across accuracy · memory · latency · energy
EXPECTED OUTCOME
Perceptron → 70% accuracy at 10 KB RAM  ·  Experience Replay → 85% recovery at 5× update time

Research Questions

RQ1

Which lightweight continual-learning methods are most effective at maintaining predictive performance under concept drift on microcontroller-class TinyML hardware?

RQ2

What trade-offs arise between predictive performance, memory usage, latency, and energy consumption when implementing streaming learning on resource-constrained MCUs?

RQ3

How much additional computational and memory overhead is required for continual-learning methods to recover from concept drift compared with static TinyML models?


Methodology

01

Hardware Platform

Arm Cortex-M4 MCU
(Arduino Nano 33 BLE Sense)
64 MHz, 256 KB SRAM, 1 MB Flash
Strict TinyML compute & memory constraints
02

Algorithms (3)

Online SGD Perceptron
Gradient descent per sample, minimal RAM
Experience Replay
Ring buffer (50–100 samples), mini-batches
EWC-lite (optional)
Regularization penalizing weight changes
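The three update rules above can be sketched compactly in C. This is a minimal illustration, not the thesis implementation: the feature count, buffer capacity, learning rate, and the {-1, +1} binary label encoding are all assumptions made for the sketch (the real HAR task is multi-class).

```c
/* Sketch of the three candidate update rules. All names and sizes are
   illustrative assumptions, not values fixed by the proposal. */
#include <stddef.h>

#define N_FEAT  6    /* hypothetical per-window feature count */
#define BUF_CAP 64   /* replay capacity; proposal uses 50-100 samples */

typedef struct {
    float w[N_FEAT];
    float b;
} Perceptron;

/* --- Online SGD perceptron: one update per streamed sample --- */
static float predict(const Perceptron *m, const float *x) {
    float s = m->b;
    for (size_t i = 0; i < N_FEAT; i++) s += m->w[i] * x[i];
    return s > 0.0f ? 1.0f : -1.0f;          /* labels in {-1, +1} */
}

static void sgd_update(Perceptron *m, const float *x, float y, float lr) {
    if (predict(m, x) != y) {                /* classic mistake-driven rule */
        for (size_t i = 0; i < N_FEAT; i++) m->w[i] += lr * y * x[i];
        m->b += lr * y;
    }
}

/* --- Experience replay: fixed-size ring buffer of past samples --- */
typedef struct {
    float  x[BUF_CAP][N_FEAT];
    float  y[BUF_CAP];
    size_t head, count;
} ReplayBuf;

static void replay_push(ReplayBuf *rb, const float *x, float y) {
    for (size_t i = 0; i < N_FEAT; i++) rb->x[rb->head][i] = x[i];
    rb->y[rb->head] = y;
    rb->head = (rb->head + 1) % BUF_CAP;
    if (rb->count < BUF_CAP) rb->count++;
}

/* Replay a small mini-batch of the most recent stored samples. */
static void replay_update(Perceptron *m, const ReplayBuf *rb,
                          size_t batch, float lr) {
    for (size_t k = 0; k < batch && k < rb->count; k++) {
        size_t idx = (rb->head + BUF_CAP - 1 - k) % BUF_CAP;
        sgd_update(m, rb->x[idx], rb->y[idx], lr);
    }
}

/* --- EWC-lite: pull weights back toward an anchor after each step --- */
static void ewc_lite_penalty(Perceptron *m, const float *w_anchor,
                             float lambda) {
    for (size_t i = 0; i < N_FEAT; i++)
        m->w[i] -= lambda * (m->w[i] - w_anchor[i]);
}
```

The ring buffer keeps RAM usage constant (BUF_CAP × (N_FEAT + 1) floats plus two indices), which is what makes replay feasible within 256 KB SRAM.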
03

Dataset & Streaming

UCI HAR Dataset
Accelerometer, 50 Hz, 128-sample windows
Streamed sample-by-sample in chronological order
Prequential evaluation (Test-then-train)
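The prequential protocol can be expressed as a short test-then-train loop: each sample is scored by the current model first, and only then used for learning, so accuracy is always measured on unseen data. The model hooks and the toy "last-label" model below are hypothetical stand-ins for the actual classifiers:

```c
/* Prequential (test-then-train) evaluation sketch. The predict/update
   hooks are hypothetical; any streaming classifier can plug in. */
#include <stddef.h>

typedef int  (*predict_fn)(void *model, const float *x);
typedef void (*update_fn)(void *model, const float *x, int y);

/* Fraction of samples predicted correctly BEFORE their label was seen. */
static float prequential_eval(void *model, predict_fn predict,
                              update_fn update, const float *xs,
                              const int *ys, size_t n, size_t n_feat) {
    size_t correct = 0;
    for (size_t t = 0; t < n; t++) {
        const float *x = xs + t * n_feat;
        if (predict(model, x) == ys[t]) correct++;  /* test ...       */
        update(model, x, ys[t]);                    /* ... then train */
    }
    return n ? (float)correct / (float)n : 0.0f;
}

/* Tiny demo "model" for illustration: always predicts the last label. */
typedef struct { int last; } LastLabel;
static int  ll_predict(void *m, const float *x) {
    (void)x; return ((LastLabel *)m)->last;
}
static void ll_update(void *m, const float *x, int y) {
    (void)x; ((LastLabel *)m)->last = y;
}
```

Because every sample is tested exactly once and in order, the running accuracy curve from this loop directly shows the dip and recovery around each drift injection point.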
04

Concept Drift Scenarios

Four injected drift types:
Sudden · Gradual · Incremental · Recurring
Controlled injection points to cleanly measure recovery speed
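One way to realize the controlled injection points is a weight function over two labeled "concepts": the weight says how much the new concept governs sample t. The parameter names (t_drift, width, period) and the square-wave treatment of recurring drift are assumptions for illustration:

```c
/* Sketch of controlled drift injection. Returns the weight of the NEW
   concept at stream position t, in [0,1]. Parameters are hypothetical:
   t_drift = injection point, width = transition length, period = for
   recurring drift. For "gradual", the caller samples the new concept
   with probability w; for "incremental", it interpolates concept
   parameters by w directly. */
#include <stddef.h>

typedef enum {
    DRIFT_SUDDEN,
    DRIFT_GRADUAL,
    DRIFT_INCREMENTAL,
    DRIFT_RECURRING
} DriftType;

static float drift_weight(DriftType d, size_t t, size_t t_drift,
                          size_t width, size_t period) {
    switch (d) {
    case DRIFT_SUDDEN:                    /* hard switch at t_drift */
        return t >= t_drift ? 1.0f : 0.0f;
    case DRIFT_GRADUAL:                   /* linear ramp over `width` */
    case DRIFT_INCREMENTAL:
        if (t < t_drift) return 0.0f;
        if (t >= t_drift + width) return 1.0f;
        return (float)(t - t_drift) / (float)width;
    case DRIFT_RECURRING:                 /* old concept returns cyclically */
        return ((t / period) % 2) ? 1.0f : 0.0f;
    }
    return 0.0f;
}
```

Keeping injection in a single pure function of t makes runs reproducible: the same seed and schedule yield the same stream on every board.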
05

Evaluation Metrics

Prequential accuracy over time
Drift recovery speed
Inference & update latency
RAM & Flash usage profiling
Energy consumption
(power monitor / CPU cycle est.)
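Of these metrics, drift recovery speed needs the most concrete definition. A minimal sketch, assuming a per-sample 0/1 correctness trace from the prequential loop: recovery time is the number of samples after the injection point until sliding-window accuracy first re-reaches a target (window size and threshold below are hypothetical choices):

```c
/* Sketch of a drift recovery-speed metric. `correct` holds one 0/1
   entry per streamed sample (1 = prequential prediction was right).
   Returns the number of samples after t_drift until the accuracy over
   the last `win` samples first reaches `target`, or -1 if it never
   recovers. Window size and target are illustrative assumptions. */
#include <stddef.h>

static long recovery_time(const unsigned char *correct, size_t n,
                          size_t t_drift, size_t win, float target) {
    size_t hits = 0;
    for (size_t t = t_drift; t < n; t++) {
        hits += correct[t];
        if (t - t_drift >= win)
            hits -= correct[t - win];      /* slide the window forward */
        size_t len = (t - t_drift + 1 < win) ? (t - t_drift + 1) : win;
        if (len == win && (float)hits / (float)win >= target)
            return (long)(t - t_drift + 1);
    }
    return -1;                             /* never recovered */
}
```

Reporting recovery time in samples (convertible to seconds via the 50 Hz rate) keeps the metric comparable across algorithms with different update latencies.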