# MoCap to IMU Knowledge Distillation for Activity Recognition
> A framework for training lightweight IMU models with MoCap teacher supervision, targeting robust, edge-deployable human activity recognition in industrial settings.

Tags: knowledge-distillation, human-activity-recognition, imu-sensors, mocap, edge-ai, deep-learning, ms-g3d, tcn
## Proposed Analysis Tools
* **Datasets:** LARa and OpenPack synchronized MoCap and IMU recordings.
* **Teacher Model:** MS-G3D (spatio-temporal graph convolutions over the MoCap skeleton).
* **Student Model:** Lightweight TCN / LSTM targeting edge deployment with a compact 2-4 IMU setup.
* **Methodology:** Knowledge distillation with a multi-objective loss combining KL-divergence (soft-target matching) and MSE (feature matching).
* **Evaluation:** Robustness testing (cross-user, cross-scenario) and deployment metrics (latency, model size, macro-F1).
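The KL + MSE multi-objective loss named in the methodology bullet can be sketched as follows. This is a minimal NumPy illustration, not the project's implementation; `alpha`, `temperature`, and the feature-matching interface are assumed hyperparameters, and the temperature-squared scaling follows common knowledge-distillation practice:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits,
                      student_feat, teacher_feat,
                      alpha=0.5, temperature=4.0):
    """Multi-objective KD loss: temperature-scaled KL divergence on
    class logits plus MSE on intermediate feature embeddings."""
    p_t = softmax(teacher_logits, temperature)  # teacher soft targets
    p_s = softmax(student_logits, temperature)  # student soft predictions
    # KL(teacher || student), mean over the batch; the T^2 factor keeps
    # gradient magnitudes comparable across temperature settings.
    kl = np.mean(np.sum(
        p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    kl *= temperature ** 2
    # Feature-matching term between student and teacher embeddings
    # (assumes they have been projected to the same dimensionality).
    mse = np.mean((student_feat - teacher_feat) ** 2)
    return alpha * kl + (1.0 - alpha) * mse
```

In a training loop, `alpha` would trade off soft-target imitation against feature matching, and a framework implementation (e.g. PyTorch) would replace the NumPy ops with differentiable equivalents.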

## Calendar of Activities
* **Timeline:** Project spans April '26 to September '26.
* **Phases:** Literature review, teacher/student implementation, distillation training, robustness/feasibility testing, and thesis finalization.

## Expected Contributions
* **Transfer Pipeline:** Reproducible MoCap-to-IMU distillation framework for temporal motion data.
* **Generalization:** Evidence that MoCap supervision improves cross-user and cross-scenario robustness.
* **Sensor Optimization:** Guidance on the minimal sufficient IMU sensor count for industrial tasks.
* **Benchmarking:** Standardized protocol using synthetic perturbations and industry-relevant metrics.
* **Edge Insights:** Data on inference latency and memory footprint for embedded exoskeleton systems.
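The inference-latency measurement mentioned under edge insights could be sketched as below; the `model_fn` callable, warm-up count, and run count are illustrative assumptions rather than the project's actual benchmarking protocol:

```python
import time

def mean_latency_ms(model_fn, sample, n_warmup=10, n_runs=100):
    """Average single-window inference latency in milliseconds.
    `model_fn` is any callable mapping one IMU window to logits."""
    for _ in range(n_warmup):
        model_fn(sample)  # warm up caches / lazy initialization
    start = time.perf_counter()
    for _ in range(n_runs):
        model_fn(sample)
    return (time.perf_counter() - start) / n_runs * 1e3
```

On an embedded target, this timing loop would wrap the deployed runtime's inference call, and model size would be reported alongside (e.g. serialized file size or parameter count).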