# models/
Feature extraction and training for FocusGuard (17 features → 10 for the MLP/XGBoost models; geometric, hybrid, and L2CS paths; see the root `README.md`).
## What is here
- `face_mesh.py`: MediaPipe landmarks
- `head_pose.py`: yaw/pitch/roll and face-orientation scores
- `eye_scorer.py`: EAR, gaze offsets, MAR
- `collect_features.py`: writes per-session `.npz` feature files
- `mlp/`: MLP training and utilities
- `xgboost/`: XGBoost training and utilities
## 1) Setup

From the repo root:

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
## 2) Collect training data (if needed)

```bash
python -m models.collect_features --name <participant_name>
```

This writes files under `data/collected_<participant_name>/`.
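If you need to check what a collected session actually contains, the `.npz` files can be inspected directly with NumPy. A minimal sketch using a synthetic file, since the key names and shapes written by `collect_features.py` are not documented here (the names below are illustrative only):

```python
import numpy as np

# Write a tiny stand-in session file to demonstrate the round-trip.
# Real files live under data/collected_<name>/; "features"/"labels"
# are hypothetical key names, not necessarily what the collector uses.
np.savez("session_demo.npz",
         features=np.zeros((120, 17), dtype=np.float32),  # 17 raw features per frame
         labels=np.zeros(120, dtype=np.int64))

with np.load("session_demo.npz") as session:
    for key in session.files:
        print(key, session[key].shape, session[key].dtype)
```

The same loop works unchanged on the real collected files; only the path differs.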
## 3) Train models
Both scripts read their configuration from `config/default.yaml` (split ratios, seeds, hyperparameters).
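For orientation, that config presumably has a shape along these lines (field names and values are purely illustrative; check `config/default.yaml` for the real schema):

```yaml
# Illustrative sketch only -- not the actual file contents.
split:
  train: 0.7
  val: 0.15
  test: 0.15
seed: 42
mlp:
  hidden_sizes: [64, 32]
  lr: 0.001
```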
### MLP

```bash
python -m models.mlp.train
```
Outputs:
- checkpoint: `checkpoints/mlp_best.pt` (best by validation F1)
- scaler/meta: `checkpoints/scaler_mlp.joblib`, `checkpoints/meta_mlp.npz`
- log: `evaluation/logs/face_orientation_training_log.json`
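At inference time, raw features must pass through the saved scaler before reaching the MLP. Assuming `checkpoints/scaler_mlp.joblib` holds a standard zero-mean/unit-variance scaler (an assumption; the training code defines the actual type), its transform is equivalent to:

```python
import numpy as np

def standardize(x, mean, scale):
    """StandardScaler-style transform: (x - mean) / scale, element-wise."""
    return (x - mean) / scale

# Illustrative statistics for a 10-feature input vector; the real values
# come from the fitted scaler saved in checkpoints/scaler_mlp.joblib.
mean = np.full(10, 0.5)
scale = np.full(10, 2.0)
x = np.ones(10)

x_scaled = standardize(x, mean, scale)
print(x_scaled)  # each entry is (1.0 - 0.5) / 2.0 = 0.25
```

Forgetting this step (or using statistics from a different training run) silently degrades predictions, which is why the scaler is checkpointed alongside the model.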
### XGBoost

```bash
python -m models.xgboost.train
```
Outputs:
- checkpoint: `checkpoints/xgboost_face_orientation_best.json`
- log: `evaluation/logs/xgboost_face_orientation_training_log.json`
## 4) Run evaluation after training

```bash
python -m evaluation.justify_thresholds
python -m evaluation.grouped_split_benchmark --quick
python -m evaluation.feature_importance --quick --skip-lofo
```
Generated reports:
- `evaluation/THRESHOLD_JUSTIFICATION.md`
- `evaluation/GROUPED_SPLIT_BENCHMARK.md`
- `evaluation/feature_selection_justification.md`
## 5) Optional: ClearML tracking

Run training with ClearML logging:

```bash
USE_CLEARML=1 python -m models.mlp.train
USE_CLEARML=1 python -m models.xgboost.train
```
Remote execution via an agent queue:

```bash
USE_CLEARML=1 CLEARML_QUEUE=gpu python -m models.mlp.train
USE_CLEARML=1 CLEARML_QUEUE=gpu python -m models.xgboost.train
```
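The `USE_CLEARML` flag implies tracking is opt-in inside the training scripts. A minimal sketch of that env-gating pattern, assuming ClearML's standard `Task.init` entry point (the project and task names below are hypothetical, not what the scripts actually use):

```python
import os

def maybe_init_clearml(project="FocusGuard", task="mlp-train"):
    """Initialize ClearML tracking only when USE_CLEARML=1 is set."""
    if os.environ.get("USE_CLEARML") != "1":
        return None  # tracking disabled; training proceeds without logging
    from clearml import Task  # imported lazily so clearml stays an optional dependency
    return Task.init(project_name=project, task_name=task)

os.environ.pop("USE_CLEARML", None)  # ensure the flag is unset for this demo
print(maybe_init_clearml())  # None
```

The lazy import keeps `clearml` out of the default dependency set, matching the opt-in behavior of the commands above.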