wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals
Paper: [arXiv:2411.04644](https://arxiv.org/abs/2411.04644)
EOG-based sleep staging (5-class: Wake, N1, N2, N3, REM)
This is a wav2sleep model for automatic sleep stage classification from electrooculography (EOG). wav2sleep is a unified multi-modal deep learning approach to sleep staging that can operate on varying combinations of physiological signals.
| Property | Value |
|---|---|
| Input Signals | EOG-L, EOG-R |
| Output Classes | 5 |
| Architecture | Non-causal (bidirectional) |
Each input signal is expected at a fixed resolution per 30-second epoch:

| Signal | Samples per 30s epoch |
|---|---|
| ECG, PPG | 1,024 |
| ABD, THX | 256 |
| EOG-L, EOG-R | 4,096 |
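As a rough illustration of what these per-epoch resolutions mean, the sketch below splits a raw EOG trace into 30-second epochs and resamples each one to 4,096 samples. The assumed 256 Hz source rate, the `resample_epochs` helper, and the use of `scipy.signal.resample` are illustrative only, not part of the wav2sleep API (which handles preprocessing of EDF files itself).

```python
# Sketch only: illustrates the per-epoch resolutions listed above.
# The 256 Hz source rate and this helper are assumptions, not wav2sleep API.
import numpy as np
from scipy.signal import resample

RAW_FS = 256            # assumed raw EOG sampling rate (Hz)
EPOCH_SECONDS = 30
TARGET_SAMPLES = 4096   # expected per-epoch resolution for EOG-L / EOG-R

def resample_epochs(eog: np.ndarray) -> np.ndarray:
    """Split a 1-D EOG trace into 30 s epochs and resample each to 4,096 samples."""
    samples_per_epoch = RAW_FS * EPOCH_SECONDS
    n_epochs = len(eog) // samples_per_epoch
    epochs = eog[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
    return resample(epochs, TARGET_SAMPLES, axis=1)

# Example: one hour of synthetic EOG -> array of shape (120, 4096)
print(resample_epochs(np.random.randn(RAW_FS * 3600)).shape)
```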
Load a pretrained model with `load_model`:

```python
from wav2sleep import load_model

# Load the model from the Hugging Face Hub
model = load_model("hf://joncarter/wav2sleep-eog")

# Or load from a local checkpoint
model = load_model("/path/to/checkpoint")
```
For inference on new data:
```python
from wav2sleep import load_model, predict_on_folder

model = load_model("hf://joncarter/wav2sleep-eog")
predict_on_folder(
    input_folder="/path/to/edf_files",
    output_folder="/path/to/predictions",
    model=model,
)
```
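Predictions written by `predict_on_folder` can be inspected as a hypnogram. The snippet below is a generic sketch: the integer-to-stage mapping is an assumption, so check the prediction files for the actual output format and label encoding.

```python
# Sketch: plot a hypnogram from predicted stage indices. The index-to-stage
# mapping below is an assumption, not taken from the wav2sleep package.
import matplotlib.pyplot as plt
import numpy as np

STAGES = ["Wake", "N1", "N2", "N3", "REM"]   # assumed index order (0-4)

preds = np.random.randint(0, 5, size=960)    # placeholder: ~8 h of 30 s epochs
hours = np.arange(len(preds)) * 30 / 3600    # epoch start times in hours

plt.step(hours, preds, where="post")
plt.yticks(range(len(STAGES)), STAGES)
plt.xlabel("Time (hours)")
plt.ylabel("Predicted stage")
plt.title("Hypnogram (illustrative)")
plt.tight_layout()
plt.show()
```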
The model was trained on polysomnography data from multiple publicly available datasets managed by the National Sleep Research Resource (NSRR), including SHHS and MESA.
If you use this model, please cite the wav2sleep paper:

```bibtex
@misc{carter2024wav2sleep,
    title={wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals},
    author={Jonathan F. Carter and Lionel Tarassenko},
    year={2024},
    eprint={2411.04644},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
}
```
License: MIT