wav2sleep-eog

EOG-based sleep staging (5-class: Wake, N1, N2, N3, REM)

Model Description

This is a wav2sleep model for automatic sleep stage classification from electrooculography (EOG). wav2sleep is a unified multi-modal deep learning approach that can process various combinations of physiological signals for sleep staging.

Model Details

Property         Value
Input Signals    EOG-L, EOG-R
Output Classes   5
Architecture     Non-causal (bidirectional)

Signal Specifications

Signal          Samples per 30 s epoch
ECG, PPG        1,024
ABD, THX        256
EOG-L, EOG-R    4,096
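
The samples-per-epoch values imply a per-signal sampling rate of samples ÷ 30 s, i.e. roughly 136.5 Hz for the EOG channels. As a rough illustration only (the actual wav2sleep preprocessing pipeline, including any filtering and normalization, may differ), the sketch below resamples a raw 30-second EOG epoch to the expected 4,096 samples:

import numpy as np
from scipy.signal import resample

EPOCH_SECONDS = 30
SAMPLES_PER_EPOCH = {"ECG": 1024, "PPG": 1024, "ABD": 256, "THX": 256, "EOG-L": 4096, "EOG-R": 4096}

# Example: a 30 s EOG-L epoch recorded at 256 Hz (7,680 raw samples).
raw_epoch = np.random.randn(256 * EPOCH_SECONDS)

# FFT-based resampling to the model's expected epoch length (4,096 samples).
model_ready = resample(raw_epoch, SAMPLES_PER_EPOCH["EOG-L"])
print(model_ready.shape)  # (4096,)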

Usage

from wav2sleep import load_model

# Load model from Hugging Face Hub
model = load_model("hf://joncarter/wav2sleep-eog")

# Or load from local checkpoint
model = load_model("/path/to/checkpoint")

For inference on new data:

from wav2sleep import load_model, predict_on_folder

model = load_model("hf://joncarter/wav2sleep-eog")
predict_on_folder(
    input_folder="/path/to/edf_files",
    output_folder="/path/to/predictions",
    model=model,
)
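
predict_on_folder writes per-recording predictions to output_folder. The exact output format is not documented here, so the following is a hedged post-processing sketch only: assuming you end up with an array of per-epoch class indices (0 = Wake, 1 = N1, 2 = N2, 3 = N3, 4 = REM is an assumed ordering), it maps them to stage labels and totals the time spent in each stage, given 30-second epochs.

import numpy as np

STAGE_LABELS = ["Wake", "N1", "N2", "N3", "REM"]  # assumed index order
EPOCH_SECONDS = 30

# Hypothetical per-epoch predictions for part of one night.
predictions = np.array([0, 0, 1, 2, 2, 2, 3, 3, 2, 4, 4, 0])

for index, label in enumerate(STAGE_LABELS):
    minutes = (predictions == index).sum() * EPOCH_SECONDS / 60
    print(f"{label}: {minutes:.1f} min")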

Training Data

The model was trained on polysomnography recordings from multiple publicly available datasets managed by the National Sleep Research Resource (NSRR), including the Sleep Heart Health Study (SHHS) and the Multi-Ethnic Study of Atherosclerosis (MESA).

Citation

@misc{carter2024wav2sleep,
    title={wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals},
    author={Jonathan F. Carter and Lionel Tarassenko},
    year={2024},
    eprint={2411.04644},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
}

License

MIT
