# Whisper Small IsiZulu
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ISIZULU-ASR-TRAIN dataset. It achieves the following results on the evaluation set:
- Loss: 1.1039
- WER: 84.8668
## Model description
More information needed
## Intended uses & limitations
More information needed
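Intended uses are not yet documented, but the checkpoint can be loaded for isiZulu transcription in the usual way. Below is a minimal sketch using the `transformers` pipeline API; the audio filename is a placeholder, not a file shipped with this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="zionia/whisper-small-isizulu",
)

# "sample.wav" is a placeholder for any isiZulu audio file.
result = asr("sample.wav")
print(result["text"])
```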
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch reconstructing them follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
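As a hedged reconstruction, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir` and the per-device interpretation of the batch sizes are assumptions, not taken from the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-isizulu",  # assumed path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,        # assumes the listed sizes are per device
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=15,
    fp16=True,                             # Native AMP mixed precision
)
```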
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 3.5743 | 1.0 | 18 | 3.0139 | 187.2881 |
| 2.6299 | 2.0 | 36 | 2.1201 | 99.0315 |
| 1.7123 | 3.0 | 54 | 1.6249 | 116.5860 |
| 1.1523 | 4.0 | 72 | 1.3463 | 97.5787 |
| 0.7429 | 5.0 | 90 | 1.1763 | 72.2760 |
| 0.4828 | 6.0 | 108 | 1.1024 | 68.4019 |
| 0.2375 | 7.0 | 126 | 1.0793 | 64.7700 |
| 0.1053 | 8.0 | 144 | 1.0770 | 63.5593 |
| 0.0498 | 9.0 | 162 | 1.0751 | 102.4213 |
| 0.0264 | 10.0 | 180 | 1.0856 | 74.9395 |
| 0.0152 | 11.0 | 198 | 1.0842 | 84.0194 |
| 0.0101 | 12.0 | 216 | 1.0852 | 83.4140 |
| 0.0076 | 13.0 | 234 | 1.0987 | 61.8644 |
| 0.0068 | 14.0 | 252 | 1.1034 | 62.1065 |
| 0.0064 | 15.0 | 270 | 1.1039 | 84.8668 |
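Note that WER values above 100 in early epochs are expected: word error rate counts substitutions, deletions, and insertions against the reference length, so insertion-heavy output can push it past 100%. A minimal sketch of how WER figures like those above are typically computed with the `evaluate` library follows; the example strings are placeholders, not data from the eval set.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder strings; real evaluation uses decoded model outputs
# and reference transcripts from the evaluation split.
predictions = ["ngiyabonga kakhulu"]
references = ["ngiyabonga kakhulu mngani"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")  # reported as a percentage, as in the table
```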
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1