
whisper-40hrs-meta

This model is a PEFT adapter fine-tuned from openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3563
  • WER: 17.0161
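
The framework versions listed below include PEFT, so the published weights are presumably an adapter that must be attached to the base model rather than a standalone checkpoint. A minimal loading-and-transcription sketch under that assumption; the variable `audio` (a 16 kHz mono waveform) is a placeholder, not part of this card:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the frozen base checkpoint, then attach this repo's adapter weights.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "greenw0lf/whisper-40hrs-meta")

# Feature extractor and tokenizer come from the base model.
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# `audio` is assumed to be a 16 kHz mono float array (e.g. loaded via librosa).
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
features = inputs.input_features.to(model.device, dtype=torch.float16)
predicted_ids = model.generate(input_features=features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```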

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 98
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
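
For reproduction, the list above maps onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a hedged sketch: `output_dir` and anything not in the list (data collation, evaluation cadence) are assumptions, not taken from this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-40hrs-meta",  # assumed name; not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",              # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=98,
    num_train_epochs=3.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```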

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.2139        | 0.1529 | 50   | 1.2026          | 39.7759 |
| 1.1738        | 0.3058 | 100  | 1.0859          | 37.0182 |
| 1.0085        | 0.4587 | 150  | 0.8664          | 34.3007 |
| 0.8304        | 0.6116 | 200  | 0.6222          | 30.5130 |
| 0.6634        | 0.7645 | 250  | 0.4938          | 25.6181 |
| 0.6328        | 0.9174 | 300  | 0.4322          | 21.8137 |
| 0.5947        | 1.0703 | 350  | 0.4087          | 19.3646 |
| 0.5847        | 1.2232 | 400  | 0.3962          | 18.9553 |
| 0.6664        | 1.3761 | 450  | 0.3874          | 20.6998 |
| 0.5805        | 1.5291 | 500  | 0.3789          | 18.4889 |
| 0.5745        | 1.6820 | 550  | 0.3735          | 18.2776 |
| 0.5431        | 1.8349 | 600  | 0.3687          | 17.5328 |
| 0.5532        | 1.9878 | 650  | 0.3654          | 17.3416 |
| 0.5560        | 2.1407 | 700  | 0.3627          | 17.2543 |
| 0.5581        | 2.2936 | 750  | 0.3607          | 17.1302 |
| 0.6142        | 2.4465 | 800  | 0.3588          | 17.0530 |
| 0.5406        | 2.5994 | 850  | 0.3574          | 17.0665 |
| 0.5222        | 2.7523 | 900  | 0.3567          | 17.0061 |
| 0.5753        | 2.9052 | 950  | 0.3563          | 17.0161 |
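
The WER column appears to be reported in percent. A quick way to reproduce numbers on this scale is the `wer` metric from the evaluate library; the sample strings below are toy placeholders, not drawn from the actual evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["de kat zit op de mat"]  # hypothetical model output
references = ["de kat zat op de mat"]   # hypothetical ground truth

# compute() returns a fraction; the table above reports WER * 100.
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}")  # -> WER: 16.6667 for this toy pair
```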

Framework versions

  • PEFT 0.16.0
  • Transformers 4.52.0
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.2