
sayakpaul/glpn-nyu-finetuned-diode-221116-110652

This model is a fine-tuned version of vinvino02/glpn-nyu on the diode-subset dataset.
It achieves the following results on the evaluation set:

  • Loss: 0.4018
  • Mae: 0.3272
  • Rmse: 0.4546
  • Abs Rel: 0.3934
  • Log Mae: 0.1380
  • Log Rmse: 0.1907
  • Delta1: 0.4598
  • Delta2: 0.7659
  • Delta3: 0.9082
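The card reports the metrics without defining them. As a hedged sketch (the exact conventions used in training are an assumption — base-10 logs for the log metrics and the usual 1.25^k thresholds for the deltas are common choices in monocular depth estimation, but the card does not state them), they can be computed like this:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics.

    Assumptions (not stated in the card): log terms use base-10 logs,
    and Delta-k is the fraction of pixels with max(pred/gt, gt/pred)
    below 1.25**k.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    abs_err = np.abs(pred - gt)
    log_err = np.abs(np.log10(pred) - np.log10(gt))
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "mae": abs_err.mean(),
        "rmse": np.sqrt((abs_err ** 2).mean()),
        "abs_rel": (abs_err / gt).mean(),
        "log_mae": log_err.mean(),
        "log_rmse": np.sqrt((log_err ** 2).mean()),
        "delta1": (ratio < 1.25).mean(),
        "delta2": (ratio < 1.25 ** 2).mean(),
        "delta3": (ratio < 1.25 ** 3).mean(),
    }
```

Under these definitions, Delta1 ≈ 0.46 means roughly 46% of predicted depths fall within a factor of 1.25 of the ground truth.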

Model description

More information needed

Intended uses & limitations

More information needed
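The card leaves usage unspecified. As a minimal sketch (not taken from the card): GLPN checkpoints are supported by the `transformers` depth-estimation pipeline, so this model can be tried as follows — the test image URL is only an example.

```python
from transformers import pipeline
from PIL import Image
import requests

# Load this checkpoint through the generic depth-estimation pipeline.
depth = pipeline(
    "depth-estimation",
    model="sayakpaul/glpn-nyu-finetuned-diode-221116-110652",
)

# Any RGB image works; this COCO sample is just an illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth(image)
# result["predicted_depth"] is the raw tensor;
# result["depth"] is a rescaled PIL image for visualization.
result["depth"].save("depth.png")
```

Note that the model was fine-tuned on a DIODE subset, so depth scale and quality on out-of-domain scenes should be validated before use.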

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 24
  • eval_batch_size: 48
  • seed: 2022
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
  • mixed_precision_training: Native AMP
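The hyperparameters above imply a concrete learning-rate trajectory: with 10 epochs at 72 optimizer steps each (per the results table, 720 steps total) and warmup_ratio 0.1, the "linear" schedule warms up for 72 steps and then decays to zero. A sketch of that schedule — the closed form below mirrors the behavior of the Hugging Face linear scheduler, but is a reimplementation, not the library code:

```python
def linear_warmup_lr(step, base_lr=1e-5, total_steps=720, warmup_ratio=0.1):
    """LR at a given optimizer step under a linear warmup + linear decay
    schedule. total_steps=720 matches 10 epochs x 72 steps/epoch from
    the training results table."""
    warmup_steps = int(total_steps * warmup_ratio)  # 72 here
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at the final step.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

So the peak learning rate of 1e-05 is reached at step 72 (end of epoch 1) and decays linearly to zero by step 720.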

Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    | Rmse   | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.3984        | 1.0   | 72   | 1.1606          | 3.2154 | 3.2710 | 4.6927  | 0.6627  | 0.7082   | 0.0    | 0.0053 | 0.0893 |
| 0.8305        | 2.0   | 144  | 0.5445          | 0.6035 | 0.8404 | 0.8013  | 0.2102  | 0.2726   | 0.2747 | 0.5358 | 0.7609 |
| 0.4601        | 3.0   | 216  | 0.4484          | 0.4041 | 0.5376 | 0.5417  | 0.1617  | 0.2188   | 0.3771 | 0.6932 | 0.8692 |
| 0.4211        | 4.0   | 288  | 0.4251          | 0.3634 | 0.4914 | 0.4800  | 0.1499  | 0.2069   | 0.4136 | 0.7270 | 0.8931 |
| 0.4162        | 5.0   | 360  | 0.4170          | 0.3537 | 0.4833 | 0.4483  | 0.1455  | 0.2005   | 0.4303 | 0.7444 | 0.8992 |
| 0.3776        | 6.0   | 432  | 0.4115          | 0.3491 | 0.4692 | 0.4558  | 0.1449  | 0.1999   | 0.4281 | 0.7471 | 0.9018 |
| 0.3729        | 7.0   | 504  | 0.4058          | 0.3337 | 0.4590 | 0.4135  | 0.1396  | 0.1935   | 0.4517 | 0.7652 | 0.9072 |
| 0.3235        | 8.0   | 576  | 0.4035          | 0.3304 | 0.4602 | 0.4043  | 0.1383  | 0.1929   | 0.4613 | 0.7679 | 0.9073 |
| 0.3382        | 9.0   | 648  | 0.3990          | 0.3254 | 0.4546 | 0.3937  | 0.1365  | 0.1900   | 0.4671 | 0.7717 | 0.9102 |
| 0.3265        | 10.0  | 720  | 0.4018          | 0.3272 | 0.4546 | 0.3934  | 0.1380  | 0.1907   | 0.4598 | 0.7659 | 0.9082 |
