
sayakpaul/glpn-nyu-finetuned-diode-221116-104421

This model is a fine-tuned version of vinvino02/glpn-nyu, a GLPN model for monocular depth estimation, on the diode-subset dataset.
It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 0.3736
  • Mae: 0.3079
  • Rmse: 0.4321
  • Abs Rel: 0.3666
  • Log Mae: 0.1288
  • Log Rmse: 0.1794
  • Delta1: 0.4929
  • Delta2: 0.7934
  • Delta3: 0.9234
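
The underlying GLPN architecture performs monocular depth estimation, so the checkpoint can be loaded through the generic transformers depth-estimation pipeline. The snippet below is a minimal usage sketch, not code from the original author; the input image path is a placeholder.

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned checkpoint with the generic depth-estimation pipeline.
depth_estimator = pipeline(
    "depth-estimation",
    model="sayakpaul/glpn-nyu-finetuned-diode-221116-104421",
)

# "example.jpg" is a placeholder path for any RGB input image.
image = Image.open("example.jpg")
result = depth_estimator(image)

predicted_depth = result["predicted_depth"]  # raw depth tensor from the model
result["depth"].save("example_depth.png")    # PIL visualization of the depth map
```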

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows this list):

  • learning_rate: 1e-05
  • train_batch_size: 24
  • eval_batch_size: 48
  • seed: 2022
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
  • mixed_precision_training: Native AMP
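
These values map directly onto the Hugging Face TrainingArguments API. The sketch below shows one plausible way to express them, assuming single-device training so that the listed batch sizes correspond to the per_device_* arguments; the output_dir is a placeholder and this is not the original training script.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="glpn-nyu-finetuned-diode-221116-104421",
    learning_rate=1e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=48,
    seed=2022,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # native AMP mixed-precision training
)
```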

Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    | Rmse   | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|---------------|-------|------|-----------------|--------|--------|---------|---------|----------|--------|--------|--------|
| 1.3644        | 1.0   | 72   | 1.1149          | 2.9299 | 2.9878 | 4.3237  | 0.6329  | 0.6803   | 0.0001 | 0.0241 | 0.1147 |
| 0.7701        | 2.0   | 144  | 0.5115          | 0.4977 | 0.6435 | 0.6806  | 0.1967  | 0.2543   | 0.3113 | 0.5462 | 0.7732 |
| 0.4351        | 3.0   | 216  | 0.4129          | 0.3591 | 0.4868 | 0.4587  | 0.1495  | 0.2034   | 0.4214 | 0.7194 | 0.8869 |
| 0.4001        | 4.0   | 288  | 0.4003          | 0.3421 | 0.4711 | 0.4248  | 0.1418  | 0.1953   | 0.4509 | 0.7446 | 0.8999 |
| 0.3923        | 5.0   | 360  | 0.3928          | 0.3334 | 0.4573 | 0.4139  | 0.1388  | 0.1906   | 0.4562 | 0.7570 | 0.9098 |
| 0.363         | 6.0   | 432  | 0.3806          | 0.3176 | 0.4419 | 0.3873  | 0.1328  | 0.1840   | 0.4757 | 0.7786 | 0.9188 |
| 0.3516        | 7.0   | 504  | 0.3760          | 0.3091 | 0.4346 | 0.3697  | 0.1291  | 0.1804   | 0.4933 | 0.7927 | 0.9224 |
| 0.303         | 8.0   | 576  | 0.3798          | 0.3131 | 0.4401 | 0.3811  | 0.1307  | 0.1833   | 0.4913 | 0.7886 | 0.9189 |
| 0.3191        | 9.0   | 648  | 0.3766          | 0.3104 | 0.4356 | 0.3738  | 0.1298  | 0.1811   | 0.4907 | 0.7901 | 0.9214 |
| 0.3102        | 10.0  | 720  | 0.3736          | 0.3079 | 0.4321 | 0.3666  | 0.1288  | 0.1794   | 0.4929 | 0.7934 | 0.9234 |
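
The evaluation columns follow the conventions commonly used for monocular depth estimation: MAE and RMSE are absolute errors, Abs Rel is the error relative to ground truth, and Delta1/2/3 are the fractions of pixels whose prediction-to-ground-truth ratio stays within 1.25, 1.25², and 1.25³. The function below is a sketch of those usual definitions, not the exact evaluation code used for this model; the log base and the masking of invalid pixels are assumptions.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> dict:
    """Common monocular-depth metrics (assumed conventions, not the card's exact code)."""
    valid = gt > eps                        # skip pixels without ground-truth depth
    pred = np.clip(pred[valid], eps, None)  # avoid log/division issues
    gt = gt[valid]

    abs_err = np.abs(pred - gt)
    ratio = np.maximum(pred / gt, gt / pred)

    return {
        "mae": abs_err.mean(),
        "rmse": np.sqrt(((pred - gt) ** 2).mean()),
        "abs_rel": (abs_err / gt).mean(),
        "log_mae": np.abs(np.log10(pred) - np.log10(gt)).mean(),
        "log_rmse": np.sqrt(((np.log10(pred) - np.log10(gt)) ** 2).mean()),
        "delta1": (ratio < 1.25).mean(),
        "delta2": (ratio < 1.25 ** 2).mean(),
        "delta3": (ratio < 1.25 ** 3).mean(),
    }
```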
