Model Details: DPT-Large

DPT-Large is a Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation.
It was introduced in the paper Vision Transformers for Dense Prediction by Ranftl et al. (2021) and first released in this repository.
DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for monocular depth estimation.
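
The model can be loaded for monocular depth estimation with the Transformers library. The snippet below is a minimal sketch, assuming the checkpoint is published on the Hugging Face Hub as Intel/dpt-large and that a local image file example.jpg exists; class names may vary slightly between Transformers versions.

```python
import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

# Assumed checkpoint id on the Hugging Face Hub.
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

# Load an example image (hypothetical local file).
image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth  # shape: (batch, height, width)

# Resize the predicted depth map back to the original image resolution.
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
```

The output is a relative (inverse) depth map; it can be normalized and saved as an image for visualization.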

This model card was written jointly by the Hugging Face team and Intel.

| Model Detail | Description |
| --- | --- |
| Model Authors – Company | Intel |
| Date | March 22, 2022 |
| Version | 1 |
| Type | Computer Vision – Monocular Depth Estimation |
| Paper or Other Resources | Vision Transformers for Dense Prediction and GitHub Repo |
| License | Apache 2.0 |
| Questions or Comments | Community Tab and Intel Developers Discord |
