DeBERTa: Decoding-enhanced BERT with Disentangled Attention

DeBERTa improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With 80GB of training data, it outperforms BERT and RoBERTa on the majority of NLU tasks.
Please check the official repository for more details and updates.
This is the DeBERTa XLarge model (750M parameters) fine-tuned on the MNLI task.
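
As a quick illustration, the snippet below sketches how this checkpoint could be used for MNLI-style inference with the Hugging Face transformers library. The model ID `microsoft/deberta-xlarge-mnli` and the example premise/hypothesis pair are assumptions for illustration, not part of this card.

```python
# Minimal sketch: NLI inference with a DeBERTa XLarge MNLI checkpoint.
# The Hub model ID below is assumed; adjust it to the checkpoint you actually use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "microsoft/deberta-xlarge-mnli"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# MNLI is a sentence-pair task: encode premise and hypothesis together.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted class index to its label (contradiction/neutral/entailment).
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```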

Fine-tuning on NLU tasks

We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.

| Model | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) | SST-2 (Acc) | QNLI (Acc) | CoLA (MCC) | RTE (Acc) | MRPC (Acc/F1) | QQP (Acc/F1) | STS-B (P/S) |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- | 90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- | 92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- | 92.5/- |
| DeBERTa-Large^1 | 95.5/90.1 | 90.7/88.0 | 91.3/91.1 | 96.5 | 95.3 | 69.5 | 91.0 | 92.6/94.6 | 92.3/- | 92.8/92.5 |
| DeBERTa-XLarge^1 | -/- | -/- | 91.5/91.2 | 97.0 | - | - | 93.1 | 92.1/94.3 | - | 92.9/92.7 |
| DeBERTa-V2-XLarge^1 | 95.8/90.8 | 91.4/88.9 | 91.7/91.6 | 97.5 | 95.8 | 71.1 | 93.9 | 92.0/94.2 | 92.3/89.8 | 92.9/92.9 |
| DeBERTa-V2-XXLarge^1,2 | 96.1/91.4 | 92.2/89.7 | 91.7/91.9 | 97.2 | 96.0 | 72.0 | 93.5 | 93.1/94.9 | 92.7/90.3 | 93.2/93.1 |
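
For context, the following is a minimal sketch of how fine-tuning on one of these NLU tasks (MNLI from GLUE) might look with the Hugging Face Trainer API. The checkpoint ID, hyperparameters, and sequence length are illustrative assumptions, not the exact recipe behind the numbers above.

```python
# Hedged sketch: fine-tuning a pretrained DeBERTa checkpoint on MNLI.
# Model ID and hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_id = "microsoft/deberta-xlarge"  # assumed pretrained checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

raw = load_dataset("glue", "mnli")

def encode(batch):
    # Encode premise/hypothesis pairs; padding is handled per batch by the Trainer.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=256)

tokenized = raw.map(encode, batched=True)

args = TrainingArguments(
    output_dir="deberta-xlarge-mnli",
    learning_rate=3e-6,              # large DeBERTa models favor small learning rates
    per_device_train_batch_size=8,
    num_train_epochs=3,
    fp16=True,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation_matched"],
                  tokenizer=tokenizer)
trainer.train()
```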
