indobenchmark/indobert-base-p2

IndoBERT Base Model (phase2 – uncased)

IndoBERT is a state-of-the-art language model for Indonesian based on BERT. The model is pretrained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives.
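
As a quick illustration of the MLM objective, the checkpoint can be queried through the Hugging Face transformers fill-mask pipeline. This is a minimal sketch, not taken from the model card; the Indonesian example sentence is made up for illustration.

```python
# Minimal sketch: probing the masked-language-modeling head of indobert-base-p2
# via the transformers fill-mask pipeline. The example sentence is illustrative only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="indobenchmark/indobert-base-p2")

# "The capital of Indonesia is [MASK]." -- [MASK] is the tokenizer's mask token.
for prediction in unmasker("ibu kota indonesia adalah [MASK]."):
    print(prediction["token_str"], prediction["score"])
```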

All Pre-trained Models

| Model | #params | Arch. | Training data |
|---|---|---|---|
| indobenchmark/indobert-base-p1 | 124.5M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-base-p2 | 124.5M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-large-p1 | 335.2M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-large-p2 | 335.2M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-base-p1 | 11.7M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-base-p2 | 11.7M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-large-p1 | 17.7M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-large-p2 | 17.7M | Large | Indo4B (23.43 GB of text) |
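
Any of the checkpoints above can also serve as a feature extractor for downstream Indonesian NLP tasks. The following sketch assumes the standard transformers AutoTokenizer/AutoModel API and uses the base phase-2 checkpoint; the input sentence is an arbitrary example, not from the model card.

```python
# Minimal sketch: extracting contextual embeddings from indobert-base-p2.
# Assumes the standard transformers API; the input sentence is illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p2")

inputs = tokenizer("budi pergi ke sekolah", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states: one 768-dimensional vector per token for the Base models.
print(outputs.last_hidden_state.shape)
```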
