BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | January 1, 2018

Abstract

This paper introduced BERT (Bidirectional Encoder Representations from Transformers), a transformer encoder pre-trained with a bidirectional (masked language modeling) objective that achieved state-of-the-art results on eleven NLP tasks. BERT's bidirectional pre-training, followed by task-specific fine-tuning, revolutionized natural language processing.
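As a minimal sketch of the masked language modeling idea (assuming the Hugging Face `transformers` package and the public `bert-base-uncased` checkpoint, neither of which is part of the original paper), the snippet below masks one token and lets the model predict it from context on both sides, which is what "bidirectional" refers to:

```python
# Minimal masked-language-model demo with a pre-trained BERT checkpoint.
# Assumes: `pip install torch transformers` and the `bert-base-uncased` model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Mask a token in the middle of the sentence; BERT predicts it using the
# words on BOTH its left and right, unlike a left-to-right language model.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Find the position of the [MASK] token and take the highest-scoring word.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # expected to print something like "paris"
```

For downstream tasks, the same pre-trained weights are typically fine-tuned end to end with a small task-specific head (for example, a classification layer on the `[CLS]` token), rather than trained from scratch.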

📄 Paper link