[Original] Knowledge Fusion in BERT

Pre-training

[Tsinghua, ACL 2019] ERNIE: Enhanced Language Representation with Informative Entities
- Fuses entity/relation embeddings from the knowledge graph with the text embeddings (sketch below);
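
A minimal PyTorch sketch of the fusion idea, in the spirit of ERNIE's aggregator layer: token states and aligned entity embeddings are projected into a shared space, combined, and projected back out. The dimensions and the token-entity alignment are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Sketch: fuse token states with aligned KG entity embeddings."""
    def __init__(self, d_token=768, d_entity=100):
        super().__init__()
        self.proj_t = nn.Linear(d_token, d_token)   # text -> shared space
        self.proj_e = nn.Linear(d_entity, d_token)  # entity -> shared space
        self.out_t = nn.Linear(d_token, d_token)    # shared -> text
        self.out_e = nn.Linear(d_token, d_entity)   # shared -> entity

    def forward(self, h_token, h_entity):
        # h_entity holds the embedding of the entity aligned to each token,
        # or zeros where a token links to no entity (alignment assumed given).
        fused = torch.tanh(self.proj_t(h_token) + self.proj_e(h_entity))
        return self.out_t(fused), self.out_e(fused)

fuse = FusionLayer()
h_t, h_e = fuse(torch.randn(2, 6, 768), torch.randn(2, 6, 100))
```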

[Baidu, 2019] ERNIE: Enhanced Representation through Knowledge Integration
- Three masking strategies: BERT's original token-level masking, phrase-level masking, and entity-level masking (sketch below)
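
A toy sketch of the span-level idea, assuming the phrase/entity spans come from an external chunker or entity linker: a chosen span is masked as a whole rather than token by token.

```python
import random

def span_mask(tokens, spans, mask_token="[MASK]", p=0.15):
    """Mask whole spans (phrases or entity mentions) as single units."""
    out = list(tokens)
    for start, end in spans:
        if random.random() < p:
            out[start:end] = [mask_token] * (end - start)
    return out

# mask the entity "J. K. Rowling" and the phrase "Harry Potter" as units
tokens = ["J.", "K.", "Rowling", "wrote", "Harry", "Potter"]
print(span_mask(tokens, [(0, 3), (4, 6)], p=1.0))
```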

[Baidu, AAAI 2020] ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding
- Seven pre-training tasks in three categories; each task is more effective for a different kind of downstream task (sketch below);
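
A rough sketch of the continual multi-task idea: tasks are introduced stage by stage, and earlier tasks keep being sampled so they are not forgotten. The per-task step functions are placeholders, not ERNIE 2.0's actual training setup.

```python
import random

def continual_pretrain(task_steps, steps_per_stage=3):
    """Add one task per stage; every stage still samples earlier tasks."""
    active = []
    for name in task_steps:
        active.append(name)
        for _ in range(steps_per_stage):
            t = random.choice(active)
            task_steps[t]()  # run one optimization step for task t

continual_pretrain({
    "knowledge_masking": lambda: None,     # placeholder step functions
    "sentence_reordering": lambda: None,
    "sentence_distance": lambda: None,
})
```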

[Baidu, 2021] ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
- Adds knowledge-graph triples to the input and, on top of the existing masking strategies, adds relation masking, i.e., predicting the masked relation (sketch below)
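
A minimal sketch of the relation-masking idea: a triple from the KG is paired with text that supports it, and the relation is masked so the model must predict it from both. The input format below is illustrative only, not ERNIE 3.0's actual tokenization.

```python
def relation_mask_example(triple, sentence, mask_token="[MASK]"):
    """Build one (input, label) pair where the triple's relation is masked."""
    head, relation, tail = triple
    text = f"{head} {mask_token} {tail} [SEP] {sentence}"
    return text, relation

x, y = relation_mask_example(
    ("Andersen", "wrote", "The Nightingale"),
    "The Nightingale is a fairy tale by Andersen.",
)
print(x, "->", y)  # model is trained to predict "wrote"
```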

[Fudan, COLING 2020] CoLAKE: Contextualized Language and Knowledge Embedding
- Connects entity-relation triples from the knowledge graph with the original text, forming a fully connected word-knowledge graph (similar to K-BERT; sketch below)
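
A simplified sketch of turning such a word-knowledge graph into an attention mask: word nodes all see each other, and each injected (relation, tail) pair chains onto the word that anchors it. Node layout and connectivity details are simplified from the paper.

```python
import numpy as np

def wk_graph_mask(n_words, triples):
    """True where two nodes may attend to each other in the unified graph."""
    n = n_words + 2 * len(triples)
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_words, :n_words] = True              # word nodes are fully connected
    for k, (anchor, _rel, _tail) in enumerate(triples):
        r, t = n_words + 2 * k, n_words + 2 * k + 1
        for a, b in ((anchor, r), (r, t)):       # anchor - relation - tail chain
            mask[a, b] = mask[b, a] = True
    np.fill_diagonal(mask, True)
    return mask

# five words; word 2 is an entity mention with one attached triple
print(wk_graph_mask(5, [(2, "birthplace", "London")]).astype(int))
```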

Fine-tuning

[India, ACL 2020] Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings

[PKU & Tencent, AAAI 2020] K-BERT: Enabling Language Representation with Knowledge Graph
- Connects entity-relation triples from the knowledge graph to the original text, forming linked branches (much like separate chaining for hash collisions; sketch below)
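
A sketch of K-BERT's visible matrix under simplified assumptions (one flat branch per anchor token, soft-position indices omitted): injected triple tokens are visible only to their anchor and to each other, so the injected knowledge does not disturb the rest of the sentence.

```python
import numpy as np

def visible_matrix(n_tokens, branches):
    """branches maps anchor token index -> number of injected triple tokens."""
    total = n_tokens + sum(branches.values())
    vis = np.zeros((total, total), dtype=bool)
    vis[:n_tokens, :n_tokens] = True                 # sentence tokens see each other
    pos = n_tokens
    for anchor, length in branches.items():
        branch = range(pos, pos + length)
        for i in branch:
            vis[i, anchor] = vis[anchor, i] = True   # branch <-> its anchor
            for j in branch:
                vis[i, j] = True                     # branch sees itself
        pos += length
    return vis

# "Cook visited Beijing" + injected "capital_of China" anchored at token 2
print(visible_matrix(3, {2: 2}).astype(int))
```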

[Tencent Cloud Xiaowei, ACL 2021 Findings] Improving BERT with Syntax-aware Local Attention
- Fuses dependency-syntax knowledge into BERT (sketch below)
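
One plausible way to express this as code: limit each token's attention to tokens within a small distance in the dependency tree. The dependency heads are assumed to come from an external parser, and the exact locality definition in the paper may differ.

```python
import numpy as np
from collections import deque

def syntax_local_mask(heads, max_dist=2):
    """heads[i] = index of token i's head in the dependency tree (-1 = root).
    Returns True where the tree distance between two tokens is <= max_dist."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    mask = np.zeros((n, n), dtype=bool)
    for s in range(n):                       # BFS outward from each token
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            if dist[u] == max_dist:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        mask[s, list(dist)] = True
    return mask

# toy parse of "She likes cats": "likes" (index 1) is the root
print(syntax_local_mask([1, -1, 1]).astype(int))
```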

[Jilin University, 2021] Using Prior Knowledge to Guide BERT’s Attention in Semantic Textual Matching Tasks
- Fuses synonym knowledge into BERT for text matching (sketch below)
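
A sketch of one way to realize this: build a prior matrix from a synonym lexicon (1 where two tokens across the sentence pair are synonyms, 0 elsewhere) and add it to the attention logits before the softmax. The prior construction and weighting here are assumptions, not the paper's exact formulation.

```python
import torch

def prior_guided_attention(scores, prior, alpha=1.0):
    """Nudge attention toward token pairs the prior marks as synonyms."""
    return torch.softmax(scores + alpha * prior, dim=-1)

scores = torch.zeros(4, 4)        # uniform raw logits for a toy example
prior = torch.zeros(4, 4)
prior[0, 2] = prior[2, 0] = 1.0   # assume tokens 0 and 2 are synonyms
print(prior_guided_attention(scores, prior))
```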

[Tencent WeChat, 2020] Keyword-Attentive Deep Semantic Matching
- Fuses keyword knowledge into BERT for text matching (sketch below)
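
A sketch of the keyword idea as weighted pooling, assuming a domain keyword dictionary marks which tokens are keywords; the paper's actual model (keyword-attentive layers and keyword-coverage features) is richer than this.

```python
import torch

def keyword_attentive_pool(hidden, keyword_mask, boost=2.0):
    """Upweight keyword tokens before mean-pooling a sentence vector."""
    w = 1.0 + (boost - 1.0) * keyword_mask        # (batch, seq)
    w = w / w.sum(dim=1, keepdim=True)
    return torch.einsum("bs,bsd->bd", w, hidden)

hidden = torch.randn(1, 5, 8)                     # toy encoder outputs
kw = torch.tensor([[0., 1., 0., 0., 1.]])         # tokens 1 and 4 are keywords
print(keyword_attentive_pool(hidden, kw).shape)   # torch.Size([1, 8])
```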