BERT, transformers, sentence-transformers




The Transformer, like the RNN and CNN, is a feature extractor. BERT is a model built on the Transformer; transformers is a library that bundles many Transformer-based models; and sentence-transformers is a further layer on top of transformers, used to compute sentence similarity.
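
For a concrete sense of what sentence-transformers does, here is a minimal similarity sketch (the model name "paraphrase-multilingual-MiniLM-L12-v2" is just one publicly available checkpoint, chosen here as an example, not one named in this post):

from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = model.encode(["The cat sits on the mat.", "A cat is sitting on a mat."])
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity close to 1 for paraphrases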

transformers covers the following common NLP tasks (a minimal pipeline sketch follows the list):

  • Sentiment analysis: decide whether a piece of text carries a positive or negative sentiment
  • Text generation: given a piece of text, have the model continue it
  • Named entity recognition: identify named entities such as person and place names in text
  • Question answering: given a passage and a question about it, extract the answer from the passage
  • Filling masked text: mask out parts of a passage and have the model fill in the blanks
  • Summarization: generate a short summary of a long text
  • Translation: translate text from one language into another
  • Feature extraction: represent a piece of text as a vector
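
A minimal sketch of the pipeline API for the first task above (assuming transformers is installed; the default English sentiment model is downloaded on first use):

from transformers import pipeline
classifier = pipeline("sentiment-analysis")
print(classifier("I love using transformers!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]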
import torch
from transformers import AdamW, BertForSequenceClassification, BertTokenizer
# (newer transformers versions recommend torch.optim.AdamW instead)
# setup assumed here; the original snippet used `model` and the inputs without defining them
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
batch = tokenizer(["my first example", "my second example"], padding=True, return_tensors="pt")
# apply weight decay to everything except biases and LayerNorm weights
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
    {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
    {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5)
labels = torch.tensor([1, 0])  # one label per example in the batch
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], labels=labels)
loss = outputs.loss
# one backward() plus one step() performs a single training update
loss.backward()
optimizer.step()
# a Trainer can be used instead:
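
The Trainer example below refers to train_dataset and test_dataset without defining them; a hedged sketch of what they might look like, with a hypothetical SimpleDataset helper and placeholder data:

import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")

class SimpleDataset(torch.utils.data.Dataset):
    """Wraps tokenized encodings and labels as a torch Dataset (hypothetical helper)."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item
    def __len__(self):
        return len(self.labels)

train_dataset = SimpleDataset(["good movie", "bad movie"], [1, 0])  # placeholder data
test_dataset = SimpleDataset(["great!", "awful!"], [1, 0])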
from transformers import BertForSequenceClassification, Trainer, TrainingArguments
model = BertForSequenceClassification.from_pretrained("bert-large-uncased")
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for the learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,   # defined in the sketch above
    eval_dataset=test_dataset,
)
trainer.train()
trainer.evaluate()
# Training a model. The core pieces: models, input data (DataLoader), loss functions (losses), and evaluators (evaluation)
from sentence_transformers import SentenceTransformer, models, InputExample, losses, evaluation
from torch.utils.data import DataLoader
model_path = r"E:\pycharm_project\xp-emb-train\xp-emb-train\models\chinese_roberta_L-4_H-256"
# build the model from a word-embedding module plus a pooling module
# (equivalently, SentenceTransformer(model_path) constructs these modules automatically)
word_embedding_model = models.Transformer(model_path)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# define the training data
train_examples = [InputExample(texts=["my first example", "my last example"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
# define the loss function
train_loss = losses.CosineSimilarityLoss(model)
# train the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=10, warmup_steps=100)
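
The import above also brings in evaluation, which the snippet never uses; a minimal sketch of wiring an evaluator into fit (the dev sentences and gold score here are placeholders, not from the original post):

evaluator = evaluation.EmbeddingSimilarityEvaluator(
    ["my first example"], ["my last example"], [0.8]
)
model.fit(train_objectives=[(train_dataloader, train_loss)],
          evaluator=evaluator, epochs=10, warmup_steps=100)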
