MindSpore 25-Day Learning Camp, Day 23 | BERT-Based Dialogue Emotion Recognition with MindSpore

Interesting thing!
About BERT, you mainly need to know that it is like GPT, but it pre-trains the Transformer encoder instead of the decoder. Its masked language modeling objective improves accuracy remarkably, because the model predicts a blank using not only the words before it but also the words after it.
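To make the mask idea concrete, here is a minimal fill-in-the-blank sketch. It is only an illustration under assumptions: mindnlp's BertForMaskedLM and tokenizer are assumed to mirror the Hugging Face API (including return_tensors='ms'), and import paths differ across mindnlp versions.

# hypothetical sketch: assumes mindnlp mirrors the Hugging Face masked-LM API
from mindnlp.transformers import BertTokenizer, BertForMaskedLM

tokenizer_mlm = BertTokenizer.from_pretrained('bert-base-chinese')
mlm = BertForMaskedLM.from_pretrained('bert-base-chinese')

# "今天天气真[MASK]。" -- the model sees context on BOTH sides of the blank
inputs = tokenizer_mlm("今天天气真[MASK]。", return_tensors='ms')
logits = mlm(**inputs).logits  # shape: (1, seq_len, vocab_size)

ids = inputs['input_ids'][0].asnumpy()
mask_pos = int((ids == tokenizer_mlm.mask_token_id).argmax())  # first [MASK] position
pred_id = int(logits[0, mask_pos].asnumpy().argmax())
print(tokenizer_mlm.decode([pred_id]))  # most likely token for the blank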
Model: BertForSequenceClassification builds the network, loads the pretrained config, and sets the sentiment classification head to 3 classes:
# imports per the official MindNLP tutorial (in some mindnlp versions these
# classes live under mindnlp._legacy.engine / _legacy.metrics instead)
from mindspore import nn, Tensor  # Tensor is used by predict() below
from mindnlp.transformers import BertForSequenceClassification
from mindnlp._legacy.amp import auto_mixed_precision
from mindnlp.engine import Trainer, Evaluator
from mindnlp.engine.callbacks import CheckpointCallback, BestModelCallback
from mindnlp.metrics import Accuracy

model = BertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=3)
model = auto_mixed_precision(model, 'O1')  # level is 'O1' (letter O), not '01'
optimizer = nn.Adam(model.trainable_params(), learning_rate=2e-5)
metric = Accuracy()
ckpoint_cb = CheckpointCallback(save_path='checkpoint', ckpt_name='bert_emotect', epochs=1, keep_checkpoint_max=2)
best_model_cb = BestModelCallback(save_path='checkpoint', ckpt_name='bert_emotect_best', auto_load=True)
trainer = Trainer(network=model, train_dataset=dataset_train,
                  eval_dataset=dataset_val, metrics=metric,
                  epochs=5, optimizer=optimizer, callbacks=[ckpoint_cb, best_model_cb])
trainer.run(tgt_columns='labels')
Model validation and prediction follow mostly the same pattern as sentiment classification with any other model:
evaluator = Evaluator(network=model, eval_dataset=dataset_test, metrics=metric)
evaluator.run(tgt_columns='labels')
dataset_infer = SentimentDataset('data/infer.tsv')
def predict(text, label=None):
    label_map = {0: '消极', 1: '中性', 2: '积极'}  # negative / neutral / positive
    # tokenize the input and wrap it as a batch of one
    text_tokenized = Tensor([tokenizer(text).input_ids])
    logits = model(text_tokenized)
    predict_label = logits[0].asnumpy().argmax()
    info = f"inputs: '{text}', predict: '{label_map[predict_label]}'"
    if label is not None:
        info += f", label: '{label_map[label]}'"
    print(info)
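Running it over the inference file is then just a loop; the sketch below assumes each item of SentimentDataset is a (label, text) pair, matching the column order used elsewhere in the tutorial.

# assumption: SentimentDataset yields (label, text) pairs
for label, text in dataset_infer:
    predict(text, label)

# raw text with no gold label works too
predict("今天心情很好")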