I have the following sentence:

I want to ____ the car because it is cheap.

I want to use an NLP model to predict the missing word. Which NLP model should I use? Thanks.
Posted on 2019-03-04 17:02:08
TL;DR
Try this: https://github.com/huggingface/pytorch-pretrained-BERT
First, you have to install the package properly:

pip install -U pytorch-pretrained-bert

Then you can use the "masked language model" from the BERT algorithm, e.g.:
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

text = '[CLS] I want to [MASK] the car because it is cheap . [SEP]'
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

# Find the position of the [MASK] token we want to predict
masked_index = tokenized_text.index('[MASK]')

# Create the segments tensors (a single sentence, so all segment ids are 0)
segments_ids = [0] * len(tokenized_text)

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# Predict all tokens
with torch.no_grad():
    predictions = model(tokens_tensor, segments_tensors)

# Take the most likely vocabulary item at the masked position
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]

print(predicted_token)

Output:
buy
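As an aside (not part of the original answer), argmax only returns the single best filler. Here is a minimal sketch that lists the top five candidates instead, reusing the predictions, masked_index, and tokenizer variables from the snippet above:

# Sketch: show the five most likely fillers rather than only the best one
values, indices = torch.topk(predictions[0, masked_index], k=5)
for score, idx in zip(values.tolist(), indices.tolist()):
    print(tokenizer.convert_ids_to_tokens([idx])[0], score)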
In long

To really understand why you need [CLS], [MASK], and the segment tensors, please read the paper carefully: https://arxiv.org/abs/1810.04805
If you are feeling lazy, you can instead read this nice blog post by Lilian Weng: https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html
Besides BERT, there are many other models that can perform the fill-in-the-blank task. Be sure to check out the other models in the pytorch-pretrained-BERT repository, but more importantly, dig deeper into the task of "language modeling", i.e. the task of predicting the next word given its history.
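For example, the same repository ships OpenAI GPT, a left-to-right language model. Below is a minimal sketch of next-word prediction with it; the OpenAIGPT* class names assume the 0.6.x API of pytorch-pretrained-bert, so treat this as an illustration rather than a verified recipe:

import torch
from pytorch_pretrained_bert import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

# Assumption: these classes are the ones shipped in pytorch-pretrained-bert >= 0.6
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
model.eval()

# Score the word that follows a left-to-right prefix
tokens = tokenizer.tokenize("i want to buy the car because it is")
tokens_tensor = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(tokens_tensor)  # logits of shape (batch, seq_len, vocab)

# The logits at the last position score every candidate next word
predicted_index = torch.argmax(predictions[0, -1, :]).item()
print(tokenizer.convert_ids_to_tokens([predicted_index])[0])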
Posted on 2019-03-04 16:00:45
There are many models you could use. But I think the models most recently used for this kind of sequence-learning problem are bidirectional RNNs (such as bidirectional LSTMs); you can get some hints from here.

Note, however, that bidirectional RNNs are very expensive to train. Depending on the problem you are trying to solve, I strongly suggest using some pre-trained model. Good luck!
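To make the bidirectional-RNN suggestion concrete, here is a minimal PyTorch sketch of a bidirectional LSTM that scores every vocabulary word at every position; the vocabulary size and layer dimensions are made-up placeholders, not values from either answer:

import torch
import torch.nn as nn

class BiLSTMFiller(nn.Module):
    # Hypothetical model: predicts a word at each position using context from both directions
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)  # 2x: forward + backward states

    def forward(self, token_ids):
        x = self.embed(token_ids)  # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)        # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)         # (batch, seq_len, vocab_size) logits

# Toy usage: the logits at the blank's position rank every vocabulary word
model = BiLSTMFiller()
logits = model(torch.randint(0, 10000, (1, 9)))  # a batch of one 9-token sentence
print(logits.shape)  # torch.Size([1, 9, 10000])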
https://stackoverflow.com/questions/54978443