python-pytorch basics: two ways to pad with the BERT tokenizer
Method 1
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# Encode each sentence into token ids, adding the [CLS]/[SEP] special tokens
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer.encode(ii, add_special_tokens=True))

# Manual padding: find the longest sequence, then pad every sequence to that length with 0 ([PAD])
max_len = 0
for i in x_train_tokenized.values:
    if len(i) > max_len:
        max_len = len(i)
x_train_tokenized = np.array([i + [0] * (max_len - len(i)) for i in x_train_tokenized.values])
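Manual padding does not produce an attention mask, which DistilBERT uses to ignore the padded positions. A minimal sketch of how one could be built from the padded array, assuming id 0 only ever appears as padding (which holds for this vocabulary, where 0 is [PAD]):

# Hypothetical follow-up: derive an attention mask from the manually padded ids
# (assumes 0 occurs only as padding)
attention_mask = np.where(x_train_tokenized != 0, 1, 0)
# Both arrays can then be wrapped in tensors before being fed to the model, e.g.
# torch.tensor(x_train_tokenized) and torch.tensor(attention_mask)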
Method 2
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# Let the tokenizer pad/truncate each sentence to max_length and return PyTorch tensors
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer(ii,
                                                          padding="max_length",
                                                          truncation=True,
                                                          return_tensors="pt",
                                                          max_length=66))
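Instead of calling the tokenizer row by row with apply, it can also pad a whole list of sentences in a single call, which returns one batched tensor per field. A sketch, assuming x_train[0] holds plain strings:

# Sketch: tokenize the whole column at once rather than per row
batch = tokenizer(list(x_train[0]),
                  padding="max_length",   # or padding=True to pad to the longest sentence in the batch
                  truncation=True,
                  max_length=66,
                  return_tensors="pt")
# batch["input_ids"] and batch["attention_mask"] are tensors of shape (num_sentences, 66)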
The output looks something like this:
tensor([[ 101, 5342, 2047, 3595, 8496, 2013, 1996, 18643, 3197, 102,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0]])
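Here 101 and 102 are the ids of the [CLS] and [SEP] special tokens and 0 is [PAD]; you can confirm this with the tokenizer itself:

# Map a few ids back to tokens to see which ones are special tokens
print(tokenizer.convert_ids_to_tokens([101, 102, 0]))   # ['[CLS]', '[SEP]', '[PAD]']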