Method 1

import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# Encode each row of the text column into token ids, with [CLS]/[SEP] added
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer.encode(ii, add_special_tokens=True))

# Padding: pad every sequence with 0 ([PAD]) up to the length of the longest one
max_len = 0
for i in x_train_tokenized.values:
    if len(i) > max_len:
        max_len = len(i)
x_train_tokenized = np.array([i + [0] * (max_len - len(i)) for i in x_train_tokenized.values])
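The zero-padded ids alone are not enough for inference: the model also expects an attention mask so that padding positions are ignored. A minimal sketch of building one from the array above, assuming the padding id is 0 (which holds for this checkpoint's vocabulary):

import numpy as np
import torch

# 1 marks a real token, 0 marks padding (id 0 is [PAD] in this vocabulary)
attention_mask = np.where(x_train_tokenized != 0, 1, 0)

input_ids = torch.tensor(x_train_tokenized)
attention_mask = torch.tensor(attention_mask)
# the model can then be called as: model(input_ids, attention_mask=attention_mask)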

Method 2

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english")

# Tokenize each row, padding/truncating to a fixed length of 66 and returning PyTorch tensors
x_train_tokenized = x_train[0].apply(lambda ii: tokenizer(ii,
                                                          padding="max_length",
                                                          truncation=True,
                                                          return_tensors="pt",
                                                          max_length=66))
The output for each row looks like:
 tensor([[  101,  5342,  2047,  3595,  8496,  2013,  1996, 18643,  3197,   102,
              0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
              0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
              0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
              0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
              0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
              0,     0,     0,     0,     0,     0]])
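Calling the tokenizer row by row with apply returns one BatchEncoding per row, which then has to be stacked manually. If the goal is a single batch of tensors, it is usually simpler to pass the whole column in one call; a sketch assuming x_train[0] holds raw strings:

encoded = tokenizer(x_train[0].tolist(),
                    padding="max_length",
                    truncation=True,
                    max_length=66,
                    return_tensors="pt")
input_ids = encoded["input_ids"]            # shape: (num_rows, 66)
attention_mask = encoded["attention_mask"]  # 1 for real tokens, 0 for padding

This also gives the attention mask for free, instead of deriving it from the padded ids as in Method 1.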