Build a character-level language model with an MLP, following the model shown below:

[figure: MLP language-model architecture]

The context window length is 3: the previous three characters predict the next one. Each character index is first mapped to its embedding; the concatenated embeddings pass through one layer with a tanh nonlinearity, then through a second layer, and a softmax over the output gives the probability of each character.

The model's parameters are therefore the embedding matrix C plus the weights and biases of the hidden layer and the output layer.
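
Putting this together, the forward pass is just a few lines (a sketch using the variable names that appear in the code below; emb_dim is a placeholder for whichever embedding size is chosen later):

emb = C[X]                                        # (N, 3, emb_dim): one embedding per context character
h = torch.tanh(emb.view(-1, 3*emb_dim) @ W1 + b1) # hidden layer with tanh
logits = h @ W2 + b2                              # output layer
probs = F.softmax(logits, dim=1)                  # probability of each of the 27 characters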

The concrete code follows.

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt # for making figures
%matplotlib inline

Read the file; it is a list of names.

# read in all the words
words = open('names.txt', 'r').read().splitlines()
words[:8]
['emma', 'olivia', 'ava', 'isabella', 'sophia', 'charlotte', 'mia', 'amelia']
len(words)
32033

Build the index mappings: string-to-int (stoi) and int-to-string (itos).

# build the vocabulary of characters and mappings to/from integers
chars = sorted(list(set(''.join(words))))
stoi = {s:i+1 for i,s in enumerate(chars)}
stoi['.'] = 0
itos = {i:s for s,i in stoi.items()}
print(itos)
{1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z', 0: '.'}
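
A quick usage check of the two mappings (a small sketch): encode a name into indices and decode it back:

[stoi[c] for c in 'emma']                 # [5, 13, 13, 1]
''.join(itos[i] for i in [5, 13, 13, 1])  # 'emma'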

Build the dataset. The context window length is 3, which is the block_size here.

# build the dataset

block_size = 3 # context length: how many characters do we take to predict the next one?
X, Y = [], []
for w in words:

  #print(w)
  context = [0] * block_size
  for ch in w + '.':
    ix = stoi[ch]
    X.append(context)
    Y.append(ix)
    #print(''.join(itos[i] for i in context), '--->', itos[ix])
    context = context[1:] + [ix] # crop and append

X = torch.tensor(X)
Y = torch.tensor(Y)
X.shape, X.dtype, Y.shape, Y.dtype
(torch.Size([228146, 3]), torch.int64, torch.Size([228146]), torch.int64)
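
For intuition, un-commenting the two print statements above shows how each word expands into (context ---> target) pairs; for the first word 'emma' the output is:

emma
... ---> e
..e ---> m
.em ---> m
emm ---> a
mma ---> .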

Below is the cleaner way to build the dataset, this time also splitting it into training, dev, and test sets (80% / 10% / 10%).

# build the dataset
block_size = 3 # context length: how many characters do we take to predict the next one?

def build_dataset(words):
  X, Y = [], []
  for w in words:

    #print(w)
    context = [0] * block_size
    for ch in w + '.':
      ix = stoi[ch]
      X.append(context)
      Y.append(ix)
      #print(''.join(itos[i] for i in context), '--->', itos[ix])
      context = context[1:] + [ix] # crop and append

  X = torch.tensor(X)
  Y = torch.tensor(Y)
  print(X.shape, Y.shape)
  return X, Y

import random
random.seed(42)
random.shuffle(words)
n1 = int(0.8*len(words))
n2 = int(0.9*len(words))

# training split, dev/validation split, test split
# 80%, 10%, 10%

Xtr, Ytr = build_dataset(words[:n1])
Xdev, Ydev = build_dataset(words[n1:n2])
Xte, Yte = build_dataset(words[n2:])
torch.Size([182625, 3]) torch.Size([182625])
torch.Size([22655, 3]) torch.Size([22655])
torch.Size([22866, 3]) torch.Size([22866])
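
A quick sanity check: the three splits together account for every one of the 228,146 examples built above:

182625 + 22655 + 22866  # = 228146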

As a first try, set the embedding size to 2, i.e. squeeze the 27 characters into a 2-dimensional space (previously we used a one-hot encoding).

C = torch.randn((27, 2))
emb = C[X] # look up the embeddings by indexing
emb.shape
torch.Size([228146, 3, 2])
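
Indexing into C like this is equivalent to multiplying a one-hot encoding of the index by C; a minimal sketch for a single character index:

# the one-hot vector of index 5 (shape (27,)) times C (27, 2) selects row 5 of C
F.one_hot(torch.tensor(5), num_classes=27).float() @ C  # identical to C[5]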

For the matrix multiply with W1, the three character embeddings need to be concatenated, i.e. the shape becomes torch.Size([228146, 6]).

W1 = torch.randn((6, 100)) # 100 is the hidden size
b1 = torch.randn(100)
h = torch.tanh(emb.view(-1, 6) @ W1 + b1) # view reshapes efficiently (no copy); the last dimension must be 6
h
tensor([[-0.9635, -0.9175, -0.9873,  ..., -0.9958,  0.8321, -0.9806],
        [ 0.9335, -0.0820,  0.6465,  ..., -0.9810, -0.6320,  0.8664],
        [ 0.9764,  1.0000, -0.9878,  ...,  0.6863, -0.9981,  0.9999],
        ...,
        [-0.7309, -0.3933,  0.9428,  ...,  0.6721, -0.9989,  0.9627],
        [ 0.9991, -0.9995,  1.0000,  ..., -0.9868, -0.9941,  0.9993],
        [ 0.2479,  0.9941,  0.4721,  ...,  0.9912,  0.7920,  1.0000]])
h.shape
torch.Size([228146, 100])
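
The same concatenation could also be written out explicitly; a sketch of two equivalent (but copying) alternatives to the view above:

torch.cat([emb[:, 0, :], emb[:, 1, :], emb[:, 2, :]], dim=1)  # (228146, 6), hard-codes block_size
torch.cat(torch.unbind(emb, 1), dim=1)                        # same result for any block_size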

Next is the output layer; since the hidden size is 100, its first dimension is 100.

W2 = torch.randn((100, 27))
b2 = torch.randn(27)
logits = h @ W2 + b2
logits.shape
torch.Size([228146, 27])

Compute prob, the probability of each possible next character.

counts = logits.exp()
prob = counts / counts.sum(1, keepdims=True)
prob.shape
torch.Size([228146, 27])

Compute the loss; as before, it is the mean negative log-likelihood.

loss = -prob[torch.arange(228146), Y].log().mean()
loss
tensor(16.1537)
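
This manual softmax followed by the mean negative log-likelihood is exactly what F.cross_entropy computes, more efficiently and in a numerically safer way; a quick check (a sketch):

F.cross_entropy(logits, Y)  # matches the loss computed manually above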

Below is a cleaner version. Since the 2-dimensional embedding did not work very well, the embedding size is now set to 10.

# ------------ now made respectable :) ---------------
Xtr.shape, Ytr.shape # dataset
(torch.Size([182625, 3]), torch.Size([182625]))

Network setup:

g = torch.Generator().manual_seed(2147483647) # for reproducibility
C = torch.randn((27, 10), generator=g)
W1 = torch.randn((30, 200), generator=g) # 30 = block_size * embedding dim (3*10); 200 = hidden size
b1 = torch.randn(200, generator=g)
W2 = torch.randn((200, 27), generator=g) # hidden_size, output_size
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
sum(p.nelement() for p in parameters) # number of parameters in total
11897
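
The count checks out: 27·10 (C) + 30·200 (W1) + 200 (b1) + 200·27 (W2) + 27 (b2) = 270 + 6000 + 200 + 5400 + 27 = 11897.
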
for p in parameters:
  p.requires_grad = True

Aside: how to choose the learning rate (lr).


lre = torch.linspace(-3, 0, 1000)
lrs = 10**lre # candidate learning rates, increasing from 0.001 to 1

lri = []
lossi = []
stepi = []

for i in range(1000): # one step per candidate learning rate

  # minibatch construct
  ix = torch.randint(0, Xtr.shape[0], (32,))

  # forward pass
  emb = C[Xtr[ix]] # (32, 3, 10)
  h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (32, 200)
  logits = h @ W2 + b2 # (32, 27)
  loss = F.cross_entropy(logits, Ytr[ix])
  #print(loss.item())

  # backward pass
  for p in parameters:
    p.grad = None
  loss.backward()

  # update
  lr = lrs[i]
  for p in parameters:
    p.data += -lr * p.grad

  # track stats
  lri.append(lre[i])
  lossi.append(loss.log10().item())

#print(loss.item())

plt.plot(lri, lossi)

Plotting the loss against lre, the minimum sits at roughly -1.0, so lr = 0.1 is a good choice.

[figure: loss vs. learning-rate exponent lre]
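
A rough programmatic cross-check (a sketch; the single lowest minibatch loss is noisy, so reading the plot is what actually matters):

lri[torch.tensor(lossi).argmin().item()]  # exponent at the single lowest recorded minibatch loss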


Back to training: forward pass, compute the loss, zero the gradients, backpropagate, then update. Here F.cross_entropy directly replaces the earlier softmax plus mean negative log-likelihood.

Training uses mini-batches: each step samples batch_size examples at random, computes the loss on just those examples, and updates the parameters. Many such noisy mini-batch updates still work very well.

lossi = []
stepi = []
for i in range(200000):

  # minibatch construct
  ix = torch.randint(0, Xtr.shape[0], (32,))

  # forward pass
  emb = C[Xtr[ix]] # (32, 3, 10)
  h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (32, 200)
  logits = h @ W2 + b2 # (32, 27)
  loss = F.cross_entropy(logits, Ytr[ix])

  # backward pass
  for p in parameters:
    p.grad = None
  loss.backward()

  # update
  lr = 0.1 if i < 100000 else 0.01
  for p in parameters:
    p.data += -lr * p.grad

  # track stats
  stepi.append(i)
  lossi.append(loss.log10().item())

  #print(loss.item())
plt.plot(stepi, lossi)

[figure: log10 training loss vs. step]

Another way to judge training is to compare the loss on the train and dev sets. If the two are about the same, the model is not overfitting; any remaining weakness comes from underfitting, i.e. the model is too small or undertrained.

emb = C[Xtr] # (182625, 3, 10)
h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (182625, 200)
logits = h @ W2 + b2 # (182625, 27)
loss = F.cross_entropy(logits, Ytr)
loss
tensor(2.1426, grad_fn=<NllLossBackward0>)
emb = C[Xdev] # (22655, 3, 10)
h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (22655, 200)
logits = h @ W2 + b2 # (22655, 27)
loss = F.cross_entropy(logits, Ydev)
loss
tensor(2.1830, grad_fn=<NllLossBackward0>)
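
For convenience, the same evaluation could be wrapped in a small helper (a sketch using the variables defined above):

@torch.no_grad() # no gradients needed for evaluation
def split_loss(X, Y):
  emb = C[X]                                 # (N, 3, 10)
  h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (N, 200)
  logits = h @ W2 + b2                       # (N, 27)
  return F.cross_entropy(logits, Y).item()

split_loss(Xtr, Ytr), split_loss(Xdev, Ydev), split_loss(Xte, Yte)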

Plot the trained embeddings. (A direct 2-D scatter really only suits the earlier 2-dimensional embedding.) Distances between embeddings carry some meaning.

# visualize dimensions 0 and 1 of the embedding matrix C for all characters
plt.figure(figsize=(8,8))
plt.scatter(C[:,0].data, C[:,1].data, s=200)
for i in range(C.shape[0]):
  plt.text(C[i,0].item(), C[i,1].item(), itos[i], ha="center", va="center", color='white')
plt.grid('minor')

[figure: 2-D embedding scatter plot, one labeled point per character]

Use the trained model to generate new names: look up the embeddings of the current context, pass them through the two layers, apply softmax to get the probability of the next character, sample from that distribution, and repeat until '.' is sampled.

# sample from the model
g = torch.Generator().manual_seed(2147483647 + 10)

for _ in range(20):

  out = []
  context = [0] * block_size # initialize with all ...
  while True:
    emb = C[torch.tensor([context])] # (1,block_size,d)
    h = torch.tanh(emb.view(1, -1) @ W1 + b1)
    logits = h @ W2 + b2
    probs = F.softmax(logits, dim=1)
    ix = torch.multinomial(probs, num_samples=1, generator=g).item()
    context = context[1:] + [ix]
    out.append(ix)
    if ix == 0:
      break

  print(''.join(itos[i] for i in out))
carmahfahbelle.
frimrin.
taty.
skansh.
emmahnen.
delynn.
jaree.
corronia.
chaiiv.
kaleigh.
ham.
por.
dessan.
sulin.
alianni.
wazeloniearynn.
jaxeenissa.
mel.
edi.
abettefer.