The Restricted Boltzmann Machine (RBM) is an energy-based neural network model. It attracted much wider attention after Hinton proposed an efficient training algorithm for it, the Contrastive Divergence algorithm. By stacking RBMs one can build a deep neural network model, the Deep Belief Net (DBN). Below is a brief introduction to the binary RBM.
The network structure of an RBM is shown in the figure below.
An RBM consists of two layers: a visible layer and a hidden layer.
As the figure shows, nodes within the same layer (for example, within the visible layer) are not connected to each other, while nodes between the two layers are fully connected. This is the most important structural feature of an RBM: no connections within a layer, full connections between layers.
The RBM model has the following property:
Given the states of the visible units, the activations of the hidden units are conditionally independent of one another; conversely, given the states of the hidden units, the activations of the visible units are conditionally independent as well.
The mathematical definition of the RBM model is given below.
As shown in the figure below (image from reference 1):
Suppose the visible layer has $n_v$ neurons and the hidden layer has $n_h$ neurons. Let $\mathbf{v}$ denote the states of the visible units, $\mathbf{v}=\left ( v_1,v_2,\cdots ,v_{n_v} \right )^T$, and let $\mathbf{h}$ denote the states of the hidden units, $\mathbf{h}=\left ( h_1,h_2,\cdots ,h_{n_h} \right )^T$. Let $\mathbf{a}$ denote the biases of the visible units, $\mathbf{a}=\left ( a_1,a_2,\cdots ,a_{n_v} \right )^T\in \mathbb{R}^{n_v}$, and $\mathbf{b}$ the biases of the hidden units, $\mathbf{b}=\left ( b_1,b_2,\cdots ,b_{n_h} \right )^T\in \mathbb{R}^{n_h}$. Let $W=\left ( w_{j,i} \right )\in \mathbb{R}^{n_h\times n_v}$ denote the connection weights between the hidden and visible layers, where $w_{j,i}$ connects hidden unit $j$ and visible unit $i$. We also write $\theta =\left ( W,\mathbf{a},\mathbf{b} \right )$.
For a given configuration $\left ( \mathbf{v},\mathbf{h} \right )$, define the following energy function:
E_\theta \left ( \mathbf{v},\mathbf{h} \right )=-\sum_{i=1}^{n_v}a_iv_i-\sum_{j=1}^{n_h}b_jh_j-\sum_{i=1}^{n_v}\sum_{j=1}^{n_h}h_jw_{j,i}v_i
Based on this energy function, the following joint probability distribution is defined:
P_\theta \left ( \mathbf{v},\mathbf{h} \right )=\frac{1}{Z_\theta }e^{-E_\theta \left ( \mathbf{v},\mathbf{h} \right )}
where
Z_\theta =\sum_{\mathbf{v},\mathbf{h}}e^{-E_\theta \left ( \mathbf{v},\mathbf{h} \right )}
is called the normalization factor (the partition function).
With the joint distribution in hand, we can define the marginal distributions:
P_\theta \left ( \mathbf{v} \right )=\sum_{\mathbf{h}}P_\theta \left ( \mathbf{v},\mathbf{h} \right )=\frac{1}{Z_\theta }\sum_{\mathbf{h}}e^{-E_\theta \left ( \mathbf{v},\mathbf{h} \right )}
P_\theta \left ( \mathbf{h} \right )=\sum_{\mathbf{v}}P_\theta \left ( \mathbf{v},\mathbf{h} \right )=\frac{1}{Z_\theta }\sum_{\mathbf{v}}e^{-E_\theta \left ( \mathbf{v},\mathbf{h} \right )}
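To make these definitions concrete, here is a minimal NumPy sketch (not part of the original experiment; the layer sizes and random parameters are illustrative assumptions) that evaluates the energy $E_\theta(\mathbf{v},\mathbf{h})$, the partition function $Z_\theta$ and the marginal $P_\theta(\mathbf{v})$ of a toy binary RBM by brute-force enumeration, which is only feasible when both layers are very small:

```python
import itertools
import numpy as np

def energy(v, h, W, a, b):
    # E(v, h) = -a^T v - b^T h - h^T W v
    return -a @ v - b @ h - h @ W @ v

def partition_and_marginal(v_query, W, a, b):
    n_h, n_v = W.shape
    Z = 0.0
    unnorm_v = 0.0  # sum over h of exp(-E(v_query, h))
    for v in itertools.product([0, 1], repeat=n_v):
        for h in itertools.product([0, 1], repeat=n_h):
            e = np.exp(-energy(np.array(v), np.array(h), W, a, b))
            Z += e
            if np.array_equal(np.array(v), v_query):
                unnorm_v += e
    return Z, unnorm_v / Z  # Z and P(v_query)

# toy example: 3 visible units, 2 hidden units (assumed sizes)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))
a = np.zeros(3)   # visible biases
b = np.zeros(2)   # hidden biases
Z, p_v = partition_and_marginal(np.array([1, 0, 1]), W, a, b)
print("Z =", Z, " P(v=[1,0,1]) =", p_v)
```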
Given the joint and marginal distributions above, we need to know the probability that a particular hidden unit is activated given the visible states, i.e. $P\left ( h_k=1\mid \mathbf{v} \right )$, and conversely the probability that a particular visible unit is activated given the hidden states, i.e. $P\left ( v_k=1\mid \mathbf{h} \right )$.
First define the following notation:
\mathbf{h}_{-k}\overset{\Delta }{=}\left ( h_1,h_2,\cdots ,h_{k-1},h_{k+1},\cdots ,h_{n_h} \right )^T
which denotes the vector obtained from $\mathbf{h}$ by removing the component $h_k$. Further define:
\alpha _k\left ( \mathbf{v} \right )\overset{\Delta }{=}b_k+\sum_{i=1}^{n_v}w_{k,i}v_i
\beta \left ( \mathbf{v}, \mathbf{h}_{-k} \right )\overset{\Delta }{=}\sum_{i=1}^{n_v}a_iv_i+\sum_{j=1,j\neq k}^{n_h}b_jh_j+\sum_{i=1}^{n_v}\sum_{j=1,j\neq k}^{n_h}h_jw_{j,i}v_i
With these definitions, the energy function can be rewritten as:
E\left ( \mathbf{v}, \mathbf{h} \right )=-\beta \left ( \mathbf{v}, \mathbf{h}_{-k} \right )-h_k\alpha _k\left ( \mathbf{v} \right )
Then, given the visible states, the probability that hidden unit $k$ is activated, $P\left ( h_k=1\mid \mathbf{v} \right )$, is:
\begin{align*} P\left ( h_k=1\mid \mathbf{v} \right ) &= P\left ( h_k=1\mid \mathbf{h}_{-k},\mathbf{v} \right )\\ &=\frac{P\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )}{P\left ( \mathbf{h}_{-k},\mathbf{v} \right )} \\ &= \frac{P\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )}{P\left ( h_k=0,\mathbf{h}_{-k},\mathbf{v} \right )+P\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )} \\ &=\frac{e^{-E\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )}}{e^{-E\left ( h_k=0,\mathbf{h}_{-k},\mathbf{v} \right )}+e^{-E\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )}}\\ &=\frac{1}{1+e^{-E\left ( h_k=0,\mathbf{h}_{-k},\mathbf{v} \right )+E\left ( h_k=1,\mathbf{h}_{-k},\mathbf{v} \right )}}\\ &=\frac{1}{1+e^{\left [ \beta \left ( \mathbf{v},\mathbf{h}_{-k} \right )+0\cdot \alpha _k\left ( \mathbf{v} \right ) \right ]+\left [ -\beta \left ( \mathbf{v},\mathbf{h}_{-k} \right )-1\cdot \alpha _k\left ( \mathbf{v} \right ) \right ]}}\\ &=\frac{1}{1+e^{-\alpha _k\left ( \mathbf{v} \right )}} \end{align*}
Recalling the definition of the sigmoid function:
Sigmoid\left ( x \right )=\frac{1}{1+e^{-x}}
we obtain:
\begin{align*} P\left ( h_k=1\mid \mathbf{v} \right ) &= Sigmoid\left ( \alpha _k\left ( \mathbf{v} \right ) \right )\\ &= Sigmoid\left ( b_k+\sum_{i=1}^{n_v}w_{k,i}v_i \right ) \end{align*}
Similarly, given the hidden states, the probability that visible unit $k$ is activated, $P\left ( v_k=1\mid \mathbf{h} \right )$, is:
\begin{align*} P\left ( v_k=1\mid \mathbf{h} \right ) &= Sigmoid\left ( \alpha _k\left ( \mathbf{h} \right ) \right )\\ &= Sigmoid\left ( a_k+\sum_{j=1}^{n_h}w_{j,k}h_j \right ) \end{align*}
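These two conditional probabilities are exactly what the training code later relies on. The following minimal NumPy sketch (an illustration, not the original implementation; the sizes and names are assumptions) computes $P(h_j=1\mid \mathbf{v})$ for all hidden units and $P(v_i=1\mid \mathbf{h})$ for all visible units at once, using the weight matrix $W\in \mathbb{R}^{n_h\times n_v}$ and the biases $\mathbf{a}$, $\mathbf{b}$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v, W, b):
    # P(h_j = 1 | v) = sigmoid(b_j + sum_i w_{j,i} v_i), computed for all j
    return sigmoid(b + W @ v)

def p_v_given_h(h, W, a):
    # P(v_i = 1 | h) = sigmoid(a_i + sum_j w_{j,i} h_j), computed for all i
    return sigmoid(a + W.T @ h)

# toy example with assumed sizes n_v = 4, n_h = 3
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 4))
a, b = np.zeros(4), np.zeros(3)
v = np.array([1.0, 0.0, 1.0, 1.0])
print(p_h_given_v(v, W, b))                           # activation probabilities of the 3 hidden units
print(p_v_given_h(np.array([1.0, 0.0, 1.0]), W, a))   # activation probabilities of the 4 visible units
```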
The parameters of an RBM are the weights between the visible and hidden layers together with the biases of the two layers, i.e. $\theta =\left ( W,\mathbf{a},\mathbf{b} \right )$. Given a set of training samples, training means finding parameters $\theta$ such that the probability distribution represented by the RBM fits the training data as well as possible.
Suppose the training set is:
\mathbf{X}=\left \{ \mathbf{v}^1, \mathbf{v}^2, \cdots , \mathbf{v}^{n_s} \right \}
where $n_s$ is the number of training samples and $\mathbf{v}^i=\left ( v_1^i,v_2^i,\cdots ,v_{n_v}^i \right )^T$. To learn the model parameters, we want the data reconstructed by the model to agree with the original data as closely as possible, so the training objective of the RBM is to maximize the following likelihood function:
L_\theta =\prod_{i=1}^{n_s}P\left ( \mathbf{v}^i \right )
As usual, instead of maximizing the likelihood directly we maximize its logarithm:
\ln L_\theta =\ln \prod_{i=1}^{n_s}P\left ( \mathbf{v}^i \right )=\sum_{i=1}^{n_s}\ln P\left ( \mathbf{v}^i \right )
This optimization problem can be solved with gradient ascent, whose update rule is:
\theta =\theta +\eta \frac{\partial \ln L_\theta }{\partial \theta }
where $\eta > 0$ is the learning rate. To derive $\frac{\partial \ln L_\theta }{\partial \theta }$, consider for simplicity the case of a single sample:
\begin{align*} \ln L_\theta &= \ln P\left ( \mathbf{v} \right )\\ &= \ln \left ( \frac{1}{Z}\sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )} \right )\\ &= \ln \sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}-\ln Z\\ &= \ln \sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}-\ln \sum _{\mathbf{v},\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )} \end{align*}
Then $\frac{\partial \ln L_\theta }{\partial \theta }$ is:
\begin{align*} \frac{\partial \ln L_\theta }{\partial \theta }&= \frac{\partial \ln P\left ( \mathbf{v} \right )}{\partial \theta }\\ &= \frac{\partial }{\partial \theta }\left ( \ln \sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )} \right )-\frac{\partial }{\partial \theta }\left ( \ln \sum _{\mathbf{v},\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )} \right )\\ &= -\frac{1}{\sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}\sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }+\frac{1}{\sum _{\mathbf{v},\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}\sum _{\mathbf{v},\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta } \end{align*}
Noting that:
\frac{e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}{\sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}=\frac{\frac{e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}{Z}}{\frac{\sum _{\mathbf{h}}e^{-E\left ( \mathbf{v},\mathbf{h} \right )}}{Z}}=\frac{P\left ( \mathbf{v},\mathbf{h} \right )}{P\left ( \mathbf{v} \right )}=P\left ( \mathbf{h}\mid \mathbf{v} \right )
the gradient above can be written as:
\frac{\partial \ln L_\theta }{\partial \theta }=-\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }+\sum _{\mathbf{v},\mathbf{h}}P\left ( \mathbf{v},\mathbf{h} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }
Here, $\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }$ is the expectation of the energy gradient $\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }$ under the conditional distribution $P\left ( \mathbf{h}\mid \mathbf{v} \right )$, and $\sum _{\mathbf{v},\mathbf{h}}P\left ( \mathbf{v},\mathbf{h} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }$ is its expectation under the joint distribution $P\left ( \mathbf{v},\mathbf{h} \right )$.
The joint-distribution term $\sum _{\mathbf{v},\mathbf{h}}P\left ( \mathbf{v},\mathbf{h} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }$ can be rewritten as:
\begin{align*} \sum _{\mathbf{v},\mathbf{h}}P\left ( \mathbf{v},\mathbf{h} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta } &= \sum _{\mathbf{v}}\sum _{\mathbf{h}}P\left ( \mathbf{v} \right )P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta } \\ &= \sum _{\mathbf{v}}P\left ( \mathbf{v} \right )\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta } \end{align*}
Therefore it suffices to compute $\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial \theta }$, which splits into three parts: the derivatives with respect to $w_{j,i}$, $a_i$ and $b_j$.
These three parts are computed as follows.
Recall that:
E_\theta \left ( \mathbf{v},\mathbf{h} \right )=-\sum_{i=1}^{n_v}a_iv_i-\sum_{j=1}^{n_h}b_jh_j-\sum_{i=1}^{n_v}\sum_{j=1}^{n_h}h_jw_{j,i}v_i
Then, since $\frac{\partial E}{\partial w_{j,i}}=-h_jv_i$, $\frac{\partial E}{\partial a_i}=-v_i$ and $\frac{\partial E}{\partial b_j}=-h_j$, and since each $h_j$ is binary, we have:

\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial w_{j,i}}=-\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )h_jv_i=-P\left ( h_j=1\mid \mathbf{v} \right )v_i

\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial a_i}=-\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )v_i=-v_i

\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )\frac{\partial E\left ( \mathbf{v},\mathbf{h} \right )}{\partial b_j}=-\sum _{\mathbf{h}}P\left ( \mathbf{h}\mid \mathbf{v} \right )h_j=-P\left ( h_j=1\mid \mathbf{v} \right )
Therefore, the components of $\frac{\partial \ln L_\theta }{\partial \theta }$ are:
\frac{\partial \ln L_\theta }{\partial w_{j,i} }=P\left ( h_j=1\mid \mathbf{v} \right )v_i-\sum_{\mathbf{v}}P\left ( \mathbf{v} \right )P\left ( h_j=1\mid \mathbf{v} \right )v_i
\frac{\partial \ln L_\theta }{\partial a_i }=v_i-\sum_{\mathbf{v}}P\left ( \mathbf{v} \right )v_i
\frac{\partial \ln L_\theta }{\partial b_j }=P\left ( h_j=1\mid \mathbf{v} \right )-\sum_{\mathbf{v}}P\left ( \mathbf{v} \right )P\left ( h_j=1\mid \mathbf{v} \right )
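As a sanity check on these formulas, the following sketch (purely illustrative and not part of the original text; the brute-force sum over all $\mathbf{v}$ is only feasible for a toy model with a handful of units) computes the exact gradient of $\ln P(\mathbf{v})$ with respect to $W$, $\mathbf{a}$ and $\mathbf{b}$ for a single training vector:

```python
import itertools
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def exact_log_likelihood_grad(v, W, a, b):
    """Exact gradient of ln P(v) for a tiny binary RBM (brute force over all v')."""
    n_h, n_v = W.shape
    # data-dependent term: expectation under P(h | v)
    p_h = sigmoid(b + W @ v)
    grad_W = np.outer(p_h, v)
    grad_a = v.astype(float)
    grad_b = p_h.copy()
    # model term: expectation under P(v'), using the unnormalised marginal
    # P(v') proportional to exp(a.v') * prod_j (1 + exp(b_j + W_j.v'))
    vs = [np.array(x, dtype=float) for x in itertools.product([0, 1], repeat=n_v)]
    weights = np.array([np.exp(a @ x) * np.prod(1.0 + np.exp(b + W @ x)) for x in vs])
    probs = weights / weights.sum()
    for p, x in zip(probs, vs):
        q_h = sigmoid(b + W @ x)
        grad_W -= p * np.outer(q_h, x)
        grad_a -= p * x
        grad_b -= p * q_h
    return grad_W, grad_a, grad_b

# toy example with assumed sizes n_v = 3, n_h = 2
rng = np.random.default_rng(2)
W = rng.normal(scale=0.1, size=(2, 3))
a, b = np.zeros(3), np.zeros(2)
gW, ga, gb = exact_log_likelihood_grad(np.array([1.0, 0.0, 1.0]), W, a, b)
print(gW, ga, gb, sep="\n")
```

In practice the model term cannot be enumerated, which is exactly why the Contrastive Divergence algorithm below approximates it by Gibbs sampling.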
Hinton proposed an efficient algorithm for training RBMs: the Contrastive Divergence (CD) algorithm.
The k-step CD algorithm (CD-k) proceeds as follows.
For each training vector $\mathbf{v}$, initialize $\mathbf{v}^{\left ( 0 \right )}:=\mathbf{v}$ and run $k$ steps of Gibbs sampling, where step $t$ performs, in order: sampling $\mathbf{h}^{\left ( t-1 \right )}$ from $P\left ( \mathbf{h}\mid \mathbf{v}^{\left ( t-1 \right )} \right )$, and then sampling $\mathbf{v}^{\left ( t \right )}$ from $P\left ( \mathbf{v}\mid \mathbf{h}^{\left ( t-1 \right )} \right )$.
These two operations are denoted sample_h_given_v and sample_v_given_h, respectively. Writing $p_j^{\mathbf{v}}=P\left ( h_j=1\mid \mathbf{v} \right ),\ j=1,2,\cdots ,n_h$, sample_h_given_v computes $p_j^{\mathbf{v}}$ for every hidden unit and then sets $h_j=1$ with probability $p_j^{\mathbf{v}}$ (and $h_j=0$ otherwise).
Similarly, for sample_v_given_h, writing $p_i^{\mathbf{h}}=P\left ( v_i=1\mid \mathbf{h} \right ),\ i=1,2,\cdots ,n_v$, the computation sets $v_i=1$ with probability $p_i^{\mathbf{h}}$ and $v_i=0$ otherwise; a sketch of both sampling routines and of one CD-k update built from them is given below.
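The following minimal sketch (an assumed, simplified single-sample implementation for illustration, not the experiment code below) shows sample_h_given_v, sample_v_given_h, and one CD-k parameter update:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v, W, b, rng):
    # p_j = P(h_j = 1 | v); h_j ~ Bernoulli(p_j)
    p = sigmoid(b + W @ v)
    return (rng.random(p.shape) < p).astype(float), p

def sample_v_given_h(h, W, a, rng):
    # p_i = P(v_i = 1 | h); v_i ~ Bernoulli(p_i)
    p = sigmoid(a + W.T @ h)
    return (rng.random(p.shape) < p).astype(float), p

def cd_k_update(v0, W, a, b, k=1, eta=0.1, rng=None):
    """One CD-k gradient step for a single training vector v0 (illustrative)."""
    rng = rng or np.random.default_rng()
    h0, p_h0 = sample_h_given_v(v0, W, b, rng)
    vk, hk, p_hk = v0, h0, p_h0
    for _ in range(k):
        vk, _ = sample_v_given_h(hk, W, a, rng)
        hk, p_hk = sample_h_given_v(vk, W, b, rng)
    # approximate gradient: data statistics minus reconstruction statistics
    W += eta * (np.outer(p_h0, v0) - np.outer(p_hk, vk))
    a += eta * (v0 - vk)
    b += eta * (p_h0 - p_hk)
    return W, a, b
```

Here the hidden probabilities (rather than sampled states) are used in the statistics, which matches the gradient formulas derived above; the experiment code below follows the same pattern on mini-batches.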
Experimental code
# coding: utf-8
import numpy as np


def load_data(file_name):
    """Load a tab-separated file of values in [0, 255] and rescale them to [0, 1]."""
    data = []
    with open(file_name) as f:
        for line in f:
            lines = line.strip().split("\t")
            data.append([float(x) / 255.0 for x in lines])
    return np.array(data)


def sigm(P):
    """Element-wise sigmoid."""
    return 1.0 / (1.0 + np.exp(-P))


def sigmrnd(P):
    """Turn pre-activations P into binary samples: x = 1 with probability sigmoid(P)."""
    p = sigm(P)
    return (np.random.random(p.shape) < p).astype(float)


# step_1: load data
datafile = "b.txt"
data = load_data(datafile)
m, n = np.shape(data)

# step_2: initialize
# Note: in this script the visible bias is called b and the hidden bias c
# (they correspond to a and b in the derivation above).
num_epochs = 10
batch_size = 100
input_dim = n
hidden_sz = 100
alpha = 1.0       # learning rate
momentum = 0.1
W = np.zeros((hidden_sz, input_dim))
vW = np.zeros((hidden_sz, input_dim))
b = np.zeros(input_dim)    # visible bias
vb = np.zeros(input_dim)
c = np.zeros(hidden_sz)    # hidden bias
vc = np.zeros(hidden_sz)

# step_3: training with CD-1
print("Start to train RBM: ")
num_batches = int(m / batch_size)
for epoch in range(num_epochs):
    kk = np.random.permutation(m)   # shuffle the sample indices each epoch
    err = 0.0
    for j in range(num_batches):
        batch = data[kk[j * batch_size:(j + 1) * batch_size]]
        # positive phase: sample the hidden units from the data
        v1 = batch
        h1 = sigmrnd(v1 @ W.T + c)
        # negative phase: reconstruct the visible units, then recompute hidden probabilities
        v2 = sigmrnd(h1 @ W + b)
        h2 = sigm(v2 @ W.T + c)
        # CD-1 statistics
        c1 = h1.T @ v1
        c2 = h2.T @ v2
        # momentum updates of the parameters
        vW = momentum * vW + alpha * (c1 - c2) / batch_size
        vb = momentum * vb + alpha * np.sum(v1 - v2, axis=0) / batch_size
        vc = momentum * vc + alpha * np.sum(h1 - h2, axis=0) / batch_size
        W = W + vW
        b = b + vb
        c = c + vc
        # reconstruction error of this batch
        err += np.sum(np.square(v1 - v2)) / batch_size
    print(epoch, err / num_batches)

# print the learned weight matrix row by row
for row in W:
    print(" ".join(str(x) for x in row))