Weight Initialization in Neural Networks

In an ordinary neural network the weights are initialized randomly, while an RBM or autoencoder can learn a better W1.
I have run the relevant experiments myself, and I understand the point of this kind of pre-training: it accelerates convergence.

My question is this:
Since the W1 learned by a sparse autoencoder looks very much like local image patches of digits at various scales,
why can't we simply draw some random samples from the training set and use their pixel data directly as W1?
For example, use the averaged bitmap of ten "0" samples as the incoming weights of one hidden unit.
(The average of ten "0"s minus the average of ten "6"s would look even more like a learned feature.)
We could also multiply by a scale factor to make the values smaller.
I'm not claiming this could replace the autoencoder,
but at least initializing the autoencoder this way might make it converge faster.
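The idea proposed above can be sketched in NumPy. This is only an illustration: the stand-in `train_x` / `train_y` arrays below are random placeholders with the same shapes as the MNIST arrays in the script further down, and the `scale = 0.1` shrink factor is an arbitrary choice, not a tuned value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data with the same shapes as the MNIST arrays used in the
# MATLAB script (hypothetical: replace with the real train_x / train_y).
train_x = rng.random((10000, 784))                 # flattened 28x28 images in [0, 1)
train_y = np.eye(10)[rng.integers(0, 10, 10000)]   # one-hot labels

n_hidden = 196
scale = 0.1  # the "multiply by a scale factor" step from the post

labels = train_y.argmax(axis=1)
W1 = np.empty((n_hidden, 784))
for i in range(n_hidden):
    # Average 10 random samples of one digit class, centre around zero,
    # and shrink, so each hidden unit starts as a faint digit template
    # instead of pure noise.
    c = i % 10
    idx = rng.choice(np.flatnonzero(labels == c), size=10, replace=False)
    W1[i] = scale * (train_x[idx].mean(axis=0) - 0.5)
```

Each row of `W1` is then a scaled, zero-centred class-average image, directly analogous to the commented-out per-sample initialization in the MATLAB script below.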
 
Posted by think__123
 
Code

load mnist_uint8;

train_x = double(train_x(1:10000,:)) / 255;
test_x = double(test_x(1:10000,:)) / 255;
train_y = double(train_y(1:10000,:));
test_y = double(test_y(1:10000,:));
 

rng('default')
nn = nnsetup([784 196 784]);
nn.activation_function = 'sigm'; % Sigmoid activation function
nn.learningRate = 1; % Sigmoid requires a lower learning rate
nn.weightPenaltyL2 = 3e-3; % L2 weight decay

nn.nonSparsityPenalty=0.5;
nn.sparsityTarget= 0.1;

opts.numepochs = 20; % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples

% Proposed initialization (uncomment to try): seed each hidden unit's
% weights with the pixel data of one training image, centred at zero.
% Column 1 of nn.W{1} holds the bias, hence the j+1 offset.
% for i = 1:196
%     for j = 1:784
%         nn.W{1}(i, j+1) = train_x(i, j) - 0.5;
%     end
% end

nn = nntrain(nn, train_x, train_x, opts);
visualize(nn.W{1}(:,2:end)');

rng('default')
nn1 = nnsetup([784 196 30 10]);
nn1.W{1}=nn.W{1};
nn1.activation_function = 'sigm'; % Sigmoid activation function
nn1.learningRate = 1; % Sigmoid requires a lower learning rate
opts1.numepochs = 10; % Number of full sweeps through data
opts1.batchsize = 100; % Take a mean gradient step over this many samples
nn1 = nntrain(nn1, train_x, train_y, opts1);
[er, bad] = nntest(nn1, test_x, test_y);
fprintf('ex1: %f\n',er);