Neural Networks in MATLAB and Their Applications: The BP Network as an Example
Lecturer: Associate Professor 王茂芝
1 A Prediction Problem
Given: a set of reference input and output data (see the attachment). Task: predict the outputs corresponding to another set of inputs. Background: omitted.
2 The BP Network
3 The newff Command in MATLAB
NEWFF Create a feed-forward backpropagation network.
Syntax
net = newff
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Parameter notes for the newff command
NET = NEWFF creates a new network with a dialog box.
NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
and returns an N layer feed-forward backprop network.
Parameter notes
The transfer functions TFi can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN.
The training function BTF can be any of the backprop training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
*WARNING*: TRAINLM is the default training function because it is very fast, but it requires a lot of memory to run. If you get an "out-of-memory" error when training try doing one of these:
(1) Slow TRAINLM training, but reduce memory requirements, by setting NET.trainParam.mem_reduc to 2 or more.
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
The learning function BLF can be either of the backpropagation learning functions such as LEARNGD, or LEARNGDM.
The performance function can be any of the differentiable performance functions such as MSE or MSEREG.
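To make these parameters concrete, here is a minimal sketch (the input ranges, layer sizes, and transfer functions are illustrative assumptions, not values from the lecture):
pr = [0 1; 0 1; 0 1];     % Rx2 ranges for R = 3 input elements (assumed)
net = newff(pr,[5 1],{'tansig','purelin'},'trainlm');
% a two-layer network: 5 tansig neurons feeding 1 purelin output neuron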
4 The train Command in MATLAB
TRAIN Train a neural network.
Syntax
[net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)
Description
TRAIN trains a network NET according to NET.trainFcn and NET.trainParam.
Input parameter notes
TRAIN(NET,P,T,Pi,Ai,VV,TV) takes,
NET - Network.
P - Network inputs.
T - Network targets, default = zeros.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
VV - Structure of validation vectors, default = [].
TV - Structure of test vectors, default = [].
Output parameter notes
and returns,
NET - New network.
TR - Training record (epoch and perf).
Y - Network outputs.
E - Network errors.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
Notes
Note that T is optional and need only be used for networks that require targets.
Pi and Pf are also optional and need only be used for networks that have input or layer delays.
Input data structure notes
The cell array format is easiest to describe. It is most convenient for networks with multiple inputs and outputs, and allows sequences of inputs to be presented:
P - NixTS cell array, each element P{i,ts} is an RixQ matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Y - NOxTS cell array, each element Y{i,ts} is a UixQ matrix.
E - NtxTS cell array, each element E{i,ts} is a VixQ matrix.
Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
Where:
Ni = net.numInputs
Nl = net.numLayers
Nt = net.numTargets
ID = net.numInputDelays
LD = net.numLayerDelays
TS = number of time steps
Q = batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
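As an illustration of the calling convention, a minimal sketch in the simpler matrix format (for a static network TS = 1, so P can be given directly as an RxQ matrix; all data here are made up):
p = rand(3,20);                              % 3 inputs, batch of 20 samples
t = sum(p,1);                                % one made-up target per sample
net = newff(minmax(p),[5 1],{'tansig','purelin'});
[net,tr] = train(net,p,t);                   % tr holds epoch and perf records
y = sim(net,p);                              % network outputs after training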
5 Implementation
Data processing and preparation
Convert the WORD data into TXT file format; read the data with dlmread; decide whether the data should be normalized.
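For instance, reading the converted file and normalizing each column to [0,1] might look as follows (a sketch; the file name data.txt and the choice of min-max normalization are assumptions):
in = dlmread('data.txt');        % one row per sample, one column per variable
mn = min(in); mx = max(in);      % column-wise minima and maxima
in = (in - repmat(mn,size(in,1),1)) ./ repmat(mx-mn,size(in,1),1);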
Create the network
Prepare everything needed for calling the newff command:
the formation of the pr matrix;
fixing the network structure: the number of layers and the number of neurons in each layer;
choosing the transfer function of each layer.
Pay attention to the meaning of each parameter.
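When the transposed input matrix p (one row per input element) is already available, the toolbox function minmax offers a shortcut for the first item; e.g.:
pr = minmax(p);   % Rx2 matrix of row-wise minima and maxima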
Train the network
Prepare the data for calling the train command: determine the input samples, determine the reference outputs, and set the training parameters (the number of epochs), e.g. net.trainParam.epochs=100.
Call the training command: net=train(net,p,t);
Simulate the outputs
Call y=sim(net,p) to simulate the network outputs, and plot the results against the targets for comparison.
Inspect the network parameters and weights
Typing net at the prompt displays the network object; its parameters and weights can be referenced and inspected field by field.
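For a two-layer network such as the one built here, a short sketch of that inspection (the indices assume one hidden layer and one output layer):
net.IW{1,1}   % weights from the inputs to layer 1
net.LW{2,1}   % weights from layer 1 to layer 2
net.b{1}      % biases of layer 1
net.b{2}      % biases of layer 2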
6 Prediction and Analysis
Simulate the outputs with sim; retrain the network and simulate again; plot the results for comparison.
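A sketch of how such a comparison could be coded (variable names follow the program in section 7; interpreting retraining as reinitializing the weights with init and calling train again is an assumption):
plot(t','b-'); hold on; plot(y','r--');   % targets vs. first run
net = init(net);                          % reinitialize weights and biases
net = train(net,p,t);
y2 = sim(net,p);
plot(y2','g-.'); legend('target','run 1','run 2');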
7 Program Implementation
clc;
clear all;
clear net;
load data;
load data_pre;
c1=in(:,1); c2=in(:,2); c3=in(:,3); c4=in(:,4);
c5=in(:,5); c6=in(:,6); c7=in(:,7); c8=in(:,8);
c1_max=max(c1); c2_max=max(c2); c3_max=max(c3); c4_max=max(c4);
c5_max=max(c5); c6_max=max(c6); c7_max=max(c7); c8_max=max(c8);
c1_min=min(c1); c2_min=min(c2); c3_min=min(c3); c4_min=min(c4);
c5_min=min(c5); c6_min=min(c6); c7_min=min(c7); c8_min=min(c8);
% Optional: write the normalized columns back into in before forming p
% in(:,1)=c1; in(:,2)=c2; in(:,3)=c3; in(:,4)=c4;
% in(:,5)=c5; in(:,6)=c6; in(:,7)=c7; in(:,8)=c8;
pr=[c1_min,c1_max; c2_min,c2_max; c3_min,c3_max; c4_min,c4_max;
    c5_min,c5_max; c6_min,c6_max; c7_min,c7_max; c8_min,c8_max];
% The newff call is missing from the recovered slides; the layer sizes
% and transfer functions below are assumptions, not source values.
net=newff(pr,[10 1],{'tansig','purelin'});
p=in'; t=out';
net.trainParam.epochs=100;
net=train(net,p,t);
y=sim(net,p);
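The prediction step itself does not appear in the recovered slides. A sketch of what it could look like, assuming data_pre supplies the new samples in a matrix with the same eight columns (the variable name in_pre is hypothetical):
p_pre = in_pre';          % hypothetical input matrix loaded from data_pre
y_pre = sim(net,p_pre);   % predicted outputs for the new inputs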