AlexNet Parameters Explained
Published: 2019-06-23


name: "AlexNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    # Randomly crop the input to 227x227 and mirror it horizontally to enlarge the sample
    # set and so reduce overfitting. According to Alex's paper, this cropping enlarges the
    # training set roughly 2048-fold, and at TEST time 10 new samples are generated per
    # image: the four corner crops and the center crop, plus their mirrors (although this
    # Caffe definition does not mirror at test time).
    mirror: true
    crop_size: 227
    # Zero-center the images by subtracting the dataset mean. My current understanding is
    # that this is a form of normalization: some training images have strong colours and
    # some are pale, and shifting every image's distribution to be centred on the origin
    # standardizes the training samples. See https://my.oschina.net/findbill/blog/661817
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 256   # number of images processed per iteration
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_val_lmdb"
    batch_size: 50
    backend: LMDB
  }
}
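To make the transform_param above concrete, here is a minimal NumPy sketch of what the training-time preprocessing amounts to (the function name and the HxWxC layout are my own illustration, not Caffe code; Caffe itself works on CxHxW blobs and loads the mean image from the binaryproto file):

    import numpy as np

    def train_transform(image, mean, crop_size=227, rng=np.random):
        """Illustrative version of the TRAIN transform_param: subtract the dataset
        mean, take a random crop, and mirror it half the time."""
        h, w, _ = image.shape                    # e.g. a 256x256x3 ImageNet image
        image = image.astype(np.float32) - mean  # zero-center with the precomputed mean
        top = rng.randint(0, h - crop_size + 1)  # random crop position
        left = rng.randint(0, w - crop_size + 1)
        crop = image[top:top + crop_size, left:left + crop_size, :]
        if rng.rand() < 0.5:                     # mirror: true -> horizontal flip
            crop = crop[:, ::-1, :]
        return crop                              # 227x227x3, fed to conv1

The factor of 2048 quoted from the paper comes from taking 224x224 crops of 256x256 images: 32 x 32 crop positions x 2 mirror states = 2048.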
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    # learning-rate settings for the weights
    lr_mult: 1      # learning-rate multiplier: multiplied by base_lr in solver.prototxt, this gives the layer's initial learning rate
    decay_mult: 1   # decay multiplier, which scales the solver's global weight_decay (L2 regularization) for this blob to help against overfitting; the finer details I still need to work out (TODO)
  }
  param {
    # learning-rate settings for the bias
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 96    # number of convolution kernels, i.e. how many feature maps the convolution produces from the input
    kernel_size: 11   # size of each kernel
    stride: 4         # stride of the sliding convolution
    weight_filler {     # weight initialization
      type: "gaussian"  # initialize the weights from a Gaussian distribution
      std: 0.01         # standard deviation 0.01; the mean defaults to 0
    }
    bias_filler {       # bias initialization
      type: "constant"
      value: 0
    }
  }
}
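A quick check of the numbers above, using the standard convolution output-size formula (hand arithmetic, not anything Caffe prints):

    # output side = (input + 2*pad - kernel) // stride + 1
    conv1_out = (227 + 2 * 0 - 11) // 4 + 1
    print(conv1_out)   # -> 55, so conv1 produces 96 feature maps of 55x55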
layer {
  name: "relu1"
  # ReLU zeroes out the parts of conv1's output that are below 0. My own understanding,
  # taken from other articles, is that ReLU is closer to how biological neurons activate:
  # only about 5% of a person's neurons fire for any given input, and ReLU likewise keeps
  # the number of active units low.
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "norm1"
  type: "LRN"   # local response normalization; I don't fully understand it yet (TODO)
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
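To fill in that TODO: local response normalization divides each activation by a measure of the activity in neighbouring channels at the same spatial position. In the AlexNet paper the rule is b_i = a_i / (k + alpha * sum_j a_j^2)^beta, where the sum runs over local_size adjacent channels; here local_size = 5, alpha = 0.0001 and beta = 0.75. A rough NumPy sketch of the cross-channel idea (Caffe's implementation differs in small details, for example how alpha is scaled by local_size and the default value of k, so treat this as the concept rather than the exact code):

    import numpy as np

    def lrn_across_channels(x, local_size=5, alpha=1e-4, beta=0.75, k=1.0):
        """x has shape (channels, height, width)."""
        c = x.shape[0]
        half = local_size // 2
        out = np.empty_like(x, dtype=float)
        for i in range(c):
            lo, hi = max(0, i - half), min(c, i + half + 1)
            sq_sum = np.sum(x[lo:hi] ** 2, axis=0)   # squared activity of neighbouring channels
            out[i] = x[i] / (k + alpha * sq_sum) ** beta
        return out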
layer {
  name: "pool1"
  # Pooling reduces dependence on exact feature positions; I see it as another way of
  # further standardizing the features.
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param {
    pool: MAX        # take the maximum within each window
    kernel_size: 3   # window size
    stride: 2        # stride; because it is smaller than the window, the pooling windows overlap,
                     # which the AlexNet paper reports slightly reduces overfitting compared with
                     # non-overlapping pooling
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 2          # pad the input with 2 blank pixels on each of the four sides
    kernel_size: 5
    group: 2        # split the convolution into two group operations; this was used to work around limited GPU memory
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
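About group: 2 above: the 96 channels coming out of pool1 are split into two halves of 48, each half is convolved with its own 128 kernels, and the two results are stacked to give the 256 output channels, so each conv2 kernel sees only 48 input channels instead of 96. This is what let the original AlexNet be split across two GPUs. A small shape sketch (plain arithmetic, not Caffe internals):

    in_channels, out_channels, groups, kernel = 96, 256, 2, 5
    per_group_in = in_channels // groups      # 48 input channels seen by each kernel
    per_group_out = out_channels // groups    # 128 kernels per group
    print(per_group_out, per_group_in, kernel, kernel)   # each group's weights: 128 x 48 x 5 x 5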
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
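Tracing the spatial size through the stack with the same output-size formula as in the conv1 note gives 227 -> 55 (conv1) -> 27 (pool1) -> 27 (conv2) -> 13 (pool2) -> 13 (conv3, conv4, conv5) -> 6 (pool5), so pool5 emits 6 x 6 x 256 = 9216 values, which is the input width of fc6 below. A sketch that prints the whole trace (the layer list is copied by hand from the prototxt above; Caffe's pooling layers actually round up rather than down, but every division here is exact, so the numbers match):

    def out_size(in_size, kernel, stride=1, pad=0):
        return (in_size + 2 * pad - kernel) // stride + 1

    size = 227                                   # crop_size from the data layer
    for name, k, s, p in [("conv1", 11, 4, 0), ("pool1", 3, 2, 0),
                          ("conv2", 5, 1, 2), ("pool2", 3, 2, 0),
                          ("conv3", 3, 1, 1), ("conv4", 3, 1, 1),
                          ("conv5", 3, 1, 1), ("pool5", 3, 2, 0)]:
        size = out_size(size, k, s, p)
        print(name, size)                        # 55, 27, 27, 13, 13, 13, 13, 6
    print("fc6 input size:", 6 * 6 * 256)        # -> 9216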
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.005
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  # Dropout silences a fraction of the neurons: they produce no output and take no part in
  # the backward weight updates for that iteration. The goal is to reduce fixed dependencies
  # between neurons, because the set of active connections is re-sampled randomly at every
  # iteration.
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5   # drop 50% of the neurons
  }
}
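A minimal sketch of what drop6 does during training (my own NumPy illustration; as far as I know Caffe uses this "inverted" convention, scaling the kept activations by 1 / (1 - dropout_ratio) at training time so that the layer can simply pass data through unchanged at test time):

    import numpy as np

    def dropout_train(x, ratio=0.5, rng=np.random):
        mask = rng.rand(*x.shape) >= ratio    # keep each neuron with probability 1 - ratio
        return x * mask / (1.0 - ratio)       # rescale so the expected activation is unchanged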
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.005
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000   # one score per ImageNet class (1000 classes)
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}
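SoftmaxWithLoss combines a softmax over the 1000 fc8 scores with the cross-entropy (multinomial logistic) loss of the true label; this is the value the solver minimizes. A small single-example sketch (illustrative, not Caffe's numerically hardened implementation):

    import numpy as np

    def softmax_with_loss(scores, label):
        scores = scores - scores.max()                  # shift by the max for numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the class scores
        return -np.log(probs[label])                    # cross-entropy loss of the true class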

Reposted from: https://www.cnblogs.com/gabrialrx/p/7093474.html
