
ResNet50 and Its Keras Implementation

Steve Wang
Published on 2019-05-28 09:56:52

If you already understand the theory, jump straight to the ResNet50 implementation: 卷积神经网络 第三周作业:Residual+Networks±+v1

You may have seen the blog post ResNet解析, which has over 120,000 views, but its first section on ResNet differs completely from Andrew Ng's account, so I am skeptical of that post. You can find a link to the paper that proposed the network at the bottom of this article; this article can serve as a basis for studying that paper.

ResNet = Residual Network
All non-residual networks are called plain networks, a relative term coined in the original paper.

The residual network is a deep convolutional network proposed in 2015 by the well-known researcher Kaiming He. On its debut it won first place in the ImageNet classification, detection, and localization tasks. Residual networks are easier to optimize and can gain accuracy from considerably increased depth. Their core contribution is resolving the side effect of added depth (the degradation problem), so that performance can be improved simply by making the network deeper.

Motivation of ResNet

In theory, the deeper a neural network is, the richer the features it can compute and the better it should perform; the only drawback of a deeper network would be the enormous number of parameters to train, which demands substantial computational resources.

In practice, however, as the network deepens you will find that the magnitude (norm) of the gradients drops sharply. This is called vanishing gradients, and it makes learning extremely slow. In rare cases the gradients instead grow sharply, known as exploding gradients. The symptom is that training-set accuracy, rather than improving over a shallower network, actually drops.

The residual network was proposed precisely to counter this vanishing-gradient problem in deepened networks.

What does Residual mean?

Suppose the underlying mapping implied by a few stacked layers is H(X), where X denotes the input to the first of those layers. If multiple nonlinear layers can represent a sufficiently complex hypothesis, then H(X) is equivalent to a residual function F(X) = H(X) - X that likewise approximates that hypothesis, and the original mapping can be written as F(X) + X. Both H(X) and F(X) are essentially approximations of the hypothesis.

(Translated from the original paper.)
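The relation F(X) = H(X) - X can be made concrete with a small numpy sketch (the mapping h below is a hypothetical stand-in for a few stacked layers, not anything from the paper):

```python
import numpy as np

# Hypothetical mapping H(X) that a few stacked layers might learn.
def h(x):
    return 2.0 * x + 1.0

# The residual those same layers could equivalently learn: F(X) = H(X) - X.
def f(x):
    return h(x) - x

x = np.array([1.0, -2.0, 3.0])
# Reconstructing the mapping as F(X) + X recovers H(X) exactly.
assert np.allclose(f(x) + x, h(x))
```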

Based on this principle, ResNet proposes two mappings: the identity mapping and the residual mapping. The identity mapping is the path in the figure above that skips two weight layers and feeds X directly into the ReLU two layers later; the residual mapping is the original plain-network portion. It is called an identity because the skipped path goes through no weight layers and performs no computation: G(X) = X.

Andrew Ng's explanation in the course videos: we take the output X of some layer as input and skip several consecutive layers, feeding it in just before a later layer's ReLU. This makes a residual block very easy to learn (it only has to perform a ReLU).

Of course, this requires the dimension of X to match that at the ReLU. The figure above shows how, when the dimensions differ, a term W_s × a^[l] restores agreement; W_s can be simply a zero-padding matrix, or a parameter matrix to be learned.
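A numpy sketch of the two options for W_s (both matrices here are illustrative stand-ins, not values from the post): zero-padding the activation up to the target dimension, or multiplying by a learned projection.

```python
import numpy as np

a_l = np.array([1.0, 2.0])   # activation a^[l] with 2 channels (made up)
target_dim = 4               # the later layer expects 4 channels

# Option 1: W_s as a fixed zero-padding matrix -- nothing to learn.
W_pad = np.zeros((target_dim, a_l.shape[0]))
W_pad[:a_l.shape[0], :a_l.shape[0]] = np.eye(a_l.shape[0])
padded = W_pad @ a_l
print(padded)        # [1. 2. 0. 0.]

# Option 2: W_s as a learned parameter matrix (random here, standing in
# for trained weights); the shape alone guarantees matching dimensions.
rng = np.random.default_rng(0)
W_learned = rng.normal(size=(target_dim, a_l.shape[0]))
print((W_learned @ a_l).shape)   # (4,)
```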

Shortcut Connection & Residual Block

The residual block presented in Andrew Ng's course is simplified; below is the detailed version from the original paper:

As shown, the curved arrow is the shortcut connection; it skips over two weight layers (takes a shortcut). A section of a plain network plus a shortcut connection constitutes a residual block. The shortcut lets each residual block easily learn the identity mapping and, during backpropagation, lets gradients propagate directly to shallower layers.

Using residual blocks repeatedly on a plain network yields a residual network. Deciding how many to use, and where in which network, requires keen insight and a thorough understanding of deep neural networks; in practice people follow the original and related papers, and varying these choices can produce new, related networks.

The two kinds of residual block proposed in the paper.

The residual block on the left preserves the input dimension, while the one on the right is a "bottleneck design": the first layer reduces the 256-dimensional input to 64 dimensions, the third layer restores it to 256, and the shortcut/skip connection bypasses all three layers, feeding the input straight to the ReLU.
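The payoff of the bottleneck is computational: counting weights (ignoring biases) shows the 1x1-3x3-1x1 stack is an order of magnitude cheaper than two plain 3x3 convolutions kept at 256 channels. The comparison below is my own arithmetic, not from the post:

```python
# Bottleneck block: 1x1 (256->64), 3x3 (64->64), 1x1 (64->256)
bottleneck = 1 * 1 * 256 * 64 + 3 * 3 * 64 * 64 + 1 * 1 * 64 * 256
# A plain alternative: two 3x3 convolutions at 256 channels throughout
plain = 2 * (3 * 3 * 256 * 256)
print(bottleneck)  # 69632
print(plain)       # 1179648
```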

ResNet50

ResNet50 is the residual network with 50 layers in total; layers with no trainable parameters, such as pooling layers, are not counted.

The paper proposes several common residual networks, differing mainly in depth; the 50- and 101-layer variants are the most common.

The 50-layer ResNet contains two structures, the identity block and the convolutional block, shown below.

Identity block. Skip connection “skips over” 3 layers
The convolutional block

The main difference between the two structures is whether a convolution is applied on the shortcut connection.

ResNet50, built from these two kinds of block, is shown in the figure below:

ResNet50 roughly comprises 5 stages, i.e., five convolutional phases with different parameters, as shown above.

(Note: the original paper counts the max pooling as the start of Stage 2.)

"Filters of size" below refers to the number of filters in the three convolutions, not the kernel size f; the parameter f is as given in the 50-layer column of the table above and spelled out below.

  • Zero-padding pads the input with a pad of (3,3)
  • Stage 1:
    • The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is “conv1”.
    • BatchNorm is applied to the channels axis of the input.
    • MaxPooling uses a (3,3) window and a (2,2) stride.
  • Stage 2:
    • The convolutional block uses three sets of filters of size 64,64,256, “f” is 3, “s” is 1 and the block is “a”.
    • The 2 identity blocks use three sets of filters of size 64,64,256, “f” is 3 and the blocks are “b” and “c”.
  • Stage 3:
    • The convolutional block uses three sets of filters of size 128,128,512, “f” is 3, “s” is 2 and the block is “a”.
    • The 3 identity blocks use three sets of filters of size 128,128,512, “f” is 3 and the blocks are “b”, “c” and “d”.
  • Stage 4:
    • The convolutional block uses three sets of filters of size 256, 256, 1024, “f” is 3, “s” is 2 and the block is “a”.
    • The 5 identity blocks use three sets of filters of size 256, 256, 1024, “f” is 3 and the blocks are “b”, “c”, “d”, “e” and “f”.
  • Stage 5:
    • The convolutional block uses three sets of filters of size 512, 512, 2048, “f” is 3, “s” is 2 and the block is “a”.
    • The 2 identity blocks use three sets of filters of size 256, 256, 2048, “f” is 3 and the blocks are “b” and “c”.
    • The 2D Average Pooling uses a window of shape (2,2) and its name is “avg_pool”.
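As a sanity check on the name (my own arithmetic, not from the post): one initial convolution, 3 + 4 + 6 + 3 = 16 blocks of 3 convolutions each across Stages 2-5, and the final fully connected layer give exactly 50 weight layers; pooling layers carry no parameters and are not counted.

```python
blocks_per_stage = [3, 4, 6, 3]              # Stages 2-5, as listed above
conv_layers = 1 + 3 * sum(blocks_per_stage)  # "conv1" + 3 convs per block
total_layers = conv_layers + 1               # + the final fully connected layer
print(total_layers)  # 50
```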

Keras Implementation

This implementation is the assignment from Andrew Ng's deep learning course series; if you want to master it fully, I strongly recommend this post, which contains the complete assignment walkthrough:

卷积神经网络 第三周作业:Residual+Networks±+v1

The code follows.

Import the required libraries:
import numpy as np
import tensorflow as tf
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
# import pydot
# from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

The identity block

# GRADED FUNCTION: identity_block

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 4
    
    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    
    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """
    
    # defining name basis
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"
    
    # Retrieve Filters
    F1, F2, F3 = filters
    
    # Save the input value. You'll need this later to add back to the main path. 
    X_shortcut = X
    
    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding="valid", 
               name=conv_name_base+"2a", kernel_initializer=glorot_uniform(seed=0))(X)
    # 'valid' means no padding / glorot_uniform is equivalent to Xavier initialization - Steve
    
    X = BatchNormalization(axis=3, name=bn_name_base + "2a")(X)
    X = Activation("relu")(X)
    ### START CODE HERE ###
    
    # Second component of main path (3 lines)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding="same",
               name=conv_name_base+"2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    # Third component of main path (2 lines)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding="valid",
               name=conv_name_base+"2c", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+"2c")(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (2 lines)
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)
    ### END CODE HERE ###
    
    return X
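A quick pure-Python sanity check (my own sketch) of why the final Add() is well defined: every convolution on the identity block's main path uses stride 1, so the 'valid' 1x1 kernels and the 'same'-padded fxf kernel never change the spatial size, and choosing F3 equal to the input channel count restores the depth to match X_shortcut.

```python
def conv_out(n, f, pad, stride):
    """Spatial size after a convolution: floor((n + 2*pad - f) / stride) + 1."""
    return (n + 2 * pad - f) // stride + 1

n = 15   # example input size (matches the model summary further down)
# 1x1 convolution, 'valid' padding, stride 1 -> size unchanged
assert conv_out(n, 1, 0, 1) == n
# fxf convolution with 'same' padding (pad = (f-1)//2), stride 1 -> unchanged
assert conv_out(n, 3, 1, 1) == n
```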

The convolutional block

# GRADED FUNCTION: convolutional_block

def convolutional_block(X, f, filters, stage, block, s = 2):
    """
    Implementation of the convolutional block as defined in Figure 4
    
    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used
    
    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """
    
    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    
    # Retrieve Filters
    F1, F2, F3 = filters
    
    # Save the input value
    X_shortcut = X


    ##### MAIN PATH #####
    # First component of main path 
    X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)
    
    ### START CODE HERE ###

    # Second component of main path (3 lines)
    X = Conv2D(F2, (f, f), strides = (1, 1), name = conv_name_base + '2b',padding='same', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (2 lines)
    X = Conv2D(F3, (1, 1), strides = (1, 1), name = conv_name_base + '2c',padding='valid', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #### (2 lines)
    X_shortcut = Conv2D(F3, (1, 1), strides = (s, s), name = conv_name_base + '1',padding='valid', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (2 lines)
    X = layers.add([X, X_shortcut])
    X = Activation('relu')(X)
    
    ### END CODE HERE ###
    
    return X

ResNet50 itself

# GRADED FUNCTION: ResNet50

def ResNet50(input_shape = (64, 64, 3), classes = 6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """
    
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    
    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)
    
    # Stage 1
    X = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block="b")
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block="c")
    ### START CODE HERE ###

    # Stage 3 (4 lines)
    # The convolutional block uses three set of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    # The 3 identity blocks use three set of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="b")
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="c")
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="d")
    
    # Stage 4 (6 lines)
    # The convolutional block uses three set of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    # The 5 identity blocks use three set of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="b")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="c")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="d")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="e")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="f")
    

    # Stage 5 (3 lines)
    # The convolutional block uses three set of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    # The 2 identity blocks use three set of filters of size [256, 256, 2048], "f" is 3 and the blocks are "b" and "c".
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="c")
    
    # The assignment says the filters should be [256, 256, 2048], but that fails the grader; use [512, 512, 2048] to pass
    

    # AVGPOOL (1 line). Use "X = AveragePooling2D(...)(X)"
    # The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
    X = AveragePooling2D(pool_size=(2, 2), name="avg_pool", padding="same")(X)
    
    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc"+str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
    
    
    # Create model
    model = Model(inputs=X_input, outputs=X, name="ResNet50")

    return model

Define the model

model = ResNet50(input_shape=(64, 64, 3), classes=6)

Compile the model

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Load the training data

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

"""
def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y
"""
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
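convert_to_one_hot (its definition appears commented out above) builds one-hot columns with np.eye; a quick check of the shape convention after the final transpose:

```python
import numpy as np

def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y

Y_labels = np.array([[0, 2, 1]])              # made-up labels for 3 examples
one_hot = convert_to_one_hot(Y_labels, 6).T   # transpose -> (examples, classes)
print(one_hot.shape)   # (3, 6)
print(one_hot[1])      # [0. 0. 1. 0. 0. 0.] -- the row for label 2
```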

Train the model (this may take quite a while)

model.fit(X_train, Y_train, epochs = 20, batch_size = 32)
Epoch 1/20
1080/1080 [==============================] - 268s 248ms/step - loss: 2.9721 - acc: 0.2898
Epoch 2/20
1080/1080 [==============================] - 270s 250ms/step - loss: 1.8968 - acc: 0.3639
Epoch 3/20
1080/1080 [==============================] - 268s 248ms/step - loss: 1.5796 - acc: 0.4463
Epoch 4/20
1080/1080 [==============================] - 251s 233ms/step - loss: 1.2796 - acc: 0.5213
Epoch 5/20
1080/1080 [==============================] - 260s 241ms/step - loss: 0.9278 - acc: 0.6722
Epoch 6/20
1080/1080 [==============================] - 261s 242ms/step - loss: 0.7286 - acc: 0.7315
Epoch 7/20
1080/1080 [==============================] - 258s 239ms/step - loss: 0.4950 - acc: 0.8324
Epoch 8/20
1080/1080 [==============================] - 261s 241ms/step - loss: 0.3646 - acc: 0.8889
Epoch 9/20
1080/1080 [==============================] - 258s 238ms/step - loss: 0.3135 - acc: 0.9019
Epoch 10/20
1080/1080 [==============================] - 255s 237ms/step - loss: 0.1291 - acc: 0.9639
Epoch 11/20
1080/1080 [==============================] - 253s 235ms/step - loss: 0.0814 - acc: 0.9704
Epoch 12/20
1080/1080 [==============================] - 260s 240ms/step - loss: 0.0901 - acc: 0.9685
Epoch 13/20
1080/1080 [==============================] - 260s 240ms/step - loss: 0.0848 - acc: 0.9694
Epoch 14/20
1080/1080 [==============================] - 261s 242ms/step - loss: 0.0740 - acc: 0.9741
Epoch 15/20
1080/1080 [==============================] - 258s 239ms/step - loss: 0.0488 - acc: 0.9833
Epoch 16/20
1080/1080 [==============================] - 260s 241ms/step - loss: 0.0257 - acc: 0.9981
Epoch 17/20
1080/1080 [==============================] - 259s 240ms/step - loss: 0.0029 - acc: 1.0000
Epoch 18/20
1080/1080 [==============================] - 260s 241ms/step - loss: 0.0014 - acc: 1.0000
Epoch 19/20
1080/1080 [==============================] - 257s 238ms/step - loss: 8.9325e-04 - acc: 1.0000
Epoch 20/20
1080/1080 [==============================] - 255s 236ms/step - loss: 6.9667e-04 - acc: 1.0000
<keras.callbacks.History at 0x21761b11710>

Evaluate accuracy

preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
120/120 [==============================] - 6s 49ms/step
Loss = 0.11732131155828635
Test Accuracy = 0.9666666666666667
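To classify a single image you would take the argmax of the model's 6-way softmax output; a minimal numpy sketch with a made-up probability row (model.predict itself is not run here):

```python
import numpy as np

# Hypothetical softmax output for one image over the 6 classes.
probs = np.array([[0.05, 0.10, 0.60, 0.10, 0.10, 0.05]])
predicted_class = int(np.argmax(probs, axis=1)[0])
print(predicted_class)  # 2
```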

Print the model architecture

model.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_3 (InputLayer)            (None, 64, 64, 3)    0                                            
__________________________________________________________________________________________________
zero_padding2d_3 (ZeroPadding2D (None, 70, 70, 3)    0           input_3[0][0]                    
__________________________________________________________________________________________________
conv (Conv2D)                   (None, 32, 32, 64)   9472        zero_padding2d_3[0][0]           
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 32, 32, 64)   256         conv[0][0]                       
__________________________________________________________________________________________________
activation_96 (Activation)      (None, 32, 32, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 15, 15, 64)   0           activation_96[0][0]              
__________________________________________________________________________________________________
res2a_branch2a (Conv2D)         (None, 15, 15, 64)   4160        max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_97 (Activation)      (None, 15, 15, 64)   0           bn2a_branch2a[0][0]              
__________________________________________________________________________________________________
res2a_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_97[0][0]              
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_98 (Activation)      (None, 15, 15, 64)   0           bn2a_branch2b[0][0]              
__________________________________________________________________________________________________
res2a_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_98[0][0]              
__________________________________________________________________________________________________
res2a_branch1 (Conv2D)          (None, 15, 15, 256)  16640       max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2a_branch2c[0][0]             
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 15, 15, 256)  1024        res2a_branch1[0][0]              
__________________________________________________________________________________________________
add_32 (Add)                    (None, 15, 15, 256)  0           bn2a_branch2c[0][0]              
                                                                 bn2a_branch1[0][0]               
__________________________________________________________________________________________________
activation_99 (Activation)      (None, 15, 15, 256)  0           add_32[0][0]                     
__________________________________________________________________________________________________
res2b_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_99[0][0]              
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_100 (Activation)     (None, 15, 15, 64)   0           bn2b_branch2a[0][0]              
__________________________________________________________________________________________________
res2b_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_100[0][0]             
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_101 (Activation)     (None, 15, 15, 64)   0           bn2b_branch2b[0][0]              
__________________________________________________________________________________________________
res2b_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_101[0][0]             
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2b_branch2c[0][0]             
__________________________________________________________________________________________________
add_33 (Add)                    (None, 15, 15, 256)  0           bn2b_branch2c[0][0]              
                                                                 activation_99[0][0]              
__________________________________________________________________________________________________
activation_102 (Activation)     (None, 15, 15, 256)  0           add_33[0][0]                     
__________________________________________________________________________________________________
res2c_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_102[0][0]             
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_103 (Activation)     (None, 15, 15, 64)   0           bn2c_branch2a[0][0]              
__________________________________________________________________________________________________
res2c_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_103[0][0]             
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_104 (Activation)     (None, 15, 15, 64)   0           bn2c_branch2b[0][0]              
__________________________________________________________________________________________________
res2c_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_104[0][0]             
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2c_branch2c[0][0]             
__________________________________________________________________________________________________
add_34 (Add)                    (None, 15, 15, 256)  0           bn2c_branch2c[0][0]              
                                                                 activation_102[0][0]             
__________________________________________________________________________________________________
activation_105 (Activation)     (None, 15, 15, 256)  0           add_34[0][0]                     
__________________________________________________________________________________________________
res3a_branch2a (Conv2D)         (None, 15, 15, 128)  32896       activation_105[0][0]             
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 15, 15, 128)  512         res3a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_106 (Activation)     (None, 15, 15, 128)  0           bn3a_branch2a[0][0]              
__________________________________________________________________________________________________
res3a_branch2b (Conv2D)         (None, 15, 15, 128)  147584      activation_106[0][0]             
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 15, 15, 128)  512         res3a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_107 (Activation)     (None, 15, 15, 128)  0           bn3a_branch2b[0][0]              
__________________________________________________________________________________________________
res3a_branch2c (Conv2D)         (None, 15, 15, 512)  66048       activation_107[0][0]             
__________________________________________________________________________________________________
res3a_branch1 (Conv2D)          (None, 15, 15, 512)  131584      activation_105[0][0]             
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 15, 15, 512)  2048        res3a_branch2c[0][0]             
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 15, 15, 512)  2048        res3a_branch1[0][0]              
__________________________________________________________________________________________________
add_35 (Add)                    (None, 15, 15, 512)  0           bn3a_branch2c[0][0]              
                                                                 bn3a_branch1[0][0]               
__________________________________________________________________________________________________
activation_108 (Activation)     (None, 15, 15, 512)  0           add_35[0][0]                     
__________________________________________________________________________________________________
res3b_branch2a (Conv2D)         (None, 15, 15, 128)  65664       activation_108[0][0]             
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 15, 15, 128)  512         res3b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_109 (Activation)     (None, 15, 15, 128)  0           bn3b_branch2a[0][0]              
__________________________________________________________________________________________________
res3b_branch2b (Conv2D)         (None, 15, 15, 128)  147584      activation_109[0][0]             
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 15, 15, 128)  512         res3b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_110 (Activation)     (None, 15, 15, 128)  0           bn3b_branch2b[0][0]              
__________________________________________________________________________________________________
res3b_branch2c (Conv2D)         (None, 15, 15, 512)  66048       activation_110[0][0]             
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 15, 15, 512)  2048        res3b_branch2c[0][0]             
__________________________________________________________________________________________________
add_36 (Add)                    (None, 15, 15, 512)  0           bn3b_branch2c[0][0]              
                                                                 activation_108[0][0]             
__________________________________________________________________________________________________
activation_111 (Activation)     (None, 15, 15, 512)  0           add_36[0][0]                     
__________________________________________________________________________________________________
res3c_branch2a (Conv2D)         (None, 15, 15, 128)  65664       activation_111[0][0]             
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 15, 15, 128)  512         res3c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_112 (Activation)     (None, 15, 15, 128)  0           bn3c_branch2a[0][0]              
__________________________________________________________________________________________________
res3c_branch2b (Conv2D)         (None, 15, 15, 128)  147584      activation_112[0][0]             
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 15, 15, 128)  512         res3c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_113 (Activation)     (None, 15, 15, 128)  0           bn3c_branch2b[0][0]              
__________________________________________________________________________________________________
res3c_branch2c (Conv2D)         (None, 15, 15, 512)  66048       activation_113[0][0]             
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 15, 15, 512)  2048        res3c_branch2c[0][0]             
__________________________________________________________________________________________________
add_37 (Add)                    (None, 15, 15, 512)  0           bn3c_branch2c[0][0]              
                                                                 activation_111[0][0]             
__________________________________________________________________________________________________
activation_114 (Activation)     (None, 15, 15, 512)  0           add_37[0][0]                     
__________________________________________________________________________________________________
res3d_branch2a (Conv2D)         (None, 15, 15, 128)  65664       activation_114[0][0]             
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 15, 15, 128)  512         res3d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_115 (Activation)     (None, 15, 15, 128)  0           bn3d_branch2a[0][0]              
__________________________________________________________________________________________________
res3d_branch2b (Conv2D)         (None, 15, 15, 128)  147584      activation_115[0][0]             
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 15, 15, 128)  512         res3d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_116 (Activation)     (None, 15, 15, 128)  0           bn3d_branch2b[0][0]              
__________________________________________________________________________________________________
res3d_branch2c (Conv2D)         (None, 15, 15, 512)  66048       activation_116[0][0]             
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 15, 15, 512)  2048        res3d_branch2c[0][0]             
__________________________________________________________________________________________________
add_38 (Add)                    (None, 15, 15, 512)  0           bn3d_branch2c[0][0]              
                                                                 activation_114[0][0]             
__________________________________________________________________________________________________
activation_117 (Activation)     (None, 15, 15, 512)  0           add_38[0][0]                     
__________________________________________________________________________________________________
res4a_branch2a (Conv2D)         (None, 8, 8, 256)    131328      activation_117[0][0]             
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 8, 8, 256)    1024        res4a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_118 (Activation)     (None, 8, 8, 256)    0           bn4a_branch2a[0][0]              
__________________________________________________________________________________________________
res4a_branch2b (Conv2D)         (None, 8, 8, 256)    590080      activation_118[0][0]             
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 8, 8, 256)    1024        res4a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_119 (Activation)     (None, 8, 8, 256)    0           bn4a_branch2b[0][0]              
__________________________________________________________________________________________________
res4a_branch2c (Conv2D)         (None, 8, 8, 1024)   263168      activation_119[0][0]             
__________________________________________________________________________________________________
res4a_branch1 (Conv2D)          (None, 8, 8, 1024)   525312      activation_117[0][0]             
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 8, 8, 1024)   4096        res4a_branch2c[0][0]             
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 8, 8, 1024)   4096        res4a_branch1[0][0]              
__________________________________________________________________________________________________
add_39 (Add)                    (None, 8, 8, 1024)   0           bn4a_branch2c[0][0]              
                                                                 bn4a_branch1[0][0]               
__________________________________________________________________________________________________
activation_120 (Activation)     (None, 8, 8, 1024)   0           add_39[0][0]                     
__________________________________________________________________________________________________
res4b_branch2a (Conv2D)         (None, 8, 8, 256)    262400      activation_120[0][0]             
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 8, 8, 256)    1024        res4b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_121 (Activation)     (None, 8, 8, 256)    0           bn4b_branch2a[0][0]              
__________________________________________________________________________________________________
res4b_branch2b (Conv2D)         (None, 8, 8, 256)    590080      activation_121[0][0]             
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 8, 8, 256)    1024        res4b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_122 (Activation)     (None, 8, 8, 256)    0           bn4b_branch2b[0][0]              
__________________________________________________________________________________________________
res4b_branch2c (Conv2D)         (None, 8, 8, 1024)   263168      activation_122[0][0]             
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 8, 8, 1024)   4096        res4b_branch2c[0][0]             
__________________________________________________________________________________________________
add_40 (Add)                    (None, 8, 8, 1024)   0           bn4b_branch2c[0][0]              
                                                                 activation_120[0][0]             
__________________________________________________________________________________________________
activation_123 (Activation)     (None, 8, 8, 1024)   0           add_40[0][0]                     
__________________________________________________________________________________________________
res4c_branch2a (Conv2D)         (None, 8, 8, 256)    262400      activation_123[0][0]             
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 8, 8, 256)    1024        res4c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_124 (Activation)     (None, 8, 8, 256)    0           bn4c_branch2a[0][0]              
__________________________________________________________________________________________________
res4c_branch2b (Conv2D)         (None, 8, 8, 256)    590080      activation_124[0][0]             
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 8, 8, 256)    1024        res4c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_125 (Activation)     (None, 8, 8, 256)    0           bn4c_branch2b[0][0]              
__________________________________________________________________________________________________
res4c_branch2c (Conv2D)         (None, 8, 8, 1024)   263168      activation_125[0][0]             
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 8, 8, 1024)   4096        res4c_branch2c[0][0]             
__________________________________________________________________________________________________
add_41 (Add)                    (None, 8, 8, 1024)   0           bn4c_branch2c[0][0]              
                                                                 activation_123[0][0]             
__________________________________________________________________________________________________
activation_126 (Activation)     (None, 8, 8, 1024)   0           add_41[0][0]                     
__________________________________________________________________________________________________
res4d_branch2a (Conv2D)         (None, 8, 8, 256)    262400      activation_126[0][0]             
__________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizati (None, 8, 8, 256)    1024        res4d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_127 (Activation)     (None, 8, 8, 256)    0           bn4d_branch2a[0][0]              
__________________________________________________________________________________________________
res4d_branch2b (Conv2D)         (None, 8, 8, 256)    590080      activation_127[0][0]             
__________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizati (None, 8, 8, 256)    1024        res4d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_128 (Activation)     (None, 8, 8, 256)    0           bn4d_branch2b[0][0]              
__________________________________________________________________________________________________
res4d_branch2c (Conv2D)         (None, 8, 8, 1024)   263168      activation_128[0][0]             
__________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizati (None, 8, 8, 1024)   4096        res4d_branch2c[0][0]             
__________________________________________________________________________________________________
add_42 (Add)                    (None, 8, 8, 1024)   0           bn4d_branch2c[0][0]              
                                                                 activation_126[0][0]             
__________________________________________________________________________________________________
activation_129 (Activation)     (None, 8, 8, 1024)   0           add_42[0][0]                     
__________________________________________________________________________________________________
res4e_branch2a (Conv2D)         (None, 8, 8, 256)    262400      activation_129[0][0]             
__________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizati (None, 8, 8, 256)    1024        res4e_branch2a[0][0]             
__________________________________________________________________________________________________
activation_130 (Activation)     (None, 8, 8, 256)    0           bn4e_branch2a[0][0]              
__________________________________________________________________________________________________
res4e_branch2b (Conv2D)         (None, 8, 8, 256)    590080      activation_130[0][0]             
__________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizati (None, 8, 8, 256)    1024        res4e_branch2b[0][0]             
__________________________________________________________________________________________________
activation_131 (Activation)     (None, 8, 8, 256)    0           bn4e_branch2b[0][0]              
__________________________________________________________________________________________________
res4e_branch2c (Conv2D)         (None, 8, 8, 1024)   263168      activation_131[0][0]             
__________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizati (None, 8, 8, 1024)   4096        res4e_branch2c[0][0]             
__________________________________________________________________________________________________
add_43 (Add)                    (None, 8, 8, 1024)   0           bn4e_branch2c[0][0]              
                                                                 activation_129[0][0]             
__________________________________________________________________________________________________
activation_132 (Activation)     (None, 8, 8, 1024)   0           add_43[0][0]                     
__________________________________________________________________________________________________
res5a_branch2a (Conv2D)         (None, 4, 4, 512)    524800      activation_132[0][0]             
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 4, 4, 512)    2048        res5a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_133 (Activation)     (None, 4, 4, 512)    0           bn5a_branch2a[0][0]              
__________________________________________________________________________________________________
res5a_branch2b (Conv2D)         (None, 4, 4, 512)    2359808     activation_133[0][0]             
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 4, 4, 512)    2048        res5a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_134 (Activation)     (None, 4, 4, 512)    0           bn5a_branch2b[0][0]              
__________________________________________________________________________________________________
res5a_branch2c (Conv2D)         (None, 4, 4, 2048)   1050624     activation_134[0][0]             
__________________________________________________________________________________________________
res5a_branch1 (Conv2D)          (None, 4, 4, 2048)   2099200     activation_132[0][0]             
__________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizati (None, 4, 4, 2048)   8192        res5a_branch2c[0][0]             
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 4, 4, 2048)   8192        res5a_branch1[0][0]              
__________________________________________________________________________________________________
add_44 (Add)                    (None, 4, 4, 2048)   0           bn5a_branch2c[0][0]              
                                                                 bn5a_branch1[0][0]               
__________________________________________________________________________________________________
activation_135 (Activation)     (None, 4, 4, 2048)   0           add_44[0][0]                     
__________________________________________________________________________________________________
res5b_branch2a (Conv2D)         (None, 4, 4, 512)    1049088     activation_135[0][0]             
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 4, 4, 512)    2048        res5b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_136 (Activation)     (None, 4, 4, 512)    0           bn5b_branch2a[0][0]              
__________________________________________________________________________________________________
res5b_branch2b (Conv2D)         (None, 4, 4, 512)    2359808     activation_136[0][0]             
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 4, 4, 512)    2048        res5b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_137 (Activation)     (None, 4, 4, 512)    0           bn5b_branch2b[0][0]              
__________________________________________________________________________________________________
res5b_branch2c (Conv2D)         (None, 4, 4, 2048)   1050624     activation_137[0][0]             
__________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizati (None, 4, 4, 2048)   8192        res5b_branch2c[0][0]             
__________________________________________________________________________________________________
add_45 (Add)                    (None, 4, 4, 2048)   0           bn5b_branch2c[0][0]              
                                                                 activation_135[0][0]             
__________________________________________________________________________________________________
activation_138 (Activation)     (None, 4, 4, 2048)   0           add_45[0][0]                     
__________________________________________________________________________________________________
res5c_branch2a (Conv2D)         (None, 4, 4, 512)    1049088     activation_138[0][0]             
__________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizati (None, 4, 4, 512)    2048        res5c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_139 (Activation)     (None, 4, 4, 512)    0           bn5c_branch2a[0][0]              
__________________________________________________________________________________________________
res5c_branch2b (Conv2D)         (None, 4, 4, 512)    2359808     activation_139[0][0]             
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 4, 4, 512)    2048        res5c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_140 (Activation)     (None, 4, 4, 512)    0           bn5c_branch2b[0][0]              
__________________________________________________________________________________________________
res5c_branch2c (Conv2D)         (None, 4, 4, 2048)   1050624     activation_140[0][0]             
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 4, 4, 2048)   8192        res5c_branch2c[0][0]             
__________________________________________________________________________________________________
add_46 (Add)                    (None, 4, 4, 2048)   0           bn5c_branch2c[0][0]              
                                                                 activation_138[0][0]             
__________________________________________________________________________________________________
activation_141 (Activation)     (None, 4, 4, 2048)   0           add_46[0][0]                     
__________________________________________________________________________________________________
average_pooling2d_3 (AveragePoo (None, 2, 2, 2048)   0           activation_141[0][0]             
__________________________________________________________________________________________________
flatten_3 (Flatten)             (None, 8192)         0           average_pooling2d_3[0][0]        
__________________________________________________________________________________________________
fc6 (Dense)                     (None, 6)            49158       flatten_3[0][0]                  
==================================================================================================
Total params: 22,515,078
Trainable params: 22,465,030
Non-trainable params: 50,048
__________________________________________________________________________________________________
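The parameter counts in the summary above can be reproduced by hand. For a `Conv2D` layer the count is `(kh * kw * C_in + 1) * C_out` (weights plus one bias per output channel); for `BatchNormalization` it is `4 * C` (gamma and beta are trainable, while the moving mean and variance are not, which is where the 50,048 non-trainable parameters come from); for a `Dense` layer it is `(n_in + 1) * n_out`. A minimal sketch in pure Python (no Keras required; the layer shapes are taken directly from the table above):

```python
def conv2d_params(kh, kw, c_in, c_out):
    # kernel weights kh*kw*c_in per output channel, plus one bias each
    return kh * kw * c_in * c_out + c_out

def batchnorm_params(c):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * c

def dense_params(n_in, n_out):
    # weight matrix plus bias vector
    return (n_in + 1) * n_out

# Check against rows of the summary above:
print(conv2d_params(3, 3, 512, 512))   # res5a_branch2b -> 2359808
print(conv2d_params(1, 1, 512, 2048))  # res5c_branch2c -> 1050624
print(batchnorm_params(2048))          # bn5c_branch2c  -> 8192
print(dense_params(8192, 6))           # fc6            -> 49158
```

Summing `2 * C` over every BatchNormalization layer in the network gives exactly the 50,048 non-trainable parameters reported at the bottom of the summary.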

References:

  1. Deep Residual Learning for Image Recognition (He et al., 2015)
  2. Andrew Ng's Deep Learning course videos: Convolutional Neural Networks
  3. Convolutional Neural Networks, Week 3 assignment: Residual Networks - v1
Originally published: 2018-12-21.