I want to build an end-to-end trainable model that includes the following parts:
(It is more or less like Figure 2 in this paper: https://arxiv.org/pdf/1611.07890.pdf)
My question now: after the reshape, how do I feed the values of the feature matrix to the LSTM with Keras or TensorFlow?
This is my code so far using the VGG16 net (also the link to the Keras issue):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Reshape, LSTM

# VGG16
# block 1
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
# block 2
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
# block 3
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
# block 4
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
# block 5
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
# block 6
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dense(4096, activation='relu'))
# reshape the 4096-dim feature vector into a 64 x 64 matrix (4096 = 64 * 64)
model.add(Reshape((64, 64)))
# How can I feed each row of this matrix to the LSTM?
# This is my first solution, but it doesn't look correct:
# model.add(LSTM(256, input_shape=(64, 1)))  # 256 hidden units, sequence length = 64, feature dim = 1
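For reference, a minimal sketch of the direct continuation (an assumption about the intended reading: each of the 64 rows of the reshaped matrix is one timestep with 64 features). Inside a Sequential model the LSTM infers its input shape from the preceding Reshape layer, so no input_shape argument is needed:

# continuing the Sequential model above: Reshape((64, 64)) is read by
# the LSTM as 64 timesteps of 64 features each
model.add(LSTM(256))  # 256 hidden units; input shape inferred as (64, 64)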
Consider building your CNN model with Conv2D and MaxPooling2D layers up until the Flatten layer, because the vectorized output of the Flatten layer is what will feed the LSTM part of the structure.
So, build a CNN model like this:
model_cnn = Sequential()
model_cnn.add(Conv2D...)
model_cnn.add(MaxPooling2D...)
...
model_cnn.add(Flatten())

Now, here is the interesting part: the current version of Keras has some incompatibility with certain TensorFlow structures that will not let you stack the whole thing in a single Sequential object.
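For concreteness, the model_cnn sketch above might be filled in like this (the layer sizes and the 64 x 64 RGB frame size are illustrative assumptions, not part of the answer):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten

model_cnn = Sequential()
# per-frame input: 64 x 64 RGB images (an assumed size; use your own)
model_cnn.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(64, 64, 3)))
model_cnn.add(MaxPooling2D((2, 2)))
model_cnn.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model_cnn.add(MaxPooling2D((2, 2)))
model_cnn.add(Flatten())  # -> a 16 * 16 * 64 = 16384-dim vector per frame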
So, it is time to use the Keras Model object to finish the neural network with a trick:
input_lay = Input(shape=(None, ?, ?, ?))  # dimensions of your data
time_distribute = TimeDistributed(Lambda(lambda x: model_cnn(x)))(input_lay) # keras.layers.Lambda is essential to make our trick work :)
lstm_lay = LSTM(?)(time_distribute)
output_lay = Dense(?, activation='?')(lstm_lay)

Finally, it is time to put our two separate models together:
model = Model(inputs=[input_lay], outputs=[output_lay])
model.compile(...)

OBS: note that you can replace my model_cnn example with VGG without the top layers, since the vectorized output of VGG's Flatten layer will become the input to the LSTM part of the model.
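Putting it all together, here is a runnable end-to-end sketch under assumed dimensions (64 x 64 RGB frames, variable-length clips, 10 output classes, 256 LSTM units); it reuses the model_cnn built above, so swap in your own sizes:

from keras.models import Model
from keras.layers import Input, TimeDistributed, Lambda, LSTM, Dense

# clips of variable length; each frame matches model_cnn's input (64 x 64 x 3 here)
input_lay = Input(shape=(None, 64, 64, 3))
# apply the CNN to every timestep; the Lambda wrapper is the trick from above
time_distribute = TimeDistributed(Lambda(lambda x: model_cnn(x)))(input_lay)
lstm_lay = LSTM(256)(time_distribute)                    # 256 units, an assumed size
output_lay = Dense(10, activation='softmax')(lstm_lay)   # 10 classes, an assumed size

model = Model(inputs=[input_lay], outputs=[output_lay])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

As the OBS suggests, model_cnn can also be swapped for VGG16 without its top layers, e.g. keras.applications.vgg16.VGG16(include_top=False, input_shape=(64, 64, 3)) followed by a Flatten layer. In recent Keras versions, TimeDistributed(model_cnn) also works directly, without the Lambda wrapper.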
https://stackoverflow.com/questions/43680870