I am trying to extract features from an image run through my model, based on this article by JJ Allaire.
Basically, what it does is take a trained model, pick its top-K layers, and identify which regions of the image activate each of those layers.
My model can be downloaded here (134 MB), and the test cat image can be downloaded here.
The model looks like this:
> summary(model)
___________________________________________________________________
Layer (type)                       Output Shape                  Param #   
===================================================================
vgg16 (Model)                      (None, 4, 4, 512)             14714688  
___________________________________________________________________
flatten_1 (Flatten)                (None, 8192)                  0         
___________________________________________________________________
dense_1 (Dense)                    (None, 256)                   2097408   
___________________________________________________________________
dense_2 (Dense)                    (None, 1)                     257       
===================================================================
Total params: 16,812,353
Trainable params: 16,552,193
Non-trainable params: 260,160
___________________________________________________________________
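For context, here is a minimal sketch of how a model with this summary could have been assembled with the R keras package. This is an assumption on my part; the actual training script, any layer freezing, and the training loop are not shown in the question:

library(keras)

# VGG16 convolutional base; with 150x150 inputs it ends in a (4, 4, 512) feature map
conv_base <- application_vgg16(
  weights = "imagenet",
  include_top = FALSE,
  input_shape = c(150, 150, 3)
)

# Stack a small classifier on top: 4 * 4 * 512 = 8192 flattened features,
# then a 256-unit dense layer and a single sigmoid output (cat vs. dog)
model <- keras_model_sequential() %>%
  conv_base %>%
  layer_flatten() %>%
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")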
Here is my full code:
library(keras)
model_file <- "data/kaggle_cats_dogs_small/model//model.hdf5"
model <- load_model_hdf5(model_file)
summary(model)
img_path <- "data/kaggle_cats_dogs_small/test_generic/cat.5009.jpg"
# We preprocess the image into a 4D tensor
img <- image_load(img_path, target_size = c(150, 150))
img_tensor <- image_to_array(img)
img_tensor <- array_reshape(img_tensor, c(1, 150, 150, 3))
# Remember that the model was trained on inputs
# that were preprocessed in the following way:
img_tensor <- img_tensor / 255
dim(img_tensor)
# Display picture ---------------------------------------------------------
plot(as.raster(img_tensor[1,,,]))
# Extracting layers and activation ----------------------------------------
# Extracts the outputs of the top 8 layers:
layer_outputs <- lapply(model$layers[1:8], function(layer) layer$output)
# Creates a model that will return these outputs, given the model input:
activation_model <- keras_model(inputs = model$input, outputs = layer_outputs)
It breaks on the last two lines:
> layer_outputs <- lapply(model$layers[1:8], function(layer) layer$output)
Error in py_get_attr_impl(x, name, silent) : 
  AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.
> # Creates a model that will return these outputs, given the model input:
> activation_model <- keras_model(inputs = model$input, outputs = layer_outputs)
Error in py_call_impl(callable, dots$args, dots$keywords) : 
  RuntimeError: Graph disconnected: cannot obtain value for tensor Tensor("input_1_4:0", shape=(?, 150, 150, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
What is the correct way to do this?
Posted on 2018-02-05 17:47:41
When this message (multiple inbound nodes) appears, it means the model is being used with an input other than its original one. (So the VGG model effectively has several inputs, even though you only use one of the possible paths: it has VGG's original input plus the new input created when the stacked model was built.)
To do what you want, you have to do it before creating your stacked model.
Some pseudocode (sorry, I'm not familiar with R notation):
VGGModel <- functionToCreateVGG
layer_outputs <- lapply(VGGModel$layers[1:8], function(layer) layer$output)
activation_model <- keras_model(inputs = VGGModel$input, outputs = layer_outputs)
If you do this before adding the top layers, the VGG model does not yet have multiple inbound nodes.
Now you can stack the top layers onto the VGG model just as before. A runnable sketch of this order of operations follows below.
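Here is a minimal runnable sketch of that order of operations, using the VGG16 base from the keras package. It assumes the layer indices and the preprocessed img_tensor from the question; the trained weights of the original stacked model are not restored here, so this only illustrates the construction order:

library(keras)

# 1. Create the VGG base first, while it still has a single inbound node.
conv_base <- application_vgg16(
  weights = "imagenet",
  include_top = FALSE,
  input_shape = c(150, 150, 3)
)

# 2. Build the activation model from the bare VGG base:
#    at this point layer$output and conv_base$input are unambiguous.
#    (Indices 1:8 mirror the question's top-8 selection; layer 1 is the VGG input layer.)
layer_outputs <- lapply(conv_base$layers[1:8], function(layer) layer$output)
activation_model <- keras_model(inputs = conv_base$input, outputs = layer_outputs)

# 3. Only now stack the top layers onto the VGG base, as in the original model.
full_model <- keras_model_sequential() %>%
  conv_base %>%
  layer_flatten() %>%
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

# The activation model can then be applied to the preprocessed cat image;
# predict() returns one array of activations per selected layer.
activations <- activation_model %>% predict(img_tensor)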
Another option is to create a separate VGG model used only for the activation_model (in case you did not train the VGG weights yourself).
https://stackoverflow.com/questions/48634151