Loading your own trained model in Keras and removing the fully connected layers

It is actually quite simple:

from keras.models import load_model

base_model = load_model('model_resenet.h5')  # load the saved model
print(base_model.summary())  # print the network structure

This is the summary printed for my network; it is essentially its structure diagram.

__________________________________________________________________________________________________
Layer (type)          Output Shape     Param #   Connected to
==================================================================================================
input_1 (InputLayer)      (None, 227, 227, 1) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)        (None, 225, 225, 32) 320     input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 225, 225, 32) 128     conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)    (None, 225, 225, 32) 0      batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)        (None, 225, 225, 32) 9248    activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 225, 225, 32) 128     conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation)    (None, 225, 225, 32) 0      batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)        (None, 225, 225, 32) 9248    activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 225, 225, 32) 128     conv2d_3[0][0]
__________________________________________________________________________________________________
merge_1 (Merge)         (None, 225, 225, 32) 0      batch_normalization_3[0][0]
                                 activation_1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation)    (None, 225, 225, 32) 0      merge_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)        (None, 225, 225, 32) 9248    activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 225, 225, 32) 128     conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation)    (None, 225, 225, 32) 0      batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)        (None, 225, 225, 32) 9248    activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 225, 225, 32) 128     conv2d_5[0][0]
__________________________________________________________________________________________________
merge_2 (Merge)         (None, 225, 225, 32) 0      batch_normalization_5[0][0]
                                 activation_3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation)    (None, 225, 225, 32) 0      merge_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 112, 112, 32) 0      activation_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)        (None, 110, 110, 64) 18496    max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 110, 110, 64) 256     conv2d_6[0][0]
__________________________________________________________________________________________________
activation_6 (Activation)    (None, 110, 110, 64) 0      batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)        (None, 110, 110, 64) 36928    activation_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 110, 110, 64) 256     conv2d_7[0][0]
__________________________________________________________________________________________________
activation_7 (Activation)    (None, 110, 110, 64) 0      batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)        (None, 110, 110, 64) 36928    activation_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 110, 110, 64) 256     conv2d_8[0][0]
__________________________________________________________________________________________________
merge_3 (Merge)         (None, 110, 110, 64) 0      batch_normalization_8[0][0]
                                 activation_6[0][0]
__________________________________________________________________________________________________
activation_8 (Activation)    (None, 110, 110, 64) 0      merge_3[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)        (None, 110, 110, 64) 36928    activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 110, 110, 64) 256     conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation)    (None, 110, 110, 64) 0      batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)       (None, 110, 110, 64) 36928    activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 110, 110, 64) 256     conv2d_10[0][0]
__________________________________________________________________________________________________
merge_4 (Merge)         (None, 110, 110, 64) 0      batch_normalization_10[0][0]
                                 activation_8[0][0]
__________________________________________________________________________________________________
activation_10 (Activation)   (None, 110, 110, 64) 0      merge_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 55, 55, 64)  0      activation_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)       (None, 53, 53, 64)  36928    max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 53, 53, 64)  256     conv2d_11[0][0]
__________________________________________________________________________________________________
activation_11 (Activation)   (None, 53, 53, 64)  0      batch_normalization_11[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 26, 26, 64)  0      activation_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)       (None, 26, 26, 64)  36928    max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 26, 26, 64)  256     conv2d_12[0][0]
__________________________________________________________________________________________________
activation_12 (Activation)   (None, 26, 26, 64)  0      batch_normalization_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)       (None, 26, 26, 64)  36928    activation_12[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 26, 26, 64)  256     conv2d_13[0][0]
__________________________________________________________________________________________________
merge_5 (Merge)         (None, 26, 26, 64)  0      batch_normalization_13[0][0]
                                 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
activation_13 (Activation)   (None, 26, 26, 64)  0      merge_5[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)       (None, 26, 26, 64)  36928    activation_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 26, 26, 64)  256     conv2d_14[0][0]
__________________________________________________________________________________________________
activation_14 (Activation)   (None, 26, 26, 64)  0      batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)       (None, 26, 26, 64)  36928    activation_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 26, 26, 64)  256     conv2d_15[0][0]
__________________________________________________________________________________________________
merge_6 (Merge)         (None, 26, 26, 64)  0      batch_normalization_15[0][0]
                                 activation_13[0][0]
__________________________________________________________________________________________________
activation_15 (Activation)   (None, 26, 26, 64)  0      merge_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 13, 13, 64)  0      activation_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)       (None, 11, 11, 32)  18464    max_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 11, 11, 32)  128     conv2d_16[0][0]
__________________________________________________________________________________________________
activation_16 (Activation)   (None, 11, 11, 32)  0      batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)       (None, 11, 11, 32)  9248    activation_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 11, 11, 32)  128     conv2d_17[0][0]
__________________________________________________________________________________________________
activation_17 (Activation)   (None, 11, 11, 32)  0      batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)       (None, 11, 11, 32)  9248    activation_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 11, 11, 32)  128     conv2d_18[0][0]
__________________________________________________________________________________________________
merge_7 (Merge)         (None, 11, 11, 32)  0      batch_normalization_18[0][0]
                                 activation_16[0][0]
__________________________________________________________________________________________________
activation_18 (Activation)   (None, 11, 11, 32)  0      merge_7[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)       (None, 11, 11, 32)  9248    activation_18[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 11, 11, 32)  128     conv2d_19[0][0]
__________________________________________________________________________________________________
activation_19 (Activation)   (None, 11, 11, 32)  0      batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D)       (None, 11, 11, 32)  9248    activation_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 11, 11, 32)  128     conv2d_20[0][0]
__________________________________________________________________________________________________
merge_8 (Merge)         (None, 11, 11, 32)  0      batch_normalization_20[0][0]
                                 activation_18[0][0]
__________________________________________________________________________________________________
activation_20 (Activation)   (None, 11, 11, 32)  0      merge_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 5, 5, 32)   0      activation_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D)       (None, 3, 3, 64)   18496    max_pooling2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 3, 3, 64)   256     conv2d_21[0][0]
__________________________________________________________________________________________________
activation_21 (Activation)   (None, 3, 3, 64)   0      batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D)       (None, 3, 3, 64)   36928    activation_21[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 3, 3, 64)   256     conv2d_22[0][0]
__________________________________________________________________________________________________
activation_22 (Activation)   (None, 3, 3, 64)   0      batch_normalization_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D)       (None, 3, 3, 64)   36928    activation_22[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 3, 3, 64)   256     conv2d_23[0][0]
__________________________________________________________________________________________________
merge_9 (Merge)         (None, 3, 3, 64)   0      batch_normalization_23[0][0]
                                 activation_21[0][0]
__________________________________________________________________________________________________
activation_23 (Activation)   (None, 3, 3, 64)   0      merge_9[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D)       (None, 3, 3, 64)   36928    activation_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 3, 3, 64)   256     conv2d_24[0][0]
__________________________________________________________________________________________________
activation_24 (Activation)   (None, 3, 3, 64)   0      batch_normalization_24[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D)       (None, 3, 3, 64)   36928    activation_24[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 3, 3, 64)   256     conv2d_25[0][0]
__________________________________________________________________________________________________
merge_10 (Merge)        (None, 3, 3, 64)   0      batch_normalization_25[0][0]
                                 activation_23[0][0]
__________________________________________________________________________________________________
activation_25 (Activation)   (None, 3, 3, 64)   0      merge_10[0][0]
__________________________________________________________________________________________________
max_pooling2d_6 (MaxPooling2D) (None, 1, 1, 64)   0      activation_25[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten)       (None, 64)      0      max_pooling2d_6[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)         (None, 256)     16640    flatten_1[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)       (None, 256)     0      dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)         (None, 2)      514     dropout_1[0][0]
==================================================================================================
Total params: 632,098
Trainable params: 629,538
Non-trainable params: 2,560
__________________________________________________________________________________________________
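If you only need the layer names (for example, to pick the truncation point used below) rather than the full summary, a short loop over base_model.layers works as well. This is a minimal sketch under the same assumptions as above (the hypothetical file name 'model_resenet.h5'):

from keras.models import load_model

base_model = load_model('model_resenet.h5')
# print the index, name and output shape of every layer
for i, layer in enumerate(base_model.layers):
    print(i, layer.name, layer.output_shape)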

Removing the fully connected layers from the model

from keras.models import load_model, Model

base_model = load_model('model_resenet.h5')
# 'max_pooling2d_6' is the layer just before the fully connected layers in the
# network above. You can of course pick any other layer by substituting its name
# here; this simply truncates the network at that layer. Printing the summary
# beforehand is what makes it easy to look up each layer's name.
resnet_model = Model(inputs=base_model.input, outputs=base_model.get_layer('max_pooling2d_6').output)
print(resnet_model.summary())

The newly printed network structure:

__________________________________________________________________________________________________
Layer (type)          Output Shape     Param #   Connected to
==================================================================================================
input_1 (InputLayer)      (None, 227, 227, 1) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)        (None, 225, 225, 32) 320     input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 225, 225, 32) 128     conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)    (None, 225, 225, 32) 0      batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)        (None, 225, 225, 32) 9248    activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 225, 225, 32) 128     conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation)    (None, 225, 225, 32) 0      batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)        (None, 225, 225, 32) 9248    activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 225, 225, 32) 128     conv2d_3[0][0]
__________________________________________________________________________________________________
merge_1 (Merge)         (None, 225, 225, 32) 0      batch_normalization_3[0][0]
                                 activation_1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation)    (None, 225, 225, 32) 0      merge_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)        (None, 225, 225, 32) 9248    activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 225, 225, 32) 128     conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation)    (None, 225, 225, 32) 0      batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)        (None, 225, 225, 32) 9248    activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 225, 225, 32) 128     conv2d_5[0][0]
__________________________________________________________________________________________________
merge_2 (Merge)         (None, 225, 225, 32) 0      batch_normalization_5[0][0]
                                 activation_3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation)    (None, 225, 225, 32) 0      merge_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 112, 112, 32) 0      activation_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)        (None, 110, 110, 64) 18496    max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 110, 110, 64) 256     conv2d_6[0][0]
__________________________________________________________________________________________________
activation_6 (Activation)    (None, 110, 110, 64) 0      batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)        (None, 110, 110, 64) 36928    activation_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 110, 110, 64) 256     conv2d_7[0][0]
__________________________________________________________________________________________________
activation_7 (Activation)    (None, 110, 110, 64) 0      batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)        (None, 110, 110, 64) 36928    activation_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 110, 110, 64) 256     conv2d_8[0][0]
__________________________________________________________________________________________________
merge_3 (Merge)         (None, 110, 110, 64) 0      batch_normalization_8[0][0]
                                 activation_6[0][0]
__________________________________________________________________________________________________
activation_8 (Activation)    (None, 110, 110, 64) 0      merge_3[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)        (None, 110, 110, 64) 36928    activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 110, 110, 64) 256     conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation)    (None, 110, 110, 64) 0      batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)       (None, 110, 110, 64) 36928    activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 110, 110, 64) 256     conv2d_10[0][0]
__________________________________________________________________________________________________
merge_4 (Merge)         (None, 110, 110, 64) 0      batch_normalization_10[0][0]
                                 activation_8[0][0]
__________________________________________________________________________________________________
activation_10 (Activation)   (None, 110, 110, 64) 0      merge_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 55, 55, 64)  0      activation_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)       (None, 53, 53, 64)  36928    max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 53, 53, 64)  256     conv2d_11[0][0]
__________________________________________________________________________________________________
activation_11 (Activation)   (None, 53, 53, 64)  0      batch_normalization_11[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 26, 26, 64)  0      activation_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)       (None, 26, 26, 64)  36928    max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 26, 26, 64)  256     conv2d_12[0][0]
__________________________________________________________________________________________________
activation_12 (Activation)   (None, 26, 26, 64)  0      batch_normalization_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)       (None, 26, 26, 64)  36928    activation_12[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 26, 26, 64)  256     conv2d_13[0][0]
__________________________________________________________________________________________________
merge_5 (Merge)         (None, 26, 26, 64)  0      batch_normalization_13[0][0]
                                 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
activation_13 (Activation)   (None, 26, 26, 64)  0      merge_5[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)       (None, 26, 26, 64)  36928    activation_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 26, 26, 64)  256     conv2d_14[0][0]
__________________________________________________________________________________________________
activation_14 (Activation)   (None, 26, 26, 64)  0      batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)       (None, 26, 26, 64)  36928    activation_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 26, 26, 64)  256     conv2d_15[0][0]
__________________________________________________________________________________________________
merge_6 (Merge)         (None, 26, 26, 64)  0      batch_normalization_15[0][0]
                                 activation_13[0][0]
__________________________________________________________________________________________________
activation_15 (Activation)   (None, 26, 26, 64)  0      merge_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 13, 13, 64)  0      activation_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)       (None, 11, 11, 32)  18464    max_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 11, 11, 32)  128     conv2d_16[0][0]
__________________________________________________________________________________________________
activation_16 (Activation)   (None, 11, 11, 32)  0      batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)       (None, 11, 11, 32)  9248    activation_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 11, 11, 32)  128     conv2d_17[0][0]
__________________________________________________________________________________________________
activation_17 (Activation)   (None, 11, 11, 32)  0      batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)       (None, 11, 11, 32)  9248    activation_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 11, 11, 32)  128     conv2d_18[0][0]
__________________________________________________________________________________________________
merge_7 (Merge)         (None, 11, 11, 32)  0      batch_normalization_18[0][0]
                                 activation_16[0][0]
__________________________________________________________________________________________________
activation_18 (Activation)   (None, 11, 11, 32)  0      merge_7[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)       (None, 11, 11, 32)  9248    activation_18[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 11, 11, 32)  128     conv2d_19[0][0]
__________________________________________________________________________________________________
activation_19 (Activation)   (None, 11, 11, 32)  0      batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D)       (None, 11, 11, 32)  9248    activation_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 11, 11, 32)  128     conv2d_20[0][0]
__________________________________________________________________________________________________
merge_8 (Merge)         (None, 11, 11, 32)  0      batch_normalization_20[0][0]
                                 activation_18[0][0]
__________________________________________________________________________________________________
activation_20 (Activation)   (None, 11, 11, 32)  0      merge_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 5, 5, 32)   0      activation_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D)       (None, 3, 3, 64)   18496    max_pooling2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 3, 3, 64)   256     conv2d_21[0][0]
__________________________________________________________________________________________________
activation_21 (Activation)   (None, 3, 3, 64)   0      batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D)       (None, 3, 3, 64)   36928    activation_21[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 3, 3, 64)   256     conv2d_22[0][0]
__________________________________________________________________________________________________
activation_22 (Activation)   (None, 3, 3, 64)   0      batch_normalization_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D)       (None, 3, 3, 64)   36928    activation_22[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 3, 3, 64)   256     conv2d_23[0][0]
__________________________________________________________________________________________________
merge_9 (Merge)         (None, 3, 3, 64)   0      batch_normalization_23[0][0]
                                 activation_21[0][0]
__________________________________________________________________________________________________
activation_23 (Activation)   (None, 3, 3, 64)   0      merge_9[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D)       (None, 3, 3, 64)   36928    activation_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 3, 3, 64)   256     conv2d_24[0][0]
__________________________________________________________________________________________________
activation_24 (Activation)   (None, 3, 3, 64)   0      batch_normalization_24[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D)       (None, 3, 3, 64)   36928    activation_24[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 3, 3, 64)   256     conv2d_25[0][0]
__________________________________________________________________________________________________
merge_10 (Merge)        (None, 3, 3, 64)   0      batch_normalization_25[0][0]
                                 activation_23[0][0]
__________________________________________________________________________________________________
activation_25 (Activation)   (None, 3, 3, 64)   0      merge_10[0][0]
__________________________________________________________________________________________________
max_pooling2d_6 (MaxPooling2D) (None, 1, 1, 64)   0      activation_25[0][0]
==================================================================================================
Total params: 614,944
Trainable params: 612,384
Non-trainable params: 2,560
__________________________________________________________________________________________________
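With the fully connected layers removed, resnet_model can be used directly as a feature extractor. The snippet below is only a hedged sketch: it assumes the (227, 227, 1) input shape shown in the summary and feeds a random array in place of real images, so substitute your own preprocessing. You could also stack new Dense layers on resnet_model.output if you want to fine-tune a different classification head.

import numpy as np

# dummy batch of 4 grayscale 227x227 images matching the model's input shape
dummy_images = np.random.rand(4, 227, 227, 1).astype('float32')

features = resnet_model.predict(dummy_images)
print(features.shape)  # expected: (4, 1, 1, 64), the output of max_pooling2d_6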

That is everything for this article on loading your own trained model in Keras and removing the fully connected layers; I hope it serves as a useful reference.
