RuntimeError: data array has wrong number of channels #70

Open
Johnson-yue opened this issue Dec 7, 2017 · 2 comments
Johnson-yue commented Dec 7, 2017

Hi, thank you for sharing your work. When I tried to reproduce it, I ran into a problem that has confused me for a long time. Please help me.

Error log:
Extracting X relu1_1 From Y conv1_2 stride 1
Process Process-3:
Traceback (most recent call last):
File "/home/yue/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/yue/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/worker.py", line 21, in job
ret = target(**kwargs)
File "train.py", line 75, in solve
WPQ, new_pt = net.R3()
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 1356, in R3
X = getX(conv)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 1328, in getX
x = self.extract_XY(self.bottom_names[name][0], name)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 623, in extract_XY
self.net.set_input_arrays(self._points_dict[(batch, 0)], self._points_dict[(batch, 1)])
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning/caffe/python/caffe/pycaffe.py", line 269, in _Net_set_input_arrays
return self._set_input_arrays(data, labels)
RuntimeError: data array has wrong number of channels
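The traceback ends inside caffe's `set_input_arrays`, which validates each input array against the shape of the net's input blob before copying it in. A minimal numpy-only sketch of that validation (the exact comparison in caffe's C++ bindings is an assumption here, and the concrete shapes are hypothetical examples):

```python
import numpy as np

def check_array_against_blob(arr, blob_shape):
    """Mimic the channel check caffe performs on an N x C x H x W input array.

    blob_shape is the (batch, channels, height, width) the net's input blob
    (e.g. a MemoryData layer) was configured with.
    """
    if arr.ndim != 4:
        raise RuntimeError("data array must have 4 dimensions")
    if arr.shape[1] != blob_shape[1]:
        raise RuntimeError("data array has wrong number of channels")

# What a 3-channel 224x224 MemoryData layer would expect (hypothetical):
blob_shape = (10, 3, 224, 224)

check_array_against_blob(np.zeros((10, 3, 224, 224)), blob_shape)  # passes
try:
    # A 1-channel array triggers the same message as in the traceback above.
    check_array_against_blob(np.zeros((10, 1, 224, 224)), blob_shape)
except RuntimeError as e:
    print(e)  # data array has wrong number of channels
```

If this is indeed the failing check, the frozen feature arrays and the generated MemoryData layer disagree on the channel dimension.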

What steps reproduce the bug?

I made a few small changes to the code:

1. deploy.prototxt: training on the ImageNet dataset is too hard for me, so I trained on a small classification dataset and obtained a binary-classification caffemodel. The default model prototxt is temp/vgg.prototxt; I changed the source file. The default weights file is temp/vgg.caffemodel; I used my binary-classification caffemodel instead.

2. Removed the accuracy@5 check: replaced "Accuracy@5" with "Accuracy@1" in train.py, line 71.

3. Then I ran train.py -caffe 0 -action c3.

4. I got the RuntimeError.
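Given the steps above, one way to narrow the error down is to compare the dimensions declared in the generated MemoryData prototxt (the log below mentions temp/mem_bn_vgg_finetune_data.prototxt) with the arrays frozen into temp/frozen500.pickle. A rough sketch with regex parsing for illustration; this is not the repo's own code, and the sample prototxt text is a stand-in for the real file:

```python
import re

# Stand-in for the contents of temp/mem_bn_vgg_finetune_data.prototxt;
# in practice, read the real file with open(path).read().
sample = """
layer {
  name: "data"
  type: "MemoryData"
  memory_data_param {
    batch_size: 10
    channels: 3
    height: 224
    width: 224
  }
}
"""

def memory_data_dims(prototxt_text):
    """Pull the MemoryData dimensions out of a prototxt's text."""
    dims = {}
    for key in ("batch_size", "channels", "height", "width"):
        m = re.search(r"%s:\s*(\d+)" % key, prototxt_text)
        if m:
            dims[key] = int(m.group(1))
    return dims

print(memory_data_dims(sample))
# {'batch_size': 10, 'channels': 3, 'height': 224, 'width': 224}
```

If the `channels` value here differs from the channel axis of the frozen arrays, that would explain the RuntimeError.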

What hardware and operating system/distribution are you running?

Operating system: Ubuntu 16.04
CUDA version: 8.0
CUDNN version: 6.0
OpenCV version: 3
BLAS: OpenBLAS
Python version:3.5

If the bug is a crash, provide the backtrace.

My model.prototxt is :
```
name: "VGG_ILSVRC_16_layers"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"

data_param{
source: "temp/dogscats/val_lmdb"
batch_size: 1
backend: LMDB
}

transform_param {
crop_size: 224
scale: 0.00390625
#mean_file: "temp/dogscats/mean.binaryproto"
mean_value: 104.0
mean_value: 117.0
mean_value: 123.0
}
include {
phase: TEST
}
}

layer {
name: "data"
type: "Data"
top: "data"
top: "label"

data_param{
source: "temp/dogscats/train_lmdb"
batch_size: 16
backend: LMDB
}

transform_param {
crop_size: 224
scale: 0.00390625
mean_value: 104.0
mean_value: 117.0
mean_value: 123.0
mirror: true
}
include {
phase: TRAIN
}
}
layer {
bottom: "data"
top: "conv1_1"
name: "conv1_1"
type: "Convolution"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv1_1"
top: "conv1_1"
name: "relu1_1"
type: "ReLU"
}
layer {
bottom: "conv1_1"
top: "conv1_2"
name: "conv1_2"
type: "Convolution"
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv1_2"
top: "conv1_2"
name: "relu1_2"
type: "ReLU"
}
layer {
bottom: "conv1_2"
top: "pool1"
name: "pool1"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool1"
top: "conv2_1"
name: "conv2_1"
type: "Convolution"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv2_1"
top: "conv2_1"
name: "relu2_1"
type: "ReLU"
}
layer {
bottom: "conv2_1"
top: "conv2_2"
name: "conv2_2"
type: "Convolution"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv2_2"
top: "conv2_2"
name: "relu2_2"
type: "ReLU"
}
layer {
bottom: "conv2_2"
top: "pool2"
name: "pool2"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool2"
top: "conv3_1"
name: "conv3_1"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv3_1"
top: "conv3_1"
name: "relu3_1"
type: "ReLU"
}
layer {
bottom: "conv3_1"
top: "conv3_2"
name: "conv3_2"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv3_2"
top: "conv3_2"
name: "relu3_2"
type: "ReLU"
}
layer {
bottom: "conv3_2"
top: "conv3_3"
name: "conv3_3"
type: "Convolution"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv3_3"
top: "conv3_3"
name: "relu3_3"
type: "ReLU"
}
layer {
bottom: "conv3_3"
top: "pool3"
name: "pool3"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool3"
top: "conv4_1"
name: "conv4_1"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv4_1"
top: "conv4_1"
name: "relu4_1"
type: "ReLU"
}
layer {
bottom: "conv4_1"
top: "conv4_2"
name: "conv4_2"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv4_2"
top: "conv4_2"
name: "relu4_2"
type: "ReLU"
}
layer {
bottom: "conv4_2"
top: "conv4_3"
name: "conv4_3"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv4_3"
top: "conv4_3"
name: "relu4_3"
type: "ReLU"
}
layer {
bottom: "conv4_3"
top: "pool4"
name: "pool4"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool4"
top: "conv5_1"
name: "conv5_1"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv5_1"
top: "conv5_1"
name: "relu5_1"
type: "ReLU"
}
layer {
bottom: "conv5_1"
top: "conv5_2"
name: "conv5_2"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv5_2"
top: "conv5_2"
name: "relu5_2"
type: "ReLU"
}
layer {
bottom: "conv5_2"
top: "conv5_3"
name: "conv5_3"
type: "Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom: "conv5_3"
top: "conv5_3"
name: "relu5_3"
type: "ReLU"
}
layer {
bottom: "conv5_3"
top: "pool5"
name: "pool5"
type: "Pooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom: "pool5"
top: "fc6"
name: "fc6"
type: "InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom: "fc6"
top: "fc6"
name: "relu6"
type: "ReLU"
}
layer {
bottom: "fc6"
top: "fc6"
name: "drop6"
type: "Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom: "fc6"
top: "fc7"
name: "fc7"
type: "InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom: "fc7"
top: "fc7"
name: "relu7"
type: "ReLU"
}
layer {
bottom: "fc7"
top: "fc7"
name: "drop7"
type: "Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom: "fc7"
top: "fc8_new"
name: "fc8_new"
type: "InnerProduct"
inner_product_param {
num_output: 2
}
}
layer {
bottom: "fc8_new"
bottom: "label"
top: "loss"
name: "loss"
type: "SoftmaxWithLoss"
}
layer {
bottom: "fc8_new"
bottom: "label"
top: "accuracy@1"
name: "accuracy/top1"
type: "Accuracy"
accuracy_param {
top_k: 1
}
}
```

My fine-tuned caffemodel is here: Baidu yun

Full Log :

```
yue@yuePC:~/Deep_Learning/SpeedUp/channel-pruning-master$ $py3/python train.py
no lighting pack
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537077672

stage0 freeze

temp/bn_vgg_finetune_data.prototxt
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537077651
including last conv layer!
run for 500 batches nFeatsPerBatch 10
Extracting conv1_1 (5000, 64)
Extracting conv1_2 (5000, 64)
Extracting conv2_1 (5000, 128)
Extracting conv2_2 (5000, 128)
Extracting conv3_1 (5000, 256)
Extracting conv3_2 (5000, 256)
Extracting conv3_3 (5000, 256)
Extracting conv4_1 (5000, 512)
Extracting conv4_2 (5000, 512)
Extracting conv4_3 (5000, 512)
Extracting conv5_1 (5000, 512)
Extracting conv5_2 (5000, 512)
Extracting conv5_3 (5000, 512)
Acc 97.400
wrote memory data layer to temp/mem_bn_vgg_finetune_data.prototxt
freezing imgs to temp/frozen500.pickle

stage1 speed3.0

[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537077651
loading imgs from temp/frozen500.pickle
loaded
Extracting X relu1_1 From Y conv1_2 stride 1
Process Process-3:
Traceback (most recent call last):
File "/home/yue/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/yue/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/worker.py", line 21, in job
ret = target(**kwargs)
File "train.py", line 75, in solve
WPQ, new_pt = net.R3()
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 1356, in R3
X = getX(conv)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 1328, in getX
x = self.extract_XY(self.bottom_names[name][0], name)
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning-master/lib/net.py", line 623, in extract_XY
self.net.set_input_arrays(self._points_dict[(batch, 0)], self._points_dict[(batch, 1)])
File "/home/yue/Deep_Learning/SpeedUp/channel-pruning/caffe/python/caffe/pycaffe.py", line 269, in _Net_set_input_arrays
return self._set_input_arrays(data, labels)
RuntimeError: data array has wrong number of channels
```
What should I do?

@yyl199655
Have you solved the problem?

@Johnson-yue (Author)

@yyl199655 No, I still hit the error when I reproduce it myself.
