model test problem #13

Closed
manutdzou opened this issue Apr 10, 2017 · 19 comments

@manutdzou

Hi, I have tested your released model; my code follows your notebook. My code is:

```python
import caffe
import matplotlib.pyplot as plt
# read_dicom_series, read_liver_lesion_masks, step1_preprocess_img_slice and
# preprocess_lbl_slice are the helper functions from the notebook.

caffe.set_mode_gpu()
caffe.set_device(2)
net_liver = caffe.Net('/home/zhou/zou/Cascaded-FCN/models/cascadedfcn/step1/step1_deploy.prototxt',
                      '/home/zhou/zou/Cascaded-FCN/models/cascadedfcn/step1/step1_weights.caffemodel',
                      caffe.TEST)

img = read_dicom_series("../train_image/3Dircadb1.17/PATIENT_DICOM/")
lbl = read_liver_lesion_masks("../train_image/3Dircadb1.17/MASKS_DICOM/")
S = 90
img_p = step1_preprocess_img_slice(img[..., S])
lbl_p = preprocess_lbl_slice(lbl[..., S])
net_liver.blobs['data'].data[0, 0, ...] = img_p
pred = net_liver.forward()['prob'][0, 1] > 0.5

plt.figure(figsize=(3 * 5, 10))
plt.subplot(1, 3, 1)
plt.title('CT')
plt.imshow(img_p[92:-92, 92:-92], 'gray')
plt.subplot(1, 3, 2)
plt.title('GT')
plt.imshow(lbl_p, 'gray')
plt.subplot(1, 3, 3)
plt.title('pred')
plt.imshow(pred, 'gray')
```

but the result is very bad, like this:

[image: result]
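
For reference, a minimal shape sanity check can rule out a size mismatch. This is only a sketch: it assumes the step1 deploy net takes a 1×1×572×572 blob and produces a 388×388 `prob` map (hence the `[92:-92, 92:-92]` crop), and it reuses `img_p` and `net_liver` from the snippet above.

```python
# Sketch: verify that the preprocessed slice and the net's I/O shapes line up
# (assumes a 572x572 input and a 388x388 output, as in the notebook).
assert img_p.shape == (572, 572), img_p.shape
assert net_liver.blobs['data'].data.shape == (1, 1, 572, 572)

net_liver.blobs['data'].data[0, 0, ...] = img_p
prob = net_liver.forward()['prob']              # expected shape: (1, 2, 388, 388)
assert prob.shape[2:] == (388, 388), prob.shape
print('predicted liver fraction:', (prob[0, 1] > 0.5).mean())
```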

@manutdzou
Author

Is there any trick I have neglected?

@mohamed-ezz
Collaborator

mohamed-ezz commented Apr 10, 2017

The result looks strange. Make sure you can run the notebook as-is and get correct results before you make modifications.

@manutdzou
Author

The code is the same as what you show in the notebook, so I cannot find where it is wrong. Can you give me some guidance? Thank you.

@RenieWell

I met the same problem as you. Did you figure it out? I would appreciate it if you could share your solution. @manutdzou

@manutdzou
Author

I think the released model is wrong. When I train my own model and use the code above, it works well and the result is good:

[image: 3dircadb1 1 68]

@RenieWell @mohamed-ezz

@PatrickChrist
Contributor

That's great news, @manutdzou. You are more than welcome to open a pull request and offer your trained model to the public. Just upload your model to a public file host and modify the README with the link and your name.

@PiaoLiangHXD

Wow, I got the same strange result as your first one, so I'm sure the released model is not so good. Anyway, I rebuilt U-Net in TensorFlow; my prediction result is not great, but it is not strange either.

@mohamed-ezz mohamed-ezz reopened this Jun 1, 2017
@mjiansun

mjiansun commented Jun 5, 2017

@manutdzou Hi guys, can you share your code? Thank you very much.

@PatrickChrist
Contributor

Hey everyone,
I just updated the README and added a Docker image, which runs our code smoothly.
Please have a look at the README for more details on how to start the Docker image.
The expected result should look like this printout.
Best wishes,
Patrick
cascaded_unet_inference.pdf.pdf

@zakizhou

zakizhou commented Jul 4, 2017

@PatrickChrist Hi Patrick, thanks for the great work. When I try to use the pretrained model, I find that nvidia-docker is hard to install. Could you please share a correct pretrained model that can be used without nvidia-docker?

@mohamed-ezz
Collaborator

@zakizhou Since this is a reproducibility issue, I think Docker is our best bet to achieve reproducibility.

nvidia-docker is needed only if you want to process the files on the GPU. You can, however, just use docker if you're OK with running on the CPU.

If you're running a Linux distro, what issues are you facing installing nvidia-docker?

The models are also shared at https://github.com/IBBM/Cascaded-FCN/tree/master/models/cascadedfcn; you can use them in your host environment (without Docker).
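
For anyone without a GPU, here is a minimal sketch of loading the shared step1 model in CPU mode. The local paths are placeholders (point them at the downloaded files), and the blob names follow the deploy prototxt in the repo.

```python
import caffe

caffe.set_mode_cpu()  # no GPU / nvidia-docker required; inference is just slower

# Placeholder paths: adjust to wherever the files from models/cascadedfcn were saved.
net = caffe.Net('models/cascadedfcn/step1/step1_deploy.prototxt',
                'models/cascadedfcn/step1/step1_weights.caffemodel',
                caffe.TEST)

# Quick look at the expected input/output blob shapes.
print('data:', net.blobs['data'].data.shape)   # e.g. (1, 1, 572, 572)
print('prob:', net.blobs['prob'].data.shape)   # e.g. (1, 2, 388, 388)
```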

@zakizhou

zakizhou commented Jul 4, 2017

@mohamed-ezz Thanks for your reply. I am using Ubuntu with no GPUs. I did try docker instead of nvidia-docker, but sadly, when I tried to load the pretrained Caffe model, the Jupyter notebook kernel crashed and I don't understand why. As @manutdzou said in this issue, the pretrained model at https://github.com/IBBM/Cascaded-FCN/tree/master/models/cascadedfcn performs badly on the sample image. I installed Caffe with conda; do you think the wrong version of Caffe caused this problem?

@mohamed-ezz
Collaborator

mohamed-ezz commented Jul 4, 2017 via email

@zakizhou

zakizhou commented Jul 4, 2017

@mohamed-ezz OK, I'll try the model on a server with a GPU. Thanks again!

@mohamed-ezz
Collaborator

mohamed-ezz commented Jul 4, 2017 via email

@manutdzou
Author

I have released a correct version of the liver and lesion models on Baidu. You can use the models like this:

```python
import sys, os
sys.path.insert(0, '/home/zhou/zou/caffe_ws/python')
sys.path.insert(0, '/home/zhou/zou/Cascaded-FCN/lib')
import numpy as np
from matplotlib import pyplot as plt
import caffe

result_path = "/home/zhou/zou/Cascaded-FCN/code/result/"
if not os.path.exists(result_path):
    os.makedirs(result_path)

# Each line of test_lesion_list.txt holds "<image.npy> <mask.npy>".
im_list = open('test_lesion_list.txt', 'r').read().splitlines()

caffe.set_mode_gpu()
caffe.set_device(0)
net_liver = caffe.Net('deploy.prototxt', 'liver.caffemodel', caffe.TEST)
net_lesion = caffe.Net('deploy.prototxt', 'lesion.caffemodel', caffe.TEST)

liver = 1
lesion = 2
for i in range(0, len(im_list)):
    im = np.load(im_list[i].split(' ')[0])
    mask = np.load(im_list[i].split(' ')[1])
    in_ = np.array(im, dtype=np.float32)
    in_expand = in_[np.newaxis, ...]
    blob = in_expand[np.newaxis, :, :, :]

    # Step 1: liver segmentation
    net_liver.blobs['data'].reshape(*blob.shape)
    net_liver.blobs['data'].data[...] = blob
    net_liver.forward()
    output_liver = net_liver.blobs['prob'].data[0].argmax(axis=0)

    # Step 2: lesion segmentation on the same slice
    net_lesion.blobs['data'].reshape(*blob.shape)
    net_lesion.blobs['data'].data[...] = blob
    net_lesion.forward()
    output_lesion = net_lesion.blobs['prob'].data[0].argmax(axis=0)

    # Cascade the two outputs into a single label map
    output = output_liver
    ind_1 = np.where(output_liver == 0)   # pixels outside the predicted liver
    output_lesion[ind_1] = 255            # ignore the lesion output there
    ind_2 = np.where(output_lesion == 0)  # inside the liver, where the lesion net predicts class 0
    output[ind_2] = 2                     # label those pixels as lesion (2)

    plt.figure(figsize=(3 * 5, 10))
    plt.subplot(1, 3, 1)
    plt.title('CT')
    plt.imshow(im[92:-92, 92:-92], 'gray')
    plt.subplot(1, 3, 2)
    plt.title('GT')
    plt.imshow(mask, 'gray')
    plt.subplot(1, 3, 3)
    plt.title('pred')
    plt.imshow(output, 'gray')
    path = result_path + im_list[i].split(' ')[0].split('/')[-1][0:-3] + 'jpg'
    plt.savefig(path)
    plt.close()
```
Some results are shown below:

[images: 3dircadb1 17 85, 3dircadb1 17 80]
@mohamed-ezz @RenieWell @mjiansun @PatrickChrist @PiaoLiangHXD

@manutdzou
Author

The deploy.prototxt used for both models:

```
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 1 dim: 572 dim: 572 } }
}

layer {
name: "conv_d0a-b"
type: "Convolution"
bottom: "data"
top: "d0b"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d0b"
type: "ReLU"
bottom: "d0b"
top: "d0b"
}
layer {
name: "conv_d0b-c"
type: "Convolution"
bottom: "d0b"
top: "d0c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d0c"
type: "ReLU"
bottom: "d0c"
top: "d0c"
}
layer {
name: "pool_d0c-1a"
type: "Pooling"
bottom: "d0c"
top: "d1a"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv_d1a-b"
type: "Convolution"
bottom: "d1a"
top: "d1b"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d1b"
type: "ReLU"
bottom: "d1b"
top: "d1b"
}
layer {
name: "conv_d1b-c"
type: "Convolution"
bottom: "d1b"
top: "d1c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d1c"
type: "ReLU"
bottom: "d1c"
top: "d1c"
}
layer {
name: "pool_d1c-2a"
type: "Pooling"
bottom: "d1c"
top: "d2a"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv_d2a-b"
type: "Convolution"
bottom: "d2a"
top: "d2b"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d2b"
type: "ReLU"
bottom: "d2b"
top: "d2b"
}
layer {
name: "conv_d2b-c"
type: "Convolution"
bottom: "d2b"
top: "d2c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d2c"
type: "ReLU"
bottom: "d2c"
top: "d2c"
}
layer {
name: "pool_d2c-3a"
type: "Pooling"
bottom: "d2c"
top: "d3a"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv_d3a-b"
type: "Convolution"
bottom: "d3a"
top: "d3b"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d3b"
type: "ReLU"
bottom: "d3b"
top: "d3b"
}
layer {
name: "conv_d3b-c"
type: "Convolution"
bottom: "d3b"
top: "d3c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d3c"
type: "ReLU"
bottom: "d3c"
top: "d3c"
}

layer {
name: "pool_d3c-4a"
type: "Pooling"
bottom: "d3c"
top: "d4a"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv_d4a-b"
type: "Convolution"
bottom: "d4a"
top: "d4b"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1024
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d4b"
type: "ReLU"
bottom: "d4b"
top: "d4b"
}
layer {
name: "conv_d4b-c"
type: "Convolution"
bottom: "d4b"
top: "d4c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1024
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "relu_d4c"
type: "ReLU"
bottom: "d4c"
top: "d4c"
}

layer {
name: "upconv_d4c_u3a"
type: "Deconvolution"
bottom: "d4c"
top: "u3a"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 0
kernel_size: 2
stride: 2
weight_filler {
type: "xavier"
}
}
}

layer {
name: "relu_u3a"
type: "ReLU"
bottom: "u3a"
top: "u3a"
}
layer {
name: "crop_d3c-d3cc"
type: "Crop"
bottom: "d3c"
bottom: "u3a"
top: "d3cc"

}
layer {
name: "concat_d3cc_u3a-b"
type: "Concat"
bottom: "u3a"
bottom: "d3cc"
top: "u3b"
}
layer {
name: "conv_u3b-c"
type: "Convolution"
bottom: "u3b"
top: "u3c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u3c"
type: "ReLU"
bottom: "u3c"
top: "u3c"
}
layer {
name: "conv_u3c-d"
type: "Convolution"
bottom: "u3c"
top: "u3d"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u3d"
type: "ReLU"
bottom: "u3d"
top: "u3d"
}
layer {
name: "upconv_u3d_u2a"
type: "Deconvolution"
bottom: "u3d"
top: "u2a"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 2
stride: 2
weight_filler {
type: "xavier"
}
}
}
layer {
name: "relu_u2a"
type: "ReLU"
bottom: "u2a"
top: "u2a"
}
layer {
name: "crop_d2c-d2cc"
type: "Crop"
bottom: "d2c"
bottom: "u2a"
top: "d2cc"

}
layer {
name: "concat_d2cc_u2a-b"
type: "Concat"
bottom: "u2a"
bottom: "d2cc"
top: "u2b"
}
layer {
name: "conv_u2b-c"
type: "Convolution"
bottom: "u2b"
top: "u2c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u2c"
type: "ReLU"
bottom: "u2c"
top: "u2c"
}
layer {
name: "conv_u2c-d"
type: "Convolution"
bottom: "u2c"
top: "u2d"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u2d"
type: "ReLU"
bottom: "u2d"
top: "u2d"
}
layer {
name: "upconv_u2d_u1a"
type: "Deconvolution"
bottom: "u2d"
top: "u1a"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 2
stride: 2
weight_filler {
type: "xavier"
}
}
}
layer {
name: "relu_u1a"
type: "ReLU"
bottom: "u1a"
top: "u1a"
}
layer {
name: "crop_d1c-d1cc"
type: "Crop"
bottom: "d1c"
bottom: "u1a"
top: "d1cc"

}
layer {
name: "concat_d1cc_u1a-b"
type: "Concat"
bottom: "u1a"
bottom: "d1cc"
top: "u1b"
}
layer {
name: "conv_u1b-c"
type: "Convolution"
bottom: "u1b"
top: "u1c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u1c"
type: "ReLU"
bottom: "u1c"
top: "u1c"
}
layer {
name: "conv_u1c-d"
type: "Convolution"
bottom: "u1c"
top: "u1d"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u1d"
type: "ReLU"
bottom: "u1d"
top: "u1d"
}
layer {
name: "upconv_u1d_u0a_NEW"
type: "Deconvolution"
bottom: "u1d"
top: "u0a"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 0
kernel_size: 2
stride: 2
weight_filler {
type: "xavier"
}
}
}
layer {
name: "relu_u0a"
type: "ReLU"
bottom: "u0a"
top: "u0a"
}
layer {
name: "crop_d0c-d0cc"
type: "Crop"
bottom: "d0c"
bottom: "u0a"
top: "d0cc"

}
layer {
name: "concat_d0cc_u0a-b"
type: "Concat"
bottom: "u0a"
bottom: "d0cc"
top: "u0b"
}
layer {
name: "conv_u0b-c_New"
type: "Convolution"
bottom: "u0b"
top: "u0c"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u0c"
type: "ReLU"
bottom: "u0c"
top: "u0c"
}
layer {
name: "conv_u0c-d_New"
type: "Convolution"
bottom: "u0c"
top: "u0d"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 0
kernel_size: 3
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}
layer {
name: "relu_u0d"
type: "ReLU"
bottom: "u0d"
top: "u0d"
}
layer {
name: "conv_u0d-score_New"
type: "Convolution"
bottom: "u0d"
top: "score"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 2
pad: 0
kernel_size: 1
weight_filler {
type: "xavier"
}
engine: CAFFE
}
}

layer {
name: "prob"
type: "Softmax"
bottom: "score"
top: "prob"
}
```
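
As a side note on the sizes in this prototxt: every convolution uses pad: 0, so the output is smaller than the 572×572 input. Below is a small sketch (my own helper, not part of the repo) that traces the spatial size through the network and recovers the 388×388 output, which is why the scripts above crop the CT with [92:-92, 92:-92].

```python
def unet_output_size(n: int, depth: int = 4) -> int:
    """Trace the spatial size through the valid-padded U-Net defined above."""
    for _ in range(depth):   # contracting path
        n -= 4               # two 3x3 convolutions with pad 0
        n //= 2              # 2x2 max pooling, stride 2
    n -= 4                   # bottleneck convolutions (d4a-b, d4b-c)
    for _ in range(depth):   # expanding path
        n *= 2               # 2x2 deconvolution, stride 2
        n -= 4               # two 3x3 convolutions after crop + concat
    return n

print(unet_output_size(572))  # 388; 572 - 388 = 184, i.e. a 92-pixel crop on each side
```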

@PatrickChrist
Contributor

Great work, @manutdzou.
Thanks for your support. Would you mind committing your work to this repo?
We could have a folder model-zoo/manutdzou in which you post your code as a notebook, your prototxt, and the Baidu links as a text file. Other users will definitely appreciate it. If you have a paper about your work, we can also add it.

@manutdzou
Author

manutdzou commented Jul 7, 2017 via email
