
Do you need to subtract the mean from the input images/video frames? #1

Open
hhxxttxsh opened this issue Feb 12, 2017 · 22 comments

@hhxxttxsh

Hi,

Thanks for sharing.
Don't you need to subtract the mean from the input video frames when training the model with the VGG 16-layer model as initial weights? I have not seen that step in your implementation, and I am wondering why.

Thanks.

@chenxinpeng (Owner)

@hhxxttxsh
Hi, thank you for your question.
I only use the pre-trained VGG-16 model to extract features; I don't back-propagate gradients through it, both for efficiency and to prevent overfitting.
So, in the cnn_util.py script, I use ilsvrc_2012_mean.npy, the mean image of the ImageNet dataset.

In other words, if I fine-tuned the VGG model during training, I would subtract the mean of the video frames themselves. Here I only use it to extract features.
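For reference, here is a minimal sketch of that mean-subtraction step as it is typically done in a Caffe-style pipeline like cnn_util.py; the function name and resize size are illustrative assumptions, not the repository's exact API:

```python
import numpy as np
import cv2

# ilsvrc_2012_mean.npy holds the ImageNet mean image, shape (3, 256, 256), BGR order.
mean_bgr = np.load('ilsvrc_2012_mean.npy').mean(axis=(1, 2))  # per-channel means, shape (3,)

def preprocess_frame(frame_bgr):
    """Resize a BGR video frame and subtract the ImageNet channel means."""
    resized = cv2.resize(frame_bgr, (224, 224)).astype(np.float32)
    return resized - mean_bgr  # broadcasts over height and width
```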

@chenxinpeng (Owner) commented Feb 14, 2017

@hhxxttxsh
That is my understanding and explanation.

By the way, in the paper A Hierarchical Approach for Generating Descriptive Image Paragraphs, the authors process the images the same way.

You can read Section 4.5, Transfer Learning, of that paper.

@hhxxttxsh (Author)

Thank you for the reply.
In the training loop of the original authors' code, the encoder LSTM is fed a batch fc7_frame_feature of size (1000, 32, 4096). Within the batch, adjacent frames seem uncorrelated, because they have been shuffled before this step. I am wondering how this can make sense, since time steps t and t+1 may not even belong to the same video.

Does your code have a similar implementation? How do you organize your batches for training the encoder-decoder?

Thanks!

@chenxinpeng (Owner)

@hhxxttxsh
Hi,
I think you may be misunderstanding the original code. In the original model.py:

index = list(train_data.index)     # one index entry per video
np.random.shuffle(index)           # shuffle the order of the videos only
train_data = train_data.ix[index]  # reorder rows by label (.ix is deprecated in newer pandas; .loc does the same here)

He only shuffled the order of the videos. The order of frames within each video is NOT changed.
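To make that concrete, here is a minimal sketch (the helper name is mine, not the repository's) of batching that shuffles videos while preserving frame order inside each video, assuming every video has been padded or sampled to a fixed number of frames:

```python
import numpy as np

def make_batches(video_features, batch_size=100, rng=np.random.default_rng(0)):
    """video_features: list of (n_frames, 4096) arrays, frames in temporal order."""
    order = rng.permutation(len(video_features))  # shuffle videos only
    for start in range(0, len(order), batch_size):
        ids = order[start:start + batch_size]
        # Stack into (batch, n_frames, 4096); each row keeps its temporal order.
        yield np.stack([video_features[i] for i in ids])
```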

@hhxxttxsh (Author) commented Feb 15, 2017 via email

@chenxinpeng (Owner) commented Feb 15, 2017

@hhxxttxsh
Hi,
I know what you mean...

In the original paper, S. Venugopalan et al. ran an experiment with shuffled frames. From Section 4.3, Experimental details of our model:

We also experiment with randomly re-ordered input frames to verify that S2VT learns temporal-sequence information.

The METEOR score of that experiment is 28.2%, very close to the result without randomly re-ordered input frames; you can see this in Table 2 of the paper.

And I think the features of frames within a video are similar, so shuffling the frames has only a small influence.

In my code, I haven't shuffled the frames.
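For anyone who wants to reproduce that ablation on top of the batching sketch above, a hypothetical per-video shuffle (names again illustrative) is all that is needed; compare METEOR with and without it:

```python
import numpy as np

def shuffle_frames(feats, rng=np.random.default_rng(0)):
    """feats: (n_frames, feat_dim) array; return a temporally shuffled copy."""
    return feats[rng.permutation(len(feats))]
```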

@hhxxttxsh (Author) commented Feb 15, 2017 via email

@chenxinpeng (Owner) commented Feb 16, 2017

@hhxxttxsh
Hi,

I have also tested the source code by S. Venugopalan et al., and the generated sentences are reasonable. I haven't trained the model myself; I just used the trained model downloaded from the authors.

And I only trained the model on the YouTube (MSVD) videos.

@hhxxttxsh (Author) commented Feb 17, 2017 via email

@hhxxttxsh (Author) commented Feb 17, 2017 via email

@chenxinpeng (Owner)

@hhxxttxsh
Hi,

When I use only the RGB features extracted with the VGG model, I get a METEOR score of 28.1; you can see this in the README.md of my repository.
And I split the dataset into only two parts, a training set and a test set. I think that is why I didn't get the paper's METEOR score of 29.2.

Do you use the same sizes of training, validation, and test sets as S. Venugopalan et al.?

And I think we don't need to reproduce the exact results of the original paper.
The most important thing is to understand the idea of the paper. :)
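For reference, the original paper uses the standard MSVD split of the 1,970 clips into 1,200 training, 100 validation, and 670 test videos; a minimal sketch of that split (the helper name is mine):

```python
def split_msvd(video_ids):
    """video_ids: the 1,970 MSVD clip ids in their canonical order."""
    assert len(video_ids) == 1970
    return video_ids[:1200], video_ids[1200:1300], video_ids[1300:]
```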

@hhxxttxsh (Author) commented Feb 17, 2017 via email

@hhxxttxsh (Author) commented Feb 22, 2017 via email

@chenxinpeng (Owner)

@hhxxttxsh
Hi,

As expected, the authors used extra data. Thank you for reminding me of this trick.

My implementation got 28.2, slightly worse than but close to 29.2.

@hhxxttxsh (Author) commented Feb 23, 2017 via email

@chenxinpeng (Owner)

Hi,

I don't think there is such a huge difference between Caffe and TensorFlow.

Yes, TensorFlow is a low-level framework, so if you don't need very fine control over your network, I suggest you learn a higher-level framework built on top of TensorFlow, namely Keras; then you can go deeper into TensorFlow.
By the way, as of TensorFlow 1.0.0, Keras has already been integrated into TensorFlow.

I used Torch six months ago, and I like Torch more than TensorFlow, but Lua is a niche language, so I switched from Torch to TensorFlow.

@hhxxttxsh (Author) commented Feb 25, 2017 via email

@hhxxttxsh (Author) commented Mar 7, 2017 via email

@chenxinpeng (Owner) commented Mar 8, 2017

Hi,

Of course it is possible to implement S2VT with Keras. And I recommend a paper to you: https://www.aclweb.org/anthology/C/C16/C16-1005.pdf . This paper combines the S2VT model with an attention mechanism.
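As a starting point, here is a heavily simplified Keras sketch of an S2VT-style encoder-decoder. The real S2VT interleaves encoding and decoding in two stacked LSTMs over a padded sequence; the shapes and names below are illustrative assumptions, not a faithful reimplementation:

```python
from tensorflow.keras import layers, models

n_frames, feat_dim, max_words, vocab_size = 80, 4096, 20, 20000  # assumed sizes

frames = layers.Input(shape=(n_frames, feat_dim))
x = layers.TimeDistributed(layers.Dense(512))(frames)   # embed fc7 features
x = layers.LSTM(512, return_sequences=True)(x)          # first (video) LSTM
x = layers.LSTM(512)(x)                                 # second LSTM, keep final state
x = layers.RepeatVector(max_words)(x)                   # condition the decoder on it
x = layers.LSTM(512, return_sequences=True)(x)          # decoder LSTM
words = layers.TimeDistributed(layers.Dense(vocab_size, activation='softmax'))(x)

model = models.Model(frames, words)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```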

@hhxxttxsh (Author) commented Mar 9, 2017 via email

@chenxinpeng (Owner)

@hhxxttxsh

Hi,

That's OK. I'm still working on image and video captioning.

My e-mail: jschenxinpeng@qq.com

@jozefmorvay

@hhxxttxsh @chenxinpeng Hi, both of you. Were you able to implement this in Keras? I will make my attempt in the coming weeks, and I would appreciate a how-to guide, as I am completely new to DL.
