Model.call() fails on GraphConvolution layer, cannot connect to other models #41

Open
beckrob opened this issue Feb 22, 2019 · 2 comments

beckrob commented Feb 22, 2019

Dear Thomas,

Thank you for sharing this interesting package. When trying to connect a GraphConvolution model to other models via the call() function, I ran into an error. Here is minimal code to reproduce it, using the same input structure as in train.py:

from keras.layers import Input
from keras.models import Model
from keras.optimizers import Adam

from kegra.layers.graph import GraphConvolution

featureInput = Input(shape=(1,))
adjacencyInput = Input(shape=(None, None), batch_shape=(None, None), sparse=False)
support = 1

output = GraphConvolution(1, support, activation='linear')([featureInput, adjacencyInput])

# Compile model
graphConvModel = Model(inputs=[featureInput, adjacencyInput], outputs=output)
graphConvModel.compile(loss='mean_squared_error', optimizer=Adam(lr=1e-4))

The model compiles successfully, and I can train and predict with it. However, when I call the model directly, for example graphConvModel([featureInput, adjacencyInput]), I get the following error message:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-826-c1aa401b2630> in <module>()
----> 1 graphConvModel([featureInput,adjacencyInput])

~\Anaconda3\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    455             # Actually call the layer,
    456             # collecting output(s), mask(s), and shape(s).
--> 457             output = self.call(inputs, **kwargs)
    458             output_mask = self.compute_mask(inputs, previous_mask)
    459 

~\Anaconda3\lib\site-packages\keras\engine\network.py in call(self, inputs, mask)
    562             return self._output_tensor_cache[cache_key]
    563         else:
--> 564             output_tensors, _, _ = self.run_internal_graph(inputs, masks)
    565             return output_tensors
    566 

~\Anaconda3\lib\site-packages\keras\engine\network.py in run_internal_graph(self, inputs, masks)
    759                                 'and output masks. Layer ' + str(layer.name) + ' has'
    760                                 ' ' + str(len(output_tensors)) + ' output tensors '
--> 761                                 'and ' + str(len(output_masks)) + ' output masks.')
    762                     # Update model updates and losses:
    763                     # Keep track of updates that depend on the inputs

Exception: Layers should have equal number of output tensors and output masks. Layer graph_convolution_90 has 1 output tensors and 2 output masks.

With multiple GraphConvolution layers, the error always occurs at the first layer.

Changing the node counts makes no difference. I suspect that the batch shape difference between the two inputs is why there are 2 output masks, but I could not find a combination of the shape and batch_shape arguments of the inputs that both compiles successfully and avoids the issue.
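Based on the error message, one untested idea (just a guess on my part; the subclass name below is made up) would be to wrap GraphConvolution so that compute_mask reports a single mask for the single output tensor, rather than one mask per input:

from kegra.layers.graph import GraphConvolution

class SingleMaskGraphConvolution(GraphConvolution):
    """Hypothetical wrapper: report one mask for the one output tensor."""
    def compute_mask(self, inputs, mask=None):
        # The layer produces a single output tensor, so return a single
        # mask (None, since masking is not actually used) instead of a
        # per-input list of masks.
        return None

I have not verified that this actually avoids the Model.call() error.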

Setup details:
Keras version: 2.2.4
TensorFlow version: 1.12.0

Sincerely,
Robert Beck

beckrob changed the title from "Model.call() fails on GraphConvolution layer, cannot connect with other models" to "Model.call() fails on GraphConvolution layer, cannot connect to other models" on Feb 22, 2019
tkipf (Owner) commented Feb 22, 2019 via email

beckrob (Author) commented Feb 22, 2019

Thanks for the quick reply!

I am currently unable to revert to 1.0.9 because of my other dependencies. However, looking at the Keras source, it is clear that run_internal_graph() did not raise this exception at the time; in fact, there is a TODO line specifically about adding it later:

# TODO: raise exception when a .compute_mask does not return a list the same size as call

I do not currently have a solution, only a workaround: to avoid Model.call(), I explicitly rebuild the combined model with all the layers and the pre-learned weights, but this is not very practical. If I find a good solution, I'll comment here again.
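For reference, here is a rough sketch of that workaround (the downstream Dense layer and the variable names are only illustrative, and graphConvModel refers to the trained model from the snippet above): the combined graph is rebuilt from fresh layers and the learned weights are copied over, instead of calling the trained sub-model directly.

from keras.layers import Input, Dense
from keras.models import Model
from kegra.layers.graph import GraphConvolution

# Rebuild the same inputs and a fresh GraphConvolution layer.
newFeatureInput = Input(shape=(1,))
newAdjacencyInput = Input(shape=(None, None), batch_shape=(None, None), sparse=False)

graphConvLayer = GraphConvolution(1, 1, activation='linear')
graphConvOutput = graphConvLayer([newFeatureInput, newAdjacencyInput])

# Any further layers of the combined model would be stacked here;
# Dense is just a placeholder for them.
combinedOutput = Dense(1)(graphConvOutput)

combinedModel = Model(inputs=[newFeatureInput, newAdjacencyInput],
                      outputs=combinedOutput)

# Copy the pre-learned weights from the already-trained sub-model
# (in graphConvModel, the GraphConvolution layer is the last layer).
graphConvLayer.set_weights(graphConvModel.layers[-1].get_weights())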
