
ValueError: Layer model_8 expects 1 input(s), but it received 2 input tensors #412

Open
zaydosman opened this issue Oct 20, 2020 · 11 comments · May be fixed by #465

Comments

@zaydosman

Getting an error when running model.fit in the multiple category training example.

    history = model.fit(
        train_dataloader,
        steps_per_epoch=len(train_dataloader),
        epochs=EPOCHS,
        callbacks=callbacks,
        validation_data=valid_dataloader,
        validation_steps=len(valid_dataloader),
    )

Any ideas on what may be causing it? I suspect it has to do with my train_dataloader object, but I've prepared it as shown in the example.

ValueError: in user code:

F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:784 train_function  *
    return step_function(self, iterator)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:774 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1261 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2794 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3217 _call_for_each_replica
    return fn(*args, **kwargs)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:767 run_step  **
    outputs = model.train_step(data)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:733 train_step
    y_pred = self(x, training=True)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py:977 __call__
    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\input_spec.py:204 assert_input_compatibility
    raise ValueError('Layer ' + layer_name + ' expects ' +

ValueError: Layer model_8 expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=uint8>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None, None, None) dtype=float32>]
@JordanMakesMaps

Can you show your data_loader code?

@zaydosman
Author

I am using the multiclass segmentation example code that was provided as a basis.

    class Dataloder(keras.utils.Sequence):
        """Load data from dataset and form batches

        Args:
            dataset: instance of Dataset class for image loading and preprocessing.
            batch_size: Integer number of images in batch.
            shuffle: Boolean, if `True` shuffle image indexes each epoch.
        """

        def __init__(self, dataset, batch_size=1, shuffle=False):
            self.dataset = dataset
            self.batch_size = batch_size
            self.shuffle = shuffle
            self.indexes = np.arange(len(dataset))

            self.on_epoch_end()

        def __getitem__(self, i):

            # collect batch data
            start = i * self.batch_size
            stop = (i + 1) * self.batch_size
            data = []
            for j in range(start, stop):
                data.append(self.dataset[j])

            # transpose list of lists
            batch = [np.stack(samples, axis=0) for samples in zip(*data)]

            return batch

        def __len__(self):
            """Denotes the number of batches per epoch"""
            return len(self.indexes) // self.batch_size

        def on_epoch_end(self):
            """Callback function to shuffle indexes each epoch"""
            if self.shuffle:
                self.indexes = np.random.permutation(self.indexes)

@JordanMakesMaps

Were you able to figure it out? Your error shows that you're providing two tensors as a list, which comes from __getitem__. Your batch size is set correctly, right? (i.e., your model is expecting a batch size of 2)
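One quick way to check (a sketch, using a stubbed stand-in for the thread's `train_dataloader` rather than the real loader) is to index the loader and inspect what it returns. If it is a *list* of two arrays, Keras will interpret them as two separate model inputs, which matches the error above:

```python
import numpy as np

# Hypothetical stand-in for the thread's train_dataloader: it returns a
# *list* [images, masks], which is what triggers
# "expects 1 input(s), but it received 2 input tensors".
class ListDataloader:
    def __getitem__(self, i):
        images = np.zeros((2, 16, 16, 3), dtype=np.uint8)
        masks = np.zeros((2, 16, 16, 4), dtype=np.float32)
        return [images, masks]

loader = ListDataloader()
batch = loader[0]
print(type(batch).__name__, len(batch))   # list 2  -> two "inputs" to Keras
for item in batch:
    print(item.shape, item.dtype)
```

Note how the dtypes (`uint8` images, `float32` masks) mirror the two tensors listed in the ValueError.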

@WeiChihChern

WeiChihChern commented Nov 4, 2020

@JordanMakesMaps I read somewhere that newer versions of tensorflow/keras require the batch to be returned as a tuple rather than a list:

    def __getitem__(self, i):
        
        # collect batch data
        start = i * self.batch_size
        stop = (i + 1) * self.batch_size
        data = []
        for j in range(start, stop):
            data.append(self.dataset[j])
        
        # transpose list of lists
        batch = [np.stack(samples, axis=0) for samples in zip(*data)]
        
        # newer versions of tf/keras want the batch as a tuple rather than a list
        return tuple(batch)

Above modification works for me

@jkViswanadham
Copy link

Where do I make these modifications?

@JordanMakesMaps
Copy link

@jkViswanadham @WeiChihChern's modified function is for the dataloader class provided in this repository.

@ilpapds
Copy link

ilpapds commented Dec 1, 2020

I made this modification and it worked; however, my desktop went to a black screen during the second epoch! Do you have any idea what I should change? Thank you very much.

@Nemecsek
Copy link

Nemecsek commented Feb 16, 2021

Very frustrated by these nerdy retro-compatibility breaks...
My boss is not interested in bugs that appear in software that was already working, or in crosslinked libraries that no longer work together.
I don't know about you, but after 30+ years of programming I am quite fed up with losing my time like this. Free software doesn't mean the freedom to break somebody's work because someone "read it somewhere" that newer tensorflow or keras require the batch to be returned as a tuple rather than a list.

Not against you, guys, I only need to steam off.

@NavidCOMSC
Copy link

NavidCOMSC commented Mar 6, 2021

> @JordanMakesMaps I read somewhere that newer versions of tensorflow/keras require the batch to be returned as a tuple rather than a list […] Above modification works for me

I found the same casting hint as well, though it then returns an error message like:
NotImplementedError: Cannot convert a symbolic Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

@JordanMakesMaps any thoughts or prior experience with this? How do you go about debugging when this error message pops up?

@JordanMakesMaps
Copy link

@NavidCOMSC I use a custom dataloader instead of the one provided here in the repo. I just checked my code and I do in fact return a tuple instead of a list. I guess I ran into that error a while back, made the change and completely forgot about it:

        ...

        batch_x = np.array( processed_images )
        batch_y = np.array( processed_masks )
        
        del the_tile, the_mask, one_hot_mask, processed_image, processed_mask
        
        return (batch_x, batch_y)
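Putting the same idea into a self-contained sketch (the class name, dataset contents, and shapes here are illustrative, not the repo's actual loader), a minimal Sequence-style loader whose `__getitem__` returns a `(batch_x, batch_y)` tuple might look like:

```python
import numpy as np

class TupleDataloader:
    """Minimal stand-in for a keras.utils.Sequence-style loader (sketch)."""

    def __init__(self, dataset, batch_size=1):
        self.dataset = dataset          # sequence of (image, mask) pairs
        self.batch_size = batch_size

    def __len__(self):
        return len(self.dataset) // self.batch_size

    def __getitem__(self, i):
        data = [self.dataset[j]
                for j in range(i * self.batch_size, (i + 1) * self.batch_size)]
        batch_x = np.stack([img for img, _ in data], axis=0)
        batch_y = np.stack([msk for _, msk in data], axis=0)
        # Tuple, not list: Keras unpacks a tuple as (x, y).
        return (batch_x, batch_y)

# usage with a tiny fake dataset
ds = [(np.zeros((8, 8, 3)), np.zeros((8, 8, 2))) for _ in range(4)]
loader = TupleDataloader(ds, batch_size=2)
x, y = loader[0]
print(len(loader), x.shape, y.shape)  # 2 (2, 8, 8, 3) (2, 8, 8, 2)
```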

@khanfarhan10
Copy link

> @JordanMakesMaps I read somewhere that newer versions of tensorflow/keras require the batch to be returned as a tuple rather than a list […] Above modification works for me

@qubvel this should be integrated into the examples.

khanfarhan10 added a commit to khanfarhan10/segmentation_models that referenced this issue May 18, 2021
Fixes qubvel#412 : Resizing Images to avoid errors
Fixes Colab Bug : Using Segmentation Models
@khanfarhan10 khanfarhan10 linked a pull request May 18, 2021 that will close this issue
@Testbild Testbild mentioned this issue Jul 5, 2021
8 participants