AttributeError: 'SpatialTransformer' object has no attribute 'is_placeholder' #1

Open
mdshopon opened this issue Sep 19, 2017 · 9 comments


@mdshopon commented Sep 19, 2017

I was trying to add the SpatialTransformer layer to my code, but I am getting this error. Is there any solution for it?

```python
# Imports for the snippet (TextImageGenerator and ctc_lambda_func are defined
# elsewhere in bangla_image_ocr.py).
import os
import numpy as np
from keras import backend as K
from keras.layers import (Activation, Conv2D, Dense, Flatten, GRU, Input,
                          Lambda, MaxPooling2D, Reshape, add, concatenate)
from keras.models import Model, Sequential
from keras.optimizers import SGD
from keras.utils.data_utils import get_file
from spatial_transformer import SpatialTransformer  # layer from this repository


def locnet():
    b = np.zeros((2, 3), dtype='float32')
    b[0, 0] = 1
    b[1, 1] = 1
    W = np.zeros((64, 6), dtype='float32')
    weights = [W, b.flatten()]

    locnet = Sequential()
    locnet.add(Conv2D(16, (7, 7), padding='valid', input_shape=(128, 64, 1)))
    locnet.add(MaxPooling2D(pool_size=(2, 2)))
    locnet.add(Conv2D(32, (5, 5), padding='valid'))
    locnet.add(MaxPooling2D(pool_size=(2, 2)))
    locnet.add(Conv2D(64, (3, 3), padding='valid'))
    locnet.add(MaxPooling2D(pool_size=(2, 2)))

    locnet.add(Flatten())
    locnet.add(Dense(128))
    locnet.add(Activation('elu'))
    locnet.add(Dense(64))
    locnet.add(Activation('elu'))
    locnet.add(Dense(6, weights=weights))

    return locnet


def train(run_name, start_epoch, stop_epoch, img_w):
    # Input Parameters
    img_h = 64
    words_per_epoch = 8000
    val_split = 0.2
    val_words = int(words_per_epoch * val_split)

    # Network parameters
    conv_filters = 16
    kernel_size = (3, 3)
    pool_size = 2
    time_dense_size = 32
    rnn_size = 512
    minibatch_size = 32

    if K.image_data_format() == 'channels_first':
        input_shape = (1, img_w, img_h)
    else:
        input_shape = (img_w, img_h, 1)

    fdir = os.path.dirname(get_file('wordlists.tgz',
                                    origin='http://www.mythic-ai.com/datasets/wordlists.tgz', untar=True))

    img_gen = TextImageGenerator(monogram_file=os.path.join('bangla_wordlist_mono_clean.txt'),
                                 bigram_file=os.path.join(fdir, 'wordlist_bi_clean.txt'),
                                 minibatch_size=minibatch_size,
                                 img_w=img_w,
                                 img_h=img_h,
                                 downsample_factor=(pool_size ** 2),
                                 val_split=words_per_epoch - val_words
                                 )
    act = 'relu'
    input_data = Input(name='the_input', shape=input_shape, dtype='float32')
    input_data = SpatialTransformer(localization_net=locnet(),
                                    output_size=(128, 64))(input_data)
    inner = Conv2D(conv_filters, kernel_size, padding='same',
                   activation=act, kernel_initializer='he_normal',
                   name='conv1')(input_data)
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')(inner)

    inner = Conv2D(conv_filters, kernel_size, padding='same',
                   activation=act, kernel_initializer='he_normal',
                   name='conv2')(inner)
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')(inner)

    conv_to_rnn_dims = (img_w // (pool_size ** 2), (img_h // (pool_size ** 2)) * conv_filters)
    inner = Reshape(target_shape=conv_to_rnn_dims, name='reshape')(inner)

    # cuts down input size going into RNN:
    inner = Dense(time_dense_size, activation=act, name='dense1')(inner)

    # Two layers of bidirectional GRUs
    # GRU seems to work as well, if not better than LSTM:
    gru_1 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru1')(inner)
    gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(inner)
    gru1_merged = add([gru_1, gru_1b])
    gru_2 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged)
    gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged)

    # transforms RNN output to character activations:
    inner = Dense(img_gen.get_output_size(), kernel_initializer='he_normal',
                  name='dense2')(concatenate([gru_2, gru_2b]))
    y_pred = Activation('softmax', name='softmax')(inner)
    # Model(inputs=input_data, outputs=y_pred).summary()

    labels = Input(name='the_labels', shape=[img_gen.absolute_max_string_len], dtype='float32')
    input_length = Input(name='input_length', shape=[1], dtype='int64')
    label_length = Input(name='label_length', shape=[1], dtype='int64')
    # Keras doesn't currently support loss funcs with extra parameters
    # so CTC loss is implemented in a lambda layer
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length])

    # clipnorm seems to speed up convergence
    sgd = SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)

    model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)

    # the loss calc occurs elsewhere, so use a dummy lambda func for the loss
    model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd)
```
@hello2all (Owner)

Could you post your error output and your TensorFlow and Keras versions so I can take a look at your problem?

@mdshopon (Author)

Thank you very much for your quick reply.
Keras version: 2.0.2
TensorFlow version: 1.3.0

Error:

```
Traceback (most recent call last):
  File "bangla_image_ocr.py", line 604, in <module>
    train(run_name, 0, 40, 128)
  File "bangla_image_ocr.py", line 580, in train
    model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)
  File "/home/codehead/anaconda2/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/codehead/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1566, in __init__
    if layer.is_placeholder:
AttributeError: 'SpatialTransformer' object has no attribute 'is_placeholder'
```

@hello2all (Owner)

Thank you. I'll try to reproduce the error using newer versions of TensorFlow and Keras to see if the problem is caused by compatibility issues.

@mdshopon (Author)

@hello2all May I know which versions you used for this?

@hello2all (Owner)

If I recall correctly, TensorFlow 1.0 and Keras 2.0.

@mdshopon (Author)

Thanks! Please let me know if you find any solution for this problem.

@hello2all (Owner)

@codeheadshopon There is a compatibility issue with the newest version of Keras. However, there is an easy fix: in spatial_transformer.py, delete line 37:

`self.constraints = self.locnet.constraints`

After deleting that line, my model compiled successfully with no error.

This was tested with:
TensorFlow 1.3.0
Keras 2.0.8
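
For context, here is a minimal sketch of where that line sits. The class skeleton below is illustrative only (the actual spatial_transformer.py may differ); only the deleted assignment is taken verbatim from the file, and the weight registration in `build()` is an assumption about how such layers are typically written.

```python
from keras.engine.topology import Layer

class SpatialTransformer(Layer):
    """Illustrative sketch of the layer's weight registration, not the repo's exact code."""

    def __init__(self, localization_net, output_size, **kwargs):
        self.locnet = localization_net      # e.g. the locnet() Sequential model above
        self.output_size = output_size      # e.g. (128, 64)
        super(SpatialTransformer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Register the localization network's weights with this layer.
        self.locnet.build(input_shape)
        self.trainable_weights = self.locnet.trainable_weights
        # Old line 37, removed by the fix above -- newer Keras no longer
        # accepts copying a layer's `constraints` this way:
        # self.constraints = self.locnet.constraints
        super(SpatialTransformer, self).build(input_shape)
```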

@mdshopon (Author)

That didn't work for me. I updated my Keras version to 2.0.8, but I am still getting the same error.

@hello2all (Owner)

In newer versions of Keras, the way custom layers are built has changed. Since I cannot reproduce the exact error you are experiencing, I encourage you to look into the Keras documentation and modify the layer initialization accordingly.

I hope this is helpful for you.
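
For reference, the Keras 2 documentation describes custom layers in terms of `build()`, `call()`, and `compute_output_shape()`, with weights created through `add_weight()`. A minimal generic template (not this repository's layer, just the documented pattern; `MyLayer` and `output_dim` are placeholder names) looks like this:

```python
from keras import backend as K
from keras.engine.topology import Layer

class MyLayer(Layer):
    """Generic Keras 2 custom-layer template."""

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Trainable weights are registered with add_weight() rather than by
        # assigning attributes such as `constraints` directly.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # sets self.built = True

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
```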
