
test: fix integer compliance tests #20

Merged
merged 10 commits into main from fix_integer_compliance_test on Mar 19, 2021

Conversation

HemangChothani
Contributor

No description provided.

@google-cla google-cla bot added the cla: yes This human has signed the Contributor License Agreement. label Mar 8, 2021
"""
SPANNER OVERRIDE:

Spanner is not able cleanup data and drop the table correctly,
Contributor

Do you know why Spanner is not cleaning up the data and dropping the table correctly? Could #28 potentially resolve this?

This override reason doesn't seem tied to a feature Spanner doesn't support, such as temporary tables or multiple rows in empty primary-key tables, so I think this may be a bug rather than something we should override.

Contributor

Locally, I can get this test to pass without the override by changing the db_api implementation to execute DDL statements immediately instead of queuing them until a non-DDL statement is executed.

Please add a TODO to remove this override

Contributor Author

OK, but I applied both PRs locally (#28 and googleapis/python-spanner#277), and the test is still failing for me.

Contributor

The required change is shown below:

--- a/google/cloud/spanner_dbapi/cursor.py
+++ b/google/cloud/spanner_dbapi/cursor.py
@@ -174,14 +174,9 @@ class Cursor(object):
         try:
             classification = parse_utils.classify_stmt(sql)
             if classification == parse_utils.STMT_DDL:
-                self.connection._ddl_statements.append(sql)
+                self.connection.database.update_ddl([sql]).result()
                 return
 
-            # For every other operation, we've got to ensure that
-            # any prior DDL statements were run.
-            # self._run_prior_DDL_statements()
-            self.connection.run_prior_DDL_statements()
-
             if not self.connection.autocommit:
                 if classification == parse_utils.STMT_UPDATING:
                     sql = parse_utils.ensure_where_clause(sql)

Contributor

The docs here and in some other places do seem to lean towards batching DDL statements, since they might be quite time-consuming; that was probably the original idea behind this implementation.
Perhaps we can offer a solution where DDL statements are executed immediately in autocommit=True mode, while keeping the current batching implementation in autocommit=False mode.
I think that will be intuitive from the user's perspective.

--- a/google/cloud/spanner_dbapi/cursor.py
+++ b/google/cloud/spanner_dbapi/cursor.py
         try:
             classification = parse_utils.classify_stmt(sql)
             if classification == parse_utils.STMT_DDL:
                 self.connection._ddl_statements.append(sql)
+                if self.connection.autocommit:
+                    self.connection.run_prior_DDL_statements()
                 return
 
             # For every other operation, we've got to ensure that
             # any prior DDL statements were run.
             # self._run_prior_DDL_statements()
             self.connection.run_prior_DDL_statements()
 
             if not self.connection.autocommit:
                 if classification == parse_utils.STMT_UPDATING:
                     sql = parse_utils.ensure_where_clause(sql)

@larkee WDYT?

Contributor Author

@larkee Yes, I was talking about autocommit=False. I think we need to remove the TODO note then.

Contributor

Correct. Please remove it and update the override explanation to mention that DDL statements aren't executed until a non-DDL statement is executed 👍
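A hypothetical rewording of the override note along those lines (the exact wording is up to the author; the constant name here is only for illustration):

```python
# Hypothetical replacement wording for the SPANNER OVERRIDE docstring.
SPANNER_OVERRIDE_NOTE = """
SPANNER OVERRIDE:

The Spanner DB-API queues DDL statements and only executes them once a
non-DDL statement runs, so cleanup (DROP TABLE) from a previous test
may not have taken effect when this test starts.
"""
```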

Contributor Author

@larkee This still doesn't work for me with the changes recommended by Alex; am I missing something here?

def _literal_round_trip(self, type_, input_, output, filter_=None):
    t = Table("int_t", self.metadata, Column("x", type_))
    t.create()

    with db.connect() as conn:
        for value in input_:
            ins = (
                t.insert()
                .values(x=literal(value))
                .compile(
                    dialect=db.dialect,
                    compile_kwargs=dict(literal_binds=True),
                )
            )
            conn.execute(ins)  # throws an error
            conn.execute("SELECT 1")  # didn't execute

        if self.supports_whereclause:
            stmt = t.select().where(t.c.x == literal(value))
        else:
            stmt = t.select()

        stmt = stmt.compile(
            dialect=db.dialect,
            compile_kwargs=dict(literal_binds=True),
        )
        for row in conn.execute(stmt):
            value = row[0]
            if filter_ is not None:
                value = filter_(value)
            assert value in output

Contributor Author

@larkee The test passes the very first time, when the table doesn't exist; after that it fails every time with the error 400 Duplicate name in schema: int_t.
DROP TABLE is still not working properly after merging all the related branches (with the current master branches of spanner and spanner-sqlalchemy). I think the culprit is autocommit=False, because when autocommit=True is set in spanner_dbapi everything works fine and the table is deleted successfully after every test.

A solution might be to execute a non-DDL statement after the DROP TABLE, or to commit manually.
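The failure mode can be sketched with a toy model (hypothetical class and names; the real logic lives in google.cloud.spanner_dbapi): DROP TABLE sits in the connection's DDL queue until a non-DDL statement flushes it, so at the start of the next test run the table still exists and CREATE TABLE reports a duplicate name.

```python
class FakeSpannerConnection:
    """Toy stand-in for the DB-API's DDL queueing behavior (hypothetical)."""

    def __init__(self, autocommit=False):
        self.autocommit = autocommit
        self._ddl_statements = []  # DDL is queued here, as in the real connection
        self.tables = {"int_t"}    # pretend the table already exists

    def run_prior_ddl_statements(self):
        # Apply every queued DDL statement, then clear the queue.
        for stmt in self._ddl_statements:
            words = stmt.split()
            if words[0] == "DROP":
                self.tables.discard(words[-1])
            elif words[0] == "CREATE":
                if words[2] in self.tables:
                    raise RuntimeError("Duplicate name in schema: " + words[2])
                self.tables.add(words[2])
        self._ddl_statements.clear()

    def execute(self, sql):
        if sql.split()[0] in ("CREATE", "DROP", "ALTER"):
            self._ddl_statements.append(sql)
            if self.autocommit:  # the change proposed above
                self.run_prior_ddl_statements()
            return
        # Any non-DDL statement flushes the queued DDL first.
        self.run_prior_ddl_statements()


conn = FakeSpannerConnection(autocommit=False)
conn.execute("DROP TABLE int_t")   # only queued, not executed
assert "int_t" in conn.tables      # the table still exists

conn.execute("SELECT 1")           # a non-DDL statement flushes the queue
assert "int_t" not in conn.tables  # now the DROP has taken effect
```

This is why running any non-DDL statement (or flushing in autocommit mode) after the DROP TABLE makes the tests pass.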

Contributor

> Solution might be execute non-DDL statement after the DROP TABLE

This is what I would suggest but I'm not sure where DROP TABLE is called between tests. For now I think the current implementation is fine. We can revisit this another time 👍

test/test_suite.py
larkee changed the title from "fix: integer complaince tests" to "test: fix integer compliance tests" on Mar 18, 2021
larkee merged commit 79695c2 into main on Mar 19, 2021
IlyaFaer deleted the fix_integer_compliance_test branch on November 17, 2021
3 participants