[FEATURE] Enable Expectations tests for BigQuery #3219

Merged

Changes from 77 commits (107 commits in total)

Commits
184c04f
first push of somewhat hacky fix
Jul 30, 2021
25a008f
Merged in my work from https://github.com/great-expectations/great_ex…
jdimatteo Jul 30, 2021
1b73b96
Improve documentation for running locally (#3160)
jdimatteo Aug 2, 2021
7223884
Update util.py
Aug 4, 2021
5a21fe6
run all
Aug 4, 2021
5798fef
Merge branch 'develop' into working-branch/enable-bigquery-flag
Aug 4, 2021
3c86f6c
added azure cloud db integration
Aug 4, 2021
375462e
Update azure-pipelines-cloud-db-integration.yml
Aug 4, 2021
1328ed0
lint
Aug 4, 2021
d069670
Update azure-pipelines-cloud-db-integration.yml
Aug 4, 2021
21b3083
update pipeline
Aug 4, 2021
10537f5
Update azure-pipelines-cloud-db-integration.yml
Aug 4, 2021
667294f
Update util.py
Aug 4, 2021
cfc9ca7
remove spaces from schema and columns
Aug 4, 2021
7b2dd0b
linted
Aug 4, 2021
477887d
debug question
Aug 4, 2021
6cf890a
Update util.py
Aug 4, 2021
6c4a2fd
some debugging messages
Aug 5, 2021
7a6907e
Update util.py
Aug 5, 2021
c56964e
better error messages
Aug 5, 2021
555fc5a
oops
Aug 5, 2021
4cc425e
more checks
Aug 5, 2021
afc77c5
Update util.py
Aug 5, 2021
1e7a0b3
adding more to list
Aug 5, 2021
ce7ff46
Update util.py
Aug 5, 2021
d56858e
Revert "Update util.py"
Aug 5, 2021
4976ed6
normal expectations now run
Aug 6, 2021
9b36b6e
old expectation tests and linting
Aug 6, 2021
a24bc9c
linted and old expectations
Aug 6, 2021
f423665
Update test_expectations.py
Aug 6, 2021
601f438
Update azure-pipelines-cloud-db-integration.yml
Aug 6, 2021
88c5208
Update azure-pipelines-cloud-db-integration.yml
Aug 6, 2021
bd88211
push with filtered list
Aug 6, 2021
ea3d55a
Update test_expectations_cfe.py
Aug 6, 2021
eb313d2
Update test_expectations.py
Aug 6, 2021
b34a0e0
making sure the tests run
Aug 6, 2021
649c643
pushing to run tests on Azure
Aug 7, 2021
b85e103
pushing
Aug 7, 2021
f2cba33
Update test_expectations.py
Aug 7, 2021
de81bbc
column_distributional_expectations
Aug 7, 2021
5ed0d41
column_map_expectations
Aug 7, 2021
664e059
Update test_expectations.py
Aug 7, 2021
7f64366
column_map_expectation
Aug 7, 2021
b87ae4d
column pair map expectations
Aug 7, 2021
0d58e65
multi-table expectations
Aug 7, 2021
bb34885
multi-column map expectations
Aug 7, 2021
de2dd98
other_expectations
Aug 7, 2021
6c5efb1
combining both pipelines
Aug 7, 2021
022e184
cleaned up
Aug 7, 2021
1f17e36
combined pipelines as different steps
Aug 8, 2021
7229c3f
Revert "combined pipelines as different steps"
Aug 8, 2021
a2aafe4
adding dependency
Aug 8, 2021
0f703bc
switch the order
Aug 8, 2021
3b4925a
adding disributional tests to skipped
Aug 8, 2021
d9431fc
Merge branch 'develop' into working-branch/enable-bigquery-flag
Aug 9, 2021
ebd1f06
adding checks
Aug 9, 2021
abcb64a
updated pipeline and docs
Aug 10, 2021
c89c395
updated test_list
Aug 10, 2021
84fb721
testing 3 remaining expectations
Aug 10, 2021
5635983
testing 3 remaining expectations no sleep
Aug 10, 2021
a9c6163
added fix
Aug 10, 2021
d5e04d5
push before PR issued
Aug 11, 2021
d9543c7
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 11, 2021
0f8aa53
one more check
Aug 11, 2021
069ca11
Update azure-pipelines-cloud-db-integration.yml
Aug 11, 2021
c55431e
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 11, 2021
0eff30c
2 more tests added to notimplemented
Aug 11, 2021
161579a
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 11, 2021
082eac5
Update azure-pipelines-cloud-db-integration.yml
Aug 11, 2021
3061d1f
Test with Monday 3PM chron
Aug 11, 2021
e94188f
remove cron scheduling so it can be set on azure
Aug 11, 2021
c5ebe6a
Update azure-pipelines-cloud-db-integration.yml
Aug 11, 2021
3f1ba9e
testing cleaner if statement
Aug 12, 2021
3580cf4
take off sleep message
Aug 12, 2021
7a06de4
testing cleaner if statement
Aug 12, 2021
f3cfe32
Revert "testing cleaner if statement"
Aug 12, 2021
2a9c6fc
Revert "testing cleaner if statement"
Aug 12, 2021
3ae0eb1
after review
Aug 13, 2021
85e1f42
minor clean up
Aug 13, 2021
af2fde7
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 13, 2021
84355d6
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 14, 2021
026cd19
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 16, 2021
f11293c
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
f954297
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
d49d09a
Update great_expectations/dataset/sqlalchemy_dataset.py
Aug 17, 2021
44a2e5a
Update docs/contributing/contributing_test.md
Aug 17, 2021
3c463a5
Update great_expectations/expectations/metrics/util.py
Aug 17, 2021
609c4f7
Update great_expectations/dataset/sqlalchemy_dataset.py
Aug 17, 2021
ced8994
Update tests/test_definitions/test_expectations.py
Aug 17, 2021
c25846d
try matrix
Aug 17, 2021
78fcca7
Revert "try matrix"
Aug 17, 2021
f07193c
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
7b8c7a9
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
5b90ebf
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
a28280b
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
65e7db9
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
4931f67
oops
Aug 17, 2021
1bafb0c
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
2b9e6b1
added notes and issue
Aug 17, 2021
1fda0f8
try with conversion
Aug 17, 2021
637da06
Make BigQuery SqlAlchemy dialect conform to "dialect" attribute conve…
jdimatteo Aug 17, 2021
d0e63a7
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 17, 2021
13ed1d4
clean up
Aug 17, 2021
aad54a8
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
78d89c2
Update azure-pipelines-cloud-db-integration.yml
Aug 17, 2021
06dfad9
Merge branch 'develop' into feature/GREAT-66/enable-bigquery-tests-fo…
Aug 17, 2021
4d251dc
updated after final review
Aug 18, 2021
112 changes: 112 additions & 0 deletions azure-pipelines-cloud-db-integration.yml
@@ -0,0 +1,112 @@
stages:
  - stage: lint
    pool:
      vmImage: 'ubuntu-latest'

    jobs:
      - job: lint
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: 3.7
            displayName: 'Use Python 3.7'

          - script: |
              pip install isort[requirements]==5.4.2 flake8==3.8.3 black==20.8b1 pyupgrade==2.7.2
              EXIT_STATUS=0
              isort . --check-only --skip docs/ || EXIT_STATUS=$?
              black --check --exclude docs/ . || EXIT_STATUS=$?
              flake8 great_expectations/core || EXIT_STATUS=$?
              pyupgrade --py3-plus || EXIT_STATUS=$?
              exit $EXIT_STATUS

  - stage: cloud_db_integration_expectations_cfe
    pool:
      vmImage: 'ubuntu-latest'

    dependsOn: [lint]
Member:

On second thought, do we actually require lint? We are doing this check on develop, a branch that strictly disallows code to be merged if lint fails on its own pipeline. As such, aren't we guaranteed to have properly linted code by the time this check rolls around?

Contributor Author:

Great point. Linting is definitely handled by this point.


    jobs:
      - job: bigquery_expectations_cfe
        timeoutInMinutes: 0 # Maximize the time that pipelines remain open (6 hours currently)

        variables:
          python.version: '3.8'

        steps:
          # Delay the execution of the second stage so that we do not hit the rate limit for BigQuery
          # - bash: sleep 5m
          #   displayName: Delay for BigQuery rate limit

          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(python.version)'
            displayName: 'Use Python $(python.version)'

          - bash: python -m pip install --upgrade pip==20.2.4
            displayName: 'Update pip'

          - script: |
              pip install -r requirements-dev.txt

            displayName: 'Install dependencies'

          - task: DownloadSecureFile@1
            name: gcp_authkey
            displayName: 'Download Google Service Account'
            inputs:
              secureFile: 'superconductive-service-acct.json'
              retryCount: '2'

          - script: |
              pip install pytest pytest-azurepipelines
              pytest -v --no-spark --no-postgresql --bigquery --napoleon-docstrings --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html --ignore=tests/cli --ignore=tests/integration/usage_statistics tests/test_definitions/test_expectations_cfe.py

            displayName: 'pytest'
            env:
              GOOGLE_APPLICATION_CREDENTIALS: $(gcp_authkey.secureFilePath)
              GCP_PROJECT: $(GCP_PROJECT)
              GCP_BIGQUERY_DATASET: $(GCP_BIGQUERY_DATASET)
  - stage: cloud_db_integration_expectations
    pool:
      vmImage: 'ubuntu-latest'

    dependsOn: [cloud_db_integration_expectations_cfe, lint]
    jobs:
      - job: bigquery_expectations
        timeoutInMinutes: 0 # Maximize the time that pipelines remain open (6 hours currently)

        variables:
          python.version: '3.8'

        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(python.version)'
            displayName: 'Use Python $(python.version)'

          - bash: python -m pip install --upgrade pip==20.2.4
            displayName: 'Update pip'

          - script: |
              pip install -r requirements-dev.txt

            displayName: 'Install dependencies'

          - task: DownloadSecureFile@1
            name: gcp_authkey
            displayName: 'Download Google Service Account'
            inputs:
              secureFile: 'superconductive-service-acct.json'
              retryCount: '2'

          - script: |
              pip install pytest pytest-azurepipelines
              pytest -v --no-spark --no-postgresql --bigquery --napoleon-docstrings --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html --ignore=tests/cli --ignore=tests/integration/usage_statistics tests/test_definitions/test_expectations.py

            displayName: 'pytest'
            env:
              GOOGLE_APPLICATION_CREDENTIALS: $(gcp_authkey.secureFilePath)
              GCP_PROJECT: $(GCP_PROJECT)
              GCP_BIGQUERY_DATASET: $(GCP_BIGQUERY_DATASET)
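Both pytest stages above authenticate through `GOOGLE_APPLICATION_CREDENTIALS` pointing at the downloaded service-account key. As a hedged aside (not part of this PR's diff), this is roughly how the Google client libraries resolve those credentials inside the job:

```python
# Minimal sketch, assuming google-auth is installed and
# GOOGLE_APPLICATION_CREDENTIALS points at a valid service-account JSON key.
import google.auth

# default() searches GOOGLE_APPLICATION_CREDENTIALS (among other sources) and
# returns the credentials plus the project id associated with them.
credentials, project_id = google.auth.default()
print(f"Authenticated against project: {project_id}")
```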
22 changes: 21 additions & 1 deletion docs/contributing/contributing_test.md
@@ -17,6 +17,26 @@ For example, you can run `pytest --no-spark --no-sqlalchemy` to skip all local b

Note: as of early 2020, the tests generate many warnings. Most of these are generated by dependencies (pandas, sqlalchemy, etc.). You can suppress them with pytest’s `--disable-pytest-warnings` flag: `pytest --no-spark --no-sqlalchemy --disable-pytest-warnings`.

#### BigQuery tests

To run BigQuery tests, you first need to go through the following steps:

1. [Select or create a Cloud Platform project](https://console.cloud.google.com/project).
2. [Set up authentication](https://googleapis.dev/python/google-api-core/latest/auth.html).
3. In your project, [create a BigQuery dataset named `test_ci`](https://cloud.google.com/bigquery/docs/datasets) and [set the dataset default table expiration to `0.1` days](https://cloud.google.com/bigquery/docs/updating-datasets#table-expiration) (a scripted sketch follows this list).
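If you prefer to script step 3, here is a minimal sketch using the `google-cloud-bigquery` client library (the client call is an illustration, not part of this PR; authentication from step 2 is assumed to be configured):

```python
# Minimal sketch, assuming google-cloud-bigquery is installed and
# authentication (step 2) is already configured.
from google.cloud import bigquery

client = bigquery.Client(project="<YOUR_GOOGLE_CLOUD_PROJECT>")

# Create the test_ci dataset with a default table expiration of 0.1 days
# (8,640,000 ms, i.e. about 2.4 hours) so CI test tables clean themselves up.
dataset = bigquery.Dataset(f"{client.project}.test_ci")
dataset.default_table_expiration_ms = int(0.1 * 24 * 60 * 60 * 1000)
client.create_dataset(dataset, exists_ok=True)
```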

After setting up authentication, you can run the tests against your own project by setting the environment variable `GE_TEST_BIGQUERY_PROJECT`, e.g.

```bash
GE_TEST_BIGQUERY_PROJECT=<YOUR_GOOGLE_CLOUD_PROJECT> pytest tests/test_definitions/test_expectations_cfe.py --bigquery --no-spark --no-postgresql
```

Member:

Super nitpicky but could we capitalize any usages of BigQuery in this file (and any other .md)?

Contributor Author:

done!

### Writing unit and integration tests

Production code in Great Expectations must be thoroughly tested. In general, we insist on unit tests for all branches of every method, including likely error states. Most new feature contributions should include several unit tests. Contributions that modify or extend existing features should include a test of the new behavior.
@@ -25,7 +45,7 @@ Experimental code in Great Expectations need only be tested lightly. We are movi

Most of Great Expectations’ integration testing is in the CLI, which naturally exercises most of the core code paths. Because integration tests require a lot of developer time to maintain, most contributions should not include new integration tests, unless they change the CLI itself.

- Note: we do not currently test Great Expectations against all types of SQL database. CI test coverage for SQL is limited to postgresql and sqlite. We have observed some bugs because of unsupported features or differences in SQL dialects, and we are actively working to improve dialect-specific support and testing.
+ Note: we do not currently test Great Expectations against all types of SQL database. CI test coverage for SQL is limited to postgresql, sqlite, mssql, and bigquery. We have observed some bugs because of unsupported features or differences in SQL dialects, and we are actively working to improve dialect-specific support and testing.

### Unit tests for Expectations
One of Great Expectations’ important promises is that the same Expectation will produce the same result across all supported execution environments: pandas, sqlalchemy, and Spark.
22 changes: 21 additions & 1 deletion docs_rtd/contributing/testing.rst
@@ -34,6 +34,26 @@ Note: as of early 2020, the tests generate many warnings. Most of these are gene

.. _contributing_testing__writing_unit_tests:

Running BigQuery tests
----------------------

To run BigQuery tests, you first need to go through the following steps:

1. `Select or create a Cloud Platform project.`_
2. `Set up authentication.`_
3. `In your project, create a BigQuery dataset named "test_ci"`_ and `set the dataset default table expiration to 0.1 days`_

.. _`Select or create a Cloud Platform project.`: https://console.cloud.google.com/project
.. _`Set up authentication.`: https://googleapis.dev/python/google-api-core/latest/auth.html
.. _`In your project, create a BigQuery dataset named "test_ci"`: https://cloud.google.com/bigquery/docs/datasets
.. _`set the dataset default table expiration to 0.1 days`: https://cloud.google.com/bigquery/docs/updating-datasets#table-expiration

After setting up authentication, you can run the tests against your own project by setting the environment variable ``GE_TEST_BIGQUERY_PROJECT``, e.g.

.. code-block:: bash

   GE_TEST_BIGQUERY_PROJECT=<YOUR_GOOGLE_CLOUD_PROJECT> pytest tests/test_definitions/test_expectations_cfe.py --bigquery --no-spark --no-postgresql -k bigquery

Writing unit and integration tests
----------------------------------

@@ -43,7 +63,7 @@ Experimental code in Great Expectations need only be tested lightly. We are movi

Most of Great Expectations' integration testing is in the CLI, which naturally exercises most of the core code paths. Because integration tests require a lot of developer time to maintain, most contributions should *not* include new integration tests, unless they change the CLI itself.

- Note: we do not currently test Great Expectations against all types of SQL database. CI test coverage for SQL is limited to postgresql and sqlite. We have observed some bugs because of unsupported features or differences in SQL dialects, and we are actively working to improve dialect-specific support and testing.
+ Note: we do not currently test Great Expectations against all types of SQL database. CI test coverage for SQL is limited to postgresql, sqlite, mssql, and bigquery. We have observed some bugs because of unsupported features or differences in SQL dialects, and we are actively working to improve dialect-specific support and testing.


Unit tests for Expectations
9 changes: 6 additions & 3 deletions great_expectations/dataset/sqlalchemy_dataset.py
@@ -548,6 +548,7 @@ def __init__(
"sqlite",
"oracle",
"mssql",
"bigquery",
]:
# These are the officially included and supported dialects by sqlalchemy
self.dialect = import_library_module(
@@ -571,6 +572,10 @@ def __init__(
              self.dialect = import_library_module(
                  module_name="pyathena.sqlalchemy_athena"
              )
+         elif self.engine.dialect.name.lower() == "bigquery":
+             self.dialect = import_library_module(
+                 module_name="pybigquery.sqlalchemy_bigquery"
+             )
Member:

More of a style comment, but would it be possible to clean up this if/elif/else branch?

Suggested change (replacing the `elif` branch above):

```python
dialect_name: str = self.engine.dialect.name.lower()
if dialect_name == "something":
    ...
elif dialect_name == "bigquery":
    ...
else:
    ...
```

Very minor, but a nice boost to readability (plus we don't need to recreate a str each time!)

          else:
              self.dialect = None

@@ -2093,9 +2098,7 @@ def _get_dialect_like_pattern_expression(self, column, like_pattern, positive=Tr

      try:
          # Bigquery
-         if isinstance(
-             self.sql_engine_dialect, pybigquery.sqlalchemy_bigquery.BigQueryDialect
-         ):
+         if hasattr(self.sql_engine_dialect, "BigQueryDialect"):
Member:

Could we perhaps log something if we hit an exception? I know logging was causing some issues earlier, so avoid it if it blocks you. I was just thinking that it might be useful to get some more information than a standard pass.

Contributor Author:

Added

              dialect_supported = True
      except (
          AttributeError,
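Following up on the logging thread above, a minimal sketch of what a hardened check with logging could look like (the helper name, logger, and message are assumptions for illustration, not the exact code merged here):

```python
import logging

logger = logging.getLogger(__name__)

try:
    import pybigquery.sqlalchemy_bigquery
except ImportError:
    # Mirror the driver-optional import pattern used in the module under review.
    pybigquery = None


def bigquery_dialect_supported(sql_engine_dialect) -> bool:
    # Hypothetical helper: detect the BigQuery dialect, logging failures
    # instead of silently passing.
    try:
        return isinstance(
            sql_engine_dialect, pybigquery.sqlalchemy_bigquery.BigQueryDialect
        )
    except (AttributeError, TypeError) as exc:
        # AttributeError/TypeError occur when pybigquery is None because the
        # driver is not installed.
        logger.debug("BigQuery dialect detection failed: %s", exc)
        return False
```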
@@ -62,20 +62,21 @@ def _sqlalchemy_window(cls, column, _table, **kwargs):
          # the column we will be performing the expectation on, and the query is performed against it.
          dialect = kwargs.get("_dialect", None)
          sql_engine = kwargs.get("_sqlalchemy_engine", None)
-         if sql_engine and dialect and dialect.dialect.name == "mysql":
-             temp_table_name = f"ge_tmp_{str(uuid.uuid4())[:8]}"
-             temp_table_stmt = "CREATE TEMPORARY TABLE {new_temp_table} AS SELECT tmp.{column_name} FROM {source_table} tmp".format(
-                 new_temp_table=temp_table_name,
-                 source_table=_table,
-                 column_name=column.name,
-             )
-             sql_engine.execute(temp_table_stmt)
-             dup_query = (
-                 sa.select([column])
-                 .select_from(sa.text(temp_table_name))
-                 .group_by(column)
-                 .having(sa.func.count(column) > 1)
-             )
+         if sql_engine and dialect:
+             if hasattr(dialect, "dialect") and dialect.dialect.name == "mysql":
Contributor:

I think it would be cleaner to force the sqlalchemy dialect to follow the convention of including a dialect attribute.

It appears there are many usages of dialect, e.g. `git grep dialect.name | wc -l` shows 87 usages of dialect. I don't think it is maintainable to add special logic for bigquery for every usage, today and in the future.

Please consider merging #3259 into this branch, which includes forcing BigQuery to have this dialect attribute. Note that I also created a PR to fix this upstream with googleapis/python-bigquery-sqlalchemy#251, but #3259 works in the meantime.

Contributor Author:

Thank you very (very) much for this 🙇🏼

temp_table_name = f"ge_tmp_{str(uuid.uuid4())[:8]}"
Contributor Author:

This is the bugfix for #2978 and #2959, caused by a quirk in how the BigQueryDialect is imported.

Member:

nice work!

+                 temp_table_stmt = "CREATE TEMPORARY TABLE {new_temp_table} AS SELECT tmp.{column_name} FROM {source_table} tmp".format(
+                     new_temp_table=temp_table_name,
+                     source_table=_table,
+                     column_name=column.name,
+                 )
+                 sql_engine.execute(temp_table_stmt)
+                 dup_query = (
+                     sa.select([column])
+                     .select_from(sa.text(temp_table_name))
+                     .group_by(column)
+                     .having(sa.func.count(column) > 1)
+                 )
          return column.notin_(dup_query)

      @column_condition_partial(
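Tying the review threads above together: a minimal sketch (not the literal contents of #3259) of the kind of shim that gives the BigQuery SQLAlchemy module the module-level `dialect` attribute, which is what lets call sites keep using `dialect.dialect.name` uniformly:

```python
# Minimal sketch, assuming pybigquery is installed. sqlalchemy's bundled
# dialect modules (e.g. sqlalchemy.dialects.mysql) expose a module-level
# `dialect` attribute; pybigquery.sqlalchemy_bigquery did not at the time of
# this PR, which is why BigQuery needed special-casing at call sites.
import pybigquery.sqlalchemy_bigquery

if not hasattr(pybigquery.sqlalchemy_bigquery, "dialect"):
    pybigquery.sqlalchemy_bigquery.dialect = (
        pybigquery.sqlalchemy_bigquery.BigQueryDialect
    )
```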
28 changes: 15 additions & 13 deletions great_expectations/expectations/metrics/util.py
@@ -123,7 +123,7 @@ def get_dialect_regex_expression(column, regex, dialect, positive=True):

      try:
          # Bigquery
-         if issubclass(dialect.dialect, pybigquery.sqlalchemy_bigquery.BigQueryDialect):
+         if hasattr(dialect, "BigQueryDialect"):
              if positive:
                  return sa.func.REGEXP_CONTAINS(column, literal(regex))
              else:
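For context on the branch above, a small self-contained illustration of the expression it builds for BigQuery (the column name and pattern are made up; only sqlalchemy is needed to compile it):

```python
# Minimal sketch: sa.func.<NAME> produces a generic SQL function call, so this
# compiles to roughly REGEXP_CONTAINS(name, :param_1).
import sqlalchemy as sa
from sqlalchemy import column, literal

expr = sa.func.REGEXP_CONTAINS(column("name"), literal(r"^ge_tmp_"))
print(expr)
```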
@@ -251,10 +251,10 @@ def column_reflection_fallback(
columns_query: str = f"""
SELECT
SCHEMA_NAME(tab.schema_id) AS schema_name,
tab.name AS table_name,
tab.name AS table_name,
col.column_id AS column_id,
col.name AS column_name,
t.name AS column_data_type,
col.name AS column_name,
t.name AS column_data_type,
col.max_length AS column_max_length,
col.precision AS column_precision
FROM sys.tables AS tab
@@ -264,7 +264,7 @@
                  ON col.user_type_id = t.user_type_id
          WHERE tab.name = '{selectable}'
          ORDER BY schema_name,
              table_name,
              column_id
          """
          col_info_query: TextClause = sa.text(columns_query)
@@ -319,21 +319,23 @@ def get_dialect_like_pattern_expression(column, dialect, like_pattern, positive=

      try:
          # Bigquery
-         if isinstance(dialect, pybigquery.sqlalchemy_bigquery.BigQueryDialect):
+         if hasattr(dialect, "BigQueryDialect"):
Member:

Same comment about pass here.

              dialect_supported = True
      except (
          AttributeError,
          TypeError,
      ):  # TypeError can occur if the driver was not installed and so is None
          pass

-     if issubclass(
-         dialect.dialect,
-         (
-             sa.dialects.sqlite.dialect,
-             sa.dialects.postgresql.dialect,
-             sa.dialects.mysql.dialect,
-             sa.dialects.mssql.dialect,
-         ),
-     ):
+     if hasattr(dialect, "dialect") and issubclass(
+         dialect.dialect,
+         (
+             sa.dialects.sqlite.dialect,
+             sa.dialects.postgresql.dialect,
+             sa.dialects.mysql.dialect,
+             sa.dialects.mssql.dialect,
+         ),
+     ):
          dialect_supported = True