1011 Running sql using sparkconnect should not print full stack trace #1012
base: master
Conversation
src/sql/run/sparkdataframe.py
Outdated
```python
raise exceptions.MissingPackageError("pyspark not installed")

return SparkResultProxy(dataframe, dataframe.columns, should_cache)
try:
```
Please integrate this with `short_errors` (line 175 in 0433444: `short_errors = Bool(`).
By default it should raise the exception; if `short_errors` is True, then just print it.
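The suggested behavior can be sketched as follows. This is a hedged illustration only: the function name `run_spark_query` and its signature are assumptions for the example, not JupySQL's actual implementation.

```python
def run_spark_query(query_fn, short_errors):
    """Execute query_fn; shorten errors when short_errors is True."""
    try:
        return query_fn()
    except Exception as e:
        if short_errors:
            # short_errors=True: print a concise one-line message
            # instead of the full Spark stack trace
            print(f"{type(e).__name__}: {e}")
            return None
        # default: re-raise so the full traceback is visible
        raise
```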
src/sql/run/sparkdataframe.py
Outdated
```python
return SparkResultProxy(dataframe, dataframe.columns, should_cache)
try:
    return SparkResultProxy(dataframe, dataframe.columns, should_cache)
except AnalysisException as e:
```
This `except` is redundant; the `except Exception as e` clause can catch all exceptions.
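A minimal demonstration of the point: a broad `except Exception` already catches subclasses, so a preceding `except AnalysisException` branch adds nothing. `AnalysisException` here is a local stand-in class for the example, not the real pyspark exception.

```python
class AnalysisException(Exception):
    """Stand-in for pyspark's AnalysisException."""

def classify(raise_analysis):
    try:
        if raise_analysis:
            raise AnalysisException("bad SQL")
        raise RuntimeError("other failure")
    except Exception as e:  # catches AnalysisException too
        return type(e).__name__
```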
```diff
@@ -559,6 +559,7 @@ def is_non_sqlalchemy_error(error):
     # Pyspark
     "UNRESOLVED_ROUTINE",
     "PARSE_SYNTAX_ERROR",
+    "AnalysisException",
```
After looking through the code, I think adding AnalysisException here will solve the issue, since PARSE_SYNTAX_ERROR already works as expected.
AnalysisException covers all these error conditions.
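For context, a substring-based classifier like `is_non_sqlalchemy_error` could use the list above as follows. This is a sketch based on the diff context; the exact matching rule in JupySQL is an assumption here.

```python
NON_SQLALCHEMY_MARKERS = [
    # Pyspark
    "UNRESOLVED_ROUTINE",
    "PARSE_SYNTAX_ERROR",
    "AnalysisException",
]

def is_non_sqlalchemy_error(error):
    """Return True if the error text matches a known non-SQLAlchemy marker."""
    return any(marker in str(error) for marker in NON_SQLALCHEMY_MARKERS)
```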
I just need to test it somehow. I'll try packaging jupysql and installing it in a Spark environment.
You can install it like this:

```
pip install git+https://github.com/b1ackout/jupysql@running-sql-using-sparkconnect-should-not-print-full-stack-trace
```
Describe your changes
Issue number
Closes #1011
Checklist before requesting a review
pkgmt format
📚 Documentation preview 📚: https://jupysql--1012.org.readthedocs.build/en/1012/