At the moment, when a result is required we do not distinguish between two cases:

1. Cases where we want a table to be materialised in the backend database, but not sent to Python.
2. Cases where we want a result to be 'collected' from the database and sent to Python, but don't need a table to be materialised in the database.
Currently we always use (1), and sometimes call `to_pandas_dataframe` on the resultant SplinkDataFrame to subsequently send the result from the db to Python.

We have no way of doing (2).
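A minimal sketch of the current pattern, using sqlite3 and pandas as a stand-in backend (the class and function shapes here are illustrative, not Splink's actual internals): every SQL execution materialises a physical table and wraps it in a handle, and getting rows into Python always requires a second step.

```python
# Sketch only: illustrates "always materialise, then optionally collect".
# SplinkDataFrame here is a simplified stand-in, not the real class.
import sqlite3

import pandas as pd


class SplinkDataFrame:
    """Handle to a table that has been materialised in the backend."""

    def __init__(self, physical_name, con):
        self.physical_name = physical_name
        self.con = con

    def to_pandas_dataframe(self):
        # The separate 'collect' step: read the already-materialised
        # table back into the Python client.
        return pd.read_sql(f"SELECT * FROM {self.physical_name}", self.con)


def execute_sql(sql, output_table, con):
    # Always case (1): the result is materialised as a physical table,
    # even when the caller only wants the rows in Python.
    con.execute(f"CREATE TABLE {output_table} AS {sql}")
    return SplinkDataFrame(output_table, con)


con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, name TEXT)")
con.execute("INSERT INTO people VALUES (1, 'a'), (2, 'b')")

sdf = execute_sql("SELECT * FROM people", "people_copy", con)
df = sdf.to_pandas_dataframe()  # second step sends the rows to Python
```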
One result of this lack of clarity is that there's lots of fudging in Spark, where dataframes work differently:

- Spark is lazy: calculations only happen when they're 'collected' (which, in Spark, means when we trigger an action like saving to parquet).
- This means that when you execute SQL to 'create a table', you're just queueing up a DAG. It's not executed or materialised.
- Which means we have to manually intervene to tell Spark when we want tables to be physically created.
Quite a bit of this complexity might go away if we allow two forms of SQL execution:

1. SQL statements where we want a table to be materialised and a SplinkDataFrame returned, i.e. we don't immediately need the result in the Python client (e.g. `predict()`).
2. SQL statements where we don't want a table to be materialised, but we do immediately need the result in the Python client (e.g. during EM training, when we compute the new values of m and u).
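The proposed split could look something like the sketch below, again using sqlite3 and pandas for illustration; the function names and the toy `params` table are assumptions, not a concrete API proposal:

```python
# Sketch of the two hypothetical execution modes (names are illustrative).
import sqlite3

import pandas as pd


def execute_sql_to_splink_dataframe(sql, output_table, con):
    """Mode (1): materialise a table in the backend; no rows are sent
    to Python. The return value is just a lightweight handle (here,
    simply the table name standing in for a SplinkDataFrame)."""
    con.execute(f"CREATE TABLE {output_table} AS {sql}")
    return output_table


def execute_sql_to_pandas(sql, con):
    """Mode (2): collect the result straight into the Python client
    without creating any physical table in the database."""
    return pd.read_sql(sql, con)


con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE params (m REAL, u REAL)")
con.execute("INSERT INTO params VALUES (0.9, 0.1)")

# e.g. predict(): we want a persisted table, not the rows themselves
handle = execute_sql_to_splink_dataframe("SELECT * FROM params", "predictions", con)

# e.g. EM training: we need the new m and u values in Python immediately,
# and no intermediate table is ever created
m_u = execute_sql_to_pandas("SELECT m, u FROM params", con)
```

Under Spark, mode (1) would be the place to trigger an action (so the DAG is actually executed and the table physically created), while mode (2) maps naturally onto a plain collect.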