As mentioned in #1971, the parquet format supports arbitrary key-value metadata. As `SplinkDataFrame`s now support such metadata (in particular, used for storing table-creation thresholds), it would be nice if this could be written to and read from parquet.
Backend notes:

- `duckdb`: supported, though it seems only (currently, 0.10.0) via a literal struct in the SQL (which would thus need to be constructed carefully), rather than via e.g. a subquery.
- `spark`: doesn't appear to be directly supported; could possibly go via `pyarrow`.
- `athena`: uses `arrow` under the hood, so should be okay.
- `postgres`/`sqlite`: we don't currently have a `to_parquet()`, but could look into implementing one.