
Determine how to handle large datasets in the future #100

Open
edwardleardi opened this issue Dec 17, 2020 · 0 comments
Labels
core: Enhancements or bugs related to the core without which PyDAX can't perform its minimal functionality

Comments

@edwardleardi (Collaborator) commented Dec 17, 2020

Adding to post-release epic as it's probably not a priority right now.

We need to determine how we want to handle large datasets, say anything over 2 GB; the Airline dataset, for example, is 81 GB. Relevant discussion started here. This would affect our CI in dax-schemata (related issue: CODAIT/dax-schemata#9).

Investigate Pandas and other packages for their ability to exchange and process data on disk rather than loading it entirely into memory.
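
As a starting point for that investigation, here is a minimal sketch of chunked (out-of-core) reading with Pandas. It assumes a hypothetical local file `airline.csv` and an arbitrary chunk size of one million rows; the point is only that each chunk is a regular DataFrame, so the full dataset never has to fit in memory at once.

```python
import pandas as pd

# Hypothetical path to a large downloaded dataset; not an actual PyDAX path.
CSV_PATH = "airline.csv"

# Iterate over the file in chunks of 1,000,000 rows instead of loading
# the whole (potentially 81 GB) file into memory at once.
total_rows = 0
for chunk in pd.read_csv(CSV_PATH, chunksize=1_000_000):
    # Each chunk is an ordinary DataFrame, so normal Pandas operations apply.
    total_rows += len(chunk)

print(f"Total rows processed: {total_rows}")
```

Packages such as Dask or Vaex could also be worth evaluating here, since they expose Pandas-like APIs while keeping data on disk or loading it lazily.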

@edwardleardi added the core label Dec 17, 2020