
Add method of populating pages for older crawls #1597

Closed
tw4l opened this issue Mar 14, 2024 · 0 comments · Fixed by #1562
tw4l commented Mar 14, 2024

We debated and tried implementing this both as a database migration and as API endpoints, and settled on the latter for flexibility and timing.

tw4l self-assigned this Mar 14, 2024
ikreymer pushed a commit that referenced this issue Mar 19, 2024
Fixes #1597

Adds new endpoints (replacing the old migration) to re-add crawl pages to the
database from WACZs.

After a few implementation attempts, we settled on using
[remotezip](https://github.com/gtsystem/python-remotezip) to handle
parsing of the zip files and to stream the pages content line by line.
I've also modified the sync log streaming to use remotezip, which allows
us to remove our own zip module and let remotezip handle the complexity
of parsing zip files.
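For illustration, a minimal sketch of that pattern, assuming the standard WACZ layout where page records live in `pages/pages.jsonl`; the function name and presigned URL are hypothetical, not the actual Browsertrix code:

```python
import json

from remotezip import RemoteZip

PAGES_FILE = "pages/pages.jsonl"  # standard location of page records in a WACZ


def iter_pages(presigned_url):
    """Yield one parsed page record per JSONL line, streamed from a remote WACZ."""
    # RemoteZip subclasses zipfile.ZipFile and fetches only the byte ranges
    # it needs via HTTP range requests, so the whole WACZ is never downloaded
    with RemoteZip(presigned_url) as wacz:
        with wacz.open(PAGES_FILE) as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                record = json.loads(line)
                # skip the JSONL header record ({"format": "json-pages-1.0", ...})
                if "format" in record:
                    continue
                yield record
```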

Database inserts for pages from WACZs are batched 100 at a time to speed
up the endpoint, and the task is kicked off with asyncio.create_task so
that the endpoint can return a response without blocking on the inserts.
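A rough sketch of that batching pattern; the collection and function names here are assumptions for illustration, not the actual implementation:

```python
import asyncio

BATCH_SIZE = 100


async def insert_pages(pages_collection, page_docs):
    """Insert page documents in batches of 100 to cut database round trips."""
    batch = []
    for doc in page_docs:
        batch.append(doc)
        if len(batch) >= BATCH_SIZE:
            # assuming an async driver such as Motor, whose insert_many is awaitable
            await pages_collection.insert_many(batch)
            batch = []
    if batch:
        await pages_collection.insert_many(batch)
```

In the endpoint handler, the coroutine would be scheduled with `asyncio.create_task(insert_pages(...))` rather than awaited, so the HTTP response returns immediately while the inserts continue in the background.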

StorageOps now contains a method for streaming the bytes of any file in
a remote WACZ, requiring only the presigned URL for the WACZ and the
name of the file to stream.
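A minimal sketch of such a helper, again assuming remotezip; the method name and chunk size are illustrative only:

```python
from typing import Iterator

from remotezip import RemoteZip


def stream_wacz_file(presigned_url: str, filename: str,
                     chunk_size: int = 65536) -> Iterator[bytes]:
    """Stream the bytes of a single file inside a remote WACZ, chunk by chunk."""
    with RemoteZip(presigned_url) as wacz:
        with wacz.open(filename) as fh:
            while chunk := fh.read(chunk_size):
                yield chunk
```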