I want each thing that ArchiveBox does internally to be a subcommand exposed directly to users or callable by the Huey job queue system.
This will also have the nice side effect of making the process tree represent what ArchiveBox is doing internally, and it will allow users to kill stuck subtasks independently without stopping the entire `add` import.
The old oneshot will be renamed and joined by a new command to run a single extractor method:
`archivebox snapshot`
Can be run to snapshot an individual URL into the current directory (runs all extractors by default).
```bash
archivebox snapshot --methods=all 'https://example.com/somepage.html'
# creates a subfolder for each extractor method, and an index.html and index.json file in $PWD
```
This works the same way as `oneshot` does now, and I'll alias `oneshot` to the new command so we don't break backwards compatibility.
`archivebox extract`
This runs an individual extractor method and outputs into the current directory.
```bash
archivebox extract --method=PDF --method-args-here 'https://example.com/somepage.html'
# writes output.pdf (and an index.json containing cmd+output for each run) into $PWD using the headless browser
```
After the refactor, `archivebox add` will work by internally enqueuing a job that runs `archivebox snapshot ...` for each imported URL.
The snapshot job in turn enqueues a job for each extractor method needed for that URL.
Each extractor job then runs `archivebox extract --method=...` internally to write its output into the final archive directory.
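The add → snapshot → extract fan-out described above can be sketched with a minimal in-process queue. This is a pure-stdlib stand-in for the planned Huey tasks; the function names and extractor list here are illustrative, not ArchiveBox's actual internals:

```python
from collections import deque

# Hypothetical sketch of the planned job fan-out. In the real refactor
# these would be Huey tasks dispatched to worker processes; a plain
# deque illustrates the same enqueue chain in-process.

queue = deque()

def enqueue(job, *args):
    queue.append((job, args))

def add(urls):
    # `archivebox add`: enqueue one snapshot job per imported URL
    for url in urls:
        enqueue(snapshot, url)

def snapshot(url, methods=('pdf', 'wget', 'screenshot')):
    # `archivebox snapshot`: enqueue one extract job per extractor method
    for method in methods:
        enqueue(extract, url, method)

def extract(url, method):
    # `archivebox extract --method=...`: would shell out and write output
    # into the archive directory; here we just record the work done
    return f'{method}:{url}'

def run_worker():
    # drain the queue, collecting the output of each extract job
    results = []
    while queue:
        job, args = queue.popleft()
        out = job(*args)
        if out is not None:
            results.append(out)
    return results
```

Because each stage is its own job, a stuck extractor could be killed (or retried) without affecting the snapshot or add jobs that spawned it, which is exactly the independence the subcommand split is meant to provide.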