
[Feature request] Store local copy of webpages #132

Open
lardissone opened this issue May 2, 2024 · 5 comments
@lardissone
Like Raindrop does, allow storing a copy of the webpage.

There's a bunch of options to accomplish this.
Here's a nice list of tools to use for inspiration: awesome web archiving

@scubanarc

Monolith might be a good choice here... it's a CLI tool and works well.
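For reference, a minimal sketch of how Monolith is typically invoked from the command line. The URL and output filename are placeholders, and the exact flags should be checked against `monolith --help` for the installed version:

```shell
# Sketch: Monolith fetches a page and bundles its assets (images are
# inlined as data: URLs by default) into one self-contained HTML file.
monolith 'https://example.com/article' -o article.html

# Optionally strip JavaScript (-j) and isolate the result from the
# network with a strict Content-Security-Policy (-I):
monolith 'https://example.com/article' -j -I -o article.html
```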

@MohamedBassem
Collaborator

btw, hoarder already stores a local copy of the crawled content. That's what you see in the bookmark preview page.

As of right now, though, it doesn't include the images. It also stores only the readable parts of the page, not the entire page.

I've seen monolith before and I think it's cool, I might give it a try :)

@lardissone
Author

@MohamedBassem yes, I've seen that, but the images are a must. I usually store articles with explanatory figures embedded, and keeping only the text makes them useless.

@MohamedBassem
Collaborator

@lardissone that makes sense. I think I can give monolith a try and see how it goes. I'll see if I can include it in the next release.

@lardissone
Author

Awesome! You rock!
