Ability to get meta information for non webpages #112
I tried to write a compatible HTTP client using Guzzle. I extended the interface with a couple of methods to separate body and headers retrieval, so that we could retrieve only the headers for large files. I have used it like this.
What do you think about making an extended interface, so that more complex providers could be written? It seems like a reasonable solution: extended providers can be created, and it shouldn't break compatibility.
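The interface split described above might look something like the following sketch (Essence itself is PHP; this is a language-neutral Python illustration, and the names `HttpClient`, `ExtendedHttpClient`, and `get_headers` are hypothetical, not part of Essence's actual API):

```python
from abc import ABC, abstractmethod


class HttpClient(ABC):
    """Base client: fetch a full response body."""

    @abstractmethod
    def get(self, url: str) -> str:
        ...


class ExtendedHttpClient(HttpClient):
    """Extended client: headers can be fetched without the body,
    so large files are never downloaded just to inspect them."""

    @abstractmethod
    def get_headers(self, url: str) -> dict:
        ...


class FakeClient(ExtendedHttpClient):
    """Toy implementation used purely for illustration."""

    def get(self, url: str) -> str:
        return "<html></html>"

    def get_headers(self, url: str) -> dict:
        # In a real Guzzle-backed client this would issue a HEAD request.
        return {"Content-Type": "text/html", "Content-Length": "13"}


client = FakeClient()
headers = client.get_headers("http://example.com/page")
```

Because `ExtendedHttpClient` subclasses the base interface, existing providers that only call `get()` keep working, which is the backward-compatibility point made above.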
The ability to extract metadata from files could be a cool thing :)
Essence doesn't need to do anything: there is a filter that matches URLs to providers, so fetching the metadata and forming the output can be put on the shoulders of the providers. Essence just selects a provider and forwards the URL to it. :) The default provider will of course need to make a headers request, because no one knows what the response will be; for the other providers it is optional, I think.
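The select-and-forward dispatch described above can be sketched as follows (again a hypothetical Python illustration, not Essence's real filter; the pattern list, `DefaultProvider`, and the stubbed headers request are all assumptions):

```python
import re


class VideoProvider:
    """A provider that knows how to handle a specific URL pattern."""

    def fetch(self, url: str) -> dict:
        # A real provider would call the site's oEmbed/API endpoint here.
        return {"url": url, "title": "Video title"}


class DefaultProvider:
    """Fallback provider: knows nothing about the URL, so it starts
    with a headers request before deciding what to do (stubbed here)."""

    def fetch(self, url: str) -> dict:
        headers = {"Content-Type": "application/octet-stream"}  # stubbed HEAD
        return {"url": url, "type": headers["Content-Type"]}


PROVIDERS = [
    (re.compile(r"youtube\.com/watch"), VideoProvider()),
]


def select_provider(url: str):
    # First pattern that matches wins; otherwise fall back to the default.
    for pattern, provider in PROVIDERS:
        if pattern.search(url):
            return provider
    return DefaultProvider()


def extract(url: str) -> dict:
    return select_provider(url).fetch(url)
```

The headers request lives only in `DefaultProvider`, matching the point that pattern-matched providers can skip it.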
If we use Essence as an end-user link parser, sometimes people add links directly to files/videos. Getting full metadata then is almost impossible, but if we could check the headers, we can form imageUrl, a title (from the filename), and providerName/providerUrl (guessed from the domain). If the file is small, we can also try to read its embedded metadata. Checking the headers of the requested URL will also help if there is a default provider and someone passes a link to a Debian ISO :) we can see it is a big file, mark it in the resulting data, and close the request.
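The header-based fallback above can be sketched like this (a hypothetical Python illustration; the `MAX_BYTES` limit, the `tooLarge` flag, and the field names are assumptions, loosely modeled on Essence's output keys):

```python
from posixpath import basename
from urllib.parse import urlparse

MAX_BYTES = 10 * 1024 * 1024  # hypothetical size cutoff for "big file"


def metadata_from_headers(url: str, headers: dict) -> dict:
    """Build best-effort metadata for a direct file link
    using only the URL and the response headers."""
    parsed = urlparse(url)
    meta = {
        "title": basename(parsed.path),              # title from the filename
        "providerName": parsed.hostname,             # guessed from the domain
        "providerUrl": f"{parsed.scheme}://{parsed.hostname}",
    }
    if headers.get("Content-Type", "").startswith("image/"):
        meta["imageUrl"] = url                       # the file itself is the image
    if int(headers.get("Content-Length", 0)) > MAX_BYTES:
        # The Debian-ISO case: mark it and stop instead of downloading.
        meta["tooLarge"] = True
    return meta
```

For example, a PNG link yields a title, a provider guess, and an imageUrl, while a multi-gigabyte ISO just gets flagged as too large.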