
Question: check if page is readable? #572

Open
zirkelc opened this issue Apr 23, 2024 · 7 comments
Labels
question Further information is requested

Comments

@zirkelc
Contributor

zirkelc commented Apr 23, 2024

I was wondering if there is a function like Readability's isProbablyReaderable? I have a case where I can extract the content of a page with Trafilatura, but the extracted content is actually meaningless. In that case it would be good to have some kind of score so I can throw away results below a certain threshold. I could use the content length, but does Trafilatura itself calculate a content score internally that could be exposed?
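As an aside, the length-based fallback mentioned above could look roughly like this. This is a crude stand-in for Readability's isProbablyReaderable, not anything Trafilatura provides, and the thresholds are made up for illustration:

```python
def is_probably_meaningful(text, min_chars=250, min_words=40):
    """Crude readability check: treat extraction results below rough
    length thresholds as noise. Thresholds are illustrative only."""
    if not text:  # covers None and the empty string
        return False
    return len(text) >= min_chars and len(text.split()) >= min_words
```

In practice this would be called on the string returned by the extractor, discarding the result when it returns False.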

@adbar
Owner

adbar commented Apr 23, 2024

Hi @zirkelc, it makes perfect sense to return a confidence score to the user. I used a binary criterion at some point and then removed it. The scoring itself would need to be implemented before such a function could be added.

@zirkelc
Contributor Author

zirkelc commented Apr 23, 2024

Hi @adbar, thanks for the quick response! Are you planning to add this feature? I could help out if you can share some implementation details.

@adbar
Owner

adbar commented Apr 24, 2024

Trafilatura actually uses a combination of several extractors, so the different scores wouldn't be commensurable.

The best we could do would be to mimic a score and/or try to give the user useful information about extraction quality. Feel free to suggest something if you have an idea.

adbar added the question label Apr 24, 2024
@zirkelc
Contributor Author

zirkelc commented Apr 28, 2024

I thought about some options, but as you said, the combination of different extractors makes it difficult to calculate a real score.

One idea that might be feasible: let's say you run 3 different extractors (baseline, justext, readability) plus html2text for one page. Then remove all special characters such as # ( ) [ ] | - _ *, which are normally used for formatting, images, links and tables, so that only the plain text remains. The entire text is then lowercased, split into individual words, and stripped of all line breaks and spaces. This gives you a kind of corpus for each extractor. Then you compare the word count of each extractor with the word count of the raw text from html2text and use the difference or ratio to calculate some sort of confidence score.

You might even convert the word list of each extractor and html2text into a set to get a unique word list. This makes the comparison between extractor and html2text even more meaningful. My assumption here is that the main content of a page consists of words with a high variance. The other elements such as header, footer and navigation only have a few unique words. This means that if the number of unique words in an extractor is within a certain range of the number of unique words in the html2text extraction (e.g. 50%, 75%, etc.), the result will be more meaningful. For pages without main content, the number of words in the header, footer, navigation etc. would probably be much higher than the result of the extractor and would therefore lead to a low score.

This is of course just a naive assumption without knowing the internals of trafilatura, so please feel free to let me know if this idea is complete nonsense :-D
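A minimal sketch of the unique-word idea described above. The character list, function names and the exact ratio are mine, not Trafilatura's:

```python
import re

def word_set(text):
    """Strip formatting characters (# ( ) [ ] | - _ *), lowercase,
    and split into a set of unique words."""
    cleaned = re.sub(r"[#()\[\]|\-_*]", " ", text.lower())
    return set(cleaned.split())

def confidence(extracted, full_text):
    """Share of the full html2text vocabulary that the extractor kept.
    Near 1.0: the extractor retained most of the page's unique words;
    near 0.0: most of the vocabulary lives outside the extracted content."""
    full = word_set(full_text)
    if not full:
        return 0.0
    return len(word_set(extracted) & full) / len(full)
```

For example, an extraction keeping half of the page's unique words would score 0.5, and a threshold (say 0.5 or 0.75, as suggested above) would decide whether to keep the result.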

@adbar
Owner

adbar commented Apr 29, 2024

Interesting idea, your "words among html2text" metric probably works well if you have several webpages from the same source. Then the variance will indeed come mostly from the main text. This is what Trafilatura's deduplication option does to some extent: it filters out recurring text segments.

I'm not sure how it would work for a single webpage (which is what the extractor is evaluated on). Postal addresses in footers add a lot of variance, for example, as does a "latest articles" column.
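The recurring-segment filtering mentioned above could be sketched roughly like this. The function name and the 50% threshold are illustrative, not Trafilatura's actual implementation:

```python
from collections import Counter

def filter_recurring_segments(pages, max_share=0.5):
    """Given several pages from one source (each a list of text segments),
    drop segments that recur on more than `max_share` of the pages --
    these are likely boilerplate (navigation, footer, sidebars)."""
    n = len(pages)
    # Count each segment once per page, so repeats within a page don't inflate counts.
    counts = Counter(seg for page in pages for seg in set(page))
    return [[seg for seg in page if counts[seg] / n <= max_share]
            for page in pages]
```

With three pages sharing an identical menu segment, the menu appears on 100% of pages and is dropped, while each article body appears on one page and is kept.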

@zirkelc
Contributor Author

zirkelc commented Apr 30, 2024

Yes, you might be right. Even so, some pre-filtering on the raw HTML, like removing semantic elements (<header>, <nav>, <footer>), should decrease the variance a bit.
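A rough sketch of such pre-filtering. The regex is for illustration only and assumes these elements are not nested inside one another; a real implementation should use an HTML parser such as lxml:

```python
import re

def strip_semantic_chrome(html):
    """Remove <header>, <nav> and <footer> blocks before extraction.
    Regex-based sketch: breaks on nested same-name elements."""
    return re.sub(r"<(header|nav|footer)\b[^>]*>.*?</\1>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)
```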

I will try to implement a small PoC in the next few days/weeks, just to see how well it works. I could then use your evaluation dataset to compare the eval scores with the calculated confidence scores; I assume a high eval score should correlate with a high confidence score. What do you think?

@adbar
Owner

adbar commented Apr 30, 2024

By all means, please go ahead. If everything works, you'd have another extraction method and the confidence score would be a useful by-product; I'm curious to see how it goes.
