Preliminary Remark
The observations presented here are also relevant for the polmineR repository.
Some Background
The Bundestag Protokolle often use spaces to group the digits of large numbers for readability. While this follows a widely used international convention, it can cause problems for corpus analysis, particularly during tokenization.
For illustration, consider a speech given by then-Chancellor Angela Merkel during the final session of the 17th legislative period (reference: BT_17_253). In this speech, five instances of large numbers grouped with spaces can be identified:
Bereits über 100 000 Menschen haben ihr Leben verloren;
Wir haben als erster EU-Mitgliedstaat 5 000 syrischen Flüchtlingen Aufnahme angeboten.
700 000 mehr Menschen im Alter von 60 bis 65 sind noch in Arbeit.
650 000 Menschen erhalten mehr Leistungen.
Wir haben seit 2007 in Deutschland 820 000 neue Betreuungsplätze für Kinder unter drei Jahren geschaffen.
The Issue
Corpus tools such as polmineR (and, similarly, #LancsBox X) fail to recognize these space-grouped numbers as single tokens. Consider the following R code snippet:
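The original snippet is not preserved in this thread; as a rough stand-in, here is a minimal base-R sketch. Plain whitespace splitting via `strsplit` is an assumption standing in for the tool's actual tokenizer, but it shows the same effect:

```r
# Minimal base-R sketch (not the original polmineR snippet):
# whitespace tokenization splits the space-grouped "100 000" in two.
tokens <- strsplit("Bereits über 100 000 Menschen haben ihr Leben verloren", "\\s+")[[1]]
print(tokens)
# "Bereits" "über" "100" "000" "Menschen" "haben" "ihr" "Leben" "verloren"
```

The number 100 000 thus contributes two tokens, "100" and "000", instead of one.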
In my initial post, the regular expression \b(\d{1,3})(\s)(\d{3})\b was designed to match numbers in the thousand range. This pattern falls short for larger numbers, however: while it does partially match them, its coverage is clearly incomplete.
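The incomplete coverage is easy to reproduce. Applying the original thousand-range pattern to a billion-range number (a base-R `gsub` sketch; the exact replacement workflow is an assumption) merges the digit groups only pairwise:

```r
# The thousand-range pattern matches twice inside a billion-range
# number, leaving a stray space in the middle.
out <- gsub("\\b(\\d{1,3})\\s(\\d{3})\\b", "\\1\\2", "1 234 567 890", perl = TRUE)
print(out)
# "1234 567890", not the desired "1234567890"
```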
Updated Regular Expression(s)
To address the issue described above, I have developed three new regular expressions, one for each numerical range (billion range, million range, and thousand range). Further, I made use of capturing groups to facilitate replacement, if that is desired. (I could not think of a better solution than a three-step clean-up.)
I. Billion-Range
RegEx (Grouped): \b(\d{1,3})\s(\d{3})\s(\d{3})\s(\d{3})\b
Replacement: \1\2\3\4
II. Million-Range
RegEx (Grouped): \b(\d{1,3})\s(\d{3})\s(\d{3})\b
Replacement: \1\2\3
III. Thousand-Range
RegEx (Grouped): \b(\d{1,3})\s(\d{3})\b
Replacement: \1\2
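The three patterns can be chained into the "three step clean up", applied from the largest range to the smallest so that a billion-range number is not partially merged by the thousand-range pattern first. A base-R sketch (the wrapper function name is mine; in R strings, backslashes and backreferences must be doubled):

```r
# Three-step clean-up: billions, then millions, then thousands.
normalize_numbers <- function(x) {
  x <- gsub("\\b(\\d{1,3})\\s(\\d{3})\\s(\\d{3})\\s(\\d{3})\\b", "\\1\\2\\3\\4", x, perl = TRUE)  # billion range
  x <- gsub("\\b(\\d{1,3})\\s(\\d{3})\\s(\\d{3})\\b", "\\1\\2\\3", x, perl = TRUE)                # million range
  gsub("\\b(\\d{1,3})\\s(\\d{3})\\b", "\\1\\2", x, perl = TRUE)                                   # thousand range
}

fixed_text <- normalize_numbers("700 000 mehr Menschen im Alter von 60 bis 65 sind noch in Arbeit.")
print(fixed_text)
# "700000 mehr Menschen im Alter von 60 bis 65 sind noch in Arbeit."
```

Note that small numbers such as "60" and "65" are left untouched, since the patterns require a following three-digit group.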
A few words of caution
As hinted at in my initial post, there is a danger of false positives. Consider the following example from the corpus:
Bis zum Jahresende 2010 wurden statt 90 000 180 000 Studienplätze geschaffen (BT_17_126)
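Here two independent numbers, 90 000 and 180 000, stand side by side. Applied naively, the billion-range pattern from above reads them as one four-group number (base-R sketch):

```r
# False positive: the billion-range pattern fuses the two adjacent
# numbers "90 000" and "180 000" into a single bogus value.
x <- "Bis zum Jahresende 2010 wurden statt 90 000 180 000 Studienplätze geschaffen"
merged <- gsub("\\b(\\d{1,3})\\s(\\d{3})\\s(\\d{3})\\s(\\d{3})\\b", "\\1\\2\\3\\4", x, perl = TRUE)
print(merged)
# "Bis zum Jahresende 2010 wurden statt 90000180000 Studienplätze geschaffen"
```

Such contexts therefore need manual inspection before (or after) the replacement step.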
As a matter of fact, polmineR incorrectly counts each spaced segment of these numbers as a separate token.
The Implications
The implications of this issue are twofold:
The extent of the impact: employing the regular expression \b(\d{1,3})(\s)(\d{3}) across the corpus returns 134,609 hits (not a 100% precision rate!)