This package tokenizes (splits) words, sentences, and graphemes, based on Unicode text segmentation (UAX #29) for Unicode version 13.0.0. Details and usage are in the respective packages:
Any time our code operates on individual words, we are tokenizing. Often we do it ad hoc, such as by splitting on spaces, which gives inconsistent results. The Unicode standard does better: it is multilingual, and it handles punctuation, special characters, and more.
We use the official Unicode test suites. Status:
jargon, a text-pipelines package for the command line and Go, which consumes this package.