
clipperhouse/uax29


This package tokenizes (splits) words, sentences and graphemes, based on Unicode text segmentation (UAX #29), for Unicode version 13.0.0. Details and usage are in the respective packages:

uax29/words

uax29/sentences

uax29/graphemes

Why tokenize?

Any time our code operates on individual words, we are tokenizing. Often, we do it ad hoc, such as by splitting on spaces, which gives inconsistent results. The Unicode standard is better: it is multi-lingual, and handles punctuation, special characters, etc.

Conformance

We use the official Unicode test suites for conformance testing.

See also

jargon, a text pipelines package for CLI and Go, which consumes this package.

Prior art

blevesearch/segment

rivo/uniseg

Other language implementations

JavaScript

Rust

Java

Python
