
Support for tokenization of languages without spaces #4

Open
andreekeberg opened this issue Jul 24, 2021 · 0 comments
Labels
🥳 enhancement New feature or request 👋🏼 good first issue Great for new contributors 🙋🏼‍♂️ help wanted Extra attention is appreciated

Comments

@andreekeberg
Owner

Need to implement a smarter method of tokenization that takes into account languages that traditionally do not use spaces between words. The current approach produces full-sentence tokens, which are unsuitable for the current method of cosine similarity comparisons.

Some of these languages include:

  • Chinese
  • Japanese
  • Thai
  • Khmer
  • Lao
  • Burmese
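One possible direction, sketched below: modern JavaScript runtimes (Node.js 16+ and current browsers) ship `Intl.Segmenter`, whose ICU-backed word segmentation handles the languages listed above without relying on spaces. The `tokenize` function name and the whitespace fallback are illustrative, not part of the existing codebase.

```javascript
// Sketch: locale-aware tokenization via the built-in Intl.Segmenter,
// falling back to naive whitespace splitting where it is unavailable.
function tokenize(text, locale = 'und') {
  if (typeof Intl !== 'undefined' && Intl.Segmenter) {
    const segmenter = new Intl.Segmenter(locale, { granularity: 'word' });
    return Array.from(segmenter.segment(text))
      .filter((s) => s.isWordLike) // drop punctuation and whitespace segments
      .map((s) => s.segment);
  }
  // Fallback: whitespace splitting (fails for the languages listed above)
  return text.split(/\s+/).filter(Boolean);
}
```

For example, `tokenize('これはテストです', 'ja')` yields multiple word-like tokens rather than one full-sentence token, while space-delimited input still splits as before.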
@andreekeberg andreekeberg added 🥳 enhancement New feature or request 🙋🏼‍♂️ help wanted Extra attention is appreciated 👋🏼 good first issue Great for new contributors labels Jul 24, 2021