(Korean) Tutorial for Text Mining

This is study material for text mining. It covers natural language processing and machine learning topics that apply regardless of language, as well as materials specific to Korean text analysis.

  • This material is a work in progress; it includes slides and Jupyter notebook example codes.
  • The material uses the soynlp package, a natural language processing library for Korean text analysis. soynlp is also a work in progress (a minimal usage sketch follows this list).
  • Texts related to the slide contents are being posted on the blog.
  • The practice code is in the code repository.
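
The sketch below shows how soynlp-style unsupervised word extraction and tokenization are typically wired together. It is a minimal sketch, assuming soynlp's `WordExtractor` and `LTokenizer` interfaces; the corpus file `corpus.txt` and the toy sentence are placeholders, and details may differ between soynlp versions.

```python
# A minimal sketch, assuming soynlp's WordExtractor / LTokenizer API;
# corpus.txt is a hypothetical file with one sentence per line.
from soynlp.word import WordExtractor
from soynlp.tokenizer import LTokenizer

with open('corpus.txt', encoding='utf-8') as f:
    sentences = [line.strip() for line in f if line.strip()]

# learn word scores (cohesion, branching entropy) from raw text,
# without any dictionary or annotated corpus
word_extractor = WordExtractor()
word_extractor.train(sentences)
word_scores = word_extractor.extract()

# drive an L-part tokenizer with the forward cohesion scores
cohesion = {word: score.cohesion_forward
            for word, score in word_scores.items()}
tokenizer = LTokenizer(scores=cohesion)

print(tokenizer.tokenize('데이터마이닝을 공부합니다'))
```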

Contents

  1. Python basic
    1. jupyter tutorial
  2. From text to vector (KoNLPy) (a quick sketch follows this list)
    1. n-gram
    2. from text to vector using KoNLPy
  3. Word extraction and tokenization (Korean)
    1. word extractor
    2. unsupervised tokenizer
    3. noun extractor
    4. dictionary based pos tagger
  4. Document classification
    1. Logistic Regression and Lasso regression
    2. SVM (linear, RBF)
    3. k-nearest neighbors classifier
    4. Feed-forward neural network
    5. Decision Tree
    6. Naive Bayes
  5. Sequential labeling
    1. Conditional Random Field
  6. Embedding for representation
    1. Word2Vec / Doc2Vec
    2. GloVe
    3. FastText (word embedding using subword)
    4. FastText (supervised word embedding)
    5. Sparse Coding
    6. Nonnegative Matrix Factorization (NMF) for topic modeling
  7. Embedding for vector visualization
    1. MDS, ISOMAP, Locally Linear Embedding, PCA, Kernel PCA
    2. t-SNE
    3. t-SNE (detailed)
  8. Keyword / Related words analysis
    1. co-occurrence based keyword / related word analysis
  9. Document clustering
    1. k-means is good for document clustering
    2. DBSCAN, hierarchical, GMM, BGMM are not appropriate for document clustering
  10. Finding similar documents (neighbor search)
    1. Random Projection
    2. Locality Sensitive Hashing
    3. Inverted Index
  11. Graph similarity and ranking (centrality)
    1. SimRank & Random Walk with Restart
    2. PageRank, HITS, WordRank, TextRank
    3. kr-wordrank keyword extraction
  12. String similarity
    1. Levenshtein / Cosine / Jaccard distance
  13. Convolutional Neural Network (CNN)
    1. Introduction to CNN
    2. Word-level CNN for sentence classification (Yoon Kim)
    3. Character-level CNN (LeCun)
    4. BOW-CNN
  14. Recurrent Neural Network (RNN)
    1. Introduction to RNN
    2. LSTM, GRU
    3. Deep RNN & ELMo
    4. Sequence to sequence & seq2seq with attention
    5. Skip-thought vector
    6. Attention mechanism for sentence classification
    7. Hierarchical Attention Network (HAN) for document classification
    8. Transformer & BERT
  15. Applications
    1. soyspacing: heuristic Korean space correction
    2. CRF-based Korean space correction
    3. HMM & CRF-based part-of-speech tagger (morphological analyzer)
    4. semantic movie search using IMDB
  16. TBD
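
For item 2 above, the following is a minimal sketch of turning raw Korean text into a bag-of-words matrix with KoNLPy and scikit-learn. The choice of the Okt tagger and the two toy documents are illustrative assumptions, not code taken from the tutorial notebooks.

```python
# A minimal sketch, assuming KoNLPy's Okt tagger and scikit-learn's
# CountVectorizer; the two toy documents are illustrative only.
from konlpy.tag import Okt
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    '텍스트 마이닝을 공부합니다',
    '자연어처리와 머신러닝을 공부합니다',
]

okt = Okt()

# tokenize each document into morphemes with KoNLPy, then let
# CountVectorizer build the term-document (bag-of-words) matrix
vectorizer = CountVectorizer(tokenizer=okt.morphs, token_pattern=None)
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())
print(X.toarray())
```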

Thanks to

Many colleagues have kindly reviewed this material and discussed it with me. Special thanks to Taewook, who has helped with a great deal of time and care.

