Download the word-embedding data set from [glove.6B.zip] and put it under data/word_embeddings/glove/
You can input a text file from the command line:
python -m subjectivity.classify < data/name_of_the_text.txt
The output will look like the following:
Objective characters:
<number of characters in objective sentences in the text>
Subjective characters:
<number of characters in subjective sentences in the text>
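The reported numbers are the total character counts of the sentences assigned to each class. A minimal sketch of that tallying step, assuming the classifier has already labeled each sentence (the function name `count_characters` and the label strings are illustrative, not part of the actual module):

```python
def count_characters(sentences, labels):
    """Sum sentence lengths per class label ("objective" / "subjective")."""
    counts = {"objective": 0, "subjective": 0}
    for sentence, label in zip(sentences, labels):
        counts[label] += len(sentence)
    return counts
```

For example, two sentences labeled objective and subjective contribute their full character lengths to the respective totals.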
You can also run /Jupyter/NYT_classifications.ipynb to classify 100 New York Times articles at once and plot their distribution.
To collect documents with the web crawler, run:
python3 /webcrawler/crawler_usage.py <start-url-string> <number-of-documents-to-crawl> <results-directory-path>