
Distributed Xiaohongshu Spider and Data Visualization

A distributed web crawler for xiaohongshu.com and a visualization of the crawled content.

(screenshot: word cloud of the crawled content)

Building a stand-alone crawler

As this crawler supports distributed operation, using the pre-built Docker image is the recommended and most convenient way to build this project. If you only wish to build a stand-alone crawler, follow the instructions below (a sketch of the equivalent shell commands appears after the list):

  1. Add chromedriver to your PATH
  2. Install all required Python packages listed in requirements.txt
  3. Run xiaohongshu_consumer.py
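
A minimal sketch of the same steps on a Unix-like shell (the chromedriver path is a placeholder, and a recent Python 3 with pip is assumed):

```sh
# make chromedriver discoverable; adjust the path to wherever you unpacked it
export PATH="$PATH:/path/to/chromedriver-dir"

# install the required Python packages
pip install -r requirements.txt

# start the stand-alone crawler
python xiaohongshu_consumer.py
```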

Building a distributed crawler

This project uses Celery to distribute tasks, so you have to start the worker(s) first and then execute the consumer code to create tasks.

Using Docker to start a worker

registry.cn-hangzhou.aliyuncs.com/kaitohh/celery:5 is the pre-built Docker image; you can also use `docker build -t <my_image_name> .` to build the image locally.
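
For example, either pull the pre-built image or build one locally from the repository root (the local tag name is up to you):

```sh
# pull the pre-built worker image
docker pull registry.cn-hangzhou.aliyuncs.com/kaitohh/celery:5

# or build the image locally
docker build -t <my_image_name> .
```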

Step 1 Prerequisites

  1. Install Docker
  2. If you wish to run a distributed version of the crawler, make sure you have deployed a cluster using Docker Swarm or Kubernetes.

Step 2a run in development environment

Run the following command:

docker-compose up

All services will first be built locally and then started automatically. Note that this command only creates one replica of each service.

After all services are up, visit localhost:5555 for the Celery Flower dashboard and localhost:8080 for the Docker visualizer page.
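
If the dashboards are not reachable, you can first check that all services came up (a quick sanity check; the output format depends on your Docker Compose version):

```sh
# list the compose services and their current state
docker-compose ps
```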

(screenshot: Celery Flower dashboard)

Step 2b run in production environment

Run the following command:

docker stack deploy -c docker-compose.yml <your_stack_name>

You can also modify the `replicas` value on line 8 of docker-compose.yml to match the number of nodes in your cluster.
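
Alternatively, you can rescale an already-deployed stack without editing the compose file. This is a hedged example: the service name `worker` is an assumption, so check `docker stack services <your_stack_name>` for the real names first.

```sh
# show the services created by the stack
docker stack services <your_stack_name>

# scale the (hypothetical) worker service to 4 replicas
docker service scale <your_stack_name>_worker=4
```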

Below is a screenshot of the Docker visualizer page after a successful deployment.

(screenshot: Docker visualizer page)

Step 3 execute the consumer code

Run the following command:

set USE_CELERY=1 & python xiaohongshu_consumer.py
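
The command above uses Windows cmd syntax. On a Unix-like shell, the equivalent would be (assuming the script only needs the USE_CELERY environment variable set):

```sh
# set USE_CELERY for this invocation only and create the tasks
USE_CELERY=1 python xiaohongshu_consumer.py
```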

Now visit the Celery dashboard and you will see your tasks.

(screenshot: Celery dashboard with running tasks)

Building manually to start a worker

If you wish to build manually, first follow the instructions above for building a stand-alone crawler, then start a Redis server and set the environment variable REDIS_URL to your Redis host. Finally, run the celery worker command to start workers.

See the Dockerfile and docker-compose.yml as a reference.
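
A hedged sketch of what the manual setup might look like on a Unix-like shell. The Celery application module name below is a guess; check the Dockerfile and docker-compose.yml for the exact worker command, and adjust the Redis host and database as needed:

```sh
# start a local Redis server in the background (or point at an existing one)
redis-server --daemonize yes

# tell the worker where the Celery broker lives
export REDIS_URL=redis://localhost:6379/0

# start a Celery worker; "xiaohongshu_spider" is a placeholder module name
celery -A xiaohongshu_spider worker --loglevel=info
```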

Visualization

See xiaohongshu_wordcloud.py for the detailed implementation.

Acknowledgments
