
Apache Spark Kinesis Consumer

Example project for consuming AWS Kinesis streams and saving the data to Amazon Redshift using Apache Spark.

Code from: Processing IoT realtime data - Medium

Usage example

You need to set your AWS credentials in your environment.

export AWS_ACCESS_KEY_ID=""
export AWS_ACCESS_KEY=""
export AWS_SECRET_ACCESS_KEY=""
export AWS_SECRET_KEY=""
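The Kinesis receiver resolves credentials through the AWS SDK's default provider chain, so these variables are picked up automatically. A minimal sanity check in Scala (illustrative, not part of this project):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain

// Throws an exception if no credentials can be found in the environment
val credentials = new DefaultAWSCredentialsProviderChain().getCredentials
println(s"Using access key: ${credentials.getAWSAccessKeyId}")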

Dependencies

The following must be included via the --packages flag:

org.apache.spark:spark-streaming-kinesis-asl_2.10:1.6.1
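For example, when submitting the job (the main class and jar names below are placeholders):

spark-submit \
  --packages org.apache.spark:spark-streaming-kinesis-asl_2.10:1.6.1 \
  --class your.main.Class \
  target/scala-2.10/your-app.jar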

Setup

How to run Kinesis locally?

A few months ago I created a Docker image with Kinesalite (an amazing project that simulates Amazon Kinesis). You can use this image, or run Kinesalite directly.

docker run -d -p 4567:4567 vsouza/kinesis-local -p 4567 --createStreamMs 5

Check the project.
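Once the container is up, any Kinesis client can be pointed at the local endpoint. For example, with the AWS CLI (the stream name is arbitrary):

aws kinesis create-stream --stream-name test-stream --shard-count 1 --endpoint-url http://localhost:4567
aws kinesis list-streams --endpoint-url http://localhost:4567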

Do I need DynamoDB too?

Yes, 😢. The AWS SDK Kinesis module checkpoints your position in the Kinesis stream and stores it in DynamoDB. You don't need to create any tables yourself; the SDK will create them for you.

Remember to configure your DynamoDB throughput values correctly.
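For reference, a minimal Scala sketch of wiring up the consumer with spark-streaming-kinesis-asl 1.6.1; the application name, stream name, and region below are assumptions, and the application name is also used as the name of the DynamoDB checkpoint table:

import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kinesis.KinesisUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KinesisConsumerSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kinesis-consumer")
    val ssc = new StreamingContext(conf, Seconds(10))

    // "kinesis-consumer" also names the DynamoDB table used for checkpoints
    val stream = KinesisUtils.createStream(
      ssc, "kinesis-consumer", "my-stream",
      "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
      InitialPositionInStream.LATEST, Seconds(10),
      StorageLevel.MEMORY_AND_DISK_2)

    // Records arrive as raw bytes; decode and print them as a smoke test
    stream.map(bytes => new String(bytes, "UTF-8")).print()

    ssc.start()
    ssc.awaitTermination()
  }
}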

License

MIT License © Vinicius Souza