Apache Spark - Read and Write files from and to AWS S3

This is a simple Java program to illustrate how you can read input datasets from S3 files and write your result datasets to S3 files.

Inputs

There are two inputs:

  1. The S3 path of the input file (bucket and key).
  2. The S3 path of the output folder.
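
These two arguments might be wired up as in the following minimal plain-Java sketch, which assumes the s3a:// URL scheme commonly used by Spark's Hadoop-based S3 connector. The class and helper names here are hypothetical illustrations, not taken from the project:

```java
public class JobArguments {
    // Hypothetical helper: convert a "bucket/key" style argument into
    // the s3a:// URL form that Spark's Hadoop S3 connector understands.
    public static String toS3aUrl(String bucketAndKey) {
        if (bucketAndKey == null || bucketAndKey.isEmpty()) {
            throw new IllegalArgumentException("S3 path must not be empty");
        }
        return "s3a://" + bucketAndKey;
    }

    public static void main(String[] args) {
        if (args.length != 2) {
            System.err.println("Usage: <input S3 file path> <output S3 folder>");
            System.exit(1);
        }
        // First argument: the input CSV file; second: the output folder.
        System.out.println("Input:  " + toS3aUrl(args[0]));
        System.out.println("Output: " + toS3aUrl(args[1]));
    }
}
```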

Input File

The input file should be a .csv file with two columns, name and number. A sample is included in the root of the project as an example. Make sure you upload this file to an S3 bucket and provide its path as the first input. The file content is as follows:

name,number
name1,1
name2,2
name3,3
name4,4

The first row of the CSV file will be ignored as it is assumed to be the header.
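
The header-skip behavior can be illustrated independently of Spark with a small plain-Java sketch; the class and method names below are hypothetical, not part of the project:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CsvSample {
    // Parse CSV lines, skipping the first (header) row, mirroring how
    // the job treats the first row of the input file.
    public static List<String[]> parseSkippingHeader(List<String> lines) {
        List<String[]> rows = new ArrayList<>();
        for (int i = 1; i < lines.size(); i++) {  // i = 1 skips the header
            rows.add(lines.get(i).split(","));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "name,number", "name1,1", "name2,2", "name3,3", "name4,4");
        List<String[]> rows = parseSkippingHeader(sample);
        System.out.println(rows.size());     // prints 4 (header excluded)
        System.out.println(rows.get(0)[0]);  // prints name1
    }
}
```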

Running The Project

You need to build the project before running it. From the root of the project directory, build it with:

mvn clean install

Then, from the same directory, run the project:

java -jar target/sparkReadAndWriteFromS3POC-1.0-SNAPSHOT.jar thetechcheck/inputFile.csv thetechcheck
