Cloud application to promote responsible tourism and help prevent overtourism.

Back to Home

sw_overview

Overview | Develop | Deploy | Data

This project was the assessment item for INFS3208 (Cloud Computing) at the University of Queensland. The proposal was submitted as Assignment 1 and this implementation as Assignment 2. The project was nominated for the Best Project award and won 'Best Use of Cloud' for the cohort. A live, recorded explanation and demonstration of the project was given.

video

Purpose

A cloud application that tackles overtourism and promotes responsible travel by providing a platform where travellers explore destinations on a map displaying tourist-to-local ratios and view blogs filtered by their interests. The aim is a platform that helps create responsible, conscientious travellers.

Goals

  1. Educate travellers
  2. Show only relevant posts
  3. Provide an informative map
  4. Improve the quality of travel blogs

Technologies

sw_tech

The application was written in Django and React, deployed on Google App Engine (via Docker) and Firebase respectively, with the database hosted on Cloud SQL and file storage on Cloud Storage. BigQuery was used to query data from the World Bank dataset. Apache Spark clusters were managed via Dataproc, where a Scala program was run using SparkSQL; an Apache Zeppelin notebook was used for data analysis in Scala, and a Jupyter notebook was used to create a linear regression model with PySpark.
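
As a rough illustration of the modelling step, the sketch below fits a linear regression with PySpark. The file path and column names (`arrivals`, `departures`, `population`, `tourist_local_ratio`) are assumptions for illustration only, not the project's actual schema.

```python
# Minimal PySpark linear regression sketch (file path and column names are assumptions).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("shared-world-lr").getOrCreate()

# Hypothetical export of the World Bank tourism data queried through BigQuery.
df = spark.read.csv("world_bank_tourism.csv", header=True, inferSchema=True)

# Assemble the assumed feature columns into a single vector column.
assembler = VectorAssembler(
    inputCols=["arrivals", "departures", "population"],
    outputCol="features",
)
train = assembler.transform(df).select("features", "tourist_local_ratio")

# Fit the model and inspect its coefficients.
lr = LinearRegression(featuresCol="features", labelCol="tourist_local_ratio")
model = lr.fit(train)
print(model.coefficients, model.intercept)

spark.stop()
```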

Workflow

sw_workflow

There are three main parts to this project:

  1. Develop

  2. Deploy

  3. Data