
MobiVision

Central Connecticut State University Senior Capstone Project

MobiVision is a fully automated device and application that tracks and displays how safely you've been driving. Simply plug the device in and start driving; shortly after you finish, you'll see the results of your trip on the MobiVision website.

It was built using AWS, the Azure Maps REST API, the Django framework, and a number of Python scripts to process the collected data.

MobiVision Website

Recent Trips Page

[Screenshot: Recent Trips page]

More Details On Selected Trip Page

[Screenshot: More Details page]

MobiVision Physical Device

[Photos: the assembled MobiVision device]

MobiVision tracks the following bad driving habits which we call flags:

  • Accelerating too hard
  • Decelerating (braking) too hard
  • Taking corners too hard
  • Speeding 5mph above the speed limit
  • Not stopping at stop signs

In order to detect all of these flags throughout your trip, MobiVision has 4 stages.

Stage 1:
Stage 1 involves the physical device capturing all of the raw data into a CSV file. Data is collected in real time and recorded so it can be sent to our AWS infrastructure for further processing. Stage 1 is responsible for recording the following flags:

  • Accelerating too hard
  • Decelerating (braking) too hard
  • Taking corners too hard

Stage 2:
The first post-processing stage, run on our AWS infrastructure against the original data from stage 1. The goal of stage 2 is to use object detection to find stop signs throughout your trip and determine whether or not you stopped at them.

Stage 3:
The next stage in processing the data, after stage 2 adds the stop sign flags. This stage is responsible for periodically checking whether the driver was speeding (going 5mph over the speed limit). Once speeding has been checked for, all flags have been processed and the data is saved to the database to be viewed on the website.

Stage 4:
The final stage of MobiVision's processing of the original CSV from stage 1. This stage generates a map from the final CSV produced in stage 3, visualizing the whole trip in a single image. It is also responsible for displaying the data recorded from the trip on the MobiVision website.

Now let's look deeper into exactly how each of these stages works.

Stage 1

The physical portion of MobiVision uses multiple resources to collect and manipulate data in real time. Physical components include:

  • Raspberry Pi 4b
  • GPS Module
  • ADXL345 Accelerometer
  • Verizon MIFI USB Modem
  • 640x480 Camera

Using these components, we can capture many pieces of information in real time to help to determine how safely you were driving.

The GPS module records NMEA (National Marine Electronics Association) data, which can be parsed for important information about your position, including:

  • Latitude
  • Longitude
  • Speed in knots (converted to mph)

if data[0] == b"$GNRMC":  # GNRMC sentences carry the geographic positional data we need; ignore the rest
    sm.state_machine_set('running')
    nmea = pynmea2.parse(line.decode('utf-8'))
    # Retrieve latitude, longitude, and speed and store them in variables
    lat = float(round(nmea.latitude, 6))
    lon = float(round(nmea.longitude, 6))
    vel = round(float(nmea.spd_over_grnd) * 1.151, 1)  # Speed arrives in knots; converted to mph

With our accelerometer we are able to log your acceleration on three different axes; for our purposes we only need the y axis, which tells us how hard you're turning.

With these two components, along with our camera, this is all the data collected into our raw data CSV in real time. Our CSV has the following format, which makes it easy to parse for further processing:

Latitude, Longitude, Speed(mph), X-axis Acceleration(m/s^2), Y-axis Acceleration(m/s^2), Time(s), Flag
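For illustration, a row in this format can be parsed with Python's csv module. The sample values below are made up; '*' is the placeholder the project uses for rows with no flag.

```python
import csv
import io

# Hypothetical sample row in the Stage 1 format shown above;
# '*' marks a row with no flag.
sample = "41.617800,-72.768600,34.5,0.2,1.1,42,*\n"

reader = csv.reader(io.StringIO(sample))
lat, lon, speed, x_acc, y_acc, sec, flag = next(reader)
print(float(speed), flag)   # 34.5 *
```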

The script behind this loops every second and appends a new row with the currently collected data to the CSV. Hard acceleration and hard braking are determined by comparing the driver's speed from one second prior to the current speed. If your speed increased by more than 6mph in a one-second span, a 'Hard Acceleration' flag is added to the current row; the opposite applies for hard braking.

Because the area where MobiVision was developed has highway speed limits of almost always 65mph, a 'Speeding' flag is added any time your speed is above 70mph, since there are no speed limit signs around that speed that need to be checked later on (stage 3). As for hard cornering, if your y-axis acceleration averaged greater than 3 m/s^2 over one second, a 'Hard Cornering' flag is added to the current row.

if vel > 70:
    flag = 'Speeding'
if flag == 'Speeding':
    pass
else:
    # Compare the previous speed with the current speed. If speed has increased
    # or decreased by more than 6mph in one second, the appropriate flag is added.
    if (float(speed_table[1]) - float(speed_table[0])) > 6:
        flag = "Hard Acceleration"
    elif (float(speed_table[0]) - float(speed_table[1])) > 6:
        flag = "Hard Breaking"
    # The 'Hard Cornering' flag averages the previous and current second to
    # filter out possible random spikes on the y-axis
    elif (acc_table[0] + acc_table[1])/2 > 3:
        flag = "Hard Cornering"

Once the trip is finished, all of the data from your trip is saved into one CSV with the three flags accounted for, and using the AWS library boto3 we send this data to our cloud infrastructure to begin stage 2. This is possible because the MIFI USB modem connected to the Raspberry Pi gives it internet service while driving on the road.

def start_ec2():  # Start the stage 2 and stage 4 EC2 VMs for processing once the initial data has been captured
    ec2 = boto3.client(
        'ec2',
        region_name='us-east-1',
    )
    ec2.start_instances(
        InstanceIds=[os.environ.get('instanceid1'), os.environ.get('instanceid2')],
    )

def export_file(filename):  # Send the csv and mp4 to the stage 2 bucket
    s3 = boto3.client('s3')
    s3.upload_file(filename, os.environ.get('stage2bucket'), filename.split('/')[-1])

We have now seen how the data is collected and recorded in real time. But this is an automated system, so how does the device know when your trip has started and ended without manual input? It uses the GPS data to periodically check your speed before any data is recorded to a CSV. If your speed is under 10mph, nothing is recorded; the device waits for you to drive above 10mph before starting the main function.

while True:
    try:
        line = gps.readline()
        data = line.split(b',')
        if data[0] == b"$GNRMC":  # Read GPS data for speed
            nmea = pynmea2.parse(line.decode('utf-8'))
            vel = round(float(nmea.spd_over_grnd) * 1.151, 1)
            if vel < 10:
                sm.state_machine_set('waitingmove')
                print(f'waiting for speed over 10mph. Current Speed {vel}')
            else:  # Once speed has reached 10mph, the main loop begins and data starts being tracked in the csv
                start_trip()
        time.sleep(1)
    except Exception:
        pass  # Skip malformed GPS sentences (handler assumed; the original excerpt omitted the except clause)

You are determined to have stopped driving once your speed has been below 8mph for 60 seconds; at that point the loop is broken out of and the data is sent to our cloud infrastructure. To indicate which state the device is in, whether it is waiting for movement above 10mph, running the main loop, and so on, a state machine changes the LEDs on the attached breadboard to show where in the code it is currently running. See Stage1/state_machine.py for more information.
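The actual implementation lives in Stage1/state_machine.py; the sketch below only illustrates the idea and is not the repo's code. The 'waitingmove' and 'running' state names appear in the snippets above, but the 'uploading' state, the LED colors, and the class shape are hypothetical, and GPIO calls are stubbed out so the logic is runnable anywhere.

```python
# Rough sketch of the state-machine idea: each state maps to an LED pattern
# on the breadboard. On real hardware, state_machine_set would also toggle
# GPIO pins; here it just records the state and returns the pattern.
LED_PATTERNS = {
    'waitingmove': {'green': False, 'yellow': True,  'red': False},  # waiting for speed above 10mph
    'running':     {'green': True,  'yellow': False, 'red': False},  # main loop, recording data
    'uploading':   {'green': False, 'yellow': False, 'red': True},   # hypothetical: sending the csv to the cloud
}

class StateMachine:
    def __init__(self):
        self.state = None

    def state_machine_set(self, state):
        """Switch to a new state and return the LED pattern to display."""
        if state not in LED_PATTERNS:
            raise ValueError(f'unknown state: {state}')
        self.state = state
        return LED_PATTERNS[state]

sm = StateMachine()
print(sm.state_machine_set('running'))
```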

Stage 2

Stage 2 is the first processing pass over the initial data, determining whether or not you stopped at stop signs. This is done by taking the video of the trip recorded in stage 1 and using object detection to determine when stop signs appear and whether you stopped at them. The AWS resources used to make this possible include:

  • 1 g4dn.xlarge EC2 instance (great for accelerated computing / object detection)
  • 1 sqs queue
  • 1 s3 bucket

As shown in stage 1, before sending the data to the cloud we start our EC2 instances, which are set to run our scripts on boot. Once the stage 2 EC2 is up, it polls for messages from its dedicated SQS queue. The stage 2 S3 bucket has an object-creation event notification that sends a message to that queue when a CSV is put into the bucket (the video file is uploaded first, then the CSV, to make sure both files are available when needed). Once the message appears, the script parses the JSON to find the files it needs to perform object detection on and downloads them.

while True:
    res = sqs.receive_message(
        QueueUrl=os.environ.get('queueURL'),
        WaitTimeSeconds=20,
    )
    # Poll the queue until a message appears; the message carries information about the files that created the event
    if 'Messages' in res:
        body = json.loads(res['Messages'][0]['Body'])
        bucket = body['Records'][0]['s3']['bucket']['name']
        key = body['Records'][0]['s3']['object']['key']
        mp4key = f"{key.split('.')[0]}.mp4"
        txtkey = f"{key.split('.')[0]}.txt"
        s3.download_file(bucket, key, fr'/home/ubuntu/OD/s3files/{key}')

Once we have downloaded the files that caused the event, we can begin detecting stop signs in the video recorded from the trip. Our object detection comes from the Ultralytics YOLOv5 repo at https://github.com/ultralytics/yolov5. Writing a script/algorithm from scratch that can classify specific objects in videos was outside the scope of this project, so we opted to use this as our source of object detection. Along with it, we have 1200 images of stop signs with their respective labels, provided by the LISA road sign dataset, which were used to train the model to detect stop signs. We run the script with a couple of extra arguments:

  • A 95% confidence threshold, which filters out false positives picked up from glare or signs that look similar to a stop sign
  • A 10x video stride, which makes the process significantly faster by sampling the video at 10 times the normal rate, with results that were relatively the same

While the script runs, it prints the information we need to determine when stop signs appeared; we redirect its output to a text file and re-read it after it has finished classifying the entire video.

[GIF: YOLOv5 detecting stop signs in trip footage]

os.system(rf"sudo python3 ~/OD/yolov5/detect.py --weights ~/OD/yolov5/runs/train/exp3/weights/best.pt --source ~/OD/s3files/{mp4key} --conf-thres 0.95 --vid-stride 10 >> ~/OD/DATA/results/{txtkey} 2>&1")

Once the script is done and the entire video has been analyzed into the text file, we can compare its results to the CSV generated in stage 1 and add flags accordingly. The text file gives us two important pieces of information: on the right, what was detected, and on the left, when it was detected.

[Image: detection output in the text file]

To determine the "when", we simply divide the current frame number by the effective frame rate to get the time in seconds. The object detection script runs at a 10x stride and the camera records at 30fps, so 30/10 = 3 is the value we divide the current frame by. But when exactly do we decide to look at the time a stop sign was detected? If we counted every frame in which a stop sign appears, there would be many true positives in an area where only one stop sign was actually present.
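The conversion described above can be written as a small helper (a sketch, not a function from the repo):

```python
def frame_to_seconds(frame, fps=30, stride=10):
    """Convert a detector frame index to trip time in seconds.

    With a 30fps camera sampled at a 10x stride, each counted frame
    covers stride/fps seconds, i.e. divide the frame number by 30/10 = 3.
    """
    return frame / (fps / stride)

print(frame_to_seconds(89))    # ~29.7 seconds
print(frame_to_seconds(706))   # ~235.3 seconds
```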

So making sure that a single stop sign is counted only once comes down to the following logic.

with open(fr'/home/ubuntu/OD/DATA/results/{txtkey}', 'r', newline='') as file:
    lines = file.readlines()
    stop_sign = 0
    no_stop = 0
    global stoptimes
    stoptimes = []  # List storing the times at which stop signs were detected
    # If 4 stop sign frames occur before 4 frames of no detections, append the time to stoptimes.
    for line in lines:
        stop = line.find('stop')
        if stop != -1:  # The current line has "stop" in it
            stop_sign += 1
            if stop_sign == 1:
                no_stop = 0
            if stop_sign == 4:
                x = line.split('(')  # Strip the line formatting to get the frame integer
                y = x[1].split('/')
                s = int(y[0]) / 3  # Divide the current frame by 3 to get the time in seconds (30fps camera at 10x stride)
                stoptimes.append(s)  # Append the time to stoptimes
        else:  # If 4 frames of no detections occur before 4 frames with a stop sign, filter them out
            no_stop += 1
            if no_stop == 4:
                stop_sign = 0
                no_stop = 0

Here, if 4 stop sign detections occur before 4 no-detections, it is deemed a true positive. They do not need to be consecutive; as long as 4 stop sign frames occur before 4 frames of no detections, this qualifies as one true positive, and we can compare your speed in the CSV against the time the true positive was determined.

If your speed ever dropped below 1.5mph anywhere between 2 seconds before and 6 seconds after the true positive occurred, this is deemed stopping and no flag is added to the current row. Otherwise, if your minimum speed stayed above 1.5mph for that span, a 'Ran Stop Sign' flag is added.

From MobiVision/Stage2/runOD.py (lines 84 to 103, commit 0cc6c82):

csv_lines = csv.readlines()
newlines = []
speeds = []
count = 0
for i, line in enumerate(csv_lines):
    speeds.clear()
    splt = line.split(',')
    try:
        if splt[5] == str(int(stoptimes[count])):  # The time in the csv matches a detected stop sign
            for j in range(i-2, i+6):  # Collect the speeds from 2 seconds before through 6 seconds after
                speeds.append(csv_lines[j].split(',')[2])
            count += 1  # Advance to the next detected stop time
            if float(min(speeds)) > 1.5:  # If the lowest speed stayed above 1.5mph, add the flag
                splt[6] = 'Ran Stop Sign\n'
                line = ','.join(splt)
            newlines.append(line)
        else:
            newlines.append(line)
    except IndexError:  # No stop times left (or the window runs past the file); keep the row as-is
        newlines.append(line)

In the example GIF above, a true positive would have been determined at frame 89.

89 / 3 ≈ 30 seconds

[Image: CSV rows around the 30-second mark]

Looking at the CSV after stage 2 has been processed, the vehicle comes to a complete stop at 33 seconds, matching the criteria for stopping at a stop sign (some speed under 1.5mph between 2 seconds before and 6 seconds after the stop sign is found), so no flag was added.

But later in the same trip, another true positive was detected:

[Image: second true positive detection]

We see 4 stop sign frames occur before 4 no detections at frame 706.

706 / 3 ≈ 235 seconds

[Image: CSV rows around the 235-second mark]

Looking at the CSV after stage 2, because no speed fell below 1.5mph between 2 seconds before and 6 seconds after, a 'Ran Stop Sign' flag was added to the row.

[Image: CSV row with the 'Ran Stop Sign' flag]

Once the text file has been fully iterated through and all stop signs have been accounted for, the stage 2 CSV is complete. We can now send it over to the stage 3 bucket to check for speeding occurrences.

Stage 3

After the stop sign flags have been added, it's time to add speeding flags for speeds under 65mph. Once those flags are added, the CSV is finalized with all 5 flags fully processed, and we can import the important data from it into our RDS PostgreSQL database.

Stage 3 is achieved with an S3 bucket and a Lambda function.

First, the CSV is dropped into the stage 3 bucket, which triggers the Lambda function. From the event that triggered it, we can extract the information we need and download the CSV.

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    s3.download_file(Bucket=bucket, Key=key, Filename=rf'/tmp/{key}')

Once the CSV is downloaded, we iterate through its rows to check whether you were more than 5mph over the speed limit at speeds under 65mph. We only check under 65mph because stage 1 already added speeding flags for higher speeds. We check the speed limit using the Azure Maps REST API: given the current row's latitude and longitude, it returns the speed limit at the exact position you were driving. To be efficient with API calls, instead of calling the API on every row we call it once every 10 rows (10 seconds); your geographical position changes enough in 10 seconds to warrant another call, as opposed to every single second.

Below is the logic that uses this API to fetch speed limits and compare your current speed in the CSV against them. If you were driving more than 5mph over the speed limit, a 'Speeding' flag is added to the current row.

for i, row in enumerate(reader):
    if i % 10 == 0:  # Check speeds every 10 rows (10 seconds)
        speed = float(row[2])  # Get the speed from the csv row
        if speed < 65:
            speed_limit = get_speed_limit(row[0], row[1])  # Look up the speed limit for this latitude/longitude
            if speed_limit == None:  # Speed limit data not available; rewrite the current row as-is
                writer.writerow(row)
            elif (speed - speed_limit) > 5:  # Over the speed limit by more than 5mph
                row[6] = 'Speeding'  # Add a speeding flag to the current row
                writer.writerow(row)  # Write the row to the new file
            else:
                writer.writerow(row)  # Add the row as-is
        else:
            writer.writerow(row)  # Add the row as-is
    else:
        writer.writerow(row)  # Add the row as-is

And below is the function with the API call that returns the speed limit

def get_speed_limit(lat, lon):
    latitude = str(round(float(lat), 6))
    longitude = str(round(float(lon), 6))
    key = str(os.environ.get('azure_key'))
    # The subscription-key parameter authenticates the request with Azure Maps
    r = requests.get(rf'https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query={latitude},{longitude}&returnSpeedLimit=true&subscription-key={key}')
    response = r.json()
    try:
        # The speed limit comes back as a string like "30MPH"; strip the unit and convert
        speed_limit = float(response['addresses'][0]['address']['speedLimit'].split("M")[0])
        return speed_limit
    except (KeyError, IndexError):
        return None

The Lambda script checked rows periodically for speed limits, saw that the speed was more than 5mph above the limit, and added flags accordingly:

[Image: CSV rows with added 'Speeding' flags]

Using the coordinates from the example above, compared to the data returned by the API:

[Images: map location and API response]

Once the CSV has been fully iterated through, all 5 flags from stage 1 onward have been checked for and added. But now what? How do we display this information in an easy-to-comprehend way, without digging through hundreds of rows in a CSV? We put it on a website where the driver can easily see where flags occurred during their trip. We use Django as our web server's front-end/back-end framework to retrieve and display all of the information from the current trip and the trips to come. For Django to do this, it needs constant access to the data, so a PostgreSQL database is set up with the following tables.

[Image: database tables] (The CSV column order is slightly rearranged, but the logic is unchanged.)

As you can see in the image above, any row containing a flag is appended to our database. The primary key and foreign key are the date and time of the trip being analyzed. In stage 1 we create a "date" variable to serve as a uuid for the current trip. This uuid contains the "Month-Day-Year_Hour_Minute_Second" of the trip (the CSV is named after it) and is used throughout the entire process to keep every trip unique.
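The README doesn't spell out the full schema, but its shape can be sketched from the columns used by the stage 3 Lambda later on. The sketch below uses an in-memory sqlite3 database purely for illustration; the real project uses PostgreSQL on RDS, and the TEXT column types here are an assumption.

```python
import sqlite3

# Illustrative schema only: Trips holds one row per trip uuid, and Events
# holds every flagged row, keyed back to its trip.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE trips (tripdate TEXT PRIMARY KEY)")
cur.execute("""
    CREATE TABLE events (
        tripdate TEXT REFERENCES trips(tripdate),
        lat TEXT, lon TEXT, speed TEXT,
        xacc TEXT, yacc TEXT, sec TEXT, flags TEXT
    )
""")
# Insert the trip first (Events references Trips), then a flagged row
trip = '04-15-2023_09h30m05s.csv'
cur.execute("INSERT INTO trips(tripdate) VALUES (?)", (trip,))
cur.execute("INSERT INTO events VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            (trip, '41.6178', '-72.7686', '34.5', '0.2', '1.1', '42', 'Speeding'))
flag = cur.execute("SELECT flags FROM events").fetchone()[0]
print(flag)   # Speeding
```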

Stage 1 date variable

now = datetime.now()
# The date variable serves as a uuid for every trip; no two trips will have
# the same date and time, making it easy to identify exactly which trip is
# being referred to in later processing.
date = now.strftime("%m-%d-%Y_%Hh%Mm%Ss")
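For a fixed timestamp, this format produces a string like the following (the timestamp itself is a made-up example):

```python
from datetime import datetime

# Fixed timestamp so the result is reproducible
trip_start = datetime(2023, 4, 15, 9, 30, 5)
date = trip_start.strftime("%m-%d-%Y_%Hh%Mm%Ss")
print(date)   # 04-15-2023_09h30m05s
```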

So how do we get this information into our database tables? During stage 3's Lambda function, we make use of an API we created with API Gateway to insert every row containing a flag into our database.

Function that is called after speeding flags have been added

def insert_events(reader, key):
    headers = {'x-api-key': str(os.environ.get('xapikey'))}
    for row in reader:
        if row[6] != '*':
            requests.post(rf'https://sbebc8tcb4.execute-api.us-east-1.amazonaws.com/one/upload?tripdate={key}&lat={row[0]}&lon={row[1]}&speed={row[2]}&x={row[3]}&y={row[4]}&time={row[5]}&flag={row[6]}', headers=headers)

For any row whose flag is not '*', it calls the API with all of the information from the current row, as well as the key name of the CSV, which is the uuid we discussed before.
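As a side note, the same query string can also be built with urllib.parse.urlencode, which percent-escapes values such as flag names containing spaces. This is a sketch with made-up row values, not code from the repo; the endpoint URL is the one used by insert_events.

```python
from urllib.parse import urlencode

# Hypothetical row and key for illustration
row = ['41.6178', '-72.7686', '12.3', '0.2', '1.1', '118', 'Ran Stop Sign']
key = '04-15-2023_09h30m05s.csv'

params = {'tripdate': key, 'lat': row[0], 'lon': row[1], 'speed': row[2],
          'x': row[3], 'y': row[4], 'time': row[5], 'flag': row[6]}
url = ('https://sbebc8tcb4.execute-api.us-east-1.amazonaws.com/one/upload?'
       + urlencode(params))   # urlencode escapes the spaces in the flag name
print(url)
```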

The API is a REST POST API that uses Lambda integration to handle connecting to our database and inserting information into our tables from a simple link. The Lambda reads the HTTP query string parameters and executes the psycopg2 commands needed to insert the data into the tables.

First we set variables from the corresponding query string parameters and check whether the uuid (tripdate) already exists in the "Trips" table.

def lambda_handler(event, context):
    tripdate = event['queryStringParameters']['tripdate']
    lat = event['queryStringParameters']['lat']
    lon = event['queryStringParameters']['lon']
    speed = event['queryStringParameters']['speed']
    x = event['queryStringParameters']['x']
    y = event['queryStringParameters']['y']
    time = event['queryStringParameters']['time']
    flag = event['queryStringParameters']['flag']
    hostname = os.environ.get('hostname')
    port = os.environ.get('port')
    username = os.environ.get('username')
    password = os.environ.get('password')
    conn = psycopg2.connect(dbname='MobiVisionTables', host=hostname, port=port, user=username, password=password)
    cursor = conn.cursor()
    # Check whether an entry already exists in the 'Trips' table for the current tripdate
    cursor.execute(f"SELECT * FROM Trips WHERE tripdate='{tripdate}'")
    results = cursor.fetchone()

If this is the first time the API is called for the current trip, a new entry is inserted into the "Trips" table first, because the "Events" table requires a valid foreign key reference to it. Otherwise, the current data is simply inserted into the "Events" table, which holds all the events from every trip.

    if results == None:
        cursor.execute(f"INSERT INTO Trips(tripdate) VALUES ('{tripdate}')")
        cursor.execute(f"INSERT INTO Events(tripdate, lat, lon, speed, xacc, yacc, sec, flags) VALUES('{tripdate}', '{lat}', '{lon}', '{speed}', '{x}', '{y}', '{time}', '{flag}')")
    else:  # Otherwise insert into the events table only
        cursor.execute(f"INSERT INTO Events(tripdate, lat, lon, speed, xacc, yacc, sec, flags) VALUES('{tripdate}', '{lat}', '{lon}', '{speed}', '{x}', '{y}', '{time}', '{flag}')")

Once all rows with flags have been inserted into the database, everything we need to display your trip's data is ready to be viewed using Django. The final CSV is now sent to our stage 4 bucket.

For information on how to use psycopg2, PostgreSQL's Python library, inside a Lambda function, please check this repo: https://github.com/jkehler/awslambda-psycopg2

Stage 4

Once we have our finalized csv with all flags appended and inserted into our database, it's time to visualize this data and display it on a website.

This is done by using the following resources:

  • 2 EC2 instances
  • 1 S3 bucket
  • 1 sqs queue

You may have noticed, when MobiVision was introduced, the maps shown on the website that visualize the entire trip in a single image. These images are produced by using Selenium to grab a cross section of a map from OpenStreetMap.org, plus some Python scripting to fill in that image.

First, similar to stage 2, an event notification triggered by a CSV upload to our stage 4 bucket sends a message to an SQS queue. An EC2 instance dedicated to producing the map polls for this message; once it is received, it parses the JSON for information about the file that caused the notification and downloads it. From there, the logic for generating a map goes as follows.

Using the CSV's minimum and maximum latitude and longitude, we generate a bounding box for a cross section of the map that all of the CSV's points fit inside, meaning the entire trip, with every logged coordinate, can be viewed in this single image.

def get_map(csv_path):
    # Retrieve the min and max lat/lon coordinates for the bounding box of the map section to fetch
    df = pd.read_csv(rf'{csv_path}', names=['lat', 'lon', 'flag'])  # Keep only the lat, lon, and flag columns, which are all we need
    global min_lat, max_lat, min_lon, max_lon
    # Get the minimum and maximum latitude and longitude from the csv, adding/subtracting a constant for padding around the image
    min_lat = df['lat'].min() - 0.001
    min_lon = df['lon'].min() - 0.001
    max_lat = df['lat'].max() + 0.001
    max_lon = df['lon'].max() + 0.001

The way we get this map is by using Selenium to access OpenStreetMap.org and modify some elements of the HTML, grabbing the exact cross section of the map we need based on the coordinates.

# Modifying elements of html to use input csv's coordinates to capture section of map
driver = webdriver.Chrome(executable_path=r'/usr/bin/chromedriver', options=options)
driver.get(r'https://www.openstreetmap.org/export#map=13/41.6178/-72.7686')
element = driver.find_element(By.XPATH, '/html/body/div/div[2]/div[3]/div[4]/form/input[1]')
driver.execute_script(f"arguments[0].setAttribute('value', '{min_lon}')", element)
element = driver.find_element(By.XPATH, '/html/body/div/div[2]/div[3]/div[4]/form/input[2]')
driver.execute_script(f"arguments[0].setAttribute('value', '{min_lat}')", element)
element = driver.find_element(By.XPATH, '/html/body/div/div[2]/div[3]/div[4]/form/input[3]')
driver.execute_script(f"arguments[0].setAttribute('value', '{max_lon}')", element)
element = driver.find_element(By.XPATH, '/html/body/div/div[2]/div[3]/div[4]/form/input[4]')
driver.execute_script(f"arguments[0].setAttribute('value', '{max_lat}')", element)
element = driver.find_element(By.XPATH, '/html/body/div/div[2]/div[3]/div[4]/form/input[7]')
driver.execute_script(f"arguments[0].click()", element)

Base map generated from selenium script:

[Image: base map captured by the Selenium script]

Other methods of retrieving a cross section of a map based on coordinates failed: the libraries that do exactly this either aren't precise enough with the coordinates for our purposes or are too costly.

Now that we have this base map, we introduce our data into it using the standard formula for converting one min/max range into another.

def convert_minmax(row):  # Convert a lat/lon coordinate pair into a pixel position on the image
    lat_coor = hei - (float(row[0]) - min_lat) / (max_lat - min_lat) * (hei - 0) + 0
    lon_coor = (float(row[1]) - min_lon) / (max_lon - min_lon) * (wid - 0) + 0
    return lat_coor, lon_coor

This function takes an input latitude and longitude and converts it into the pixel position where it belongs on the image. So when we iterate through the current trip's CSV, we use this function to convert every point in the CSV into an image point. If a flag is present, we also append the point to the flag points with a unique flag color.
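To sanity-check the conversion, here is a self-contained version of the same formula with made-up bounds and image dimensions (the redundant `(hei - 0) + 0` terms simplified away). The corners of the bounding box map to the corners of the image, with latitude inverted because pixel rows grow downward.

```python
# Made-up bounding box and image size for illustration
min_lat, max_lat = 41.60, 41.70
min_lon, max_lon = -72.80, -72.70
hei, wid = 1000, 800   # image height and width in pixels

def convert_minmax(row):
    # Map latitude to a pixel row (flipped: north at the top of the image)
    lat_coor = hei - (float(row[0]) - min_lat) / (max_lat - min_lat) * hei
    # Map longitude to a pixel column
    lon_coor = (float(row[1]) - min_lon) / (max_lon - min_lon) * wid
    return lat_coor, lon_coor

print(convert_minmax(['41.70', '-72.80']))   # north-west corner -> (0.0, 0.0)
print(convert_minmax(['41.60', '-72.70']))   # south-east corner -> (1000.0, 800.0)
```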

for row in reader_list:  # Convert all points in the list and add them to img_points
    lat_coor, lon_coor = convert_minmax(row)
    img_points.append([lon_coor, lat_coor])
    if row[2] != '*':
        flag = row[2]
        if flag == 'Speeding':
            flag_points.append([lon_coor, lat_coor])  # If the current row has a flag, append the point to flag_points
            colors[f'{lat_coor, lon_coor}'] = '#7d0000'  # The circle's color depends on the flag type
        elif flag == 'Hard Acceleration':
            flag_points.append([lon_coor, lat_coor])
            colors[f'{lat_coor, lon_coor}'] = '#ff7300'
        elif flag == 'Hard Breaking':
            flag_points.append([lon_coor, lat_coor])
            colors[f'{lat_coor, lon_coor}'] = '#0048ff'
        elif flag == 'Hard Cornering':
            flag_points.append([lon_coor, lat_coor])
            colors[f'{lat_coor, lon_coor}'] = '#00a118'
        elif flag == 'Ran Stop Sign':
            flag_points.append([lon_coor, lat_coor])
            colors[f'{lat_coor, lon_coor}'] = 'red'

Now that all of the coordinates have been converted to image points, we use the PIL library to draw them onto the base image: a line through the image points, and a circle in the corresponding color for each flag point.

for i in range(len(img_points)):  # Draw the route as line segments between consecutive points
    try:
        line = img_points[i][0], img_points[i][1], img_points[i+1][0], img_points[i+1][1]
        draw.line(line, fill=(0, 0, 255), width=4)
    except IndexError:  # Past the last point
        break

for flag in flag_points:  # Draw the flag circles
    draw.ellipse((flag[0]-f_const, flag[1]-f_const, flag[0]+f_const, flag[1]+f_const), fill=get_color(flag, colors), outline=(0, 0, 0))

[Image: final trip map with route and flag markers]

The start and end points (white and black) follow the same logic as the flag circles.

Once we have this final map, our web server, which is listening for SQS messages (the same polling logic as stage 2), is notified when the PNG is uploaded to the stage 4 bucket. Once the message is received, the web server automatically saves the image into its static files to be viewed on the website. See https://github.com/bdubxl/MobiVision/blob/main/Stage%204/Website/autostatic.py

After configuring our settings.py file for connectivity with our database, we use Django's "inspectdb" tool to migrate the exact arrangement of the tables into our models.py file.

class Events(models.Model):
    tripdate = models.ForeignKey('Trips', models.DO_NOTHING, db_column='tripdate', blank=False, null=False, primary_key=True, unique=False)
    lat = models.CharField(max_length=255, blank=False, null=False)
    lon = models.CharField(max_length=255, blank=False, null=False)
    speed = models.CharField(max_length=255, blank=False, null=False)
    xacc = models.CharField(max_length=255, blank=False, null=False)
    yacc = models.CharField(max_length=255, blank=False, null=False)
    sec = models.CharField(max_length=255, blank=False, null=False)
    flags = models.CharField(max_length=255, blank=False, null=False)

    def __str__(self):
        return f'{self.tripdate}, {self.lat}, {self.lon}, {self.flags}'

    class Meta:
        managed = False
        db_table = 'events'


class Trips(models.Model):
    tripdate = models.CharField(primary_key=True, max_length=255)

    def __str__(self):
        return self.tripdate

    class Meta:
        managed = False
        db_table = 'trips'

Now that the models have been migrated, our views.py file can query the data using Django's ORM functions and pass the results to HTML templates for viewing.

def home(request):
    trips = Trips.objects.all()  # Query all trips from the database
    if request.method == 'POST':  # Display the events from the selected trip
        tripd = request.POST.get('tripdate')  # The pressed button's form data carries the tripdate value
        events = Events.objects.filter(tripdate=tripd)  # Query all events from the selected date
        context = {'events': events, 'tripdate': tripd}
        return render(request, 'events.html', context)
    else:  # Display all trips
        context = {'trips': trips}
        return render(request, 'index.html', context)

Note that this is a very basic website whose purpose is just to show the data from your trip, nothing more.

HTML utilizing the context variables ("Trips Page"):

<form action="" method="post">
    {% csrf_token %}
    {% for trip in trips %}
    <button name="tripdate" value="{{trip.tripdate}}">
        <img src="{% static "" %}{{trip.tripdate}}.png" style="width:300px;height:300px;">
        <div class="top-middle">{{trip}}</div>
    </button>
    {% endfor %}
</form>

[Screenshot: Trips page]

"Events (More Details) Page"

{% for event in events %}
    {% if event.flags == 'Ran Stop Sign' %}
    <li class="list-group-item">{{event.lat}}, {{event.lon}}, Ran Stop Sign at {{event.speed}}mph</li>
    {% endif %}
    {% if event.flags == 'Hard Cornering' %}
    <li class="list-group-item">{{event.lat}}, {{event.lon}}, Turned Corner at {{event.yacc}} G's</li>
    {% endif %}
    {% if event.flags == 'Hard Acceleration' %}
    <li class="list-group-item">{{event.lat}}, {{event.lon}}, Accelerated from {{event.speed|sub:8|add_decimal:0}}mph to {{event.speed}}mph in one second</li>
    {% endif %}
    {% if event.flags == 'Speeding' %}
    <li class="list-group-item">{{event.lat}}, {{event.lon}}, Speed limit surpassed by over 5mph, going {{event.speed}}mph</li>
    {% endif %}
    {% if event.flags == 'Hard Breaking' %}
    <li class="list-group-item">{{event.lat}}, {{event.lon}}, Decelerated from {{event.speed|add:8|add_decimal:0}}mph to {{event.speed}}mph in one second</li>
    {% endif %}
{% endfor %}

[Screenshot: More Details page]