
DeepLens-Workshop Labs

In this workshop you will learn the following:

  1. Register the DeepLens device to your AWS account
  2. Modify the DeepLens inference lambda function to upload cropped faces to S3
  3. Deploy the inference lambda function and face detection model to DeepLens
  4. Create a lambda function to trigger Rekognition to identify emotions
  5. Create a DynamoDB table to store the recognized emotions
  6. Analyze sentiments using CloudWatch


The workshop consists of 3 hands-on lab sessions:

Hands-on Lab 1: Register and configure your DeepLens device

Register AWS DeepLens

Visit the AWS Management Console. Make sure you are in the US East (N. Virginia) region.

Search for DeepLens in the search bar and select AWS DeepLens to open the console.

On the AWS DeepLens console screen, find the Get started section on the right hand side and select Register Device.

register device landing page

Step 1- Provide a name for your device.

Enter a name for your DeepLens device (for example, “MyDevice”), and select Next.

name device

Step 2- Provide permissions

AWS DeepLens projects require different levels of permissions, which are set by AWS Identity and Access Management (IAM) roles. When registering your device for the first time, you'll need to create each one of these IAM roles.

Steps to use CloudFormation to automatically create required IAM roles for DeepLens

  • Select the checkbox "I acknowledge that AWS CloudFormation might create IAM resources with custom names." and click Create.

  • Wait a few seconds and refresh the screen until the status is CREATE_COMPLETE.

Manual steps to create IAM roles for DeepLens (if the CloudFormation stack above fails for some reason)

Role 1- IAM role for AWS DeepLens

Select Create a role in IAM.

create role-deeplens

Use case is selected by default. Click Next: Permissions

Click Next: Review

Click Create role

service role review page

Return to the Set permissions page, select Refresh IAM roles, and select the newly created role name AWSDeepLensServiceRole.

refresh role- service roles

Role 2- IAM role for AWS Greengrass

Select Create a role in IAM.

Use case is selected by default. Click Next: Permissions

Click Next: Review

Click Create role

Return to the Set permissions page, select Refresh IAM roles, and select the newly created role name AWSDeepLensGreengrassRole.

Role 3- IAM group role for AWS Greengrass

Select Create a role in IAM.

Use case is selected by default. Click Next: Permissions

Click Next: Review

Click Create role

Return to the Set permissions page, select Refresh IAM roles, and select the newly created role name AWSDeepLensGreengrassGroupRole.

Role 4- IAM role for Amazon SageMaker

Select Create a role in IAM.

Use case is selected by default. Click Next: Permissions

Click Next: Review

Click Create role

Return to the Set permissions page, select Refresh IAM roles, and select the newly created role name AWSDeepLensSageMakerRole.

Role 5- IAM role for AWS Lambda

Select Create a role in IAM.

Use case is selected by default. Click Next: Permissions

Click Next: Review

Click Create role

Return to the Set permissions page, select Refresh IAM roles, and select the newly created role name AWSDeepLensLambdaRole.

Note: These roles are very important. Make sure that you select the right role for each one, as you can see in the screenshot.

all roles

Once you have all the roles correctly created and populated, select Next.

Step 3- Refresh IAM roles and Select the newly created Role Name

In AWS DeepLens, on the Set permissions page, choose Refresh IAM roles, then do the following:

  • For IAM role for AWS DeepLens, choose AWSDeepLensServiceRole.
  • For IAM role for AWS Greengrass service, choose AWSDeepLensGreengrassRole.
  • For IAM role for AWS Greengrass device groups, choose AWSDeepLensGreengrassGroupRole.
  • For IAM role for Amazon SageMaker, choose AWSDeepLensSageMakerRole.
  • For IAM role for AWS Lambda, choose AWSDeepLensLambdaRole.

Important: Attach the roles exactly as described. Otherwise, you might have trouble deploying models to AWS DeepLens.

If any of the lists do not have the specified role, find that role in step 2, follow the directions to create the role, choose Refresh IAM roles, and return to where you were in step 3.

Once you have all the roles correctly created and populated, select Next.

Step 4- Download certificate

In this step, you will download and save the required certificate to your computer. You will use it later to enable your DeepLens to connect to AWS.

Select Download certificate and note the location of the certificates.zip file. Select Register.

download certificate

Note: Do not open the zip file. You will attach this zip file later on during device registration.

Configure your DeepLens

In this step, you will connect the device to a Wi-Fi/Ethernet connection, upload the certificate and review your set-up. Then you're all set!

Power ON your device

If you are connected over monitor setup

Make sure the middle LED is blinking. If it is not, then use a pin to reset the device. The reset button is located at the back of the device.

Navigate to the setup page at 192.168.0.1.

If you are connected in headless mode

Make sure the middle LED is blinking. If it is not, then use a pin to reset the device. The reset button is located at the back of the device.

Locate the SSID/password of the device’s Wi-Fi. You can find the SSID/password on the underside of the device.

Connect to the device network via the SSID and provide the password.

Navigate to the setup page at 192.168.0.1.

set up guide

Step 5- Connect to your network

Select your local Wi-Fi network ID from the dropdown list and enter your Wi-Fi password. If you are using Ethernet, choose the Use Ethernet option instead.

Select Save.

network connection

Step 6- Attach Certificates

Select Browse in the Certificate section. Select the zip file you downloaded in Step 4.

Select Next.

upload certificate

Step 7- Device setup

If you are on the device summary page, please do not make changes to the password.

Note: If instead you are presented with the screen below, type the device password as Aws2017! .

device settings

Step 8- Select Finish

set up summary finish

Congratulations! You have successfully registered and configured your DeepLens device. To verify, return to AWS DeepLens console and select Devices in the left side navigation bar and verify that your device has completed the registration process. You should see a green check mark and Completed under Registration status.

Hands-on Lab 2: Build a project to detect faces and send the cropped faces to S3 bucket

IAM Roles:

First, we need to add S3 permissions to the DeepLens Lambda role so the lambda running on the device can call PutObject on the bucket of interest.

Go to IAM Console

Choose Roles and look up AWSDeepLensGreengrassGroupRole

Click on the role, and click Attach Policy

Search for AmazonS3FullAccess, choose the policy by checking the box, and click Attach Policy.
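
If you prefer to do this from code rather than the console, a minimal boto3 sketch along these lines should work (it assumes your local AWS credentials have IAM permissions; the role name matches the one created during registration):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS-managed S3 full-access policy to the DeepLens Greengrass group role
# so the inference lambda running on the device can upload cropped faces to S3.
iam.attach_role_policy(
    RoleName="AWSDeepLensGreengrassGroupRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
```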

Create Bucket:

We need to create an S3 bucket that we can upload faces to.

Go to AWS Management console and search for S3

Choose 'Create bucket'

Name your bucket: face-detection-your-name

Click on Create
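
The same bucket can also be created with boto3; a minimal sketch (bucket names must be globally unique, so replace face-detection-your-name with your own name):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# In us-east-1, no CreateBucketConfiguration block is required.
s3.create_bucket(Bucket="face-detection-your-name")
```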

Create Inference lambda function:

Go to AWS Management console and search for Lambda

Click 'Create function'

Choose 'Blueprints'

In the search bar, type “greengrass-hello-world” and hit Enter

Choose the python blueprint and click Configure

Name the function: DeepLens-sentiment-your-name
Role: Choose an existing role
Existing role: AWSDeepLensLambdaRole

Click Create Function

Replace the default script with the inference script. You can select the inference script by choosing Raw on the GitHub page and selecting the script using Ctrl+A/Cmd+A. Copy the script and paste it into the lambda function (make sure you delete the default code).

In the script, you will have to provide the name for your S3 bucket. Insert your bucket name in the code below.

code bucket
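
For orientation, the S3 upload portion of the inference script looks roughly like the sketch below; the exact variable and function names in the workshop script may differ, and push_to_s3 here is illustrative:

```python
import boto3

# Replace with the bucket you created above.
BUCKET_NAME = "face-detection-your-name"

s3 = boto3.client("s3")

def push_to_s3(crop_jpeg_bytes, key):
    # key is the object name for the cropped face, e.g. "faces/2018-01-01_12-00-00.jpg".
    s3.put_object(Bucket=BUCKET_NAME, Key=key, Body=crop_jpeg_bytes)
```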

Click Save

Under the “Actions” drop-down menu, click “Publish new version” and publish.

Note: It is important that you publish the lambda function; otherwise, you cannot access it from the DeepLens console.

Deploy project:

Step 1- Create Project

The AWS DeepLens console should open on the Projects screen, select Create new project on the top right (if you don’t see the project list view, click on the hamburger menu on the left and select Projects)

create project

Choose a blank template and scroll down the screen to select Next

Provide a name for your project: face-detection-your-name

Click on Add Models and choose face detection

Click on Add function and choose the lambda function you just created: DeepLens-sentiment-your-name

Click Create

Step 2- Deploy to device

In this step, you will deploy the Face detection project to your AWS DeepLens device.

Select the project you just created from the list by choosing the radio button

Select Deploy to device.

choose project-edited-just picture

On the Target device screen, choose your device from the list, and select Review.

target device

Select Deploy.

review deploy

On the AWS DeepLens console, you can track the progress of the deployment. It can take a few minutes to transfer a large model file to the device. Once the project is downloaded, you will see a success message displayed and the banner color will change from blue to green.

Confirmation/verification

You will find your cropped faces uploaded to your S3 bucket.
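
If you would rather check from a terminal or notebook than from the S3 console, a quick boto3 sketch to list the uploaded crops (this assumes the script writes them under a faces/ prefix; adjust the prefix to match your script):

```python
import boto3

s3 = boto3.client("s3")

# List the cropped face images uploaded by the DeepLens inference lambda.
resp = s3.list_objects_v2(Bucket="face-detection-your-name", Prefix="faces/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```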

Hands-on Lab 3: Identify emotions

Step 1- Create DynamoDB table

Go to AWS Management console and search for Dynamo

Click on Create Table.

Name of the table: recognize-emotions-your-name
Primary key: s3key

Click on Create. This will create a table in your DynamoDB.
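
Equivalently, the table can be created with boto3; a minimal sketch (the provisioned throughput values are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table keyed by the S3 object key of each cropped face.
dynamodb.create_table(
    TableName="recognize-emotions-your-name",
    KeySchema=[{"AttributeName": "s3key", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "s3key", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```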

Step 2- Create a role for cloud lambda function

Go to AWS Management console and search for IAM

Choose 'Create Role'

Select “AWS Service”

Select “Lambda” and choose "Next:Permissions"

Attach the following policies:

  • AmazonDynamoDBFullAccess
  • AmazonS3FullAccess
  • AmazonRekognitionFullAccess
  • CloudWatchFullAccess

Click Next

Provide a name for the role: rekognizeEmotions

Choose 'Create role'

Step 3- Create a lambda function that runs in the cloud

The inference lambda function that you deployed earlier will upload the cropped faces to your S3 bucket. When a new face lands in S3, this lambda function is triggered and calls Amazon Rekognition to identify the emotions in the cropped face.
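
The workshop's recognize-emotions.py script implements this logic. As a rough orientation only (not a substitute for the provided script), the core of such a handler looks approximately like the sketch below; the DynamoDB attribute names, the CloudWatch namespace, and the overall structure are illustrative and may differ from the actual script:

```python
import boto3
from decimal import Decimal

TABLE_NAME = "recognize-emotions-your-name"  # the DynamoDB table created in Step 1

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table(TABLE_NAME)
cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    # The S3 ObjectCreated event carries the bucket and key of the uploaded face crop.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Ask Rekognition for all facial attributes, which include per-face emotions.
    resp = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],
    )

    for face in resp["FaceDetails"]:
        # Keep the highest-confidence emotion for this face.
        top = max(face["Emotions"], key=lambda e: e["Confidence"])

        # Store the result in DynamoDB, keyed by the S3 object key.
        table.put_item(Item={
            "s3key": key,
            "emotion": top["Type"],
            "confidence": Decimal(str(top["Confidence"])),
        })

        # Emit a custom metric so the CloudWatch dashboard in Step 4 has data to plot.
        cloudwatch.put_metric_data(
            Namespace="string",  # namespace used in the workshop script; adjust if yours differs
            MetricData=[{
                "MetricName": top["Type"],
                "Value": top["Confidence"],
                "Unit": "Percent",
            }],
        )
```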

Go to AWS Management console and search for Lambda

Click 'Create function'

Choose 'Author from scratch'

Name the function: recognize-emotion-your-name
Runtime: Choose Python 2.7
Role: Choose an existing role
Existing role: rekognizeEmotions

Choose Create function

Replace the default script with the script in recognize-emotions.py. You can select the script by choosing Raw on the GitHub page and selecting the script using Ctrl+A/Cmd+A. Copy the script and paste it into the lambda function (make sure you delete the default code).

Make sure you enter the table name you created earlier in the section highlighted below:

dynamodb

Next, we need to add the event that triggers this lambda function. This will be an “S3:ObjectCreated” event that fires every time a face is uploaded to the face S3 bucket. Add an S3 trigger from the Designer section on the left; a programmatic sketch of the same configuration appears after the list below.

Configure with the following:

  • Bucket name: face-detection-your-name (you created this bucket earlier)
  • Event type- Object Created
  • Prefix- faces/
  • Suffix- .jpg
  • Enable trigger- ON (keep the checkbox on)
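
The console wires this up for you, but for reference, roughly the same trigger can be configured with boto3 (the function ARN below is a placeholder; substitute your region, account ID, and function name):

```python
import boto3

BUCKET = "face-detection-your-name"
# Placeholder ARN; replace with the ARN of the recognize-emotion lambda you just created.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:recognize-emotion-your-name"

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Allow S3 to invoke the function for events from this bucket.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="s3-invoke-recognize-emotions",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::" + BUCKET,
)

# Fire the function on ObjectCreated events for .jpg objects under faces/.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": FUNCTION_ARN,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "faces/"},
                {"Name": "suffix", "Value": ".jpg"},
            ]}},
        }],
    },
)
```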

Save the lambda function

Under the 'Actions' tab, choose Publish.

Step 4- View the emotions on a dashboard

Go to AWS Management console and search for CloudWatch

Create a dashboard called “sentiment-dashboard-your-name”

Choose Line in the widget

Under Custom Namespaces, select “string”, “Metrics with no dimensions”, and then select all metrics.

Next, set “Auto-refresh” to the smallest interval possible (1h), and change the “Period” to whatever works best for you (1 second or 5 seconds)

NOTE: These metrics will only appear once they have been sent to Cloudwatch via the Rekognition Lambda. It may take some time for them to appear after your model is deployed and running locally. If they do not appear, then there is a problem somewhere in the pipeline.

With this we have come to the end of the session. As part of building this project, you learnt the following:

  1. Register the DeepLens device to your AWS account
  2. Modify the DeepLens inference lambda function to upload cropped faces to S3
  3. Deploy the inference lambda function and face detection model to DeepLens
  4. Create a lambda function to trigger Rekognition to identify emotions
  5. Create a DynamoDB table to store the recognized emotions
  6. Analyze sentiments using CloudWatch
