The Food Computer Alexa skill is now in beta test! Please send me an e-mail to join the beta, and you'll be able to use the skill to interact with my food computer on your Echo devices. Any feedback is welcome, but please be aware that this skill and the Personal Food Computer project are still in development.

Alexa OpenAg Personal Food Computer

This project describes how to use Amazon's Alexa to create a voice user interface to the OpenAg Personal Food Computer (PFC) and provides a framework to use Alexa with the Internet of Things.

I was very fortunate to have the opportunity to build my own OpenAg PFC, which is now in use at my home. I thought I'd give back to the OpenAg community by integrating Amazon's Alexa with the PFC. I find the PFC an amazing device and the OpenAg initiative inspiring! I hope people find the PFC even more useful with Alexa.

If you want to try this Alexa skill right now, you can join my beta test and interact with my PFC via my skill on your own Echo device. Or, if you have your own PFC, fork my code and run it as your own skill.

Table of Contents

  1. Project Requirements
  2. System Architecture
  3. Alexa Skills, AWS Lambda and S3
  4. OpenAg Brain
  5. OpenAg Computer Vision
  6. CouchDB
  7. Plotly
  8. Alexa User Interaction Examples
  9. Licensing
  10. Contact Information
  11. Appendix

Project Requirements

My high level goals and associated requirements for this project are shown below.

  1. Learn about the OpenAg initiative and how people might benefit from this technology.
  2. Learn how to develop an Alexa skill for Echo devices with a display, and in particular skills for the Internet of Things. This led to the requirement to use an external plot creation service (plot.ly), for example. Note that I did consider using the AWS IoT service for this project but decided against it since the PFC is a relatively complex device and, in particular, has an integrated database.
  3. Learn how to integrate a voice UI with the PFC that gives a great user experience. This required me not only to think through the user interaction but also to help fix stability bugs in the OpenAg Brain software and to further develop its API to facilitate the integration with Alexa. I also had to pay attention to how fast Alexa responded to requests, with five seconds or less as the requirement. This drove some decisions such as how to access the PFC's CouchDB database, in particular the use of stale views. Latency optimization is a work in progress.
  4. Learn about ROS, which is used extensively in the OpenAg Brain project.
  5. Create an end-to-end framework for Alexa Skills development that can be used for other similar IoT applications. This led to the requirement to use standard services and components where possible and provide clear documentation.
  6. Ensure that the end-to-end service is secure. This required me to enable HTTPS on CouchDB and in the JavaScript code that handles the Alexa intent.
  7. Learn about how computer vision can be used in growing plants optimally. This led to the use of OpenAg CV and its integration with OpenAg Brain.

System Architecture

The figure below shows the Alexa OpenAg Personal Food Computer's system architecture.

[System architecture diagram]

Alexa Skills, AWS Lambda and S3

The JSON in this repo's ask directory can be used in your dev account to create the Alexa skill, and the JavaScript in the lambda directory will need to run in your own Lambda instance (using Node.js as the runtime), as it serves as the intent handler for the skill.

To write your own version of this skill you need to set up an Amazon applications developer account and an Amazon Web Services account. See this excellent tutorial for an example of how to do this and get started writing Alexa skills. I used the Alexa Skills Kit SDK for Node.js to develop this application.
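
As a concrete illustration of the handler structure, here is a minimal sketch using the Alexa Skills Kit SDK for Node.js. The intent name, slot, and responses here are hypothetical; the skill's real interaction model and handlers live in the ask and lambda directories.

'use strict';
const Alexa = require('alexa-sdk');

const handlers = {
    'LaunchRequest': function () {
        this.emit(':ask', 'Welcome to Food Computer. What would you like to know?');
    },
    // Hypothetical intent and slot for illustration only.
    'GetParameterIntent': function () {
        const spoken = this.event.request.intent.slots.Parameter.value;
        // A real handler would query the PFC's CouchDB here and speak the result.
        this.emit(':tell', 'The ' + spoken + ' reading is on its way.');
    },
    'Unhandled': function () {
        this.emit(':ask', "Sorry, I didn't understand. Please try again.");
    }
};

exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.registerHandlers(handlers);
    alexa.execute();
};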

I'm using AWS S3 to temporarily store images from the PFC's cameras and the plots of PFC variable data over time from the Plotly service. This is required since the Alexa service needs a URL to an image to be displayed. I needed to set up an S3 bucket for this purpose and give the Lambda service permission to access it; see this for how to do that.
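
The flow is roughly: generate or fetch the image, put it in the bucket, and hand the resulting URL to the Alexa display template. Below is a minimal sketch using the AWS SDK for Node.js; the bucket name and key prefix are placeholders, and the bucket must be readable by the Alexa service.

'use strict';
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Uploads a png image buffer to S3 and returns its URL via callback.
// Bucket name and key prefix are placeholders.
function uploadImage(imageBuffer, callback) {
    const params = {
        Bucket: 'my-pfc-images',
        Key: 'plots/' + Date.now() + '.png', // unique key so stale images are not reused
        Body: imageBuffer,
        ContentType: 'image/png'
    };
    s3.putObject(params, function (err) {
        if (err) return callback(err);
        callback(null, 'https://s3.amazonaws.com/' + params.Bucket + '/' + params.Key);
    });
}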

OpenAg Brain

I've made modifications to the openag_brain source to facilitate integration with Alexa. Although some of these changes have been integrated into the official openag_brain repo, to get the latest you should use my "cv" fork of openag_brain. See the appendix for the steps required to install OpenAg Brain.

OpenAg Computer Vision

I'm using openag_cv for CV development. I've also made modifications to the openag_cv source to get it to work with an integrated openag_brain configuration, so you need to use my fork of openag_cv. See the appendix for the steps required to install OpenAg CV.

CouchDB

The PFC uses Apache CouchDB, which is open-source database software. It has a document-oriented NoSQL architecture and is implemented in the concurrency-oriented language Erlang; it uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API.
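
For a flavor of how CouchDB's JavaScript/MapReduce querying works, here is a hypothetical view map function that indexes data points by variable and timestamp. The PFC's real views are defined by openag_brain; the document shape shown here is illustrative only.

// Hypothetical CouchDB view map function; the PFC's actual views are
// defined by openag_brain. Emits each data point keyed by variable and time.
function (doc) {
    if (doc.variable && doc.timestamp && doc.value !== undefined) {
        emit([doc.variable, doc.timestamp], doc.value);
    }
}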

The Alexa skill's code runs in an AWS Lambda instance which communicates with the PFC over the Internet via CouchDB REST APIs and openag_brain REST APIs proxied by CouchDB. For security purposes, these APIs need to be authenticated and encrypted via TLS/SSL and the CouchDB "admin party" needs to be ended. See the appendix for the steps I took to secure CouchDB.

Two methods of querying the database are supported: JSON and CSV. The JSON method has more overhead, so the returned file is larger, but it is generated relatively quickly. The CSV method returns a file about 10x smaller but takes 4 to 5 times longer to generate. The choice between them depends on the Internet bandwidth available and the execution time of CouchDB, which is affected by overall PFC system load. See the intent handlers in the lambda code directory for examples of how the JSON and CSV methods are implemented over HTTPS.

Database queries use the "stale=update_after" directive, which causes CouchDB to update the view after a stale result is returned. This speeds up queries but means slightly out-of-date information is returned. See this for more information.
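
Putting the pieces together, here is a minimal sketch of how the Lambda code can issue such a query with the Node.js https module over the mutual TLS connection described in the appendix. The hostname, port, and view path are placeholders; the client certificate files are the ones generated in the appendix.

'use strict';
const https = require('https');
const fs = require('fs');

// Query a CouchDB view over mutual TLS with stale=update_after.
// Hostname, port, and view path are placeholders.
const options = {
    hostname: 'external-ip-addr',
    port: 6984,
    path: '/environmental_data_point/_design/openag/_view/by_variable' +
          '?stale=update_after&descending=true&limit=1',
    method: 'GET',
    key: fs.readFileSync('client.key'),
    cert: fs.readFileSync('client.crt'),
    ca: fs.readFileSync('ca.crt')
};

https.request(options, function (res) {
    let body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        const view = JSON.parse(body); // the JSON method; CSV needs its own parsing
        console.log(view.rows);
    });
}).on('error', console.error).end();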

Plotly

I'm using a fantastic plotting service called Plotly to generate plots from the PFC's database information upon a user request to Alexa. I needed to set up a free Plotly account to use it. The lambda code sends the PFC data to the Plotly service, which returns a png image. The latency of the service tends to be a second or two, but I've seen it as long as five seconds. Although this is not the bottleneck in showing a PFC variable plot to the user, it could be optimized by running the Plotly code in the Lambda instance. This would require it to be compiled as a node package and uploaded with the rest of my handler code to Lambda.
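
For reference, here is a rough sketch of generating a png through the Plotly service using the plotly Node.js package. The account credentials, data, and layout are placeholders; the real data comes from the PFC's CouchDB.

'use strict';
const fs = require('fs');
// Placeholder Plotly account credentials.
const plotly = require('plotly')('username', 'apiKey');

// Placeholder figure; real x/y values come from the PFC's database.
const figure = {
    data: [{
        x: ['2017-09-03 11:00', '2017-09-03 12:00', '2017-09-03 13:00'],
        y: [410, 420, 415],
        type: 'scatter'
    }],
    layout: { title: 'air_carbon_dioxide (ppm)' }
};

// getImage returns a png stream, which can then be written to S3.
plotly.getImage(figure, { format: 'png', width: 1024, height: 600 }, function (err, imageStream) {
    if (err) return console.error(err);
    imageStream.pipe(fs.createWriteStream('plot.png'));
});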

Alexa User Interaction Examples

The examples below show how a user can interact with the PFC via Alexa on an Echo device (with or without a screen).

Example: Get information about the recipe being used
  "Alexa, ask Food Computer for recipe information" (returns the current recipe name and when it started)
  [Card image]

Example: Show a graph of a parameter (if the Echo device has a display). Latency may be high - be patient.
  "Alexa, ask Food Computer to graph air carbon dioxide" (default is to graph over the last 24 hours)
  [Plot image]
  "Alexa, ask Food Computer to graph {parameter}" (see below for the list of supported parameters)

Example: Show a camera image (if the Echo device has a display)
  "Alexa, ask Food Computer to show top camera" (displays the latest image from the aerial camera)
  [Camera image. Note: plants at end of cycle]
  "Alexa, ask Food Computer to show side camera" (latest image from the side camera)
  "Alexa, ask Food Computer to show top camera {time} ago" (top camera image from the specified past time)

Example: Show plant size and number of leaves
  "Alexa, ask Food Computer to show measurement view" (uses openag_cv)
  [Measurement view image. Note: measurements still need work]

Example: Get plant health and any issues that need attention
  "Alexa, ask Food Computer how my plants are" (in progress)

Example: Get Food Computer diagnostics information
  "Alexa, ask Food Computer for diagnostics" (returns the health of the system)
  "Alexa, ask Food Computer how it's feeling" (alternative way of asking for diagnostics)

Example: Set the value of a desired parameter
  "Alexa, ask Food Computer to set air temperature to 30 degrees" (in progress)

Example: Get the value of a measured parameter (and the desired parameter, if set)
  "Alexa, ask Food Computer for air carbon dioxide" (returns measured air CO2 in ppm)
  "Alexa, ask Food Computer for air temperature" (returns the measured air temperature, and the desired temperature if set by the recipe)
  [Card image. Note: example with a desired value set]
  "Alexa, ask Food Computer for water potential hydrogen" (aka pH)
  "Alexa, ask Food Computer for water pH level" (alias for potential hydrogen)
  "Alexa, ask Food Computer for desired {parameter}" (see below for the list of supported parameters)

Example: Start a recipe
  "Alexa, ask Food Computer for recipe lettuce" (will start the recipe called "lettuce")
  "Alexa, ask Food Computer to start recipe lettuce" (alternative way of starting the recipe "lettuce")

Here is a list of currently supported Food Computer parameters accessible by Alexa.

Spoken Parameter                      Food Computer Database Parameter
"carbon dioxide"                      "air_carbon_dioxide"
"air carbon dioxide"                  "air_carbon_dioxide"
"air humidity"                        "air_humidity"
"humidity"                            "air_humidity"
"air temperature"                     "air_temperature"
"temperature"                         "air_temperature"
"light illuminance"                   "light_illuminance"
"illuminance"                         "light_illuminance"
"blue light intensity"                "light_intensity_blue"
"blue intensity"                      "light_intensity_blue"
"red light intensity"                 "light_intensity_red"
"red intensity"                       "light_intensity_red"
"white light intensity"               "light_intensity_white"
"white intensity"                     "light_intensity_white"
"water electrical conductivity"       "water_electrical_conductivity"
"water conductivity"                  "water_electrical_conductivity"
"water potential hydrogen"            "water_potential_hydrogen"
"water ph level"                      "water_potential_hydrogen"
"water ph"                            "water_potential_hydrogen"
"water temperature"                   "water_temperature"
"water level high"                    "water_level_high"

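In the intent handler, this mapping can be implemented as a simple lookup object along these lines (a sketch; the actual lambda code may structure it differently):

// Sketch of the spoken-to-database parameter mapping.
const PARAMETER_MAP = {
    'carbon dioxide': 'air_carbon_dioxide',
    'air carbon dioxide': 'air_carbon_dioxide',
    'humidity': 'air_humidity',
    'air humidity': 'air_humidity',
    'temperature': 'air_temperature',
    'air temperature': 'air_temperature',
    // ...remaining aliases from the table above...
    'water level high': 'water_level_high'
};

function toDbParameter(spoken) {
    return PARAMETER_MAP[spoken.toLowerCase()] || null;
}
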
Licensing

Everything here is licensed under the MIT license.

Contact Information

For questions or comments about this project please contact the author goruck (Lindo St. Angel) at {lindostangel} AT {gmail} DOT {com}.

Appendix

Installing OpenAg_Brain (openag_brain) on the Food Computer's Raspberry Pi

  1. Download Raspbian Jessie Lite.
  2. Download Etcher.
  3. Use Etcher to write Raspbian image to a micro SD card. I used a 64GB card.
  4. Insert SD card in Raspberry Pi, boot to console. Make sure the Raspberry Pi has an Ethernet connection.
  5. Change default password:
$ echo 'pi:newpassword' | sudo chpasswd
  6. Update Raspbian:
$ sudo apt-get update && sudo apt-get upgrade
  7. Start sshd at reboot to allow headless operation via ssh:
$ sudo systemctl start ssh.service
$ sudo systemctl enable ssh.service
  8. Add US English as a locale (optional):
$ sudo dpkg-reconfigure locales
  9. Set time zone:
$ sudo dpkg-reconfigure tzdata
  10. Install git:
$ sudo apt-get install git
  11. Clone my fork of the openag_brain source code, similar to this.
$ git clone https://github.com/goruck/openag_brain.git ~/catkin_ws/src/openag_brain
  12. Increase swap space.
  13. Install and compile openag_brain:
$ cd ~/catkin_ws/src/openag_brain
$ ./scripts/install_dev
  14. Build firmware and flash it to the Arduino:
$ source /opt/ros/indigo/setup.bash
$ cd ~/catkin_ws/src/openag_brain
$ ./scripts/firmware -t upload

Note: you may have to remove and reinsert the Arduino's USB cable from the Raspberry Pi after the last command.

  15. Test that openag_brain works:
$ rosrun openag_brain main personal_food_computer_v2.launch

Installing the OpenAg User Interface (openag_ui) on the Food Computer's Raspberry Pi

  1. Install Node.js and npm for openag_ui:
$ sudo apt-get install curl python-software-properties
$ curl -sL https://deb.nodesource.com/setup_7.x | sudo bash -
$ sudo apt-get install nodejs
  2. Clone the openag_ui source code:
$ git clone https://github.com/OpenAgInitiative/openag_ui ~/openag_ui
  3. Build and deploy the UI:
$ cd ~/openag_ui
$ npm install
$ npm run couchapp_deploy --app_db_url="http://localhost:5984/app"
  4. Test that the UI works: open your browser to http://localhost:5984/app/_design/app/_rewrite.

Installing OpenAg Computer Vision (openag_cv) on the Food Computer's Raspberry Pi

Clone my fork of the openag_cv source code:

git clone https://github.com/goruck/openag_cv.git ~/catkin_ws/src/openag_cv

Misc OpenAg-related Setup and Configuration

  1. Set up wifi on the Raspberry Pi (optional).

  2. Modify the rsyslog configuration file to prevent rsyslog messages from flooding the logs, which may eventually cause a system-wide crash. See this for details. You may have this problem if the Raspberry Pi occasionally crashes and the only way to recover is to power cycle it. You may also see messages like the following in /var/log/messages:

Sep  3 11:27:12 raspberrypi rsyslogd-2007: action 'action 17' suspended, next retry is Sun Sep  3 11:28:42 2017 [try http://www.rsyslog.com/e/2007 ]
Sep  3 11:28:42 raspberrypi rsyslogd-2007: action 'action 17' suspended, next retry is Sun Sep  3 11:30:12 2017 [try http://www.rsyslog.com/e/2007 ]
\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00 ... (and so on)

Note: the \00... run is binary junk written after the actual crash, and the 'action 17' message before it is repeated many times.

To fix, comment out the last 4 lines of the /etc/rsyslog.conf file like this:

#daemon.*;mail.*;\
#       news.err;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       |/dev/xconsole
  3. Set up email on the Raspberry Pi (optional).

  4. Enable the watchdog timer. The PFC, and openag_brain in particular, is a complex system that is still in development, and as such the Raspberry Pi software can occasionally freeze up. To ensure the health of the plants, the Raspberry Pi's watchdog timer should be enabled, which will automatically reboot the Raspberry Pi in the event of a software lockup. You also need to enable openag_brain to run as a service that automatically starts after the Raspberry Pi boots. The instructions for doing so can be found here. Note that you should test the watchdog by issuing a so-called fork bomb as follows.

$ sudo swapoff -a
$ forkbomb(){ forkbomb | forkbomb & }; forkbomb

If you have email set up, the watchdog will send you an email when it has timed out and triggered a reboot.

Securing the PFC

As described in the CouchDB section above, the Alexa skill's code runs in an AWS Lambda instance that communicates with the PFC over the Internet via CouchDB REST APIs and openag_brain REST APIs proxied by CouchDB. For security purposes, these APIs need to be authenticated and encrypted via TLS/SSL, and the CouchDB "admin party" needs to be ended. Here are the steps to do this.

  1. Add an admin account to CouchDB, ending the admin party. The example below (taken from this) assumes the admin's username is name with password secret, and that $HOST is set to CouchDB's base URL (e.g., http://localhost:5984).
$ curl -X PUT $HOST/_config/admins/name -d '"secret"'
  2. After the admin is created, CouchDB needs to be initialized as follows. Note that the patch below needs to be applied so that CouchDB accepts the admin name and password.
$ # (if required) detach from the local server first
$ openag db deinit
$ openag db init --db_url http://name:secret@localhost:5984
  3. Generate server and client certificates and key pairs via OpenSSL. Mutual authentication and self-signed certs will be used; production environments should use certs signed by an actual CA. This assumes OpenSSL is installed on the Raspberry Pi (it should be by default). See CouchDB HTTP Server for additional details.
$ # login as root to run all these commands
$ su -
$ mkdir /etc/couchdb/cert
$ cd /etc/couchdb/cert
$ # Generate ca private key
$ openssl genrsa -out ca.key 4096
$ # Create self-signed ca cert, COMMON_NAME="My CA"
$ openssl req -new -x509 -days 365 -key ca.key -out ca.crt -sha256
$ # Create client private key
$ openssl genrsa -out client.key 2048
$ # Create client cert signing request, COMMON_NAME="Client 1"
$ openssl req -new -key client.key -out client.csr -sha256
$ # Create self-signed client cert
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt \
> -CAkey ca.key -set_serial 01 -out client.crt -sha256
$ # Create server private key
$ openssl genrsa -out server.key 2048
$ # Create server cert signing request, COMMON_NAME="localhost"
$ openssl req -new -key server.key -out server.csr -sha256
$ # Create signed server cert, where "key.ext" contains "subjectAltName = IP:xxx.xxx.xxx.xxx"
$ # IP is the external IP address of your PFC.
$ openssl x509 -req -days 365 -in server.csr -CA ca.crt \
> -CAkey ca.key -set_serial 02 -out server.crt -sha256 -extfile key.ext
$ # Copy client key pair and CA certificate to Lambda source code directory.
$ # These will need to be uploaded to AWS via a zip file which also includes the Lambda Node.js code.
$ cp client.crt path-to-lambda-code/lambda
$ cp client.key path-to-lambda-code/lambda
$ cp ca.crt path-to-lambda-code/lambda
$ # Set file ownership and permissions appropriately
$ chmod 600 *
$ chown couchdb:couchdb *
$ # logout as root
$ exit
  4. Edit CouchDB's configuration by editing your local.ini file (should be in /etc/couchdb). Change the following sections:
[daemons]
; Enable the HTTPS daemon
httpsd = {couch_httpd, start_link, [https]}

[ssl]
cert_file = /etc/couchdb/cert/server.crt
key_file = /etc/couchdb/cert/server.key
cacert_file = /etc/couchdb/cert/ca.crt
; Set to true to validate peer (client) certificates
verify_ssl_certificates = true
; Set to true to terminate the TLS/SSL handshake if the client does not send a certificate
fail_if_no_peer_cert = true

Note: CouchDB v1.6.0 (standard with the PFC) does not support the fail_if_no_peer_cert directive but v2.0.0 does. Therefore to ensure robust mutual authentication v2.0.0 should be used with the PFC.

  5. Restart CouchDB so that the modified local.ini file takes effect.
$ sudo service couchdb stop
$ sudo service couchdb start
  6. Test using the external IP address and port number of the PFC.
$ curl --cacert ca.crt --key client.key --cert client.crt https://external-ip-addr:external-port-num/
{"couchdb":"Welcome","uuid":"1d737ecdddede0ece99992f4e8dea743","version":"1.6.0","vendor":{"version":"8.3","name":"Debian"}}

Operating the PFC behind a firewall

A big thanks to Mickael Crozes for introducing me to the concepts in this section!

Operating the PFC with Alexa behind a firewall may be impossible if you can't open the ports required for the AWS Lambda function to communicate with the PFC's REST APIs. This is a typical problem with corporate firewalls, for example. A good solution is SSH port forwarding through an EC2 proxy. Here are the steps to do this.

  1. Create a VPC with Public and Private Subnets (NAT). Make sure to enable auto-assignment of public IPv4 addresses for any instance launched in the VPC, see the AWS VPC documentation for more info.

  2. Configure the Lambda function to run in the VPC's Private Subnet(s) and to use the VPC default security group created in Step 1 above. You can do this in the Lambda Network Configuration section of the console.

  3. Set the PFC SSH parameters ServerAliveInterval and ServerAliveCountMax with non-default values to avoid SSH timeouts. I use ServerAliveInterval = 30 and ServerAliveCountMax = 6. See this Stack Exchange article for details.

  4. Launch an EC2 instance in the VPC's Public Subnet, configure it for tunneling, and start an SSH forwarding session on the PFC. See these bash scripts for the associated AWS CLI commands. Note that the EC2 instance needs to use the same security group as the Lambda function (see Step 2 above).

List of Modifications made to openag_brain for Integration with Alexa (including stability fixes)

  1. OpenAgricultureFoundation/openag_brain#336
  2. OpenAgricultureFoundation/openag_brain#335
  3. OpenAgricultureFoundation/openag_brain#334
  4. OpenAgricultureFoundation/openag_brain#317
  5. OpenAgricultureFoundation/openag_brain#294
  6. OpenAgricultureFoundation/openag_brain#262
  7. OpenAgricultureFoundation/openag_brain#339