
Deploying config from topside to the pi's #316

Open
cal-pratt opened this issue Jun 9, 2017 · 5 comments


cal-pratt commented Jun 9, 2017

The last step in the configuration task #270 #276 is to load the config from the topside onto the Raspberry Pis and restart the control software on each of those devices. I think the best way to do this would be to write a desktop script which scp's the files to the devices and then restarts them.


Brief overview of how we launch the software, and how the configs play into it.

Each of the Raspberry Pis has a service manager on the operating system called systemd. Systemd is a program which reads service definitions from unit files. Our application is run by systemd through our custom unit file eer.service, found at playbooks/files/etc/systemd/system/eer.service. The command itself is the ExecStart line:

ExecStart=/usr/bin/java -cp "/opt/eer-{{ version }}/libs/*" {{ entry_point }} \
    --default /opt/eer-{{ version }}/defaultConfig.yml \
    --config /home/{{ ansible_user }}/config.yml

--default points to the base configuration file you see under the playbooks/files directory. This file gets copied to the Pis on every deploy. --config is the override config; this is the file that we want to replace using the topside script.

You'll notice this command has a lot of odd {{ variable }} notation in it. That is Ansible's templating syntax (Jinja2). When the file is copied using the template module, each of these variables gets replaced with its value:

- name: Copy unit file for systemd
  template: src=files/etc/systemd/system/eer.service dest=/etc/systemd/system/eer.service mode=644

This is what the file looks like after a deploy:

// on rasprime
ExecStart=/usr/bin/java -cp "/opt/eer-9.0.0/libs/*" com.easternedgerobotics.rov.Rov \
    --default /opt/eer-9.0.0/defaultConfig.yml \
    --config /home/pi/config.yml

// on picamera A
ExecStart=/usr/bin/java -cp "/opt/eer-9.0.0/libs/*" com.easternedgerobotics.rov.PicameraA \
    --default /opt/eer-9.0.0/defaultConfig.yml \
    --config /home/pi/config.yml

// on picamera B
ExecStart=/usr/bin/java -cp "/opt/eer-9.0.0/libs/*" com.easternedgerobotics.rov.PicameraB \
    --default /opt/eer-9.0.0/defaultConfig.yml \
    --config /home/pi/config.yml

As you can see, each of the Raspberry Pis launches a different class as the entry point into the software. This is how each device knows which role to play.

What are desktop files?

Desktop files are pretty similar to the systemd unit files. They define a few characteristics of the application and also have an Exec command (similar to ExecStart). We also use Ansible to template them. Take Launcher.desktop for example:

Exec=/usr/bin/java -cp "/opt/eer-{{ version }}/libs/*" {{ entry_point }} 
    --default=/opt/eer-{{ version }}/defaultConfig.yml 
    --config=/home/{{ ansible_user }}/config.yml

// which gets transformed into:
Exec=/usr/bin/java -cp "/opt/eer-9.0.0/libs/*" com.easternedgerobotics.rov.Topside
    --default=/opt/eer-9.0.0/defaultConfig.yml 
    --config=/home/eedge/config.yml

The files we use to create the commands are a little more complicated. Look at eer-command-vehicle:

{% for host in groups['rasprime'] %}
sshpass -p raspberry ssh pi@{{ host }} "sudo $COMMAND"
{% endfor %}
{% for host in groups['picamera'] %}
sshpass -p raspberry ssh pi@{{ host }} "sudo $COMMAND"
{% endfor %}

This is grabbing information from the hosts file to create multiple lines in the output. This transforms into:

sshpass -p raspberry ssh pi@192.168.88.4 "sudo $COMMAND"
sshpass -p raspberry ssh pi@192.168.88.5 "sudo $COMMAND"
sshpass -p raspberry ssh pi@192.168.88.6 "sudo $COMMAND"

Which is then called by the desktop scripts, e.g. Poweroff.desktop:

Exec=eer-command-vehicle poweroff

In this case, the $COMMAND variable gets set to poweroff.
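Putting the pieces together, the rendered eer-command-vehicle behaves roughly like this sketch (the hosts are the ones from the example above; this version echoes the commands instead of running them, so you can see the expansion without sshpass installed):

```shell
# Sketch of the rendered eer-command-vehicle script (dry run:
# echoes the commands instead of executing them).
eer_command_vehicle() {
    COMMAND="$*"   # everything passed to the script becomes $COMMAND
    # One line per host, as rendered from the hosts file by the template loop.
    for host in 192.168.88.4 192.168.88.5 192.168.88.6; do
        echo "sshpass -p raspberry ssh pi@${host} \"sudo ${COMMAND}\""
    done
}

eer_command_vehicle poweroff
```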

Moving the configs:

An easy way to move the configs would be to use an scp call:

Exec=eer-command-vehicle "scp eedge@192.168.88.2:~/config.yml ~/config.yml"
//transforms into
sshpass -p raspberry ssh pi@192.168.88.X "sudo scp eedge@192.168.88.2:~/config.yml ~/config.yml"

We then need to restart the control software. To do this we ask systemd to restart it, since systemd is what manages our application; that means making a call to systemctl:

Exec=eer-command-vehicle "systemctl restart eer"
//transforms into
sshpass -p raspberry ssh pi@192.168.88.X "sudo systemctl restart eer"

Now the tricky part is running two commands from the one desktop file. Normally a desktop file is only allowed to run a single command, but in our case we need to run two. There are a few ways we can go about this, but I'm going to leave it to you to mess around with and research.
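One common pattern worth experimenting with (a sketch only; quoting inside desktop Exec lines follows the Desktop Entry specification and is easy to get wrong, so treat this as a starting point rather than a tested answer) is to chain both commands in a single sh -c invocation:

Exec=sh -c 'eer-command-vehicle "scp eedge@192.168.88.2:~/config.yml ~/config.yml" && eer-command-vehicle "systemctl restart eer"'

Another option is a small wrapper script deployed next to the desktop file, with the desktop file's Exec pointing at it.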
We're also going to need to deploy this file onto the topside; this means you'll have to update the playbooks/topside.yml file to get it onto the eedge desktop.

This is a lot of info that I wrote fairly quickly. Feel free to ask any questions, or for clarification!!

@k-sutherland
Contributor

What do I need on my computer to create and run a .desktop file to experiment with? I tried looking this up, but all I could find was running from the command line, and I thought we had a desktop icon.

@cal-pratt
Contributor Author

The desktop file displays as a desktop item only on the Unity desktop, found on Ubuntu systems. You can test the Exec command in a shell to see if it is valid. Try launching another Vagrant box to practice moving files between two devices.

When you're ready to test the desktop icon itself you'll need to deploy it to the topside by adding it to the playbooks/topside.yml ansible task.
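A sketch of what that task might look like (the file name DeployConfig.desktop and its src path are hypothetical, modeled on the existing unit-file task):

- name: Copy deploy-config desktop file
  template: src=files/DeployConfig.desktop dest=/home/{{ ansible_user }}/Desktop/DeployConfig.desktop mode=755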

@k-sutherland
Contributor

So where do I find the password for captain so I can ssh into it? I looked in the Vagrantfile and found
shell.name = "Update Default Password", shell.inline = "echo 'ubuntu:eedge' | chpasswd"
I tried to use that password, but it failed.

@ConnorWhalen
Contributor

ConnorWhalen commented Jun 14, 2017 via email

@cal-pratt
Contributor Author

User is ubuntu and password is eedge. See hosts:

[captain]
192.168.1.3 ansible_user=ubuntu ansible_ssh_pass=eedge
