Commit b1937e4

Merge pull request #92 from innovationOUtside/wip

week 7 and housekeeping

psychemedia committed Oct 7, 2020
2 parents 71f259c + 514f30e
Showing 70 changed files with 3,676 additions and 2,395 deletions.
@@ -68,7 +68,7 @@ Whilst the original `ev3devsim` simulator runs without any software requirements…

## 3.2 The EV3 "brick"

The Lego Mindstorms EV3 robotos that inspired `ev3dev`, `ev2devsim` and `nbev3devsim` are based around a physical EV3 controller with a 300 MHz ARM9 processor, 16 MB of Flash memory, 64 MB RAM and a Linux based operating system.
The Lego Mindstorms EV3 robots that inspired `ev3dev`, `ev2devsim` and `nbev3devsim` are based around a physical EV3 controller with a 300 MHz ARM9 processor, 16 MB of Flash memory, 64 MB RAM and a Linux based operating system.

![figure ../tm129-19J-images/tm129_rob_p1_f005.jpg](../images/nogbad_ev3.jpg)

@@ -145,7 +145,7 @@ Click the *Run* button in the simulator to run the downloaded code there. The ro…
%%sim_magic_preloaded

# Drive the robot forwards a short distance
tank_drive.on(SpeedPercent(50), SpeedPercent(50))
tank_drive.on_for_rotations(SpeedPercent(50), SpeedPercent(50), 2)
tank_turn.on_for_rotations(-100, SpeedPercent(75), 2)
```
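
As a hedged aside, the same preloaded setup supports other drive commands. The sketch below assumes, as in the block above, that the `%%sim_magic_preloaded` magic provides `tank_drive` as an ev3dev2-style `MoveTank` and `tank_turn` as a `MoveSteering`; the specific calls are illustrative of that API rather than prescribed by the materials.

```python
%%sim_magic_preloaded

# Drive forwards at half speed for three seconds
# (on_for_seconds blocks until the time has elapsed)
tank_drive.on_for_seconds(SpeedPercent(50), SpeedPercent(50), 3)

# Spin on the spot: steering -100 turns hard left,
# at 75% of maximum speed, for one wheel rotation
tank_turn.on_for_rotations(-100, SpeedPercent(75), 1)

# Explicitly stop the motors when we're done
tank_drive.off()
```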

@@ -76,7 +76,7 @@
"source": [
"## 3.2 The EV3 \"brick\"\n",
"\n",
"The Lego Mindstorms EV3 robotos that inspired `ev3dev`, `ev2devsim` and `nbev3devsim` are based around a physical EV3 controller with a 300 MHz ARM9 processor, 16 MB of Flash memory, 64 MB RAM and a Linux based operating system.\n",
"The Lego Mindstorms EV3 robots that inspired `ev3dev`, `ev2devsim` and `nbev3devsim` are based around a physical EV3 controller with a 300 MHz ARM9 processor, 16 MB of Flash memory, 64 MB RAM and a Linux based operating system.\n",
"\n",
"![figure ../tm129-19J-images/tm129_rob_p1_f005.jpg](../images/nogbad_ev3.jpg)\n",
"\n",
@@ -221,7 +221,7 @@
"%%sim_magic_preloaded\n",
"\n",
"# Drive the robot forwards a short distance\n",
"tank_drive.on(SpeedPercent(50), SpeedPercent(50))\n",
"tank_drive.on_for_rotations(SpeedPercent(50), SpeedPercent(50), 2)\n",
"tank_turn.on_for_rotations(-100, SpeedPercent(75), 2)"
]
},
@@ -278,6 +278,18 @@
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
@@ -410,9 +410,9 @@ else:
<!-- #endregion -->

<!-- #region activity=true heading_collapsed=true -->
#### Answer
#### Example solution

*Click the arrow in the sidebar or run this cell to reveal the answer.*
*Click the arrow in the sidebar or run this cell to reveal an example solution.*
<!-- #endregion -->

<!-- #region activity=true hidden=true -->
@@ -453,9 +453,9 @@ How does the behaviour of the program lead to the robot’s emergent behaviour i…
<!-- #endregion -->

<!-- #region activity=true heading_collapsed=true -->
#### Discussion
#### Example discussion

*Click on the arrow in the sidebar or run this cell to reveal my observations*
*Click on the arrow in the sidebar or run this cell to reveal an example discussion.*
<!-- #endregion -->

<!-- #region activity=true hidden=true -->
@@ -640,9 +640,9 @@
"heading_collapsed": true
},
"source": [
"#### Answer\n",
"#### Example solution\n",
"\n",
"*Click the arrow in the sidebar or run this cell to reveal the answer.*"
"*Click the arrow in the sidebar or run this cell to reveal an example solution.*"
]
},
{
@@ -716,9 +716,9 @@
"heading_collapsed": true
},
"source": [
"#### Discussion\n",
"#### Example discussion\n",
"\n",
"*Click on the arrow in the sidebar or run this cell to reveal my observations*"
"*Click on the arrow in the sidebar or run this cell to reveal an example discussion.*"
]
},
{
@@ -1000,6 +1000,18 @@
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,

This file was deleted.

This file was deleted.

@@ -13,27 +13,30 @@ jupyter:
name: python3
---

# 1 Introduction

# 1 Introducing neural networks

You have already been introduced to neural networks in the study materials; now you will have an opportunity to play with them in practice.

Neural networks can solve subtle pattern-recognition problems, which are very important in robotics. Although many of the activities are presented outside the robotics context, we will also try to show how they can be applied to robotics-related problems.

In this session, you will get hands-on experience of using a variety of neural networks, and you will build and train neural networks to perform specific tasks, particularly in the area of image classification.


## 1.1 Making sense of images

In recent years, great advances have been made in building powerful neural network models, often referred to as ‘deep learning’ models. But neural networks have been around for over 50 years, with bursts of progress every few years, often reflecting advances in available computing power, punctuated by long periods of ‘AI Winter’ when little progress appeared to be made.

The following XKCD cartoon, [*Tasks*](https://xkcd.com/1425/), was first published in 2014. As is typical of XKCD cartoons, hovering over the cartoon reveals some hidden caption text. In this particular case: ‘_In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it_.’

[![](../images//xkcd_tasks.png)](https://xkcd.com/1425/)
[![](../images/xkcd_tasks.png)](https://xkcd.com/1425/)

At the time (only a few short years ago, remember), recognising arbitrary items in images was still a hard task, and the sentiment of the cartoon rang true. But within a few months, advances in neural network research meant that AI models capable of performing similar tasks, albeit crudely and with limited success, had started to appear. Today, photographs are routinely tagged with labels identifying what can be seen in them, using much larger, much more powerful and much more effective AI models.

However, identifying individual objects in an image is one thing; generating a sensible caption that describes the image as a whole is quite another. A quick web search today will undoubtedly turn up some very enticing demos of automated caption generators. But ‘reading the scene’ presented by a picture and generating a caption from a set of keywords or tags associated with the items recognised in it is an altogether more complex task: as well as performing the object-recognition step correctly, we also need to identify the relationships between the different parts of the image, and do so in a meaningful way.

In this session, you will get hands-on experience of using a variety of neural networks, and you will build and train neural networks to perform specific tasks.

<!-- #region activity=true -->
### Activity – Example image tagging demo
### 1.1.1 Activity – Example image tagging demo

Many commercial image-tagging services on the web can tag images that are either uploaded directly or identified by a web URL.
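
As a rough illustration of how such a service is typically called, here is a minimal sketch using Python’s `requests` library. The endpoint URL, `API_KEY` and response format are hypothetical placeholders, not any real provider’s API.

```python
import requests

# Hypothetical image-tagging endpoint and key (placeholders, not a real service)
TAGGING_ENDPOINT = "https://api.example-tagger.com/v1/tags"
API_KEY = "your-api-key-here"

def tag_image(image_url):
    """Ask the (hypothetical) service to tag an image identified by URL."""
    response = requests.post(
        TAGGING_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": image_url},
    )
    response.raise_for_status()
    # Assume the service returns JSON such as:
    # {"tags": [{"label": "dog", "confidence": 0.97}, ...]}
    return response.json()["tags"]

for tag in tag_image("https://example.com/photo.jpg"):
    print(f'{tag["label"]}: {tag["confidence"]:.2f}')
```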

@@ -74,7 +77,7 @@ What risks, if any, might be associated with using such a service in each of tho…
<!-- #endregion -->

<!-- #region activity=true -->
### Activity – Recognising a static pose in an image
### 1.1.2 Activity – Recognising a static pose in an image

As well as tagging images, properly trained models can recognise individual people’s faces in photos (and not just of celebrities!) and human poses within a photograph.

@@ -83,7 +86,7 @@ Click through to the following web location to see an example of a neural network…
<!-- #endregion -->


## 1.1 Transfer learning
## 1.2 Transfer learning

Creating a neural network capable of recognising a particular image can take a lot of data and a lot of computing power. The training process typically involves showing the network being trained:

@@ -102,22 +105,11 @@


<!-- #region activity=true -->
## Optional activity – Distinguishing between two of your own poses from a live video feed
Although it can take *a lot* of data and *a lot* of computational effort to train a model, topping up a model with transfer learning applied to a previously trained model can be achieved quite simply.

This optional activity allows you to top-up a pre-trained model to recognise an image of you with your hand raised, and an image of you without you hand raised. Feeding a live image into the model allows it to detect in real time whether you have your hand or arm raised or not.

__ TO DO _ need to package this as a jupyter_server_proxy thing.__
__`demo-video-arm-pose` dir__

*This activity requires that you have a camera attached to your computer and that your web browser has permission to access the camera feed. The captured images do not leave your computer.*
<!-- #endregion -->

<!-- #region activity=true -->
## Optional activity – Training your own image or audio classifier
### 1.2.1 Activity – Training your own image or audio classifier (optional)

Although it can take *a lot* of data and *a lot* of computational effort to train a model from scratch, topping up a previously trained model using transfer learning can be achieved quite simply.

If you have a camera or microphone attached to your computer, then you can top-up a pre-trained model to distinguish between two or more categories of image or sound of your own devising. The [tutorial here](https://blog.google/technology/ai/teachable-machine/) describes a process for training a neural network to distinguish between images representing two different situations.
In this (optional) activity, you can top up a pre-trained model to distinguish between two or more categories of image or sound of your own devising. The [tutorial here](https://blog.google/technology/ai/teachable-machine/) describes a process for training a neural network to distinguish between images representing two different situations.

You can train your own neural network by:

@@ -130,6 +122,6 @@

In this notebook you have seen how we can use a third-party application to recognise different objects within an image and return human-readable labels that can be used to ‘tag’ the image. These applications use large, pre-trained neural networks to perform the object-recognition task.

You have also seen how we can take a pre-trained neural network model and use an approach called *transfer learning* to ‘top it up’ with a bit of extra learning so that it recognises particular sorts of differences between two classes of input image that we have provided it with.
You have also been introduced to the idea that we can take a pre-trained neural network model and use an approach called *transfer learning* to ‘top it up’ with a bit of extra learning. This allows a network trained to distinguish items in one dataset to draw on that prior learning to recognise differences between additional categories of input image that we have provided it with.

In the following notebooks you will have an opportunity to train your own neural network, from scratch, on a simple classification task.
