From 6d438f451e8d9a589f41d19684447cdcb448df77 Mon Sep 17 00:00:00 2001 From: Tony Hirst Date: Fri, 13 Nov 2020 12:53:26 +0000 Subject: [PATCH 1/2] Update 08.1 Introducing remote services and multi-agent systems.md --- ...remote services and multi-agent systems.md | 46 +++++++++++++------ 1 file changed, 32 insertions(+), 14 deletions(-) diff --git a/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md b/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md index 255393e6..6d1e9aaf 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md +++ b/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md @@ -6,7 +6,7 @@ jupyter: extension: .md format_name: markdown format_version: '1.2' - jupytext_version: 1.5.2 + jupytext_version: 1.6.0 kernelspec: display_name: Python 3 language: python @@ -27,7 +27,7 @@ In the first case, we might think of the robot as a remote data collector, colle The model is a bit like asking a research librarian for some specific information, the research librarian researching the topic, perhaps using resources you don't have direct access to, and then the research librarian providing you with the information you requested. -In a more dynamic multi-agent case we might consider the robot and the notebook environment to be acting as peers sending messages as and when they can between each other. For example, we might have two agents: a Lego mobile robot and a personal computer (PC), or the simulated robot and the notebook. In computational terms, *agents* are long-lived computational systems that can deliberate on the actions they may take in pursuit of their own goals based on their own internal state (often referred to as "beliefs") and sensory inputs. Their actions are then performed by means of some sort of effector system that can act on to change the state of the environment within which they reside. +In a more dynamic multi-agent case we might consider the robot and the notebook environment to be acting as peers sending messages as and when they can between each other. For example, we might have two agents: a Lego mobile robot and a personal computer (PC), or the simulated robot and the notebook. In computational terms, *agents* are long-lived computational systems that can deliberate on the actions they may take in pursuit of their own goals based on their own internal state (often referred to as "beliefs") and sensory inputs. Their actions are then performed by means of some sort of effector system that can act to change the state of the environment within which they reside. In a multi-agent system, two or more agents may work together to combine to perform some task that not only meets the (sub)goals of each individual agent, but that might also strive to attain some goal agreed upon by each member of the multi-agent system. Agents may communicate by making changes to the environment, for example, by leaving a trail that other agents may follow (an effect known as *stigmergy*), or by passing messages between themselves directly. @@ -45,9 +45,11 @@ The *MNIST_Digits* simulator background includes various digit images from the M Alongside each digit is a grey square, where the grey level is used to encode the actual label associated with the image. 
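By way of a quick illustration, the short sketch below shows how such a greyscale label value might be decoded back into a digit. The `25 × digit value` encoding scheme is described later in this session; the function name and the sample value used here are purely illustrative and are not part of the module code.

```python
# Illustrative sketch only: the greyscale label encoding
# (greyscale value = 25 * digit value) is described later in this session.
def decode_label_from_grey(grey_value, scale=25):
    """Recover a digit label from a greyscale training label value."""
    return int(round(grey_value / scale))

# For example, a grey level of 75 decodes to the digit 3
decode_label_from_grey(75)
```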
(You can see how the background was created in the `Background Image Generator.ipynb` notebook in the top-level `backgrounds` folder.) -In this notebook, you will use the light sensor as a simple low resolution camera, working with the pixel array data rather then the single value reflected light value. +Typically, we use the light sensor to return a single value, such as the reflected light intensity value. However, in this notebook, you will use the light sensor as a simple low resolution camera. Rather than returning a single value, the sensor returns an array of data containing the values associated individual pixel values from a sampled image. We can then use this square array of pixel data collected by the robot inside the simulator, rather than a single value reflected light value, as the basis for trying to detect what the robot can actually see. -*Note that this functionality is not supported by the real Lego light sensor.* + +*Note that this low resolution camera-like functionality is not supported by the real Lego light sensor.* + Let's start by loading in the simulator: @@ -59,7 +61,7 @@ from nbev3devsim.load_nbev3devwidget import roboSim, eds In order to collect the sensor image data, if the simulated robot program `print()` message starts with the word `image_data`, then we can send light sensor array data from the left, right or both light sensors to a data log in the notebook Python environment. -The `-R` switch in magic at the start of the following code cell will run the program in the simulator once it has been downloaded. +The `-R` (`--autorun`) switch in the magic at the start of the following code cell will run the program in the simulator once it has been downloaded. ```python %%sim_magic_preloaded -b MNIST_Digits -OA -R -x 400 -y 50 @@ -67,9 +69,6 @@ The `-R` switch in magic at the start of the following code cell will run the pr # Configure a light sensor colorLeft = ColorSensor(INPUT_2) -#Sample the light sensor reading -sensor_value = colorLeft.reflected_light_intensity - # This is a command invocation rather than a print statement print("image_data left") # The command is responded to by @@ -104,12 +103,16 @@ With the data pushed from the simulator to the notebook Python environment, we s roboSim.image_data() ``` +Each row of the dataframe represents a single captured image from one of the light sensors. + + ### 1.1.2 Previewing the sampled sensor array data (optional) Having grabbed the data, we can explore the data as rendered images. -The data representing the image is a long list of RGB (red green, blue) values. We can generate an image from a the a specific row of the dataframe, given it the row index: +The data representing the image is a long list of RGB (red green, blue) values. We can generate an image from a +specific row of the dataframe, given the row index: ```python from nn_tools.sensor_data import generate_image, zoom_img @@ -178,12 +181,12 @@ img = generate_image(roboSim.image_data(), index, ### 1.1.3 Collecting multiple sample images -The handwritten digit image sampling point locations in the *MINIST_Digits* simulator background can be found at the following locations: +The *MINIST_Digits* simulator background contains a selection of handwritten digit images arranged in a sparse grid on the background which we shall refer to as image sampling point locations. 
These image locations within the background can be found at the following co-ordinates: - along rows `100` pixels apart, starting at `x=100` and ending at `x=2000`; - along columns `100` pixels apart, starting at `y=50` and ending at `y=1050`. -We can collect the samples collected over a column by using line magic to teleport the simulated robot to each new location in turn and automatically run the program to log the sensor data. +We can collect images from this grid by using magic to teleport the robot to each sampling location and then automatically run the robot program to log the sensor data. For example, to collect images from one column of the background arrangement — that is, images with a particular *x* co-ordinate — we need to calculate the required *y* values for each sampling point. To start, let's just check we can generate the required *y* values: @@ -196,14 +199,29 @@ step = 100 list(range(min_value, max_value+1, step)) ``` -Using this as a pattern, we can now create a simple script to clear the datalog, then iterate through the desired *y* locations, using line magic to locate the robot at each step and run the already downloaded image sampling program. +To help us keep track of where we are in the sample collection, we can use a visual indicator such as a progress bar. -To access the value of the iterated *y* value in the magic, we need to prefix it with a `$` when we refer to it. Note that we also use the `tqdm.notebook.trange` argument to define the range: this enhance the range iterator to provide an interactive progress bar that allows us to follow the progress of the iterator. +The `tqdm` Python package provides a wide range of tools for displaying progress bars in Python programs. For example the `tqdm.notebook.trange` function enhances the range iterator with an interactive progress bar that allows us to follow the progress of the iterator: ```python # Provide a progress bar when iterating through the range from tqdm.notebook import trange +import time + +for i in trange(min_value, max_value, step): + #Wait a moment + time.sleep(0.5) +``` + +We can now create a simple script that will: +- clear the datalog; +- iterate through the desired *y* locations with a visual indicator of how much progress we have made; +- use line magic to locate the robot at each step and run the already downloaded image sampling program. + +To access the value of the iterated *y* value in the magic, we need to prefix it with a `$` when we refer to it. + +```python # We need to add a short delay between iterations to give # the data time to synchronise import time @@ -227,7 +245,7 @@ image_data_df We can access a centrally cropped black and white version of an image extracted from the retrieved data by index number (`--index / -i`) by calling the `%sim_bw_image_data` magic, optionally setting the `--threshold / -t` value away from its default value of `127`. Using the `--nocrop / -n` flag will prevent the autocropping. -We can convert the image to a black and white image by setting pixels above a specified threshold value to white (`255`), otherwise coloring the pixel black (`0`) using the `generate_bw_image()` function. This will select a row from the datalog at a specific location, optionally crop it to a specific area, and then pixel values greater than threshold to white (`255`), with values equal to or below the threshold to `0`. 
+We can convert the image to a black and white image by setting pixels above a specified threshold value to white (`255`), otherwise coloring the pixel black (`0`) using the `generate_bw_image()` function. This will select a row from the datalog at a specific location, optionally crop it to a specific area, and then set pixel values greater than threshold to white (`255`), with values equal to or below the threshold to `0`. ```python from nn_tools.sensor_data import generate_bw_image From 41f9d9362ae7aeb11e8291fb540e1d34efaeef38 Mon Sep 17 00:00:00 2001 From: Tony Hirst Date: Fri, 13 Nov 2020 13:18:55 +0000 Subject: [PATCH 2/2] Week 8 tidy up and restructure --- ...remote services and multi-agent systems.md | 231 +- ...image and class data from the simulator.md | 259 ++ ...onvolutional neural network (optional).md} | 16 +- ... 08.4 Recognising patterns on the move.md} | 47 +- ... 08.5 Messaging in multi-agent systems.md} | 28 +- ...{08.5 Conclusion.md => 08.6 Conclusion.md} | 8 +- ...ote services and multi-agent systems.ipynb | 480 +-- ...ge and class data from the simulator.ipynb | 481 +++ ...olutional neural network (optional).ipynb} | 29 +- ...8.3 Recognising patterns on the move.ipynb | 3403 ----------------- ...8.4 Recognising patterns on the move.ipynb | 1329 +++++++ ....5 Messaging in multi-agent systems.ipynb} | 41 +- ...Conclusion.ipynb => 08.6 Conclusion.ipynb} | 21 +- 13 files changed, 2292 insertions(+), 4081 deletions(-) create mode 100644 content/08. Remote services and multi-agent systems/.md/08.2 Collecting digit image and class data from the simulator.md rename content/08. Remote services and multi-agent systems/.md/{08.2 Recognising digits using a convolutional neural network (optional).md => 08.3 Recognising digits using a convolutional neural network (optional).md} (92%) rename content/08. Remote services and multi-agent systems/.md/{08.3 Recognising patterns on the move.md => 08.4 Recognising patterns on the move.md} (94%) rename content/08. Remote services and multi-agent systems/.md/{08.4 Messaging in multi-agent systems.md => 08.5 Messaging in multi-agent systems.md} (97%) rename content/08. Remote services and multi-agent systems/.md/{08.5 Conclusion.md => 08.6 Conclusion.md} (88%) create mode 100644 content/08. Remote services and multi-agent systems/08.2 Collecting digit image and class data from the simulator.ipynb rename content/08. Remote services and multi-agent systems/{08.2 Recognising digits using a convolutional neural network (optional).ipynb => 08.3 Recognising digits using a convolutional neural network (optional).ipynb} (92%) delete mode 100644 content/08. Remote services and multi-agent systems/08.3 Recognising patterns on the move.ipynb create mode 100644 content/08. Remote services and multi-agent systems/08.4 Recognising patterns on the move.ipynb rename content/08. Remote services and multi-agent systems/{08.4 Messaging in multi-agent systems.ipynb => 08.5 Messaging in multi-agent systems.ipynb} (97%) rename content/08. Remote services and multi-agent systems/{08.5 Conclusion.ipynb => 08.6 Conclusion.ipynb} (86%) diff --git a/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md b/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md index 6d1e9aaf..ffd0da9f 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md +++ b/content/08. 
Remote services and multi-agent systems/.md/08.1 Introducing remote services and multi-agent systems.md @@ -312,7 +312,7 @@ zoom_img( focal_image ) ``` -## 1.2 Testing the robot sample images using a pre-retrained MLP +## 1.2 Testing the robot sampled images using a pre-retrained MLP Having grabbed the image data, we can pre-process it as required and then present it to an appropriately trained neural network to see if the network can identify the digit it represents. @@ -470,29 +470,29 @@ Increase the level of light sensor noise to it's maximum value and rerun the exp *Add your own notes and observations on how well the network performed the classification task in the presence of sensor noise here.* - + #### Example discussion *Click on the arrow in the sidebar or run this cell to reveal an example discussion.* - + We can collect the image data by calling the `%sim_magic` with the `-R` switch so that it runs the current program directly. We also need to set the location using the `-x` and `-y` parameters. -```python activity=true +```python activity=true hidden=true %sim_magic -R -x 600 -y 750 ``` - + The data is available in a dataframe returned by calling `roboSim.image_data()`. - + To view the result, we can zoom the display of the last collected image in the notebook synched datalog. -```python activity=true +```python activity=true hidden=true # Get data for the last image in the dataframe index = -1 my_img = generate_bw_image(roboSim.image_data(), index, @@ -500,231 +500,26 @@ my_img = generate_bw_image(roboSim.image_data(), index, zoom_img(my_img) ``` - + By my observation, the digit represented by the image at the specified location is a figure `3`. The trained MLP classifies the object as follows: -```python activity=true +```python activity=true hidden=true image_class_predictor(MLP, my_img) ``` - + This appears to match my prediction. -## 1.3 Collecting digit image and class data from the simulator - -If you look carefully at the *MNIST_Digits* background in the simulator, you will see that alongside each digit is a solid coloured area. This area is a greyscale value that represents the value of the digit represented by the image. That is, it represents a training label for the digit. - -Before we proceed, clear out the datalog to give ourselves a clean datalog to work with: - -```python -%sim_data --clear -``` - -The solid coloured areas are arranged so that when the left light sensor is over the image, the right sensor is over the training label area. - -```python -%%sim_magic_preloaded -b MNIST_Digits -O -R -AH -x 400 -y 50 - -#Sample the light sensor reading -sensor_value = colorLeft.reflected_light_intensity - -# This is essentially a command invocation -# not just a print statement! 
-print("image_data both") -``` - -We can retrieve the last pair of images from the `roboSim.image_data()` dataframe using the `get_sensor_image_pair()` function: - -```python -from nn_tools.sensor_data import get_sensor_image_pair - -# The sample pair we want from the logged image data -pair_index = -1 - -left_img, right_img = get_sensor_image_pair(roboSim.image_data(), - pair_index) - -zoom_img(left_img), zoom_img(right_img) - -``` - - -The image labels are encoded as follows: - -`greyscale_value = 25 * digit_value` - - -One way of decoding the label is as follows: - -- divide each of the greyscale pixel values collected from the right hand sensor array by 25; -- take the median of these values and round to the nearest integer; *in a noise free environment, using the median should give a reasonable estimate of the dominant pixel value in the frame.* -- ensure we have an integer by casting the result to an integer. - -The *pandas* package has some operators that can help us with that if we put all the data into a *pandas* *Series* (essentially, a single column dataframe): - -```python -import pandas as pd - -def get_training_label_from_sensor(img): - """Return a training class label from a sensor image.""" - # Get the pixels data as a pandas series - # (similar to a single column dataframe) - image_pixels = pd.Series(list(img.getdata())) - - # Divide each value in the first column (name: 0) by 25 - image_pixels = image_pixels / 25 - - # Find the median value - pixels_median = image_pixels.median() - - # Find the nearest integer and return it - return int( pixels_median.round(0)) - -# Try it out -get_training_label_from_sensor(right_img) -``` - -The following function will grab right and left image from the data log, decode the label from the right hand image, and return the handwritten digit from the left light sensor along with the training label: - -```python -def get_training_data(raw_df, pair_index): - """Get training image and label from raw data frame.""" - - # Get the left and right images - # at specified pair index - left_img, right_img = get_sensor_image_pair(raw_df, - pair_index) - - # Find the training label value as the median - # value of the right habd image. - # Really, we should properly try to check that - # we do have a proper training image, for example - # by encoding a recognisable pattern - # such as a QR code - training_label = get_training_label_from_sensor(right_img) - return training_label, left_img - - -# Try it out -label, img = get_training_data(roboSim.image_data(), - pair_index) -print(f'Label: {label}') -zoom_img(img) -``` - - -We're actually taking quite a lot on trust in extracting the data from the dataframe in this way. Ideally, we would have a unique identifiers that reliably associate the left and right images as having been sampled from the same location. As it is, we assume the left and right image datasets appear in that order, one after the other, so we can count back up the dataframe to collect different pairs of data. - - -We can now test that image against the classifier: - -```python -image_class_predictor(MLP, img) -``` - - -### 1.3.1 Activity — Testing the ability to recognise images slight off-center in the image array - -Write a simple program to collect sample data at a particular location and then display the digit image and the decoded label value. 
- -Modify the x or y co-ordinates used to locate the robot by by a few pixel values away from the sampling point origins and test the ability of the network to recognise digits that are lightly off-center in the image array. - -How well does the network perform? - -*Hint: when you have run your program to collect the data in the simulator, run the `get_training_data()` with the `roboSim.image_data()` to generate the test image and retrieve its decoded training label.* - -*Hint: use the `image_class_predictor()` function with the test image to see if the classifier can recognise the image.* - - -```python -# Your code here -``` - - -*Record your observations here.* - - - -### 1.3.2 Activity — Collecting image sample data from the *MNIST_Digits* background (optional) - -In this activity, you will need to collect a complete set of sample data from the simulator to test the ability of the network to correctly identify the handwritten digit images. - -Recall that the sampling positions are arranged along rows 100 pixels apart, starting at x=100 and ending at x=2000; -along columns 100 pixels apart, starting at y=50 and ending at y=1050. - -Write a program to automate the collection of data at each of these locations. - -How would you then retrieve the hand written digit image and it's decoded training label? - - - -*Your program design notes here.* - - -```python student=true -# Your program code -``` - - -*Describe here how you would retrieve the hand written digit image and it's decoded training label.* - - - -#### Example solution - -*Click on the arrow in the sidebar or run this cell to reveal an example solution.* - - - -To collect the data, I use two `range()` commands, one inside the other, to iterate through the *x* and *y* coordinate values. The outer loop generates the *x* values and the inner loop generates the *y* values: - - -```python activity=true -# Clear the datalog so we know it's empty -%sim_data --clear - - -# Generate a list of integers with desired range and gap -min_value = 50 -max_value = 1050 -step = 100 - -for _x in trange(100, 501, 100): - for _y in range(min_value, max_value+1, step): - - %sim_magic -R -x $_x -y $_y - # Give the data time to synchronise - time.sleep(1) -``` - - -We can now grab view the data we have collected: - - -```python activity=true -training_df = roboSim.image_data() -training_df -``` - - -The `get_training_data()` function provides a convenient way of retrieving the handwritten digit image and the decoded training label. - - -```python activity=true -label, img = get_training_data(training_df, pair_index) -zoom_img(img), label -``` - -## 1.4 Summary +## 1.3 Summary In this notebook, you have seen how we can use the robot's light sensor as a simple low resolution camera to sample handwritten digit images from the background. Collecting the data from the robot, we can then convert it to an image and preprocess is before testing it with a pre-trained multi-layer perceptron. Using captured images that are slightly offset from the center of the image array essentially provides us with a "jiggled" image, which tends to increase the classification error. -You have also seen how we might automate the collection of large amounts of data by "teleporting" the robot to particular locations and sampling the data. With the background defined as it is, we can also pick up encoded label data an use this to generate training data made up of scanned handwritten digit and image label pairs. 
In principle, we could use the image and test label data collected in this way as a training data set for an MLP or convolutional neural network. +You have also seen how we can automate the way the robot collects image data by "teleporting" the robot to a particular location and then sampling the data there. -The next notebook in the series is optional and demonstrates the performance of a CNN on the MNIST dataset. The required content continues with a look at how we can start to collect image data using the simulated robot whilst it is on the move. +In the next notebook in the series, you will see how we can use this automation approach to collect image and class data "in bulk" from the simulator. diff --git a/content/08. Remote services and multi-agent systems/.md/08.2 Collecting digit image and class data from the simulator.md b/content/08. Remote services and multi-agent systems/.md/08.2 Collecting digit image and class data from the simulator.md new file mode 100644 index 00000000..87a7c684 --- /dev/null +++ b/content/08. Remote services and multi-agent systems/.md/08.2 Collecting digit image and class data from the simulator.md @@ -0,0 +1,259 @@ +--- +jupyter: + jupytext: + formats: ipynb,.md//md + text_representation: + extension: .md + format_name: markdown + format_version: '1.2' + jupytext_version: 1.6.0 + kernelspec: + display_name: Python 3 + language: python + name: python3 +--- + +## 2. Collecting digit image and class data from the simulator + +If we wanted to collect image data from the background and then train a network using those images, we would need to generate the training label somehow. We could do this manually, looking at each image and then by observation recording the digit value, associating it with the image location co-ordinates. But could we also encode the digit value explicitly somehow? + +If you look carefully at the *MNIST_Digits* background in the simulator, you will see that alongside each digit is a solid coloured area. This area is a greyscale value that represents the value of the digit represented by the image. That is, it represents a training label for the digit. + + +*The greyscale encoding is quite a crude encoding method that is perhaps subject to noise. Another approach might be to use a simple QR code to encode the digit value.* + + +As usual, load in the simulator in the normal way: + +```python +from nbev3devsim.load_nbev3devwidget import roboSim, eds + +%load_ext nbev3devsim +``` + +Clear the datalog just to ensure we have a clean datalog to work with: + +```python +%sim_data --clear +``` + +The solid greyscale areas are arranged so that when the left light sensor is over the image, the right sensor is over the training label area. + +```python +%%sim_magic_preloaded -b MNIST_Digits -O -R -AH -x 400 -y 50 + +#Sample the light sensor reading +sensor_value = colorLeft.reflected_light_intensity + +# This is essentially a command invocation +# not just a print statement! 
+print("image_data both") +``` + +We can retrieve the last pair of images from the `roboSim.image_data()` dataframe using the `get_sensor_image_pair()` function: + +```python +from nn_tools.sensor_data import zoom_img +from nn_tools.sensor_data import get_sensor_image_pair + +# The sample pair we want from the logged image data +pair_index = -1 + +left_img, right_img = get_sensor_image_pair(roboSim.image_data(), + pair_index) + +zoom_img(left_img), zoom_img(right_img) + +``` + + +The image labels are encoded as follows: + +`greyscale_value = 25 * digit_value` + + +One way of decoding the label is as follows: + +- divide each of the greyscale pixel values collected from the right hand sensor array by 25; +- take the median of these values and round to the nearest integer; *in a noise free environment, using the median should give a reasonable estimate of the dominant pixel value in the frame.* +- ensure we have an integer by casting the result to an integer. + +The *pandas* package has some operators that can help us with that if we put all the data into a *pandas* *Series* (essentially, a single column dataframe): + +```python +import pandas as pd + +def get_training_label_from_sensor(img): + """Return a training class label from a sensor image.""" + # Get the pixels data as a pandas series + # (similar to a single column dataframe) + image_pixels = pd.Series(list(img.getdata())) + + # Divide each value in the first column (name: 0) by 25 + image_pixels = image_pixels / 25 + + # Find the median value + pixels_median = image_pixels.median() + + # Find the nearest integer and return it + return int( pixels_median.round(0)) + +# Try it out +get_training_label_from_sensor(right_img) +``` + +The following function will grab the right and left images from the data log, decode the label from the right hand image, and return the handwritten digit from the left light sensor along with the training label: + +```python +def get_training_data(raw_df, pair_index): + """Get training image and label from raw data frame.""" + + # Get the left and right images + # at specified pair index + left_img, right_img = get_sensor_image_pair(raw_df, + pair_index) + + # Find the training label value as the median + # value of the right habd image. + # Really, we should properly try to check that + # we do have a proper training image, for example + # by encoding a recognisable pattern + # such as a QR code + training_label = get_training_label_from_sensor(right_img) + return training_label, left_img + + +# Try it out +label, img = get_training_data(roboSim.image_data(), + pair_index) +print(f'Label: {label}') +zoom_img(img) +``` + + +We're actually taking quite a lot on trust in extracting the data from the dataframe in this way. Ideally, we would have a unique identifiers that reliably associate the left and right images as having been sampled from the same location. As it is, we assume the left and right image datasets appear in that order, one after the other, so we can count back up the dataframe to collect different pairs of data. 
+ + +Load in our previously trained MLP classifier: + +```python +# Load model +from joblib import load + +MLP = load('mlp_mnist14x14.joblib') +``` + +We can now test that image against the classifier: + +```python +from nn_tools.network_views import image_class_predictor + +image_class_predictor(MLP, img) +``` + + +### 2.3.1 Activity — Testing the ability to recognise images slight off-center in the image array + +Write a simple program to collect sample data at a particular location and then display the digit image and the decoded label value. + +Modify the x or y co-ordinates used to locate the robot by by a few pixel values away from the sampling point origins and test the ability of the network to recognise digits that are lightly off-center in the image array. + +How well does the network perform? + +*Hint: when you have run your program to collect the data in the simulator, run the `get_training_data()` with the `roboSim.image_data()` to generate the test image and retrieve its decoded training label.* + +*Hint: use the `image_class_predictor()` function with the test image to see if the classifier can recognise the image.* + +*Hint: if you seem to have more data in the dataframe than you thought you had collected, did you remember to clear the datalog before collecting your data?* + + +```python +# Your code here +``` + + +*Record your observations here.* + + + +### 2.3.2 Activity — Collecting image sample data from the *MNIST_Digits* background (optional) + +In this activity, you will need to collect a complete set of sample data from the simulator to test the ability of the network to correctly identify the handwritten digit images. + +Recall that the sampling positions are arranged along rows 100 pixels apart, starting at x=100 and ending at x=2000; +along columns 100 pixels apart, starting at y=50 and ending at y=1050. + +Write a program to automate the collection of data at each of these locations. + +How would you then retrieve the hand written digit image and it's decoded training label? + +*Hint: import the `time` package and use the `time.sleep` function to provide a short delay between each sample collection. You may also find it convenient to import the `trange` function to provide a progress bar indicator when iterating through the list of collection locations: `from tqdm.notebook import trange`.* + + + +*Your program design notes here.* + + +```python student=true +# Your program code +``` + + +*Describe here how you would retrieve the hand written digit image and it's decoded training label.* + + + +#### Example solution + +*Click on the arrow in the sidebar or run this cell to reveal an example solution.* + + + +To collect the data, I use two `range()` commands, one inside the other, to iterate through the *x* and *y* coordinate values. 
The outer loop generates the *x* values and the inner loop generates the *y* values: + + +```python activity=true hidden=true +# Make use of the progress bar indicated range +from tqdm.notebook import trange +import time + +# Clear the datalog so we know it's empty +%sim_data --clear + + +# Generate a list of integers with desired range and gap +min_value = 50 +max_value = 1050 +step = 100 + +for _x in trange(100, 501, 100): + for _y in range(min_value, max_value+1, step): + + %sim_magic -R -x $_x -y $_y + # Give the data time to synchronise + time.sleep(1) +``` + + +We can now grab and view the data we have collected: + + +```python activity=true hidden=true +training_df = roboSim.image_data() +training_df +``` + + +The `get_training_data()` function provides a convenient way of retrieving the handwritten digit image and the decoded training label. + + +```python activity=true hidden=true +label, img = get_training_data(training_df, pair_index) +zoom_img(img), label +``` + +## 2.4 Summary + +In this notebook, you have automated the collection of hand-written digit and encoded label image data from the simulator ad seen how this can be used to generate training data made up of scanned handwritten digit and image label pairs. In principle, we could use the image and test label data collected in this way as a training data set for an MLP or convolutional neural network. + +The next notebook in the series is optional and demonstrates the performance of a CNN on the MNIST dataset. The required content continues with a look at how we can start to collect image data using the simulated robot whilst it is on the move. diff --git a/content/08. Remote services and multi-agent systems/.md/08.2 Recognising digits using a convolutional neural network (optional).md b/content/08. Remote services and multi-agent systems/.md/08.3 Recognising digits using a convolutional neural network (optional).md similarity index 92% rename from content/08. Remote services and multi-agent systems/.md/08.2 Recognising digits using a convolutional neural network (optional).md rename to content/08. Remote services and multi-agent systems/.md/08.3 Recognising digits using a convolutional neural network (optional).md index f3b79335..eef0627a 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.2 Recognising digits using a convolutional neural network (optional).md +++ b/content/08. Remote services and multi-agent systems/.md/08.3 Recognising digits using a convolutional neural network (optional).md @@ -7,7 +7,7 @@ jupyter: extension: .md format_name: markdown format_version: '1.2' - jupytext_version: 1.5.2 + jupytext_version: 1.6.0 kernelspec: display_name: Python 3 language: python @@ -20,14 +20,14 @@ __This notebook contains optional study material. You are not required to work t *This notebook demonstrates the effectiveness of a pre-trained convolutional neural network (CNN) at classifying MLP handwritten digit images.* -# 2 Recognising digits using a convolutional neural network (optional) +# 3 Recognising digits using a convolutional neural network (optional) In the previous notebook, you saw how we could collect image data sampled by the robot within the simulator into the notebook environment and then test the collected images against an "offboard" pre-trained multilayer perceptron run via the notebook's Python environment. However, even with an MLP tested on "jiggled" images, the network's classification performance degrades when "off-center" images are presented to it. 
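By way of illustration, the following sketch shows one way a test image might be deliberately shifted off-center. The use of `PIL.ImageChops.offset()` here, and the particular offset values, are assumptions made for demonstration purposes rather than the module's own image jiggling code.

```python
from PIL import Image, ImageChops

# A dummy 28x28 greyscale image standing in for a sampled digit image
sample_img = Image.new('L', (28, 28), color=255)

# Shift the image content 3 pixels right and 2 pixels down.
# ImageChops.offset() wraps pixels that fall off one edge back in on the
# opposite edge, which is acceptable for small "jiggles" like this one.
jiggled_img = ImageChops.offset(sample_img, 3, 2)
```

The shifted copy can then be presented to the network in place of the original, neatly centered image.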
In this notebook, you will see how we can use a convolutional neural network running in the notebook's Python environment to classify images retrieved from the robot in the simulator. -## 2.1 Using a pre-trained convolutional neural network +## 3.1 Using a pre-trained convolutional neural network Although training a convolutional neural network can take quite a lot of time, and a *lot* of computational effort, off-the-shelf pre-trained models are also increasingly available. However, whilst this means you may be able to get started on a recognition task without the requirement to build your own model, you would do well to remember the phrase *caveat emptor*: buyer beware. @@ -42,7 +42,7 @@ However, you should be aware when using third party models that they may incorpo The following example uses a pre-trained convolutional neural network model implemented as a TensorFlow Lite model. [*TensorFlow Lite*](https://www.tensorflow.org/lite/) is a framework developed to support the deployment of TensorFlow Model on internet of things (IoT) devices. As such, the models are optimised to be as small as possible and to be evaluated as computationally quickly and efficiently as possible. -### 2.1.1 Loading the CNN +### 3.1.1 Loading the CNN The first thing we need to do is to load in the model. The actual TensorFlow Lite framework code is a little bit fiddly in places, so we'll use some convenience functions to make using the framework slightly easier. @@ -62,10 +62,10 @@ from nn_tools.network_views import cnn_get_details cnn_get_details(cnn, report=True) ``` -The main take away from this report are the items the describe the structure of the input and output arrays. In particular, we have an input array of a single 28x28 pixel greyscale image array, and an output of 10 classification classes. Each output gives the probability with which the CNN believes the image represents the corresponding digit. +The main take away from this report are the items that describe the structure of the input and output arrays. In particular, we have an input array of a single 28x28 pixel greyscale image array, and an output of 10 classification classes. Each output gives the probability with which the CNN believes the image represents the corresponding digit. -### 2.1.2 Testing the network +### 3.1.2 Testing the network We'll test the network with images retrieved from the simulator. @@ -162,7 +162,7 @@ Let's test this offset image to see if our convolutional neural network can stil cnn_test_with_image(cnn, img, rank=2) ``` -### 2.1.3 Activity — Testing the CNN using robot collected image samples +### 3.1.3 Activity — Testing the CNN using robot collected image samples The `ipywidget` powered end user application defined in the code cell below will place the robot at a randomly selected digit location and display and then test the image grabbed from *the previous location* using the CNN. @@ -200,7 +200,7 @@ def random_MNIST_location(location_noise = False): cnn_test_with_image(cnn, img, rank=3) ``` -## 2.2 Summary +## 3.2 Summary In this notebook, you have seen how we can use a convolutional neural network to identify handwritten digits scanned by the robot in the simulator. diff --git a/content/08. Remote services and multi-agent systems/.md/08.3 Recognising patterns on the move.md b/content/08. Remote services and multi-agent systems/.md/08.4 Recognising patterns on the move.md similarity index 94% rename from content/08. 
Remote services and multi-agent systems/.md/08.3 Recognising patterns on the move.md rename to content/08. Remote services and multi-agent systems/.md/08.4 Recognising patterns on the move.md index 4dbdc46b..b20af40c 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.3 Recognising patterns on the move.md +++ b/content/08. Remote services and multi-agent systems/.md/08.4 Recognising patterns on the move.md @@ -14,7 +14,7 @@ jupyter: name: python3 --- -# 3 Recognising patterns on the move +# 4 Recognising patterns on the move To be really useful a robot needs to recognise things as it goes along, or ‘on the fly’. In this notebook, you will train a neural network to use a simple MLP classifier to try to identify different shapes on the background. The training samples themselves, images *and* training labels, will be captured by the robot from the simulator background. @@ -27,7 +27,6 @@ To begin with we will contrive things somewhat to collect the data at specific l - *There is quite a lot of provided code in this notebook. You are not necessarily expected to be able to create this sort of code yourself. Instead, try to focus on the process of how various tasks are broken down into smaller discrete steps, as well as how small code fragments can be combined to create "higher level" functions that perform ever more powerful tasks.* @@ -38,6 +37,8 @@ Before continuing, ensure the simulator is loaded and available: + + ```python from nbev3devsim.load_nbev3devwidget import roboSim, eds %load_ext nbev3devsim @@ -51,9 +52,9 @@ Just below each shape is a grey square, whose fill colour is used to distinguish %sim_magic -b Simple_Shapes -x 600 -y 900 ``` -### 3.1 Evaluating the possible training data +### 4.1 Evaluating the possible training data -In this initial training pass, we will check whether the robot can clearly observe the potential training pairs. Each training pair consists of the actual shape image as well as a solid grey square, where the grey colour is use to represent one of eight (8) different training classes. +In this initial training pass, we will check whether the robot can clearly observe the potential training pairs. Each training pair consists of the actual shape image as well as a solid grey square, where the grey colour is use to represent one of six (6) different training classes. The left light sensor will be used to sample the shape image data and the right light sensor will be used to collect the simpler grey classification group pattern. @@ -121,7 +122,7 @@ _x = 280 %sim_magic -x $_x -y 900 -RAH ``` -### 3.1.1 Investigating the training data samples +### 4.1.1 Investigating the training data samples Let's start by seeing if we can collect image data samples for each of the shapes. @@ -211,7 +212,7 @@ codemap = {shapemap[k]:k for k in shapemap} codemap ``` -### 3.1.2 Counting the number of black pixels in each shape +### 4.1.2 Counting the number of black pixels in each shape Ever mindful that we are on the look out for features that might help us distinguish between the different shapes, let's check a really simple measure: the number of black filled pixels in each shape. @@ -243,13 +244,13 @@ for index in range(len(clean_left_images_df)): Observing the black (`0` value) pixel counts, we see that they do not uniquely identify the shapes. For example, the left and right facing triangles and the diamond all have 51 black pixels. A simple pixel count does not provide a way to distinguish between the shapes. 
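As a small supporting check, assuming the pixel counts printed by the loop above had been gathered into a dictionary keyed by shape name (something the code above does not itself do), collisions in a candidate feature could be flagged programmatically with a sketch along these lines:

```python
from collections import defaultdict

def find_feature_collisions(counts):
    """Group shapes that share the same feature value,
    for example the same black pixel count."""
    by_value = defaultdict(list)
    for shape, value in counts.items():
        by_value[value].append(shape)

    # Keep only the feature values shared by more than one shape
    return {value: shapes for value, shapes in by_value.items()
            if len(shapes) > 1}
```

Run against the counts reported above, this would flag, for example, that the left and right facing triangles and the diamond all collide on a count of 51 black pixels.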
-### 3.1.3 Activity — Using bounding box sizes as a feature for distinguishing between shapes +### 4.1.3 Activity — Using bounding box sizes as a feature for distinguishing between shapes -When we trained a neural network to recognise shape data, we use the dimensions of a bounding box drawn around the fruit as the input features to our network. +When we trained a neural network to recognise shape data, we used the dimensions of a bounding box drawn around the fruit as the input features to our network. Will the bounding box approach used there also allow us to distinguish between the shape images? -Run the following code cell to convert the raw data associated with an image to a data frame, and then prune the rows and columns the edges that only contain white space. +Run the following code cell to convert the raw data associated with an image to a data frame, and then prune the rows and columns around the edges that only contain white space. The dimensions of the dataframe, which is to say, the `.shape` of the dataframe, given as the 2-tuple `(rows, columns)`, corresponds to the bounding box of the shape. @@ -322,7 +323,7 @@ Inspecting the results from my run (yours may be slightly different), several of The square is clearly separated from the other shapes on the basis of its bounding box dimensions, but the other shapes all have dimensions that may be hard to distinguish between. -### 3.1.4 Decoding the training label image +### 4.1.4 Decoding the training label image The grey filled squares alongside the shape images are used to encode a label describing the associated shape. @@ -446,16 +447,16 @@ In summary, we can now: - label the corresponding shape image with the appropriate label. -## 3.2 Real time data collection +## 4.2 Real time data collection In this section, you will start to explore how to collect data in real time as the robot drives over the images, rather than being teleported directly on top of them. -### 3.2.1 Identifying when the robot is over a pattern in real time +### 4.2.1 Identifying when the robot is over a pattern in real time If we want to collect data from the robot as it drives slowly over the images we need to be able to identify when it is passing over the images so we can trigger the image sampling. -The following program will slow drive over the test patterns, logging the reflected light sensor values every so often. Start the program using the simulator *Run* button or the simulator `R` keyboard shortcut. +The following program will slowly drive over the test patterns, logging the reflected light sensor values every so often. Start the program using the simulator *Run* button or the simulator `R` keyboard shortcut. From the traces on the simulator chart, can you identify when the robot passes over the images? @@ -491,7 +492,7 @@ say('All done') *Based on your observations, describe a strategy you might use to capture image sample data when the test images are largely in view.* -### 3.2.2 Challenge — capturing image data in real time (optional) +### 4.2.2 Challenge — capturing image data in real time (optional) Using your observations regarding the reflected light sensor values as the robot crosses the images, or otherwise, write a program to collect image data from the simulator in real time as the robot drives over them. 
@@ -502,7 +503,7 @@ Using your observations regarding the reflected light sensor values as the robot # Your code here ``` -### 3.2.3 Capturing image data in real time +### 4.2.3 Capturing image data in real time By observation of the reflected light sensor data in the chart, the robot appears to be over the a shape, as the reflected light sensor values drop below about 85%. @@ -579,7 +580,7 @@ training_labels = training_df['code'].to_list() We are now in a position to try to use the data collected by travelling over the test track to train the neural network. -## 3.2 Training an MLP to recognise the patterns +## 4.3 Training an MLP to recognise the patterns In an earlier activity, we discovered that the bounding box method we used to distinguish fruits did not provide a set of features that we could use to distinguish the different shapes. @@ -620,7 +621,7 @@ predict_and_report_from_image(MLP, test_image, test_label) *Record your observations about how well the network performs.* -## 3.3 Testing the network on a new set of collected data +## 4.4 Testing the network on a new set of collected data Let's collect some data again by driving the robot over a second, slightly shorter test track at `y=700` to see if we can recognise the images. @@ -628,7 +629,7 @@ Let's collect some data again by driving the robot over a second, slightly short There are no encoded training label images in this track, so we will either have to rely on just the reflected light sensor value to capture legitimate images for us, or we will need to preprocess the images to discard ones that are only partial image captures. -### 3.3.1 Collecting the test data +### 4.4.1 Collecting the test data The following program will stop as soon as the reflected light value from the left sensor drops below 85. How much of the image can we see? @@ -683,7 +684,7 @@ while int(tank_drive.left_motor.position)<800: say("All done.") ``` -### 3.3.2 Generating the test set +### 4.4.2 Generating the test set We can now generate a clean test set of images based on a minimum required number of black pixels. The following function grabs the test images and also counts the black pixels in the left image. @@ -736,7 +737,7 @@ for i in trange(int(len(roboSim.image_data())/2)): test_images ``` -### 3.3.3 Testing the data +### 4.4.3 Testing the data Having got our images, we can now try to test them with the MLP. @@ -757,7 +758,7 @@ display(codemap[prediction]) zoom_img(test_img) ``` -### 3.3.4 Save the MLP +### 4.4.4 Save the MLP Save the MLP so we can use it again: @@ -772,7 +773,7 @@ dump(MLP, 'mlp_shapes_14x14.joblib') #MLP = load('mlp_shapes_14x14.joblib') ``` -## Summary +## 4.5 Summary In this notebook, you have seen how we can collect data in real time from the simulator by sampling images when the robot detects a change in the reflected light levels. @@ -780,6 +781,6 @@ Using a special test track, with paired shape and encoded label images, we were Investigation of the shape images revealed that simple black pixel counts and bounding box dimensions did not distinguish between the shapes, so we simply trained the network on the raw images. -Running the robot over a test track without and paired encoded label image, we were still able to detect when the robot was over the image based on the black pixel count of the shape image. On testing the MLP against newly collected and shapes, the neural network was able to correctly classify the collected patterns. 
+Running the robot over a test track without any paired encoded label images, we were still able to detect when the robot was over the image based on the black pixel count of the shape image. On testing the MLP against newly collected and shapes, the neural network was able to correctly classify the collected patterns. In the next notebook, you will explore how the robot may be able to identify the shapes in real time as part of a multi-agent system working in partnership with a pattern recognising agent running in the notebook Python environment. diff --git a/content/08. Remote services and multi-agent systems/.md/08.4 Messaging in multi-agent systems.md b/content/08. Remote services and multi-agent systems/.md/08.5 Messaging in multi-agent systems.md similarity index 97% rename from content/08. Remote services and multi-agent systems/.md/08.4 Messaging in multi-agent systems.md rename to content/08. Remote services and multi-agent systems/.md/08.5 Messaging in multi-agent systems.md index 8cdec4ab..10688c28 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.4 Messaging in multi-agent systems.md +++ b/content/08. Remote services and multi-agent systems/.md/08.5 Messaging in multi-agent systems.md @@ -14,13 +14,13 @@ jupyter: name: python3 --- -# 4 Messaging in multi-agent systems +# 5 Messaging in multi-agent systems In the previous notebooks in this session, you have seen how we can pull data collected in the simulator into the notebook's Python environment, and then analyse it in that environment at our convenience. -In particular, we could convert the raw data to an image based representation, as well as presenting in as raw data to a pre-trained multilayer perceptron (MLP) or a pre-trained convolutional neural network (CNN). +In particular, we could convert the raw data to an image based representation, as well as presenting it as raw data to a pre-trained multilayer perceptron (MLP) or a pre-trained convolutional neural network (CNN). -We could also capture and decode test labels for the images, allowing is to train a classifier neural network purely using information retrieved from the simulated robot. +We could also capture and decode test labels for the images, allowing us to train a classifier neural network purely using information retrieved from the simulated robot. To simplify data collection matters in the original experiments, we "teleported" the robot to specific sampling locations, rather than expecting it to explore the environment and try to detect images on its own. @@ -29,7 +29,7 @@ In the previous notebook, you saw how we could collect data "on the move", getti In this notebook, we will try to make things even more dynamic. In particular, we will make use of a communication mechanism where the robot can send data back to the notebook environment for analysis, and then when the analysis is complete, have a message sent from the notebook Python environment back to the robot identifying how a potential image was classified. -## 4.1 ROS — the Robot Operating System +## 5.1 ROS — the Robot Operating System *ROS*, the *Robot Operating System*, provides one possible architecture for implementing a dynamic message passing architecture. In a ROS environment, separate *nodes* publish details of one or more *services* they can perform along with *topics* that act act as the nodes address that other nodes can subscribe. Nodes then pass messages between each other in order to perform a particular task. 
The ROS architecture is rather elaborate for our needs, however, so we shall use a much simpler and more direct approach. @@ -42,7 +42,7 @@ In this notebook, we will try to make things even more dynamic. In particular, w The approach we will use, although much simpler approach than the full ROS architecture, will also be based on a message passing approach. To implement the communication system, we need to define a "message" handler in the notebook's Python environment that can accept messages sent from the simulated robot, perform some sort of analysis task on the received data, and then provide a response back to the simulated robot. -### 4.1.1 Communicating between the notebook and the robot +### 5.1.1 Communicating between the notebook and the robot A simple diagram helps to explain the architecture we are using. @@ -76,7 +76,7 @@ The diagram shows three boxes: The figure is intended to convey the idea that the robot sends a message to a message handler running in the notebook's Python environment, which presents the decoded message contents to a neural network. The network classifies the data, passes the classification "back" to the message handler, and this in turn passes a response message back to the simulated robot. -### 4.1.2 Defining a simple message handler +### 5.1.2 Defining a simple message handler Inside the robot, a simple mechanism is already defined that allows the robot to send a message to the Python environment, but there is nothing defined on the Python end to handle it. @@ -274,11 +274,11 @@ We can also view the logfile to see a report from the Python side of the transac %cat logger.txt ``` -### 4.1.3 Passing state +### 5.1.3 Passing state Passing messages is all very well, but can we go a step further? Can we pass *data objects* between the robot and the Python environment, and back again? -Let's start by adding another level of indirection to out program. In this case, let's create a simple agent that takes a parsed message object, does something to it (which in this case isn't very interesting!) and passes a modified object back: +Let's start by adding another level of indirection to our program. In this case, let's create a simple agent that takes a parsed message object, does something to it (which in this case isn't very interesting!) and passes a modified object back: ```python def simple_echo_agent(msg): @@ -435,9 +435,9 @@ Again, we can also view the logfile giving the Python agent's perspective: %cat logger.txt ``` -### 4.1.4 Extending the message parser +### 5.1.4 Extending the message parser -Let's now look at how we might retrieve real time sensor data in out message passing system. +Let's now look at how we might retrieve real time sensor data in our message passing system. As well as the `PY::` message processor, the robot also has a special `IMG_DATA` message processor. Printing the message `IMG_DATA` to the simulator output window causes a special message to be passed to the Python environment. This message starts with the phrase `IMG_DATA::`, followed by the sensor data. 
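As a minimal sketch of the message format (this is not the module's actual parser), a prefixed message of this kind could be split into its message type and its payload by partitioning on the `::` separator:

```python
def split_message(msg):
    """Split a message of the form 'PREFIX::payload' into its two parts."""
    prefix, _, payload = msg.partition("::")
    return prefix, payload

# For example, an image data message splits as:
#   split_message("IMG_DATA::...") -> ("IMG_DATA", "...")
```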
@@ -518,7 +518,7 @@ You should hopefully see an echo of a large amount of sensor data appear in the %cat logger.txt ``` -### 4.1.5 Activity — Reviewing the inter-agent message protocol and communication activity +### 5.1.5 Activity — Reviewing the inter-agent message protocol and communication activity At this point, let's quickly recap on the messaging protocol we have defined by way of another sequence diagram: @@ -549,12 +549,12 @@ robot <- responder [label = "JSON::response"]; In the sequence diagram, the *robot* passes a message (`IMG_DATA`) containing image data to the Python *agent*. The *agent* has the message parsed by the *parser*, converts the response to an image pair, and passes one of the images to the *MLP* neural network. The *MLP* classifies the image and returns a prediction to the a *agent*. The *agent* creates a response and passes it to the *responder*, which encodes the response as a text message and sends it to the *robot*. The robot then parses the message as a Javascript object and uses it as required. -## 4.2 Putting the pieces together — a multi-agent system +## 5.2 Putting the pieces together — a multi-agent system With our message protocol defined, let's see if we can now create a multi-agent system where the robot collects some image data and passes it to the Python agent. The Python agent should then decode the image data, present it to a pre-trained multi-layer perceptron neural network, and identify a presented shape. The Python agent should then inform the robot about the shape the robot of the object it can see. -### 4.2.1 The image classifier agent +### 5.2.1 The image classifier agent To perform the recognition task, we need to implement our agent. The agent will take the image data and place it in a two row dataframe in the correct form. Then it will generate an image pair from the dataframe, and present the left-hand shape image to the neural network. The neural network will return a shape prediction and this will be passed in a message back to the robot. @@ -656,7 +656,7 @@ random_shape_x = 200 + random.randint(0, 5)*80 *Note down any other ideas you have about how a robot might be able to co-operate with other agents as apart of a multi-agent system.* -## 4.3 Summary +## 5.3 Summary In this notebook, you have seen how we can create a simple protocol that allows the passage of messages between the robot and Python agent in a simple multi-agent system. The Python agent picks up the message received from the robot, parses it and decodes it as an image. The image is then classified by an MLP and the agent responds to the robot with a predicted image classification. diff --git a/content/08. Remote services and multi-agent systems/.md/08.5 Conclusion.md b/content/08. Remote services and multi-agent systems/.md/08.6 Conclusion.md similarity index 88% rename from content/08. Remote services and multi-agent systems/.md/08.5 Conclusion.md rename to content/08. Remote services and multi-agent systems/.md/08.6 Conclusion.md index a137c642..9ba5b867 100644 --- a/content/08. Remote services and multi-agent systems/.md/08.5 Conclusion.md +++ b/content/08. Remote services and multi-agent systems/.md/08.6 Conclusion.md @@ -7,14 +7,14 @@ jupyter: extension: .md format_name: markdown format_version: '1.2' - jupytext_version: 1.5.2 + jupytext_version: 1.6.0 kernelspec: display_name: Python 3 language: python name: python3 --- -# 5 Conclusion +# 6 Conclusion Phew... you made it... 
Well done:-) @@ -38,7 +38,7 @@ When it came to actually programming the robot, you hopefully learned that somet As well as sequential programs, you also saw how we could use rule based systems to create rich programs that react to particular events, from the simple conversational agent originally implemented many decades ago in the form of *Eliza*, to more elaborate rule based systems created using the `durable-rules` framework. -You then learned how we could use simple neural networks to perform a range of classification tasks. *MLPs*, which is to say, *multilayer perceptrons*, can be quite quick to train, but may struggle when it comes to all but the simplest or most well behaved classification tasks. If you worked through the optinal materials, you will also have seen how *CNNs*, or *convolutional neural networks*, offer far more robust behaviour, particularly when it comes to image based recognition tasks. However, they are far more expensive to train in many senses of the word — in terms of training data required, computational effort and time. Trying to make sense of how neural networks actually perform their classification tasks is a significant challenge, but you saw how certain visualisation techniques could be used to help us peer inside the "mind" of a neural network. +You then learned how we could use simple neural networks to perform a range of classification tasks. *MLPs*, which is to say, *multilayer perceptrons*, can be quite quick to train, but may struggle when it comes to all but the simplest or most well behaved classification tasks. If you worked through the optional materials, you will also have seen how *CNNs*, or *convolutional neural networks*, offer far more robust behaviour, particularly when it comes to image based recognition tasks. However, they are far more expensive to train in many senses of the word — in terms of training data required, computational effort and time. Trying to make sense of how neural networks actually perform their classification tasks is a significant challenge, but you saw how certain visualisation techniques could be used to help us peer inside the "mind" of a neural network. Finally, you saw how we could start to consider the robot+Python notebook computational environment as a *multi-agent* system, in which we programmed the robot and Python agents separately, and then created a simple message passing protocol to allow them to communicate. Just as complex emergent behaviours can arise from multiple interacting rules in a rule based system, or the combined behaviour of the weighted connections between neural network neurons, so too might we create complex behaviours from the combined behaviour of agents in a simple multi-agent system. @@ -47,7 +47,7 @@ You have been exposed to a *lot* of code in this module, and you are not expecte So give yourself a pat on the back, and grab a quick cup of your favourite hot drink, or a long glass of your favourite cold drink; and as you savour it for a while, reflect on just how much you've covered over the last eight weeks. -*Jot down a few notes here to reflect on what you enjoyed, what you learned, and what you found particularly challenging studying this block. Are there any things you could have done differently that would have made it easier or more rewarding?* +*Jot down a few notes here to reflect on what you enjoyed, what you learned, what you found particularly challenging studying this block and what surprised you. 
Are there any things you could have done differently that would have made it easier or more rewarding?* So with the practical material for the block now completed, it'll be time to finish off that TMA, and start thinking about making a start on the next block... diff --git a/content/08. Remote services and multi-agent systems/08.1 Introducing remote services and multi-agent systems.ipynb b/content/08. Remote services and multi-agent systems/08.1 Introducing remote services and multi-agent systems.ipynb index 744c03d2..cb1933bc 100644 --- a/content/08. Remote services and multi-agent systems/08.1 Introducing remote services and multi-agent systems.ipynb +++ b/content/08. Remote services and multi-agent systems/08.1 Introducing remote services and multi-agent systems.ipynb @@ -18,7 +18,7 @@ "\n", "The model is a bit like asking a research librarian for some specific information, the research librarian researching the topic, perhaps using resources you don't have direct access to, and then the research librarian providing you with the information you requested.\n", "\n", - "In a more dynamic multi-agent case we might consider the robot and the notebook environment to be acting as peers sending messages as and when they can between each other. For example, we might have two agents: a Lego mobile robot and a personal computer (PC), or the simulated robot and the notebook. In computational terms, *agents* are long-lived computational systems that can deliberate on the actions they may take in pursuit of their own goals based on their own internal state (often referred to as \"beliefs\") and sensory inputs. Their actions are then performed by means of some sort of effector system that can act on to change the state of the environment within which they reside.\n", + "In a more dynamic multi-agent case we might consider the robot and the notebook environment to be acting as peers sending messages as and when they can between each other. For example, we might have two agents: a Lego mobile robot and a personal computer (PC), or the simulated robot and the notebook. In computational terms, *agents* are long-lived computational systems that can deliberate on the actions they may take in pursuit of their own goals based on their own internal state (often referred to as \"beliefs\") and sensory inputs. Their actions are then performed by means of some sort of effector system that can act to change the state of the environment within which they reside.\n", "\n", "In a multi-agent system, two or more agents may work together to combine to perform some task that not only meets the (sub)goals of each individual agent, but that might also strive to attain some goal agreed upon by each member of the multi-agent system. Agents may communicate by making changes to the environment, for example, by leaving a trail that other agents may follow (an effect known as *stigmergy*), or by passing messages between themselves directly.\n", "\n", @@ -48,10 +48,24 @@ "\n", "Alongside each digit is a grey square, where the grey level is used to encode the actual label associated with the image. 
(You can see how the background was created in the `Background Image Generator.ipynb` notebook in the top-level `backgrounds` folder.)\n", "\n", - "In this notebook, you will use the light sensor as a simple low resolution camera, working with the pixel array data rather then the single value reflected light value.\n", - "\n", - "*Note that this functionality is not supported by the real Lego light sensor.*\n", - "\n", + "Typically, we use the light sensor to return a single value, such as the reflected light intensity value. However, in this notebook, you will use the light sensor as a simple low resolution camera. Rather than returning a single value, the sensor returns an array of data containing the values associated individual pixel values from a sampled image. We can then use this square array of pixel data collected by the robot inside the simulator, rather than a single value reflected light value, as the basis for trying to detect what the robot can actually see." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [ + "alert-danger" + ] + }, + "source": [ + "*Note that this low resolution camera-like functionality is not supported by the real Lego light sensor.*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "Let's start by loading in the simulator:" ] }, @@ -72,7 +86,7 @@ "source": [ "In order to collect the sensor image data, if the simulated robot program `print()` message starts with the word `image_data`, then we can send light sensor array data from the left, right or both light sensors to a data log in the notebook Python environment.\n", "\n", - "The `-R` switch in magic at the start of the following code cell will run the program in the simulator once it has been downloaded. " + "The `-R` (`--autorun`) switch in the magic at the start of the following code cell will run the program in the simulator once it has been downloaded. " ] }, { @@ -86,9 +100,6 @@ "# Configure a light sensor\n", "colorLeft = ColorSensor(INPUT_2)\n", "\n", - "#Sample the light sensor reading\n", - "sensor_value = colorLeft.reflected_light_intensity\n", - "\n", "# This is a command invocation rather than a print statement\n", "print(\"image_data left\")\n", "# The command is responded to by\n", @@ -153,6 +164,13 @@ "roboSim.image_data()" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Each row of the dataframe represents a single captured image from one of the light sensors." + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -166,7 +184,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The data representing the image is a long list of RGB (red green, blue) values. We can generate an image from a the a specific row of the dataframe, given it the row index:" + "The data representing the image is a long list of RGB (red green, blue) values. We can generate an image from a \n", + "specific row of the dataframe, given the row index:" ] }, { @@ -309,12 +328,12 @@ "source": [ "### 1.1.3 Collecting multiple sample images\n", "\n", - "The handwritten digit image sampling point locations in the *MINIST_Digits* simulator background can be found at the following locations:\n", + "The *MINIST_Digits* simulator background contains a selection of handwritten digit images arranged in a sparse grid on the background which we shall refer to as image sampling point locations. 
These image locations within the background can be found at the following co-ordinates:\n", "\n", "- along rows `100` pixels apart, starting at `x=100` and ending at `x=2000`;\n", "- along columns `100` pixels apart, starting at `y=50` and ending at `y=1050`.\n", "\n", - "We can collect the samples collected over a column by using line magic to teleport the simulated robot to each new location in turn and automatically run the program to log the sensor data.\n", + "We can collect images from this grid by using magic to teleport the robot to each sampling location and then automatically run the robot program to log the sensor data. For example, to collect images from one column of the background arrangement — that is, images with a particular *x* co-ordinate — we need to calculate the required *y* values for each sampling point.\n", "\n", "To start, let's just check we can generate the required *y* values:" ] @@ -337,9 +356,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Using this as a pattern, we can now create a simple script to clear the datalog, then iterate through the desired *y* locations, using line magic to locate the robot at each step and run the already downloaded image sampling program.\n", + "To help us keep track of where we are in the sample collection, we can use a visual indicator such as a progress bar. \n", "\n", - "To access the value of the iterated *y* value in the magic, we need to prefix it with a `$` when we refer to it. Note that we also use the `tqdm.notebook.trange` argument to define the range: this enhance the range iterator to provide an interactive progress bar that allows us to follow the progress of the iterator." + "The `tqdm` Python package provides a wide range of tools for displaying progress bars in Python programs. For example the `tqdm.notebook.trange` function enhances the range iterator with an interactive progress bar that allows us to follow the progress of the iterator:" ] }, { @@ -350,7 +369,32 @@ "source": [ "# Provide a progress bar when iterating through the range\n", "from tqdm.notebook import trange\n", + "import time\n", + "\n", + "for i in trange(min_value, max_value, step):\n", + " #Wait a moment\n", + " time.sleep(0.5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now create a simple script that will:\n", + "\n", + "- clear the datalog;\n", + "- iterate through the desired *y* locations with a visual indicator of how much progress we have made;\n", + "- use line magic to locate the robot at each step and run the already downloaded image sampling program.\n", "\n", + "To access the value of the iterated *y* value in the magic, we need to prefix it with a `$` when we refer to it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ "# We need to add a short delay between iterations to give\n", "# the data time to synchronise\n", "import time\n", @@ -392,7 +436,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can convert the image to a black and white image by setting pixels above a specified threshold value to white (`255`), otherwise coloring the pixel black (`0`) using the `generate_bw_image()` function. This will select a row from the datalog at a specific location, optionally crop it to a specific area, and then pixel values greater than threshold to white (`255`), with values equal to or below the threshold to `0`." 
+ "We can convert the image to a black and white image by setting pixels above a specified threshold value to white (`255`), otherwise coloring the pixel black (`0`) using the `generate_bw_image()` function. This will select a row from the datalog at a specific location, optionally crop it to a specific area, and then set pixel values greater than threshold to white (`255`), with values equal to or below the threshold to `0`." ] }, { @@ -507,7 +551,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 1.2 Testing the robot sample images using a pre-retrained MLP\n", + "## 1.2 Testing the robot sampled images using a pre-retrained MLP\n", "\n", "Having grabbed the image data, we can pre-process it as required and then present it to an appropriately trained neural network to see if the network can identify the digit it represents." ] @@ -797,7 +841,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "heading_collapsed": true }, "source": [ "#### Example discussion\n", @@ -808,7 +853,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "hidden": true }, "source": [ "We can collect the image data by calling the `%sim_magic` with the `-R` switch so that it runs the current program directly. We also need to set the location using the `-x` and `-y` parameters." @@ -818,7 +864,8 @@ "cell_type": "code", "execution_count": null, "metadata": { - "activity": true + "activity": true, + "hidden": true }, "outputs": [], "source": [ @@ -828,7 +875,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "hidden": true }, "source": [ "The data is available in a dataframe returned by calling `roboSim.image_data()`." @@ -837,7 +885,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "hidden": true }, "source": [ "To view the result, we can zoom the display of the last collected image in the notebook synched datalog." @@ -847,7 +896,8 @@ "cell_type": "code", "execution_count": null, "metadata": { - "activity": true + "activity": true, + "hidden": true }, "outputs": [], "source": [ @@ -861,7 +911,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "hidden": true }, "source": [ "By my observation, the digit represented by the image at the specified location is a figure `3`.\n", @@ -873,7 +924,8 @@ "cell_type": "code", "execution_count": null, "metadata": { - "activity": true + "activity": true, + "hidden": true }, "outputs": [], "source": [ @@ -883,7 +935,8 @@ { "cell_type": "markdown", "metadata": { - "activity": true + "activity": true, + "hidden": true }, "source": [ "This appears to match my prediction." @@ -893,371 +946,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 1.3 Collecting digit image and class data from the simulator\n", - "\n", - "If you look carefully at the *MNIST_Digits* background in the simulator, you will see that alongside each digit is a solid coloured area. This area is a greyscale value that represents the value of the digit represented by the image. 
That is, it represents a training label for the digit.\n", - "\n", - "Before we proceed, clear out the datalog to give ourselves a clean datalog to work with:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%sim_data --clear" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The solid coloured areas are arranged so that when the left light sensor is over the image, the right sensor is over the training label area." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b MNIST_Digits -O -R -AH -x 400 -y 50\n", - "\n", - "#Sample the light sensor reading\n", - "sensor_value = colorLeft.reflected_light_intensity\n", - "\n", - "# This is essentially a command invocation\n", - "# not just a print statement!\n", - "print(\"image_data both\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can retrieve the last pair of images from the `roboSim.image_data()` dataframe using the `get_sensor_image_pair()` function:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from nn_tools.sensor_data import get_sensor_image_pair\n", - "\n", - "# The sample pair we want from the logged image data\n", - "pair_index = -1\n", - "\n", - "left_img, right_img = get_sensor_image_pair(roboSim.image_data(),\n", - " pair_index)\n", - "\n", - "zoom_img(left_img), zoom_img(right_img)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "tags": [ - "alert-success" - ] - }, - "source": [ - "The image labels are encoded as follows:\n", - "\n", - "`greyscale_value = 25 * digit_value`" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "One way of decoding the label is as follows:\n", - "\n", - "- divide each of the greyscale pixel values collected from the right hand sensor array by 25;\n", - "- take the median of these values and round to the nearest integer; *in a noise free environment, using the median should give a reasonable estimate of the dominant pixel value in the frame.*\n", - "- ensure we have an integer by casting the result to an integer.\n", - "\n", - "The *pandas* package has some operators that can help us with that if we put all the data into a *pandas* *Series* (essentially, a single column dataframe):" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import pandas as pd\n", - "\n", - "def get_training_label_from_sensor(img):\n", - " \"\"\"Return a training class label from a sensor image.\"\"\"\n", - " # Get the pixels data as a pandas series\n", - " # (similar to a single column dataframe)\n", - " image_pixels = pd.Series(list(img.getdata()))\n", - "\n", - " # Divide each value in the first column (name: 0) by 25\n", - " image_pixels = image_pixels / 25\n", - "\n", - " # Find the median value\n", - " pixels_median = image_pixels.median()\n", - "\n", - " # Find the nearest integer and return it\n", - " return int( pixels_median.round(0))\n", - "\n", - "# Try it out\n", - "get_training_label_from_sensor(right_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The following function will grab right and left image from the data log, decode the label from the right hand image, and return the handwritten digit from the left light sensor along with the training label:" - ] - }, - { - "cell_type": "code", - 
"execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def get_training_data(raw_df, pair_index):\n", - " \"\"\"Get training image and label from raw data frame.\"\"\"\n", - " \n", - " # Get the left and right images\n", - " # at specified pair index\n", - " left_img, right_img = get_sensor_image_pair(raw_df,\n", - " pair_index)\n", - " \n", - " # Find the training label value as the median\n", - " # value of the right habd image.\n", - " # Really, we should properly try to check that\n", - " # we do have a proper training image, for example\n", - " # by encoding a recognisable pattern \n", - " # such as a QR code\n", - " training_label = get_training_label_from_sensor(right_img)\n", - " return training_label, left_img\n", - " \n", - "\n", - "# Try it out\n", - "label, img = get_training_data(roboSim.image_data(),\n", - " pair_index)\n", - "print(f'Label: {label}')\n", - "zoom_img(img)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "tags": [ - "alert-danger" - ] - }, - "source": [ - "We're actually taking quite a lot on trust in extracting the data from the dataframe in this way. Ideally, we would have a unique identifiers that reliably associate the left and right images as having been sampled from the same location. As it is, we assume the left and right image datasets appear in that order, one after the other, so we can count back up the dataframe to collect different pairs of data." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can now test that image against the classifier:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "image_class_predictor(MLP, img)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "### 1.3.1 Activity — Testing the ability to recognise images slight off-center in the image array\n", - "\n", - "Write a simple program to collect sample data at a particular location and then display the digit image and the decoded label value.\n", - "\n", - "Modify the x or y co-ordinates used to locate the robot by by a few pixel values away from the sampling point origins and test the ability of the network to recognise digits that are lightly off-center in the image array.\n", - "\n", - "How well does the network perform?\n", - "\n", - "*Hint: when you have run your program to collect the data in the simulator, run the `get_training_data()` with the `roboSim.image_data()` to generate the test image and retrieve its decoded training label.*\n", - "\n", - "*Hint: use the `image_class_predictor()` function with the test image to see if the classifier can recognise the image.*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Record your observations here.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "### 1.3.2 Activity — Collecting image sample data from the *MNIST_Digits* background (optional)\n", - "\n", - "In this activity, you will need to collect a complete set of sample data from the simulator to test the ability of the network to correctly identify the handwritten digit images.\n", - "\n", - "Recall that the sampling positions are arranged along rows 100 pixels apart, starting at x=100 and ending at x=2000;\n", - "along columns 100 pixels apart, starting at y=50 and 
ending at y=1050.\n", - "\n", - "Write a program to automate the collection of data at each of these locations.\n", - "\n", - "How would you then retrieve the hand written digit image and it's decoded training label?" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Your program design notes here.*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "student": true - }, - "outputs": [], - "source": [ - "# Your program code" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Describe here how you would retrieve the hand written digit image and it's decoded training label.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "#### Example solution\n", - "\n", - "*Click on the arrow in the sidebar or run this cell to reveal an example solution.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "To collect the data, I use two `range()` commands, one inside the other, to iterate through the *x* and *y* coordinate values. The outer loop generates the *x* values and the inner loop generates the *y* values:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "activity": true - }, - "outputs": [], - "source": [ - "# Clear the datalog so we know it's empty\n", - "%sim_data --clear\n", - "\n", - "\n", - "# Generate a list of integers with desired range and gap\n", - "min_value = 50\n", - "max_value = 1050\n", - "step = 100\n", - "\n", - "for _x in trange(100, 501, 100):\n", - " for _y in range(min_value, max_value+1, step):\n", - "\n", - " %sim_magic -R -x $_x -y $_y\n", - " # Give the data time to synchronise\n", - " time.sleep(1)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "We can now grab view the data we have collected:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "activity": true - }, - "outputs": [], - "source": [ - "training_df = roboSim.image_data()\n", - "training_df" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "The `get_training_data()` function provides a convenient way of retrieving the handwritten digit image and the decoded training label." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "activity": true - }, - "outputs": [], - "source": [ - "label, img = get_training_data(training_df, pair_index)\n", - "zoom_img(img), label" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1.4 Summary\n", + "## 1.3 Summary\n", "\n", "In this notebook, you have seen how we can use the robot's light sensor as a simple low resolution camera to sample handwritten digit images from the background. Collecting the data from the robot, we can then convert it to an image and preprocess is before testing it with a pre-trained multi-layer perceptron.\n", "\n", "Using captured images that are slightly offset from the center of the image array essentially provides us with a \"jiggled\" image, which tends to increase the classification error.\n", "\n", - "You have also seen how we might automate the collection of large amounts of data by \"teleporting\" the robot to particular locations and sampling the data. 
With the background defined as it is, we can also pick up encoded label data an use this to generate training data made up of scanned handwritten digit and image label pairs. In principle, we could use the image and test label data collected in this way as a training data set for an MLP or convolutional neural network.\n", + "You have also seen how we can automate the way the robot collects image data by \"teleporting\" the robot to a particular location and then sampling the data there. \n", "\n", - "The next notebook in the series is optional and demonstrates the performance of a CNN on the MNIST dataset. The required content continues with a look at how we can start to collect image data using the simulated robot whilst it is on the move." + "In the next notebook in the series, you will see how we can use this automation approach to collect image and class data \"in bulk\" from the simulator." ] } ], @@ -1280,7 +977,20 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false } }, "nbformat": 4, diff --git a/content/08. Remote services and multi-agent systems/08.2 Collecting digit image and class data from the simulator.ipynb b/content/08. Remote services and multi-agent systems/08.2 Collecting digit image and class data from the simulator.ipynb new file mode 100644 index 00000000..02d11844 --- /dev/null +++ b/content/08. Remote services and multi-agent systems/08.2 Collecting digit image and class data from the simulator.ipynb @@ -0,0 +1,481 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2. Collecting digit image and class data from the simulator\n", + "\n", + "If we wanted to collect image data from the background and then train a network using those images, we would need to generate the training label somehow. We could do this manually, looking at each image and then by observation recording the digit value, associating it with the image location co-ordinates. But could we also encode the digit value explicitly somehow?\n", + "\n", + "If you look carefully at the *MNIST_Digits* background in the simulator, you will see that alongside each digit is a solid coloured area. This area is a greyscale value that represents the value of the digit represented by the image. That is, it represents a training label for the digit." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [ + "alert-success" + ] + }, + "source": [ + "*The greyscale encoding is quite a crude encoding method that is perhaps subject to noise. 
Another approach might be to use a simple QR code to encode the digit value.*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As usual, load in the simulator in the normal way:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nbev3devsim.load_nbev3devwidget import roboSim, eds\n", + "\n", + "%load_ext nbev3devsim" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Clear the datalog just to ensure we have a clean datalog to work with:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%sim_data --clear" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The solid greyscale areas are arranged so that when the left light sensor is over the image, the right sensor is over the training label area." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%sim_magic_preloaded -b MNIST_Digits -O -R -AH -x 400 -y 50\n", + "\n", + "#Sample the light sensor reading\n", + "sensor_value = colorLeft.reflected_light_intensity\n", + "\n", + "# This is essentially a command invocation\n", + "# not just a print statement!\n", + "print(\"image_data both\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can retrieve the last pair of images from the `roboSim.image_data()` dataframe using the `get_sensor_image_pair()` function:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nn_tools.sensor_data import zoom_img\n", + "from nn_tools.sensor_data import get_sensor_image_pair\n", + "\n", + "# The sample pair we want from the logged image data\n", + "pair_index = -1\n", + "\n", + "left_img, right_img = get_sensor_image_pair(roboSim.image_data(),\n", + " pair_index)\n", + "\n", + "zoom_img(left_img), zoom_img(right_img)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [ + "alert-success" + ] + }, + "source": [ + "The image labels are encoded as follows:\n", + "\n", + "`greyscale_value = 25 * digit_value`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One way of decoding the label is as follows:\n", + "\n", + "- divide each of the greyscale pixel values collected from the right hand sensor array by 25;\n", + "- take the median of these values and round to the nearest integer; *in a noise free environment, using the median should give a reasonable estimate of the dominant pixel value in the frame.*\n", + "- ensure we have an integer by casting the result to an integer.\n", + "\n", + "The *pandas* package has some operators that can help us with that if we put all the data into a *pandas* *Series* (essentially, a single column dataframe):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import pandas as pd\n", + "\n", + "def get_training_label_from_sensor(img):\n", + " \"\"\"Return a training class label from a sensor image.\"\"\"\n", + " # Get the pixels data as a pandas series\n", + " # (similar to a single column dataframe)\n", + " image_pixels = pd.Series(list(img.getdata()))\n", + "\n", + " # Divide each value in the first column (name: 0) by 25\n", + " image_pixels = image_pixels / 25\n", + "\n", + " # Find the median value\n", + " pixels_median = image_pixels.median()\n", + "\n", + " # Find the nearest integer and return it\n", 
+ " return int( pixels_median.round(0))\n", + "\n", + "# Try it out\n", + "get_training_label_from_sensor(right_img)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The following function will grab the right and left images from the data log, decode the label from the right hand image, and return the handwritten digit from the left light sensor along with the training label:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_training_data(raw_df, pair_index):\n", + " \"\"\"Get training image and label from raw data frame.\"\"\"\n", + " \n", + " # Get the left and right images\n", + " # at specified pair index\n", + " left_img, right_img = get_sensor_image_pair(raw_df,\n", + " pair_index)\n", + " \n", + " # Find the training label value as the median\n", + " # value of the right habd image.\n", + " # Really, we should properly try to check that\n", + " # we do have a proper training image, for example\n", + " # by encoding a recognisable pattern \n", + " # such as a QR code\n", + " training_label = get_training_label_from_sensor(right_img)\n", + " return training_label, left_img\n", + " \n", + "\n", + "# Try it out\n", + "label, img = get_training_data(roboSim.image_data(),\n", + " pair_index)\n", + "print(f'Label: {label}')\n", + "zoom_img(img)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [ + "alert-danger" + ] + }, + "source": [ + "We're actually taking quite a lot on trust in extracting the data from the dataframe in this way. Ideally, we would have a unique identifiers that reliably associate the left and right images as having been sampled from the same location. As it is, we assume the left and right image datasets appear in that order, one after the other, so we can count back up the dataframe to collect different pairs of data." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Load in our previously trained MLP classifier:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Load model\n", + "from joblib import load\n", + "\n", + "MLP = load('mlp_mnist14x14.joblib')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now test that image against the classifier:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nn_tools.network_views import image_class_predictor\n", + "\n", + "image_class_predictor(MLP, img)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true + }, + "source": [ + "### 2.3.1 Activity — Testing the ability to recognise images slight off-center in the image array\n", + "\n", + "Write a simple program to collect sample data at a particular location and then display the digit image and the decoded label value.\n", + "\n", + "Modify the x or y co-ordinates used to locate the robot by by a few pixel values away from the sampling point origins and test the ability of the network to recognise digits that are lightly off-center in the image array.\n", + "\n", + "How well does the network perform?\n", + "\n", + "*Hint: when you have run your program to collect the data in the simulator, run the `get_training_data()` with the `roboSim.image_data()` to generate the test image and retrieve its decoded training label.*\n", + "\n", + "*Hint: use the `image_class_predictor()` function with the test image to see if the classifier can recognise the image.*\n", + "\n", + "*Hint: if you seem to have more data in the dataframe than you thought you had collected, did you remember to clear the datalog before collecting your data?*" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Your code here" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "student": true + }, + "source": [ + "*Record your observations here.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true + }, + "source": [ + "### 2.3.2 Activity — Collecting image sample data from the *MNIST_Digits* background (optional)\n", + "\n", + "In this activity, you will need to collect a complete set of sample data from the simulator to test the ability of the network to correctly identify the handwritten digit images.\n", + "\n", + "Recall that the sampling positions are arranged along rows 100 pixels apart, starting at x=100 and ending at x=2000;\n", + "along columns 100 pixels apart, starting at y=50 and ending at y=1050.\n", + "\n", + "Write a program to automate the collection of data at each of these locations.\n", + "\n", + "How would you then retrieve the hand written digit image and it's decoded training label?\n", + "\n", + "*Hint: import the `time` package and use the `time.sleep` function to provide a short delay between each sample collection. 
You may also find it convenient to import the `trange` function to provide a progress bar indicator when iterating through the list of collection locations: `from tqdm.notebook import trange`.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "student": true + }, + "source": [ + "*Your program design notes here.*" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "student": true + }, + "outputs": [], + "source": [ + "# Your program code" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "student": true + }, + "source": [ + "*Describe here how you would retrieve the hand written digit image and it's decoded training label.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true, + "heading_collapsed": true + }, + "source": [ + "#### Example solution\n", + "\n", + "*Click on the arrow in the sidebar or run this cell to reveal an example solution.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true, + "hidden": true + }, + "source": [ + "To collect the data, I use two `range()` commands, one inside the other, to iterate through the *x* and *y* coordinate values. The outer loop generates the *x* values and the inner loop generates the *y* values:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "activity": true, + "hidden": true + }, + "outputs": [], + "source": [ + "# Make use of the progress bar indicated range\n", + "from tqdm.notebook import trange\n", + "import time\n", + "\n", + "# Clear the datalog so we know it's empty\n", + "%sim_data --clear\n", + "\n", + "\n", + "# Generate a list of integers with desired range and gap\n", + "min_value = 50\n", + "max_value = 1050\n", + "step = 100\n", + "\n", + "for _x in trange(100, 501, 100):\n", + " for _y in range(min_value, max_value+1, step):\n", + "\n", + " %sim_magic -R -x $_x -y $_y\n", + " # Give the data time to synchronise\n", + " time.sleep(1)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true, + "hidden": true + }, + "source": [ + "We can now grab and view the data we have collected:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "activity": true, + "hidden": true + }, + "outputs": [], + "source": [ + "training_df = roboSim.image_data()\n", + "training_df" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true, + "hidden": true + }, + "source": [ + "The `get_training_data()` function provides a convenient way of retrieving the handwritten digit image and the decoded training label." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "activity": true, + "hidden": true + }, + "outputs": [], + "source": [ + "label, img = get_training_data(training_df, pair_index)\n", + "zoom_img(img), label" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2.4 Summary\n", + "\n", + "In this notebook, you have automated the collection of hand-written digit and encoded label image data from the simulator ad seen how this can be used to generate training data made up of scanned handwritten digit and image label pairs. In principle, we could use the image and test label data collected in this way as a training data set for an MLP or convolutional neural network.\n", + "\n", + "The next notebook in the series is optional and demonstrates the performance of a CNN on the MNIST dataset. 
The required content continues with a look at how we can start to collect image data using the simulated robot whilst it is on the move." + ] + } + ], + "metadata": { + "jupytext": { + "formats": "ipynb,.md//md" + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/content/08. Remote services and multi-agent systems/08.2 Recognising digits using a convolutional neural network (optional).ipynb b/content/08. Remote services and multi-agent systems/08.3 Recognising digits using a convolutional neural network (optional).ipynb similarity index 92% rename from content/08. Remote services and multi-agent systems/08.2 Recognising digits using a convolutional neural network (optional).ipynb rename to content/08. Remote services and multi-agent systems/08.3 Recognising digits using a convolutional neural network (optional).ipynb index 45538ed7..4c8e7440 100644 --- a/content/08. Remote services and multi-agent systems/08.2 Recognising digits using a convolutional neural network (optional).ipynb +++ b/content/08. Remote services and multi-agent systems/08.3 Recognising digits using a convolutional neural network (optional).ipynb @@ -17,7 +17,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# 2 Recognising digits using a convolutional neural network (optional)\n", + "# 3 Recognising digits using a convolutional neural network (optional)\n", "\n", "In the previous notebook, you saw how we could collect image data sampled by the robot within the simulator into the notebook environment and then test the collected images against an \"offboard\" pre-trained multilayer perceptron run via the notebook's Python environment. However, even with an MLP tested on \"jiggled\" images, the network's classification performance degrades when \"off-center\" images are presented to it.\n", "\n", @@ -28,7 +28,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 2.1 Using a pre-trained convolutional neural network\n", + "## 3.1 Using a pre-trained convolutional neural network\n", "\n", "Although training a convolutional neural network can take quite a lot of time, and a *lot* of computational effort, off-the-shelf pre-trained models are also increasingly available. However, whilst this means you may be able to get started on a recognition task without the requirement to build your own model, you would do well to remember the phrase *caveat emptor*: buyer beware.\n", "\n", @@ -59,7 +59,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 2.1.1 Loading the CNN\n", + "### 3.1.1 Loading the CNN\n", "\n", "The first thing we need to do is to load in the model. 
The actual TensorFlow Lite framework code is a little bit fiddly in places, so we'll use some convenience functions to make using the framework slightly easier.\n", "\n", @@ -99,14 +99,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The main take away from this report are the items the describe the structure of the input and output arrays. In particular, we have an input array of a single 28x28 pixel greyscale image array, and an output of 10 classification classes. Each output gives the probability with which the CNN believes the image represents the corresponding digit." + "The main take away from this report are the items that describe the structure of the input and output arrays. In particular, we have an input array of a single 28x28 pixel greyscale image array, and an output of 10 classification classes. Each output gives the probability with which the CNN believes the image represents the corresponding digit." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### 2.1.2 Testing the network\n", + "### 3.1.2 Testing the network\n", "\n", "We'll test the network with images retrieved from the simulator.\n", "\n", @@ -307,7 +307,7 @@ "activity": true }, "source": [ - "### 2.1.3 Activity — Testing the CNN using robot collected image samples\n", + "### 3.1.3 Activity — Testing the CNN using robot collected image samples\n", "\n", "The `ipywidget` powered end user application defined in the code cell below will place the robot at a randomly selected digit location and display and then test the image grabbed from *the previous location* using the CNN." ] @@ -369,7 +369,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 2.2 Summary\n", + "## 3.2 Summary\n", "\n", "In this notebook, you have seen how we can use a convolutional neural network to identify handwritten digits scanned by the robot in the simulator.\n", "\n", @@ -397,7 +397,20 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false } }, "nbformat": 4, diff --git a/content/08. Remote services and multi-agent systems/08.3 Recognising patterns on the move.ipynb b/content/08. Remote services and multi-agent systems/08.3 Recognising patterns on the move.ipynb deleted file mode 100644 index bf729084..00000000 --- a/content/08. Remote services and multi-agent systems/08.3 Recognising patterns on the move.ipynb +++ /dev/null @@ -1,3403 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "lines_to_next_cell": 2 - }, - "source": [ - "# 3 Recognising patterns on the move\n", - "\n", - "To be really useful a robot needs to recognise things as it goes along, or ‘on the fly’. In this notebook, you will train a neural network to use a simple MLP classifier to try to identify different shapes on the background. 
The training samples themselves, images *and* training labels, will be captured by the robot from the simulator background.\n", - "\n", - "We will use the two light sensors to collect the data used to train the network:\n", - "\n", - "- one light sensor will capture the shape image data;\n", - "- one light sensor will capture the training class data.\n", - "\n", - "To begin with we will contrive things somewhat to collect the data at specific locations on the background. But then you will explore how we can collect images as the robot moves more naturally within the environment.\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "tags": [ - "alert-warning" - ] - }, - "source": [ - "*There is quite a lot of provided code in this notebook. You are not necessarily expected to be able to create this sort of code yourself. Instead, try to focus on the process of how various tasks are broken down into smaller discrete steps, as well as how small code fragments can be combined to create \"higher level\" functions that perform ever more powerful tasks.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "lines_to_next_cell": 2 - }, - "source": [ - "Before continuing, ensure the simulator is loaded and available:\n", - "\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "application/javascript": [ - "\n", - "$(function() {\n", - " $(\"#notebook-container\").resizable({\n", - " handles: 'e',\n", - " //containment: '#container',\n", - "\n", - " }); \n", - "}); \n" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "6e5f8d7e489447f6927d576908af96c5", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Ev3DevWidget(status='deferring flush until render')" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "from nbev3devsim.load_nbev3devwidget import roboSim, eds\n", - "%load_ext nbev3devsim" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The background image *Simple_Shapes* contains several shapes arranged in a line, including a square, a circle, four equilateral triangles (arrow heads) with different orientations, a diamond and a rectangle.\n", - "\n", - "Just below each shape is a grey square, whose fill colour is used to distinguish between the different shapes." - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "%sim_magic -b Simple_Shapes -x 600 -y 900" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.1 Evaluating the possible training data\n", - "\n", - "In this initial training pass, we will check whether the robot can clearly observe the potential training pairs. 
Each training pair consists of the actual shape image as well as a solid grey square, where the grey colour is use to represent one of eight (8) different training classes.\n", - "\n", - "The left light sensor will be used to sample the shape image data and the right light sensor will be used to collect the simpler grey classification group pattern.\n", - "\n", - "As we are going to be pulling data into the notebook Python environment from the simulator, ensure the local notebook datalog is cleared:" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "roboSim.clear_datalog()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The *Simple_Shapes* background we are using in this notebook contains several small regular shapes, with label encoding patterns alongside.\n", - "\n", - "The *x* and *y* locations for sampling the eight different images, along with a designator for each shape, as are follows:\n", - "\n", - "- 200 900 square\n", - "- 280 900 right facing triangle\n", - "- 360 900 left facing triangle\n", - "- 440 900 downwards facing triangle\n", - "- 520 900 upwards facing triangle\n", - "- 600 900 diamond" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can now start to collect image data from the robot's light sensors. The `-R` switch runs the program once it has been downloaded to the simulator:" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "If we print the message `\"image_data both\"` we can collect data from both the left and the right light sensors at the same time." - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b Simple_Shapes -AR -x 520 -y 900 -O\n", - "\n", - "#Sample the light sensor reading\n", - "sensor_value = colorLeft.reflected_light_intensity\n", - "\n", - "# This is essentially a command invocation\n", - "# not just a print statement!\n", - "print(\"image_data both\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can preview the collected image data in the usual way:" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
sidevalsclock
0left245,226,225,245,226,225,245,226,225,245,226,22...1
1right245,226,225,245,226,225,245,226,225,245,226,22...1
\n", - "
" - ], - "text/plain": [ - " side vals clock\n", - "0 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "1 right 245,226,225,245,226,225,245,226,225,245,226,22... 1" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "roboSim.image_data()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can also collect consecutive rows of data from the dataframe and decode them as left and right images:" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "(None, None)" - ] - }, - "execution_count": 6, - "metadata": {}, - "output_type": "execute_result" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAATyUlEQVR4nO3de6xlZXnH8e/jAIUZqCA3kYEOrTixpYpIKHihImIQiaNYE4i2UGlN4w28xEBJsMTQ1mpaTZrUGLGSimMoglKqMlMFqYmigAPOMAwgohyEAesFgQYY5+kfaw05Hedyznrfc2bNu7+fZOfs6zPPnrP37+y99l7PG5mJJLXoGTu6AUmaKwacpGYZcJKaZcBJapYBJ6lZBpykZhUFXEScHBHrIuLuiDivVlOSVEMM/R5cRCwA7gROAqaA7wJnZObt9dqTpOFKXsEdA9ydmfdk5pPA54FlddqSpHIlAXcwcN+001P9eZI0CrvM9T8QEW8D3gaw++67v/jQQw+tUnfjxo084xn1PiOpWW9SepuU+1m73qT0Vvt+3nnnnT/NzP1ndaPMHHQAjgOunXb6fOD8bd3mec97XtZy3XXXVatVu96k9DYp97N2vUnprfb9BG7KWeZUSbx+Fzg8Ig6LiN2A04GrC+pJUlWD36Jm5oaIeCdwLbAA+HRmrqnWmSQVKtoGl5lfBr5cqRdJqso9GSQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ss0rXZPh0RDwUEatrNSRJtZS+gvsMcHKFPiSpuqKAy8wbgJ9V6kWSqhq8qtbTBSKWANdk5hFbufzpkeX777//iy+//PKif2+TRx99lD333LNKrdr1JqW3SbmftetNSm+17+cJJ5xwc2YePasbzXYE8OYHYAmweibXdWT5jq831lq169lbW7Uy539kuSSNmgEnqVmlXxNZDnwLWBoRUxFxdp22JKlc6ZoMZ9RqRJJq8y2qpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmjU44CLikIi4LiJuj4g1EXFOzcYkqVTJvqgbgPdl5i0RsRdwc0SszMzbK/UmSUUGv4LLzAcy85b++K+AtcDBtRqTpFJVtsH1Y8tfBNxYo54k1VBjTYY9gW8AF2fmlVu43DUZRlRvrLUA1q9fz9TUVJVaS5cunYjfQe16Y60FO2BNBmBX4FrgvTO5vmsy7Ph6Y62VmfnRj340gSqHSfkd1K431lqZ87wmQ0QEcAmwNjP/cWgdSZorJdvgXgr8KfDKiFjVH06p1JckFRv8NZHM/CYQFXuRpKrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNKtnZfveI+E5E3NqPLL+oZmOSVKpkZPkTwCsz89GI2BX4ZkR8JTO/Xak3SSpSsrN9Ao/2J3ftD2XTMyWpoqJtcBGxICJWAQ8BKzPTkeWSRqN4ZDlAROwNXAW8KzNXb3aZI8tHVK92rXXr1lWpBbB48eJqI8tr1oK6I9An6fGxU48sn34ALgTev63rOLJ8x9erXYtKI8aBqiPLa9ai8gj0SXp81MQ8jyzfv3/lRkTsAZwE3DG0niTVVvIp6kHApRGxgG5b3uWZeU2dtiSpXMmnqLfRrYUqSaPkngySmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVnFAdcPvfxeRLijvaRRqfEK7hxgbYU6klRV6cjyxcBrgU/VaUeS6il9Bfcx4APAxvJWJKmuwWsyRMSpwCmZ+faIeAXduPJTt3A912QYUT3XZBjGNRl2bC2Y5zUZgL8DpoB7gQeBx4HPbus2rsmw4+u5JoNrMuyMtTLneU2GzDw/Mxdn5hLgdODrmfmWofUkqTa/ByepWSWLzjwtM68Hrq9RS5Jq8RWcpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmlW0q1ZE3Av8Cvg1sCFnO8pEkuZQjX1RT8jMn1aoI0lV+RZVUrNKAy6BFRFxcz+5V5JGY/DIcoCIODgz74+IA4CVwLsy84bNruPI8hHVW79+/WjHgo95ZHnNejXHn8N4H7s79cjyzQ/A39Cty+DI8hH3Nuax4JPS25gfH2OtlTnPI8sjYlFE7LXpOPBqYPXQepJUW8mnqAcCV0XEpjqfy8yvVulKkioYHHCZeQ/wwoq9SFJVfk1EUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDWrKOAiYu+IuCIi7oiItRFxXK3GJKlU6ZoMHwe+mpl/EhG7AQsr9CRJVQwOuIh4JnA8cBZAZj4JPFmnLUkqN3hkeUQcCXwSuJ1ubNLNwDmZ+dhm13Nk+YjqObJ8x9dzZPkw8zqyHDga2AD8UX/648CHtnUbR5bv+HqTMhZ8zL2N+fEx1lqZ8zyyHJgCpjLzxv70FcBRBfUkqarBAZeZDwL3RcTS/qwT6d6uStIolH6K+i7gsv4T1HuAPy9vSZLqKAq4zFxFty1OkkbHPRkkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPUrMEBFxFLI2LVtMMjEXFuxd4kqcjgfVEzcx1wJEBELADuB66q05Yklav1FvVE4AeZ+aNK9SSpWK2AOx1YXqmWJFUxeE2Gpwt0s+B+AvxBZq7fwuWuyVBorOsojHndg0nqreYaD2N+HszrmgybDsA
yYMVMruuaDMOMda2CMa97MEm9jfWxu7OvybDJGfj2VNIIla5svwg4CbiyTjuSVE/pyPLHgH0r9SJJVbkng6RmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqVunO9u+JiDURsToilkfE7rUak6RSJYvOHAy8Gzg6M48AFtBN9pWkUSh9i7oLsEdE7AIspJvsK0mjUDSyPCLOAS4G/pduqu+bt3AdR5YXcmT5jq1Vu54jy4eZ15HlwD7A14H9gV2BLwJv2dZtHFk+zFjHZU/SWPAx9zbWx+7OPrL8VcAPM/PhzHyKbqrvSwrqSVJVJQH3Y+DYiFgYEUG3NuraOm1JUrnBAZeZNwJXALcA3+9rfbJSX5JUrHRNhg8CH6zUiyRV5Z4MkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJalbpyPJz+nHlayLi3Eo9SVIVJSPLjwD+EjgGeCFwakQ8t1ZjklSq5BXc84EbM/PxzNwAfAM4rU5bklSuJOBWAy+PiH0jYiFwCnBInbYkqVzpmgxnA28HHgPWAE9k5rmbXWfi1mSouYYCjHc9gEla92BSehvr+g4wz2sybH4A/hZ4+7auMylrMkzKegCTcj8nqbfW1mQoGngZEQdk5kMRcSjd9rdjS+pJUk1FAQd8ISL2BZ4C3pGZvyhvSZLqKB1Z/vJajUhSbe7JIKlZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGZtN+Ai4tMR8VBErJ523rMiYmVE3NX/3Gdu25Sk2ZvJK7jPACdvdt55wNcy83Dga/1pSRqV7QZcZt4A/Gyzs5cBl/bHLwVeX7ctSSo3dBvcgZn5QH/8QeDASv1IUjUzGlkeEUuAazLziP70LzJz72mX/zwzt7gdzpHl5cY6LnvMo7ftbZiJHFkOLAFWTzu9DjioP34QsG4mdRxZvuNHUo+1lr2No15rI8uHvkW9GjizP34m8KWBdSRpzszkayLLgW8BSyNiql9J6++BkyLiLuBV/WlJGpXtjizPzDO2ctGJlXuRpKrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNWvomgxviog1EbExImY3n0mS5snQNRlWA6cBN9RuSJJqmck0kRv6ib7Tz1sLEBFz1JYklXMbnKRmDVqTYdr51wPvz8ybtnHbnWJNhprrKIx55v5Ya9WuZ2/jqHXggfXWo5q3NRmmnX89cPRM56OPeU2GSZm5P9Za9jaOerVr1cQ8rskgSaM3aE2GiHhDREwBxwH/GRHXznWjkjRbJWsyXFW5F0mqyreokpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJatbQkeUfiYg7IuK2iLgqIvae0y4laYChI8tXAkdk5guAO4HzK/clScW2G3CZeQPws83OW5GZG/qT3wYWz0FvklSkxja4twJfqVBHkqoqHVl+AXA0cFpupdDOMrK8Zr1J6W1S7mftepPSW+37Oa8jy4Gz6AZhLpzp+OAxjyyvWW9SepuU+1m73qT0Vvt+MmBk+XYHXm5JRJwMfAD448x8fEgNSZprg0aWA/8M7AWsjIhVEfGJOe5TkmZt6MjyS+agF0mqyj0ZJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzhq7J8KF+PYZVEbEiIp4zt21K0uwNXZPhI5n5gsw8ErgGuLByX5JUbOiaDI9MO7kI2P5YYEmaZ4NHlkfExcCfAb8ETsjMh7dyW0eWj6jeWGvVrmdvbdWCeR5ZPu2y84GLZlLHkeU7vt5Ya9WuZ29t1cocNrK8xqeolwFvrFBHkqoaFHARcfi0k8uAO+q0I0n1bHdkeb8mwyuA/SJiCvggcEpELAU2Aj8C/moum5SkIVyTQVKz3JNBUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzRo0snzaZe+LiIyI/eamPUkabujIciLiEODVwI8r9yRJVQwaWd77J+ADOK5c0kgNnQe3DLg/M2+t3I8kVbPdcUmbi4iFwF/TvT2dyfWfXpMBeGJL2/IG2g/4aaVatetNSm+Tcj9r15uU3mrfz6WzvsVM5pozbU0G4A+Bh4B7+8MGuu1wz55BnVnPVJ+PWva242vZ2zjqjbXW0HqzfgWXmd8HDth0OiLuBY7OzJpJLUnFZvI1keXAt4ClETEVEWfPfVuSVG7oyPLply+Zxb/3yVlcdz5r1a43Kb1Nyv2sXW9Setvh93NGCz9L0s7IXbUkNWteAi4iTo6IdRFxd0ScV1hrq7uODah1SERcFxG3R8SaiDinsN7uEfGdiLi1r3dRhR4XRMT3IuKaCrXujYjvR8SqiLipsNbeEXFFRNwREWsj4riBdZb2/Ww6PBIR5xb29p7+/391RCyPiN0Lap3T11kzpK8tPV4j4lkRsTIi7up/7lNQ6019bxsj4ugKvX2k/53eFhFXRcTeBbU+1NdZFRErIuI5Jb1Nu2zmu4jW/Bh3Kx/tLgB+APwusBtwK/D7BfWOB46i/9pKYW8HAUf1x/cC7izsLYA9++O7AjcCxxb2+F7gc8A1Fe7vvcB+lX6vlwJ/0R/fDdi70mPlQeB3CmocDPwQ2KM/fTlw1sBaRwCrgYV026v/C3juLGv8xuMV+AfgvP74ecCHC2o9n+77YdfTfZuhtLdXA7v0xz9c2NtvTzv+buATJb315x8CXEu3HvN2H8vz8QruGODuzLwnM58EPg8sG1ost77r2JBaD2TmLf3xXwFr6Z4gQ+tlZj7an9y1PwzeyBkRi4HXAp8aWmMuRMQz6R6AlwBk5pOZ+YsKpU8EfpCZPyqsswuwR0TsQhdOPxlY5/nAjZn5eGZuAL4BnDabAlt5vC6j+wNB//P1Q2tl5trMXDebnrZTb0V/XwG+DSwuqPXItJOLmMVzYRvP81ntIjofAXcwcN+001MUhMhciYglwIvoXnWV1FkQEavovgy9MjNL6n2M7pe5saSnaRJYERE393uYDHUY8DDwr/3b509FxKIK/Z0OLC8pkJn3Ax+l+/L5A8AvM3PFwHKrgZdHxL79Hjyn0L2CKHVgZj7QH38QOLBCzbnwVuArJQUi4uKIuA94M3BhYa1Z7yLqhwxAROwJfAE4d7O/OrOWmb/OzCPp/vIdExFHDOzpVOChzLy5pJ/NvCwzjwJeA7wjIo4fWGcXurcP/5KZLwIeo3urNVhE7Aa8Dvj3wjr70L1COgx4DrAoIt4ypFZmrqV7m7YC+CqwCvh1SX9b+DeSEQ6siIgL6PZSuqykTmZekJmH9HXeWdDPpl1EZxWS8xFw9/P//+ot7s8bhYjYlS7cLsvMK2vV7d+yXccWRk3N0EuB1/V7inweeGVEfLawp/v7nw8BV9FtPhhiCpia9ur0CrrAK/Ea4JbMXF9Y51XADzPz4cx8CrgSeMnQYpl5SWa+ODOPB35Ot5221P
qIOAig//lQhZrVRMRZwKnAm/sAruEy4I0Ft/89uj9at/bPicXALRHx7G3daD4C7rvA4RFxWP9X+nTg6nn4d7crIoJuO9LazPzHCvX23/SpU0TsAZwE3DGkVmaen5mLs/si9enA1zNz0CuRvp9FEbHXpuN0G5MHfRKdmQ8C90XEpp2fTwRuH9pb7wwK3572fgwcGxEL+9/viXTbVgeJiAP6n4fSbX/7XIUerwbO7I+fCXypQs0qIuJkus0ir8vMxwtrHT7t5DIGPheg20U0Mw/IzCX9c2KK7gPCB7d3wzk/0G27uJPu09QLCmstp9u28lR/J88uqPUyurcHt9G9/VgFnFJQ7wXA9/p6q4ELK/3/vYLCT1HpPsW+tT+sqfB7OBK4qb+vXwT2Kai1CPgf4JmV/r8uonsyrQb+Dfitglr/TRfetwInDrj9bzxegX2BrwF30X0y+6yCWm/ojz8BrAeuLeztbrpt5pueDzP65HMrtb7Q/w5uA/4DOLikt80uv5cZfIrqngySmuWHDJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRm/R9P/SWNKwt+CwAAAABJRU5ErkJggg==\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAS/UlEQVR4nO3df6zldX3n8edrZ6D8kAoiUGTowhZC3LCKlhCtP9aKGKQEWrslkLqrW3ebTWoL3SYGlqzGNF3Xtem2ySbbGLE1K2JalK3rVh3WatkmShUccGBQQVFmCoyua/HHRhx47x/nO7u3I8PM/X4+F775zPOR3Nxzzj33dd5n5t7XPd9zzvfzTVUhSSP6e0/3AJK0USw4ScOy4CQNy4KTNCwLTtKwLDhJw2oquCQXJvliknuTXN1rKEnqIXPfB5dkE/Al4AJgJ/BZ4IqqurvfeJI0X8sjuPOAe6vqK1X1KPAB4NI+Y0lSu5aCOwV4YM35ndNlkrQImzf6BpL8KvCrAEccccRPn3KKHShp/e67775vVtUJ6/meloLbBZy65vyW6bK/o6reBbwL4Iwzzqi3v/3tDTf5/23evJk9e/Z0yeqdd6jMdqjcz955h8psve/nZZdd9rX1fk/LJupngTOTnJ7kcOBy4MMNeZLU1exHcFW1J8mbgI8Dm4D3VNVd3SaTpEZNz8FV1Z8Df95pFknqyj0ZJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzQsC07SsCw4ScOy4CQNy4KTNCwLTtKwWo/J8J4ku5Ns7zWQJPXS+gjuj4ELO8whSd01FVxV3QJ8q9MsktTVU7pk+QknnMDmzX1uMkm3rN55h8psh8r97J13qMzW+37OseG3vu+S5UtdDnnJSzUvdbZD5X72zjtUZut9P+fwVVRJw7LgJA2r9W0iNwCfBs5KsjPJG/uMJUntWo/JcEWvQSSpNzdRJQ3LgpM0LAtO0rAsOEnDsuAkDcuCkzQsC07SsCw4ScOy4CQNy4KTNKzZBZfk1CSfTHJ3kruSXNlzMElq1bIv6h7gt6rq9iTHALclubmq7u40myQ1mf0IrqoerKrbp9PfAXYAp/QaTJJadXkOLslpwAuAW3vkSVIPzUuWJ3kG8EHgqqp65Am+7jEZFpS31Kzeec42VtZcTbee5DBW5XZ9VX3oia7jMRmWlbfUrN55zjZW1lwtr6IGuA7YUVW/128kSeqj5Tm4lwD/FHhlkm3Tx0Wd5pKkZrM3Uavqr4B0nEWSunJPBknDsuAkDcuCkzQsC07SsCw4ScOy4CQNy4KTNCwLTtKwLDhJw7LgJA2rZWf7I5L8dZI7piXL39ZzMElq1bJc0g+AV1bVd6dlk/4qyUer6jOdZpOkJi072xfw3ensYdNH9RhKknpoeg4uyaYk24DdwM1V5ZLlkhajaUXfqnoMOCfJscBNSc6uqu1rr+OS5cvKW2pW7zxnGytrri63XlXfTvJJ4EJg+z5fc8nyBeUtNat3nrONlTVXy6uoJ0yP3EhyJHABcE+nuSSpWcsjuJOB9ybZxKoo/6SqPtJnLElq1/Iq6p2sjoUqSYvkngyShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVjNBTctevn5JO5oL2lRejyCuxLY0SFHkrpqXbJ8C/BzwLv7jCNJ/bQ+gvt94M3A4+2jSFJfs9eDS3IxsLuqbkvyiie5nsdkWFDeUrN65znbWFlztdz6S4BLklwEHAH8eJL3VdXr1l7JYzIsK2+pWb3znG2srLlmb6JW1TVVtaWqTgMuB/5i33KTpKeT74OTNKxehw38FPCpHlmS1IuP4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzSspl21ktwPfAd4DNhTVef2GEqSeuixL+rPVtU3O+RIUlduokoaVmvBFbA1yW3Tyr2StBitm6gvrapdSU4Ebk5yT1XdsvYKLlm+rLylZvXOc7axsuZquvWq2jV93p3kJuA84JZ9ruOS5QvKW2pW7zxnGytrrtmbqEmOTnLM3tPAq4HtvQaTpFYtj+BOAm5Ksjfn/VX1sS5TSVIHswuuqr4CPL/jLJLUlW8TkTQsC07SsCw4ScOy4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDaiq4JMcmuTHJPUl2JHlxr8EkqVXranR/AHysqv5JksOBozrMJEldzC64JM8EXg68AaCqHgUe7TOWJLVreQR3OvAN4I+SPB+4Dbiyqr639kouWb6svKVm9c5ztrGy5mq59c3AC4Ffr6pbk/wBcDXwb9deySXLl5W31Kzeec42VtZcLS8y7AR2VtWt0/kbWRWeJC3C7IKrqoeAB5KcNV10PnB3l6kkqYPWDeRfB66fXkH9CvDP20eSpD5aDxu4DTi3zyiS1Jd7MkgalgUnaVgWnKRhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYswsuyVlJtq35eCTJVR1nk6Qms/dFraovAucAJNkE7AJu6jOWJLXrtYl6PnBfVX2tU54kNetVcJcDN3TKkqQumhdMn9aCuwS4Zj9f95gMC8pbalbvPGcbK2uuHrf+GuD2qnr4ib7oMRmWlbfUrN55zjZW1lw9NlGvwM1TSQvUemT7o4ELgA/1GUeS+mldsvx7wPGdZpGkrtyTQdKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzQsC07SsCw4ScOy4CQNq3Vn+99McleS7UluSHJEr8EkqVXLQWdOAX4DOLeqzgY2sVrZV5IWoXUTdTNwZJLNwFHA37SPJEl9tBxVa1eS3wW+DvwfYGtVbd33ei5Zvqy8pWb1znO2sbLmmn3rSY4DLgVOB74N/GmS11XV+9ZezyXLl5W31Kzeec42VtZcLZuorwK+WlXfqKofslrV92f6jCVJ7VoK7uvAi5IclSSsjo26o89YktRudsFV1a3AjcDtwBemrHd1mkuSmrUek+GtwFs7zSJJXbkng6RhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkobVumT5ldNy5XcluarTTJLURcuS5WcD/xI4D3g+cHGSM3oNJkmtWh7BPRe4taq+X1V7gL8EXttnLElq11Jw24GXJTk+yVHARcCpfcaSpHYtx2TYkeQdwFbge8A24LF9r+cxGZaVt9Ss3nnONlbWXK3rwV0HXAeQ5N8BO5/gOh6TYUF5S83qnedsY2XNnqHlm5OcWFW7k/wkq+ffXtRnLElq1/r48YNJjgd+CPxaVX27fSRJ6qN1E/VlvQaRpN7ck0HSsCw4ScOy4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDOmDBJXlPkt1Jtq+57FlJbk7y
5enzcRs7piSt38E8gvtj4MJ9Lrsa+ERVnQl8YjovSYtywIKrqluAb+1z8aXAe6fT7wV+vu9YktRu7nNwJ1XVg9Pph4CTOs0jSd00rydcVZWk9vd1lyxfVt5Ss3rnOdtYWXPNvfWHk5xcVQ8mORnYvb8rumT5svKWmtU7z9nGyppr7ibqh4HXT6dfD/xZn3EkqZ+DeZvIDcCngbOS7EzyRuDfAxck+TLwqum8JC3KATdRq+qK/Xzp/M6zSFJX7skgaVgWnKRhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoY195gMv5TkriSPJzl3Y0eUpHnmHpNhO/Ba4JbeA0lSLwezmsgtSU7b57IdsFqxU5KWyufgJA1rwxdM95gMy8pbalbvPGcbK2uuDb91j8mwrLylZvXOc7axsuZyE1XSsGYdkyHJLyTZCbwY+O9JPr7Rg0rSerUck+GmzrNIUlduokoalgUnaVgWnKRhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYc5csf2eSe5LcmeSmJMdu6JSSNMPcJctvBs6uqucBXwKu6TyXJDU7YMFV1S3At/a5bGtV7V3o6TPAlg2YTZKa9HgO7leAj3bIkaSumlb0TXItsAe4/kmu45LlC8pbalbvPGcbK2uu2bee5A3AxcD5VVX7u55Lli8rb6lZvfOcbays2TPM+aYkFwJvBv5xVX2/70iS1MesJcuB/wQcA9ycZFuSP9zgOSVp3eYuWX7dBswiSV25J4OkYVlwkoZlwUkalgUnaVgWnKRhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGtbcYzL89nQ8hm1JtiZ5zsaOKUnrN/eYDO+squdV1TnAR4C3dJ5LkprNPSbDI2vOHg3sd8FLSXq6tKzo+zvAPwP+FvjZJ7meS5YvKG+pWb3znG2srLlm33pVXQtcm+Qa4E3AW/dzPZcsX1DeUrN65znbWFlz9XgV9XrgFzvkSFJXswouyZlrzl4K3NNnHEnq54CbqNMxGV4BPDvJTlabohclOQt4HPga8K82ckhJmsNjMkgalnsySBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVizlixf87XfSlJJnr0x40nSfHOXLCfJqcCrga93nkmSupi1ZPnkPwJvxuXKJS3U3PXgLgV2VdUdneeRpG7WvWR5kqOAf8Nq8/Rgrv//jskA/OCyyy77kefyZno28M1OWb3zDpXZDpX72TvvUJmt9/08a93fUVUH/ABOA7ZPp/8RsBu4f/rYw+p5uJ84iJzPHcztHeRM3bKc7enPcrZl5C01a27euh/BVdUXgBP3nk9yP3BuVfVsaklqdjBvE7kB+DRwVpKdSd648WNJUru5S5av/fpp67i9d63juk9lVu+8Q2W2Q+V+9s47VGZ72u9npm1bSRqOu2pJGtZTUnBJLkzyxST3Jrm6MWu/u47NyDo1ySeT3J3kriRXNuYdkeSvk9wx5b2tw4ybknw+yUc6ZN2f5AtJtiX5XGPWsUluTHJPkh1JXjwz56xpnr0fjyS5qnG235z+/bcnuSHJEQ1ZV045d82Z64l+XpM8K8nNSb48fT6uIeuXptkeT3Juh9neOf2f3pnkpiTHNmT99pSzLcnWJM9pmW3N1w5+F9GeL+Pu56XdTcB9wD8ADgfuAP5hQ97LgRcyvW2lcbaTgRdOp48BvtQ4W4BnTKcPA24FXtQ4478G3g98pMP9vR94dqf/1/cC/2I6fThwbKeflYeAv9+QcQrwVeDI6fyfAG+YmXU2sB04itXz1f8DOGOdGT/y8wr8B+Dq6fTVwDsasp7L6v1hn2L1bobW2V4NbJ5Ov6Nxth9fc/o3gD9smW26/FTg46yOx3zAn+Wn4hHcecC9VfWVqnoU+ABw6dyw2v+uY3OyHqyq26fT3wF2sPoFmZtXVfXd6exh08fsJzmTbAF+Dnj33IyNkOSZrH4ArwOoqker6tsdos8H7quqrzXmbAaOTLKZVTn9zcyc5wK3VtX3q2oP8JfAa9cTsJ+f10tZ/YFg+vzzc7OqakdVfXE9Mx0gb+t0XwE+A2xpyHpkzdmjWcfvwpP8nq9rF9GnouBOAR5Yc34nDSWyUZKcBryA1aOulpxNSbaxejP0zVXVkvf7rP4zH2+ZaY0Ctia5bdrDZK7TgW8AfzRtPr87ydEd5rscuKEloKp2Ab/L6s3nDwJ/W1VbZ8ZtB16W5PhpD56LWD2CaHVSVT04nX4IOKlD5kb4FeCjLQFJfifJA8AvA29pzFr3LqK+yAAkeQbwQeCqff7qrFtVPVZV57D6y3dekrNnznQxsLuqbmuZZx8vraoXAq8Bfi3Jy2fmbGa1+fCfq+oFwPdYbWrNluRw4BLgTxtzjmP1COl04DnA0UleNyerqnaw2kzbCnwM2AY81jLfE9xGscAFK5Jcy2ovpetbcqrq2qo6dcp5U8M8e3cRXVdJPhUFt4u/+1dvy3TZIiQ5jFW5XV9VH+qVO22yfZInWGrqIL0EuGTaU+QDwCuTvK9xpl3T593ATayePphjJ7BzzaPTG1kVXovXALdX1cONOa8CvlpV36iqHwIfAn5mblhVXVdVP11VLwf+N6vnaVs9nORkgOnz7g6Z3SR5A3Ax8MtTAfdwPfCLDd//U6z+aN0x/U5sAW5P8hNP9k1PRcF9FjgzyenTX+nLgQ8/Bbd7QEnC6nmkHVX1ex3yTtj7qlOSI4ELgHvmZFXVNVW1pVZvpL4c+IuqmvVIZJrn6CTH7D3N6snkWa9EV9VDwANJ9u78fD5w99zZJlfQuHk6+TrwoiRHTf+/57N6bnWWJCdOn3+S1fNv7+8w44eB10+nXw/8WYfMLpJcyOppkUuq6vuNWWeuOXspM38XYLWLaFWdWFWnTb8TO1m9QPjQgb5xwz9YPXfxJVavpl7bmHUDq+dWfjjdyTc2ZL2U1ebBnaw2P7YBFzXkPQ/4/JS3HXhLp3+/V9D4KiqrV7HvmD7u6vD/cA7wuem+/lfguIaso4H/BTyz07/X21j9Mm0H/gvwYw1Z/5NVed8BnD/j+3/k5xU4HvgE8GVWr8w+qyHrF6bTPwAeBj7eONu9rJ4z3/v7cFCvfO4n64PT/8GdwH8DTmmZbZ+v389BvIrqngyShuWLDJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRh/V958NXNiSlfNwAAAABJRU5ErkJggg==\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "from nn_tools.sensor_data import get_sensor_image_pair\n", - "from nn_tools.sensor_data import zoom_img\n", - "\n", - "pair_index = -1\n", - "\n", - "left_img, right_img = get_sensor_image_pair(roboSim.image_data(),\n", - " pair_index)\n", - "zoom_img(left_img), zoom_img(right_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "If you don't see a figure image displayed, check that the robot is placed over a figure by reviewing the sensor array display in the simulator. If the image is there, rerun the previous code cell to see if the data is now available. If it isn't, rerun the data collecting magic cell, wait a view seconds, and then try to view the zoomed image display." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can run the previously downloaded program again from a simple line magic that situates the robot at a specific location and then runs the program to collect the sensor data." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [], - "source": [ - "_x = 280\n", - "\n", - "%sim_magic -x $_x -y 900 -RAH" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.1.1 Investigating the training data samples\n", - "\n", - "Let's start by seeing if we can collect image data samples for each of the shapes." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "066163cb2bb641bcac2ab8f6cdeb9020", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "HBox(children=(HTML(value=''), FloatProgress(value=0.0, max=6.0), HTML(value='')))" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n" - ] - } - ], - "source": [ - "from tqdm.notebook import trange\n", - "from nbev3devsim.load_nbev3devwidget import tqdma\n", - "\n", - "import time\n", - "\n", - "# Clear the datalog to give us a fresh start\n", - "roboSim.clear_datalog()\n", - "\n", - "# x-coordinate for centreline of first shape\n", - "_x_init = 200\n", - "\n", - "# Distance between shapes\n", - "_x_gap = 80\n", - "\n", - "# Number of shapes\n", - "_n_shapes = 6\n", - "\n", - "# y-coordinate for centreline of shapes\n", - "_y = 900\n", - "\n", - "# Load in the required background\n", - "%sim_magic -b Simple_Shapes\n", - "\n", - "# Generate x coordinate for each shape in turn\n", - "for _x in trange(_x_init, _x_init+(_n_shapes*_x_gap), _x_gap):\n", - " \n", - " # Jump to shape and run program to collect data\n", - " %sim_magic -x $_x -y $_y -R\n", - " \n", - " # Wait a short period to allow time for\n", - " # the program to run and capture the sensor data,\n", - " # and for the data to be passed from the simulator\n", - " # to the notebook Python environment\n", - " time.sleep(1)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We should now be able to access multiple image samples via `roboSim.image_data()`, which returns a dataframe containing as many rows as images we scanned:" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
sidevalsclock
0left245,226,225,245,226,225,245,226,225,245,226,22...1
1right245,226,225,245,226,225,245,226,225,245,226,22...1
2left245,226,225,245,226,225,245,226,225,245,226,22...1
3right245,226,225,245,226,225,245,226,225,245,226,22...1
4left245,226,225,245,226,225,245,226,225,245,226,22...1
5right245,226,225,245,226,225,245,226,225,245,226,22...1
6left245,226,225,245,226,225,245,226,225,245,226,22...2
7right245,226,225,245,226,225,245,226,225,245,226,22...2
8left245,226,225,245,226,225,245,226,225,245,226,22...1
9right245,226,225,245,226,225,245,226,225,245,226,22...1
10left245,226,225,245,226,225,245,226,225,245,226,22...1
11right245,226,225,245,226,225,245,226,225,245,226,22...1
\n", - "
" - ], - "text/plain": [ - " side vals clock\n", - "0 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "1 right 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "2 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "3 right 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "4 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "5 right 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "6 left 245,226,225,245,226,225,245,226,225,245,226,22... 2\n", - "7 right 245,226,225,245,226,225,245,226,225,245,226,22... 2\n", - "8 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "9 right 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "10 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "11 right 245,226,225,245,226,225,245,226,225,245,226,22... 1" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "clean_data_df = roboSim.image_data()\n", - "clean_data_df" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The original sensor data is collected as three channel RGB data. By default, the `get_sensor_image_pair()` function, which extracts a pair of consecutive images from the datalog, converts these to greyscale images:" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "(None, None)" - ] - }, - "execution_count": 10, - "metadata": {}, - "output_type": "execute_result" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAT10lEQVR4nO3de6xlZXnH8e/jAIUZqCA3kYFCK05sqSISCl6oiBhE4ijWBKItVFrTeAMvMVASLDG0tZpWkyY1Rqyk4hiKoJSqzFRBaqIo4IAzDAOIKAeBwXpBoOHiPP1jrSGn41zOWe97zlnz7u8n2Tn7+syzZ+/zO3uvvdfzRmYiSS16xkI3IElzxYCT1CwDTlKzDDhJzTLgJDXLgJPUrKKAi4iTImJ9RNwVEefWakqSaoih34OLiEXAHcCJwBTwXeD0zLytXnuSNFzJK7ijgbsy8+7MfAL4PLC8TluSVK4k4A4E7p12eqo/T5JGYae5/gci4m3A2wB23XXXFx988MFV6m7cuJFnPKPeZyQ1601Kb5NyP2vXm5Teat/PO+6446eZue+sbpSZgw7AscA1006fB5y3rds873nPy1quvfbaarVq15uU3iblftauNym91b6fwI05y5wqidfvAodFxKERsQtwGnBVQT1JqmrwW9TMfCoi3glcAywCPp2Za6t1JkmFirbBZeaXgS9X6kWSqnJPBknNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPUrNI1GT4dERsiYk2thiSpltJXcJ8BTqrQhyRVVxRwmXk98LNKvUhSVYNX1Xq6QMQhwNWZefhWLn96ZPm+++774ssuu6zo39vkkUceYffdd69Sq3a9SeltUu5n7XqT0lvt+3n88cfflJlHzepGsx0BvPkBOARYM5PrOrJ84euNtVbtevbWVq3M+R9ZLkmjZsBJalbp10RWAN8ClkXEVEScVactSSpXuibD6bUakaTafIsqqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZg0OuIg4KCKujYjbImJtRJxdszFJKlWyL+pTwPsy8+aI2AO4KSJWZeZtlXqTpCKDX8Fl5v2ZeXN//FfAOuDAWo1JUqkq2+D6seUvAm6oUU+SaqixJsPuwDeAizLzii1c7poMI6pXu9b69eur1AJYtmzZKO9n7XqT0tsOvyYDsDNwDfDemVzfNRkWvl7tWkC1w1jvZ+16k9LbDr0mQ0QEcDGwLjP/cWgdSZorJdvgXgr8KfDKiFjdH06u1JckFRv8NZHM/CYQFXuRpKrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNKtnZfteI+E5E3NKPLL+wZmOSVKpkZPnjwCsz85GI2Bn4ZkR8JTO/Xak3SSpSsrN9Ao/0J3fuD2XTMyWpoqJtcBGxKCJWAxuAVZnpyHJJo1E8shwgIvYErgTelZlrNrvMkeUjqvfggw8yNTVVpdbSpUur1apdr+b4c5ic58dYa8ECjCyffgAuAN6/res4snzh6330ox+tNmK8Zq3a9cb8GExKbzv6yPJ9+1duRMRuwInA7UPrSVJtJZ+iHgBcEhGL6LblXZaZV9dpS5LKlXyKeivdWqiSNEruySCpWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmlUccP3Qy+9FhDvaSxqVGq/gzgbWVagjSVWVjixfCrwW+FSddiSpntJXcB8DPgBsLG9FkuoavCZDRJwCnJyZb4+IV9CNKz9lC9dzTYZCY11HYcxrMtTureYaD5Py3N2h12QA/g6YAu4BHgAeAz67rdu4JsMwY11HYcxrMtTubczPj7H2tkOvyZCZ52Xm0sw8BDgN+HpmvmVoPUmqze/BSWpWyaIzT8vM64DratSSpFp8BSepWQacpGYZcJKa
ZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmFe2qFRH3AL8Cfg08lbMdZSJJc6jGvqjHZ+ZPK9SRpKp8iyqpWaUBl8DKiLipn9wrSaMxeGQ5QEQcmJn3RcR+wCrgXZl5/WbXcWR5IUeWL2wtcGT5QteCeR5ZvvkB+Bu6dRkcWe7I8gWv58jytmplzvPI8ohYEhF7bDoOvBpYM7SeJNVW8inq/sCVEbGpzucy86tVupKkCgYHXGbeDbywYi+SVJVfE5HULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzSoKuIjYMyIuj4jbI2JdRBxbqzFJKlW6JsPHga9m5p9ExC7A4go9SVIVgwMuIp4JHAecCZCZTwBP1GlLksoNHlkeEUcAnwRuoxubdBNwdmY+utn1HFleyJHlC1sLHFm+0LVgnkeWA0cBTwF/1J/+OPChbd1mUkaWT8oo70m5n7Xrjfm5O9ZamfM8shyYAqYy84b+9OXAkQX1JKmqwQGXmQ8A90bEsv6sE+jerkrSKJR+ivou4NL+E9S7gT8vb0mS6igKuMxcTbctTpJGxz0ZJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1KzBARcRyyJi9bTDwxFxTsXeJKnI4H1RM3M9cARARCwC7gOurNOWJJWr9Rb1BOAHmfmjSvUkqVitgDsNWFGpliRVMXhNhqcLdLPgfgL8QWY+uIXLd4g1Gca67kHtemOtVbveJPU21vUidug1GTYdgOXAyplcd8xrMkzKegBjrWVvww9jXUdhR1+TYZPT8e2ppBEqXdl+CXAicEWddiSpntKR5Y8Ce1fqRZKqck8GSc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9Ss0p3t3xMRayNiTUSsiIhdazUmSaVKFp05EHg3cFRmHg4sopvsK0mjUPoWdSdgt4jYCVhMN9lXkkahaGR5RJwNXAT8L91U3zdv4To7xMjymvVqjj+H8Y7LnqSx4DXr1RwxDuMdM75DjywH9gK+DuwL7Ax8EXjLtm4z5pHlNetNyrjsSbmfteuN+bk71lqZ8z+y/FXADzPzocx8km6q70sK6klSVSUB92PgmIhYHBFBtzbqujptSVK5wQGXmTcAlwM3A9/va32yUl+SVKx0TYYPAh+s1IskVeWeDJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWpW6cjys/tx5Wsj4pxKPUlSFSUjyw8H/hI4GnghcEpEPLdWY5JUquQV3POBGzLzscx8CvgGcGqdtiSpXEnArQFeHhF7R8Ri4GTgoDptSVK50jUZzgLeDjwKrAUez8xzNrvOxK3JULu3mms8TMq6B7V7q7mOwqQ8d3foNRk2PwB/C7x9W9eZlDUZavc21rUKxrzuQe3exvz8GGtvY1iToWjgZUTsl5kbIuJguu1vx5TUk6SaigIO+EJE7A08CbwjM39R3pIk1VE6svzltRqRpNrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnN2m7ARcSnI2JDRKyZdt6zImJVRNzZ/9xrbtuUpNmbySu4zwAnbXbeucDXMvMw4Gv9aUkale0GXGZeD/xss7OXA5f0xy8BXl+3LUkqN3Qb3P6ZeX9//AFg/0r9SFI1MxpZHhGHAFdn5uH96V9k5p7TLv95Zm5xO5wjy8s5snxha4Ejyxe6FszhyHLgEGDNtNPrgQP64wcA62dSx5Hlw4x1lLcjy4eZlOfuGEaWD32LehVwRn/8DOBLA+tI0pyZyddEVgDfApZFxFS/ktbfAydGxJ3Aq/rTkjQq2x1Znpmnb+WiEyr3IklVuSeDpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWrW0DUZ3hQRayNiY0TMbj6TJM2ToWsyrAFOBa6v3ZAk1TKTaSLX9xN9p5+3DiAi5qgtSSrnNjhJzRq0JsO0868D3p+ZN27jtq7JMKJ6Y13foXa9mmsowOQ8P8ZaC+ZxTYZp518HHDXT+eiuybDw9ca87kHNemN+DCaltx15TQZJGr1BazJExBsiYgo4FvjPiLhmrhuVpNkqWZPhysq9SFJVvkWV1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ss4aOLP9IRNweEbdGxJURseecdilJAwwdWb4KODwzXwDcAZxXuS9JKrbdgMvM64GfbXbeysx8qj/5bWDpHPQmSUVqbIN7K/CVCnUkqarSkeXnA0cBp+ZWCjmyfFz1atdav359lVpQd8z4pDwGteuNtRbM88hy4Ey6QZiLZzo+2JHlC1+vdi0qjiwf6/2sXW9SehvDyPLtDrzckog4CfgA8MeZ+diQGpI01waNLAf+GdgDWBURqyPiE3PcpyTN2tCR5RfPQS+SVJV7MkhqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZg1dk+FD/XoMqyNiZUQ8Z27blKTZG7omw0cy8wWZeQRwNXBB5b4kqdjQNRkennZyCd3AQkkalcEjyyPiIuDPgF8Cx2fmQ1u5rSPLR1TPkeULX29SetuhR5ZPu+w84MKZ1HFk+cLXG2ut2vXsra1amcNGltf4FPVS4I0V6khSVYMCLiIOm3ZyOXB7nXYkqZ7tjizv12R4BbBPREwBHwROjohlwEbgR8BfzWWTkjSEazJIapZ7MkhqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZg0aWT7vsfRGREbHP3LQnScMNHVlORBwEvBr4ceWeJKmKQSPLe/8EfADHlUsaqaHz4JYD92XmLZX7kaRqtjsuaXMRsRj4a7q3pzO5/tNrMgCPb2lb3kD7AD+tVKt2vUnpbVLuZ+16k9Jb7fu5bNa3mMlcc6atyQD8IbABuKc/PEW3He7ZM6gz65nq81HL3ha+lr2No95Yaw2tN+tXcJn5fWC/Tacj4h7gqMysmdSSVGwmXxNZAXwLWBYRUxFx1ty3JUnlho4sn375IbP49z45i+vOZ63a9Salt0m5n7XrTUpvC34/Z7TwsyTtiNxVS1Kz5iXgIuKkiFgfEXdFxLmFtba669iAWgdFxLURcVtErI2Iswvr7RoR34mIW/p6F1bocVFEfC8irq5Q656I+H5ErI6IGwtr7RkRl0fE7RGxLiKOHVhnWd/PpsPDEXFOYW/v6f//10TEiojYtaDW2X2dtUP62tLzNSKeFRGrIuLO/udeBbXe1Pe2MSKOqtDbR/rH9NaIuDIi9iyo9aG+zuqIWBkRzynpbdplM99FtObHuFv5aHcR8APgd4FdgFuA3y+odxxwJP3XVgp7OwA4sj++B3BHYW8B7N4f3xm4ATimsMf3Ap8Drq5wf+8B9qn0uF4C/EV/fBdgz0rPlQeA3ymocSDwQ2C3/vRlwJk
Dax0OrAEW022v/i/gubOs8RvPV+AfgHP74+cCHy6o9Xy674ddR/dthtLeXg3s1B//cGFvvz3t+LuBT5T01p9/EHAN3XrM230uz8cruKOBuzLz7sx8Avg8sHxosdz6rmNDat2fmTf3x38FrKP7BRlaLzPzkf7kzv1h8EbOiFgKvBb41NAacyEinkn3BLwYIDOfyMxfVCh9AvCDzPxRYZ2dgN0iYie6cPrJwDrPB27IzMcy8yngG8Cpsymwlefrcro/EPQ/Xz+0Vmauy8z1s+lpO/VW9vcV4NvA0oJaD087uYRZ/C5s4/d8VruIzkfAHQjcO+30FAUhMlci4hDgRXSvukrqLIqI1XRfhl6VmSX1Pkb3YG4s6WmaBFZGxE39HiZDHQo8BPxr//b5UxGxpEJ/pwErSgpk5n3AR+m+fH4/8MvMXDmw3Brg5RGxd78Hz8l0ryBK7Z+Z9/fHHwD2r1BzLrwV+EpJgYi4KCLuBd4MXFBYa9a7iPohAxARuwNfAM7Z7K/OrGXmrzPzCLq/fEdHxOEDezoF2JCZN5X0s5mXZeaRwGuAd0TEcQPr7ET39uFfMvNFwKN0b7UGi4hdgNcB/15YZy+6V0iHAs8BlkTEW4bUysx1dG/TVgJfBVYDvy7pbwv/RjLCgRURcT7dXkqXltTJzPMz86C+zjsL+tm0i+isQnI+Au4+/v9fvaX9eaMQETvThdulmXlFrbr9W7Zr2cKoqRl6KfC6fk+RzwOvjIjPFvZ0X/9zA3Al3eaDIaaAqWmvTi+nC7wSrwFuzswHC+u8CvhhZj6UmU8CVwAvGVosMy/OzBdn5nHAz+m205Z6MCIOAOh/bqhQs5qIOBM4BXhzH8A1XAq8seD2v0f3R+uW/ndiKXBzRDx7Wzeaj4D7LnBYRBza/5U+DbhqHv7d7YqIoNuOtC4z/7FCvX03feoUEbsBJwK3D6mVmedl5tLsvkh9GvD1zBz0SqTvZ0lE7LHpON3G5EGfRGfmA8C9EbFp5+cTgNuG9tY7ncK3p70fA8dExOL+8T2BbtvqIBGxX//zYLrtb5+r0ONVwBn98TOAL1WoWUVEnES3WeR1mflYYa3Dpp1czsDfBeh2Ec3M/TLzkP53YoruA8IHtnfDOT/Qbbu4g+7T1PMLa62g27byZH8nzyqo9TK6twe30r39WA2cXFDvBcD3+nprgAsq/f+9gsJPUek+xb6lP6yt8DgcAdzY39cvAnsV1FoC/A/wzEr/XxfS/TKtAf4N+K2CWv9NF963ACcMuP1vPF+BvYGvAXfSfTL7rIJab+iPPw48CFxT2NtddNvMN/0+zOiTz63U+kL/GNwK/AdwYElvm11+DzP4FNU9GSQ1yw8ZJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc36P/R+G7OqnZ0oAAAAAElFTkSuQmCC\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAATkklEQVR4nO3df7DldX3f8eerCxt+SAQRCAIppEHGdmtQGUZjtEbEAcJIYpoZGG21sc10Eg2k6ThYpjpOxk5TO2kyk5lmHLGRijgJuo2lUZcalWZGiYIL7rIoqOguAVZriL86rsi7f5zv0pt1N7v3+/ncy+nnPh8zd+459577uu+ze+/rnu/3nO/nm6pCkkb0d57sASRprVhwkoZlwUkalgUnaVgWnKRhWXCShtVUcEkuSfL5JPcnubbXUJLUQ+a+Di7JJuALwMXAHuDTwFVVdU+/8SRpvpZHcBcC91fVl6pqH/A+4Io+Y0lSu5aCOwPYveL6nuljkrQUjlrrb5DkV4BfATjmmGOed9ZZZ3XJrSqSdMnqnbdRZtso97N33kaZrff9vO+++75eVaes5mtaCu5BYGVbnTl97G+oqncA7wB45jOfWVu3bm34lv/P7t276VWWvfM2ymwb5X72ztsos/W+n1u2bPnKar+mZRP108C5Sc5Jshm4EvhgQ54kdTX7EVxVPZbk9cBHgE3Au6pqZ7fJJKlR0z64qvpT4E87zSJJXXkkg6RhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkaVus5Gd6VZG+SHb0GkqReWh/B/SFwSYc5JKm7poKrqtuAb3SaRZK6mn1WrScCkrOBW6pqyyE+/8SS5aeccsrzbrjhhqbvt9++ffvYvHlzl6zeeRtlto1yP3vnbZTZet/PSy+99I6qumA1X7Pm52Q4cMnyZV0OeZmXal7W2TbK/eydt1Fm630/5/BZVEnDsuAkDav1ZSI3AZ8EzkuyJ8nr+owlSe1az8lwVa9BJKk3N1ElDcuCkzQsC07SsCw4ScOy4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0rNkFl+SsJB9Lck+SnUmu7jmYJLVqORb1MeA3q+rOJCcAdyS5taru6TSbJDWZ/Qiuqh6qqjuny98CdgFn9BpMklp12Qc3LVv+HOD2HnmS1EOPczI8BfgE8Laq+sBBPu85GZYob1mzeuc521hZ8CSckyHJ0cD7gRsPVm7gORmWLW9Zs3rnOdtYWXO1PIsa4HpgV1X9Tr+RJKmPln1wLwT+CfDSJNunt8s6zSVJzWZvolbVnwPpOIskdeWRDJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRhWXCShmXBSRpWy8H2xyT5iyR3TUuWv7XnYJLUqmW5pO8BL62qb0/LJv15kg9V1ac6zSZJTVoOti/g29PVo6e3ttUzJamjpn1wSTYl2Q7sBW6tKpcsl7Q0mpcsB0hyIrAVeENV7Tjgcy5ZvkR5y5rVO8/ZxsqCJ2HJ8v2q6tEkHwMuAXYc8DmXLF+ivGXN6p3nbGNlzdXyLOop0yM3khwLXAzc22kuSWrW8gjudODdSTaxKMo/qqpb+owlSe1ankW9m8W5UCVpKXkkg6RhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkaVnPBTYtefjaJB9pLWio9HsFdDezqkCNJXbUuWX4m8HPAO/uMI0n9tD6C+13gjcDj7aNIUl+zz8mQ5HLgsqr61SQvAf51VV1+kNt5ToYlylvWrN55zjZWFqz/ORleCLwiyWXAMcCPJnlPVb165Y08J8Ny5S1rVu88Zxsra67Zm6hV9aaqOrOqzgauBP7swHKTpCeTr4OTNKxepw38OPDxHlmS1IuP4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzSspkO1kjwAfAv4AfDYapcykaS11ONY1J+tqq93yJGkrtxElTSs1oIrYFuSO6aVeyVpacxeshwgyRlV9WCSU4FbgTdU1W0H3MYly5cob1mzeuc521hZsP5LllNVD07v9ybZClwI3HbAbVyyfInyljWrd56zjZU11+xN1CTHJzlh/2Xg5cCOXoNJUquWR3CnAVuT7M95b1V9uMtUktTB7IKrqi8BP9VxFknqypeJSBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRhNRVckhOT3Jzk3iS7kryg12CS1Kr1nAy/B3y4qv5xks3AcR1mkqQuZhdckqcCLwZeC1BV+4B9fcaSpHazlyxPcj6LlXrvYbFs0h3A1VX1nQNu55LlS5S3rFm985xtrCxY/yXLjwKey+I8DLcn+T3gWuDfrryRS5YvV96yZvXOc7axsuZqeZJhD7Cnqm6frt/MovAkaSnMLriqehjYneS86UMXsdhclaSl0Pos6huAG6dnUL8E/LP2kSSpj9bTBm4HVrXTT5LWi0cySBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVizCy7JeUm2r3j7ZpJrOs4mSU1mH4taVZ8HzgdIsgl4ENjaZyxJatdrE/Ui4ItV9ZVOeZLUrFfBXQnc1ClLkrqYfU6GJwIWa8H9JfAPquqRg3zeczIsUd6yZvXOc7axsmD9z8nwxPcF7jxYuYHnZFi2vGXN6p3nbGNlzdVjE/Uq3DyVtIRaz2x/PHAx8IE+40hSP61Lln8HOLnTLJLUlUcySBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRhtR5s/xtJdibZkeSmJMf0GkySWrWcdOYM4NeBC6pqC7CJxcq+krQUWjdRjwKOTXIUcByLlX0laSk0LVme5GrgbcD/AbZV1asOchuXLF+ivGXN6p3nbGNlwTovWZ7kJOAK4BzgUeCPk7y6qt6z8nYuWb5cecua1TvP2cbKmqtlE/VlwJer6mtV9X0Wq/r+dJ+xJKldS8F9FXh+kuOShMW5UXf1GUuS2s0uuKq6HbgZuBP43JT1jk5zSVKz1nMyvAV4S6dZJKkrj2SQNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzQsC07SsFqXLL96Wq58Z5JrOs0kSV20LFm+BfgXwIXATwGXJ/nJXoNJUquWR3DPAm6vqu9W1WPAJ4BX9hlLktq1FNwO4EVJTk5yHHAZ8OQu3ylJK7Sek+F1wK8C3wF2At+rqmsOuI3nZFiivGXN6p3nbGNlwTqfkwGgqq4HrgdI8u+APQe5jedkWKK8Zc3qnedsY2XN1VRwSU6tqr1JfpzF/rfn9xlLkto1FRzw/iQnA98Hfq2qHm0fSZL6aN1E
fVGvQSSpN49kkDQsC07SsCw4ScOy4CQNy4KTNCwLTtKwLDhJw7LgJA3LgpM0LAtO0rAOW3BJ3pVkb5IdKz72tCS3Jrlven/S2o4pSat3JI/g/hC45ICPXQt8tKrOBT46XZekpXLYgquq24BvHPDhK4B3T5ffDfx837Ekqd3cfXCnVdVD0+WHgdM6zSNJ3RzRkuVJzgZuqaot0/VHq+rEFZ//q6o66H44lyxfrrxlzeqd52xjZcH6Lln+SJLTq+qhJKcDew91Q5csX668Zc3qnedsY2XNNXcT9YPAa6bLrwH+pM84ktTPkbxM5Cbgk8B5SfZMZ9L698DFSe4DXjZdl6SlcthN1Kq66hCfuqjzLJLUlUcySBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRhzT0nwy8l2Znk8SSrWp9JktbL3HMy7ABeCdzWeyBJ6uVIVhO5bVrRd+XHdgEkWaOxJKmd++AkDWvukuVH7IBzMrB79+4uufv27euW1Ttvo8y2Ue5n77yNMlvv+znHmhec52RYrrxlzeqd52xjZc3lJqqkYc06J0OSX0iyB3gB8D+SfGStB5Wk1Wo5J8PWzrNIUlduokoalgUnaVgWnKRhWXCShmXBSRqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYc5csf3uSe5PcnWRrkhPXdEpJmmHukuW3Aluq6tnAF4A3dZ5LkpodtuCq6jbgGwd8bFtVPTZd/RRw5hrMJklNeuyD+2XgQx1yJKmrVNXhb7Q46cwtVbXlgI9fB1wAvLIOEXTAkuXPu+GGG1pnBhbLIW/evLlLVu+8jTLbRrmfvfM2ymy97+ell156R1Wt6jSls5csT/Ja4HLgokOVG7hk+bLlLWtW7zxnGytrrlkFl+QS4I3AP6qq7/YdSZL6mLVkOfD7wAnArUm2J/mDNZ5TklZt7pLl16/BLJLUlUcySBqWBSdpWBacpGFZcJKGZcFJGpYFJ2lYFpykYVlwkoZlwUkalgUnaVgWnKRhzT0nw29N52PYnmRbkmes7ZiStHpzz8nw9qp6dlWdD9wCvLnzXJLUbO45Gb654urxwOGXBZakdTZ7yfIkbwP+KfDXwM9W1dcO8bUuWb5Eecua1TvP2cbKgnVesryqrgOuS/Im4PXAWw5xO5csX6K8Zc3qnedsY2XN1eNZ1BuBX+yQI0ldzSq4JOeuuHoFcG+fcSSpn8Nuok7nZHgJ8PQke1hsil6W5DzgceArwL9cyyElaQ7PySBpWB7JIGlYFpykYVlwkoZlwUkalgUnaVgWnKRhWXCShmXBSRqWBSdpWBacpGHNWrJ8xed+M0klefrajCdJ881dspwkZwEvB77aeSZJ6mLWkuWT/wS8EZcrl7Sk5q4HdwXwYFXd1XkeSepm1UuWJzkO+DcsNk+P5PZPnJMB+N6WLVt+aF/eTE8Hvt4pq3feRplto9zP3nkbZbbe9/O8VX9FVR32DTgb2DFd/ofAXuCB6e0xFvvhfuwIcj5zJN/vCGfqluVsT36Wsy1H3rJmzc1b9SO4qvoccOr+60keAC6oqp5NLUnNjuRlIjcBnwTOS7InyevWfixJajd3yfKVnz97Fd/vHau47Xpm9c7bKLNtlPvZO2+jzPak388jOvGzJP3/yEO1JA1rXQouySVJPp/k/iTXNmYd8tCxGVlnJflYknuS7ExydWPeMUn+IsldU95bO8y4Kclnk9zSIeuBJJ9Lsj3JZxqzTkxyc5J7k+xK8oKZOedN8+x/+2aSaxpn+43p339HkpuSHNOQdfWUs3POXAf7eU3ytCS3Jrlven9SQ9YvTbM9nuSCDrO9ffo/vTvJ1iQnNmT91pSzPcm2JM9omW3F5478ENGeT+Me4qndTcAXgZ8ANgN3AX+/Ie/FwHOZXrbSONvpwHOnyycAX2icLcBTpstHA7cDz2+c8V8B7wVu6XB/HwCe3un/9d3AP58ubwZO7PSz8jDwdxsyzgC+DBw7Xf8j4LUzs7YAO4DjWOyv/p/AT64y44d+XoH/AFw7Xb4W+O2GrGexeH3Yx1m8mqF1tpcDR02Xf7txth9dcfnXgT9omW36+FnAR1icj/mwP8vr8QjuQuD+qvpSVe0D3gdcMTesDn3o2Jysh6rqzunyt4BdLH5B5uZVVX17unr09DZ7J2eSM4GfA945N2MtJHkqix/A6wGqal9VPdoh+iLgi1X1lcaco4BjkxzFopz+cmbOs4Dbq+q7VfUY8AnglasJOMTP6xUs/kAwvf/5uVlVtauqPr+amQ6Tt226rwCfAs5syPrmiqvHs4rfhb/l93xVh4iuR8GdAexecX0PDSWyVpKcDTyHxaOulpxNSbazeDH0rVXVkve7LP4zH2+ZaYUCtiW5YzrCZK5zgK8B/2XafH5nkuM7zHclcFNLQFU9CPxHFi8+fwj466raNjNuB/CiJCdPR/BcxuIRRKvTquqh6fLDwGkdMtfCLwMfaglI8rYku4FXAW9uzFr1IaI+yQAkeQrwfuCaA/7qrFpV/aCqzmfxl+/CJFtmznQ5sLeq7miZ5wA/U1XPBS4Ffi3Ji2fmHMVi8+E/V9VzgO+w2NSaLclm4BXAHzfmnMTiEdI5wDOA45O8ek5WVe1isZm2DfgwsB34Qct8B/kexRIuWJHkOhZHKd3YklNV11XVWVPO6xvm2X+I6KpKcj0K7kH+5l+9M6ePLYUkR7Motxur6gO9cqdNto9xkKWmjtALgVdMR4q8D3hpkvc0zvTg9H4vsJXF7oM59gB7Vjw6vZlF4bW4FLizqh5pzHkZ8OWq+lpVfR/4APDTc8Oq6vqqel5VvRj4Kxb7aVs9kuR0gOn93g6Z3SR5LXA58KqpgHu4EfjFhq//eyz+aN01/U6cCdyZ5Mf+ti9aj4L7NHBuknOmv9JXAh9ch+97WEnCYj/Srqr6nQ55p+x/1inJscDFwL1zsqrqTVV1Zi1eSH0l8GdVNeuRyDTP8UlO2H+Zxc7kWc9EV9XDwO4k+w9+vgi4Z+5sk6to3DydfBV4fpLjpv/fi1jsW50lyanT+x9nsf/tvR1m/CDwmunya4A/6ZDZRZJLWOwWeUVVfbcx69wVV69g5u8CLA4RrapTq+rs6XdiD4snCB8+3Beu+RuLfRdfYPFs6nWNWTex2Lfy/elOvq4h62dYbB7czWLzYztwWUPes4HPTnk7gDd3+vd7CY3PorJ4Fvuu6W1nh/+H84HPTPf1vwEnNWQdD/xv4Kmd/r3eyuKXaQfwX4Efacj6XyzK+y7gohlf/0M/r8DJwEeB+1g8M/u0hqxfmC5/D3gE+EjjbPez2Ge+//fhiJ75PETW+6f/g7uB/w6c0TLbAZ9/gCN4FtUjGSQNyycZJA3LgpM0LAtO0rAsOEnDsuAkDcuCkzQsC07SsCw4ScP6v1pQfTIhNYPvAAAAAElFTkSuQmCC\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "from nn_tools.sensor_data import get_sensor_image_pair\n", - "\n", - "pair_index = -1\n", - "\n", - "left_img, right_img = get_sensor_image_pair(clean_data_df,\n", - " pair_index)\n", - "\n", - "zoom_img(left_img), zoom_img(right_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can also filter the dataframe to give us a dataframe containing just the data grabbed from the left hand image sensor:" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
sidevalsclock
0left245,226,225,245,226,225,245,226,225,245,226,22...1
2left245,226,225,245,226,225,245,226,225,245,226,22...1
4left245,226,225,245,226,225,245,226,225,245,226,22...1
6left245,226,225,245,226,225,245,226,225,245,226,22...2
8left245,226,225,245,226,225,245,226,225,245,226,22...1
10left245,226,225,245,226,225,245,226,225,245,226,22...1
\n", - "
" - ], - "text/plain": [ - " side vals clock\n", - "0 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "2 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "4 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "6 left 245,226,225,245,226,225,245,226,225,245,226,22... 2\n", - "8 left 245,226,225,245,226,225,245,226,225,245,226,22... 1\n", - "10 left 245,226,225,245,226,225,245,226,225,245,226,22... 1" - ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# The mechanics behind how this line of code\n", - "# works are beyond the scope of this module.\n", - "# In short, we identify the rows where the\n", - "# \"side\" column value is equal to \"left\"\n", - "# and select just those rows.\n", - "clean_left_images_df = clean_data_df[clean_data_df['side']=='left']\n", - "clean_left_images_df" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The shape names and classes are defined as follows in the order they appear going from left to right along the test track. We can also derive a map going the other way, from code to shape." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{0: 'square',\n", - " 1: 'right facing triangle',\n", - " 2: 'left facing triangle',\n", - " 3: 'downwards facing triangle',\n", - " 4: 'upwards facing triangle',\n", - " 5: 'diamond'}" - ] - }, - "execution_count": 12, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Define the classes\n", - "shapemap = {'square': 0,\n", - " 'right facing triangle': 1,\n", - " 'left facing triangle': 2,\n", - " 'downwards facing triangle': 3,\n", - " 'upwards facing triangle': 4,\n", - " 'diamond': 5\n", - " }\n", - "\n", - "codemap = {shapemap[k]:k for k in shapemap}\n", - "codemap" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.1.2 Counting the number of black pixels in each shape\n", - "\n", - "Ever mindful that we are on the look out for features that might help us distinguish between the different shapes, let's check a really simple measure: the number of black filled pixels in each shape.\n", - "\n", - "If we cast the pixel data for the image in central focus areas of the the image array to a *pandas* *Series*, we can use the *Series* `.value_counts()` method to count the number of each unique pixel value." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "*Each column in a `pandas` dataframe is a `pandas.Series` object. 
Casting a list of data to a `Series` provides us with many convenient tools for manipulating and summarising that data.*" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "metadata": { - "scrolled": false - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "0\n" - ] - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIklEQVR4nGP8z4AMmBhI4LIwMDAwMELY/0nUSxKXkRRHAgCTZgMbUGwI3QAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 147\n", - "0 49\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "1\n" - ] - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nGWPQQ4AMAjCYPH/X2aHmYjMW1MkSsHn4EeGJSPM2H0Bq2I0kyhDLStgrFZz31oObeeNC+mHByQEAyZUAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 145\n", - "0 51\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "2\n" - ] - }, - { - "data": { - "text/plain": [ - "'left facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nHXOQQ4AIAgDwYXw/y/XAypgIrcJJdREH+dHA6Kjtqm9tXF7RXSAgzQImqzAeaSnVQZaSQELkUIKHq+alzwAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 145\n", - "0 51\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "3\n" - ] - }, - { - "data": { - "text/plain": [ - "'downwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nJWOMRIAIAjDCuf/vxwHqTiwyJYLLQR6J/WBS4omUuoypixXHotpPoRX5jeovG21b2HxDRI/9WNGAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 143\n", - "0 53\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "4\n" - ] - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAMElEQVR4nGP8z4AMmBiwchlRuIxQPlbFjDDl2GQZ4UqwyDIiLMOUZYRzGBkY8boZAP24AiTbCsDqAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 136\n", - "0 60\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "5\n" - ] - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - 
"output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOElEQVR4nHWOSQ4AMAgChf//mR4qWhPrbSIbFO8xNsRAmOkffl60nR2Di25WilW0zlBJmEKNkY47j5gKG64KO3cAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "255 145\n", - "0 51\n", - "dtype: int64" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n" - ] - } - ], - "source": [ - "from nn_tools.sensor_data import generate_image, sensor_image_focus\n", - "import pandas as pd\n", - "\n", - "for index in range(len(clean_left_images_df)):\n", - " print(index)\n", - " # Get the central focal area of the image\n", - " left_img = sensor_image_focus(generate_image(clean_left_images_df, index))\n", - " \n", - " # Count of each pixel value\n", - " pixel_series = pd.Series(list(left_img.getdata()))\n", - " # The .value_counts() method tallies occurrences\n", - " # of each unique value in the Series\n", - " pixel_counts = pixel_series.value_counts()\n", - " \n", - " # Display the count and the image\n", - " display(codemap[index], left_img, pixel_counts)\n", - " print('\\n')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Observing the black (`0` value) pixel counts, we see that they do not uniquely identify the shapes. For example, the left and right facing triangles and the diamond all have 51 black pixels. A simple pixel count does not provide a way to distinguish between the shapes." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "### 3.1.3 Activity — Using bounding box sizes as a feature for distinguishing between shapes\n", - "\n", - "When we trained a neural network to recognise shape data, we use the dimensions of a bounding box drawn around the fruit as the input features to our network.\n", - "\n", - "Will the bounding box approach used there also allow us to distinguish between the shape images?\n", - "\n", - "Run the following code cell to convert the raw data associated with an image to a data frame, and then prune the rows and columns the edges that only contain white space.\n", - "\n", - "The dimensions of the dataframe, which is to say, the `.shape` of the dataframe, given as the 2-tuple `(rows, columns)`, corresponds to the bounding box of the shape. 
" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": { - "activity": true - }, - "outputs": [ - { - "data": { - "text/html": [ - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
      0    1    2    3    4    5    6    7    8
 0  255  255  255  255    0  255  255  255  255
 1  255  255  255    0    0    0  255  255  255
 2  255  255    0    0    0    0    0  255  255
 3  255  255    0    0    0    0    0  255  255
 4  255    0    0    0    0    0    0    0  255
 5    0    0    0    0    0    0    0    0    0
 6  255    0    0    0    0    0    0    0  255
 7  255  255    0    0    0    0    0  255  255
 8  255  255    0    0    0    0    0  255  255
 9  255  255  255    0    0    0  255  255  255
10  255  255  255  255    0  255  255  255  255
" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(11, 9)" - ] - }, - "execution_count": 14, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from nn_tools.sensor_data import df_from_image, trim_image\n", - "\n", - "index = -1\n", - "\n", - "# The sensor_image_focus function crops\n", - "# to the central focal area of the image array\n", - "left_img = sensor_image_focus(generate_image(clean_left_images_df, index))\n", - "\n", - "trimmed_df = trim_image( df_from_image(left_img, show=False), reindex=True)\n", - "\n", - "# dataframe shape\n", - "trimmed_df.shape" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "Using the above code, or otherwise, find the shape of the bounding box for each shape as captured in the `roboSim.image_data` list.\n", - "\n", - "You may find it useful to use the provided code as the basis of a simple function that will:\n", - "\n", - "- take the index number for a particular image data scan;\n", - "- generate the image;\n", - "- find the size of the bounding box.\n", - "\n", - "Then you can iterate through all the rows in the `left_images_df` dataset, generate the corresponding image and its bounding box dimensions, and then display the image and the dimensions.\n", - "\n", - "*Hint: you can use a `for` loop defined as `for i in range(len(left_images_df)):` to iterate through each row of the data frame and generate an appropriate index number, `i`, for each row.*\n", - "\n", - "Based on the shape dimensions alone, can you distinguish between the shapes?" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "metadata": { - "student": true - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Record your observations here, identifying the bounding box dimensions for each shape (square, right facing triangle, left facing triangle, downwards facing triangle, upwards facing triangle, diamond). 
Are the shapes distinguishable from their bounding box sizes?*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "#### Example solution\n", - "\n", - "*Click the arrow in the sidebar or run this cell to reveal an example solution.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "Let's start by creating a simple function inspired by the supplied code that will display an image and its bounding box dimensions:" - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": { - "activity": true - }, - "outputs": [ - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIklEQVR4nGP8z4AMmBhI4LIwMDAwMELY/0nUSxKXkRRHAgCTZgMbUGwI3QAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(7, 7)" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "def find_bounding_box(index):\n", - " \"\"\"Find bounding box for a shape in an image.\"\"\"\n", - " img = sensor_image_focus(generate_image(clean_left_images_df, index))\n", - " trimmed_df = trim_image( df_from_image(img, show=False), show=False, reindex=True)\n", - "\n", - " # Show image and shape\n", - " display(img, trimmed_df.shape)\n", - "\n", - "find_bounding_box(0)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "We can then call this function by iterating through each image data record in the `roboSim.image_data` dataset:" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "metadata": { - "activity": true - }, - "outputs": [ - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIklEQVR4nGP8z4AMmBhI4LIwMDAwMELY/0nUSxKXkRRHAgCTZgMbUGwI3QAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(7, 7)" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nGWPQQ4AMAjCYPH/X2aHmYjMW1MkSsHn4EeGJSPM2H0Bq2I0kyhDLStgrFZz31oObeeNC+mHByQEAyZUAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(11, 9)" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nHXOQQ4AIAgDwYXw/y/XAypgIrcJJdREH+dHA6Kjtqm9tXF7RXSAgzQImqzAeaSnVQZaSQELkUIKHq+alzwAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(11, 9)" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nJWOMRIAIAjDCuf/vxwHqTiwyJYLLQR6J/WBS4omUuoypixXHotpPoRX5jeovG21b2HxDRI/9WNGAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(9, 11)" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAMElEQVR4nGP8z4AMmBiwchlRuIxQPlbFjDDl2GQZ4UqwyDIiLMOUZYRzGBkY8boZAP24AiTbCsDqAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(10, 11)" - ] - }, - 
"metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOElEQVR4nHWOSQ4AMAgChf//mR4qWhPrbSIbFO8xNsRAmOkffl60nR2Di25WilW0zlBJmEKNkY47j5gKG64KO3cAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "(11, 9)" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "for i in range(len(clean_left_images_df)):\n", - " find_bounding_box(i)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "Inspecting the results from my run (yours may be slightly different), several of the shapes appear to share the same bounding box dimensions:\n", - "\n", - "- the left and right facing triangles and the diamond have the same dimensions (`(11, 9)`).\n", - "\n", - "The square is clearly separated from the other shapes on the basis of its bounding box dimensions, but the other shapes all have dimensions that may be hard to distinguish between." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.1.4 Decoding the training label image\n", - "\n", - "The grey filled squares alongside the shape images are used to encode a label describing the associated shape.\n", - "\n", - "The grey levels are determined by the following algorithm, in which we use the numerical class values to derive the greyscale value:" - ] - }, - { - "cell_type": "code", - "execution_count": 18, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{nan: 'unknown',\n", - " 0: 'square',\n", - " 42: 'right facing triangle',\n", - " 85: 'left facing triangle',\n", - " 127: 'downwards facing triangle',\n", - " 170: 'upwards facing triangle',\n", - " 212: 'diamond'}" - ] - }, - "execution_count": 18, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from numpy import nan\n", - "\n", - "greymap = {nan: 'unknown'}\n", - "\n", - "# Generate greyscale value\n", - "for shape in shapemap:\n", - " key = int(shapemap[shape] * 255/len(shapemap))\n", - " greymap[key] = shape\n", - " \n", - "greymap" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see if we can decode the labels from the solid grey squares.\n", - "\n", - "To to try to make sure we are using actual shape image data, we can can identify images in our training set if *all* the pixels in the right hand image are the same value." - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "True" - ] - }, - "execution_count": 19, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "left_img, right_img = get_sensor_image_pair(clean_data_df, -1)\n", - "\n", - "# Generate a set of distinct pixel values\n", - "# from the right hand image.\n", - "# Return True if there is only one value\n", - "# in the set. That is, all the values are the same.\n", - "len(set(right_img.getdata())) == 1" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The following function can be used to generate a greyscale image from a row of the dataframe, find the median pixel value within that image, and then try to decode it. We also return a flag (`uniform`) that identifies if the all the pixels in the right hand encoded label image are the same." 
- ] - }, - { - "cell_type": "code", - "execution_count": 20, - "metadata": {}, - "outputs": [], - "source": [ - "def decode_shape_label(img, background=255):\n", - " \"\"\"Decode the shape from the greyscale image.\"\"\"\n", - " # Get the image greyscale pixel data\n", - " # The pandas Series is a convenient representation\n", - " image_pixels = pd.Series(list(img.getdata()))\n", - " \n", - " # Find the median pixel value\n", - " pixels_median = int(image_pixels.median())\n", - " \n", - " shape = None\n", - " code= None\n", - " #uniform = len(set(img.getdata())) == 1\n", - " # There is often more than one way to do it!\n", - " # The following makes use of Series.unique()\n", - " # which identifies the distinct values in a Series\n", - " uniform = len(image_pixels.unique()) == 1\n", - " \n", - " if pixels_median in greymap:\n", - " shape = greymap[pixels_median]\n", - " code = shapemap[greymap[pixels_median]]\n", - " \n", - " return (pixels_median, shape, code, uniform)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can apply that function to each row of the dataframe by iterating over pairs of rows:" - ] - }, - { - "cell_type": "code", - "execution_count": 21, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Grey: 0; shape: square; code: 0; uniform: True\n", - "Grey: 42; shape: right facing triangle; code: 1; uniform: True\n", - "Grey: 85; shape: left facing triangle; code: 2; uniform: True\n", - "Grey: 127; shape: downwards facing triangle; code: 3; uniform: True\n", - "Grey: 170; shape: upwards facing triangle; code: 4; uniform: True\n", - "Grey: 212; shape: diamond; code: 5; uniform: True\n" - ] - } - ], - "source": [ - "shapes = []\n", - "\n", - "# The number of row pairs is half the number of rows\n", - "num_pairs = int(len(clean_data_df)/2)\n", - "\n", - "for i in range(num_pairs):\n", - " \n", - " # Retrieve a pair of images \n", - " # from the datalog dataframe:\n", - " left_img, right_img = get_sensor_image_pair(roboSim.image_data(), i)\n", - " \n", - " #Decode the label image\n", - " (grey, shape, code, uniform) = decode_shape_label(right_img)\n", - " \n", - " # Add the label to a list of labels found so far\n", - " shapes.append(shape)\n", - "\n", - " # Display the result of decoding\n", - " # the median pixel value\n", - " print(f\"Grey: {grey}; shape: {shape}; code: {code}; uniform: {uniform}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can also use the `decode_shape_label()` function as part of another function that will return a shape training image and it's associated label from a left and right sensor row pair in the datalog dataframe:" - ] - }, - { - "cell_type": "code", - "execution_count": 22, - "metadata": {}, - "outputs": [], - "source": [ - "def get_training_data(raw_df, pair_index):\n", - " \"\"\"Get training image and label from raw data frame.\"\"\"\n", - " \n", - " # Get the left and right images\n", - " # at specified pair index\n", - " left_img, right_img = get_sensor_image_pair(raw_df,\n", - " pair_index)\n", - " response = decode_shape_label(right_img)\n", - " (grey, shape, code, uniform) = response\n", - " return (shape, code, uniform, left_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To use the `get_training_data()` function, we pass it the datalog dataframe and the index of the desired image pair:" - ] - }, - { - "cell_type": "code", - "execution_count": 23, - "metadata": {}, - "outputs": [ - { - "name": 
"stdout", - "output_type": "stream", - "text": [ - "diamond 5 True\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAT10lEQVR4nO3de6xlZXnH8e/jAIUZqCA3kYFCK05sqSISCl6oiBhE4ijWBKItVFrTeAMvMVASLDG0tZpWkyY1Rqyk4hiKoJSqzFRBaqIo4IAzDAOIKAeBwXpBoOHiPP1jrSGn41zOWe97zlnz7u8n2Tn7+syzZ+/zO3uvvdfzRmYiSS16xkI3IElzxYCT1CwDTlKzDDhJzTLgJDXLgJPUrKKAi4iTImJ9RNwVEefWakqSaoih34OLiEXAHcCJwBTwXeD0zLytXnuSNFzJK7ijgbsy8+7MfAL4PLC8TluSVK4k4A4E7p12eqo/T5JGYae5/gci4m3A2wB23XXXFx988MFV6m7cuJFnPKPeZyQ1601Kb5NyP2vXm5Teat/PO+6446eZue+sbpSZgw7AscA1006fB5y3rds873nPy1quvfbaarVq15uU3iblftauNym91b6fwI05y5wqidfvAodFxKERsQtwGnBVQT1JqmrwW9TMfCoi3glcAywCPp2Za6t1JkmFirbBZeaXgS9X6kWSqnJPBknNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPUrNI1GT4dERsiYk2thiSpltJXcJ8BTqrQhyRVVxRwmXk98LNKvUhSVYNX1Xq6QMQhwNWZefhWLn96ZPm+++774ssuu6zo39vkkUceYffdd69Sq3a9SeltUu5n7XqT0lvt+3n88cfflJlHzepGsx0BvPkBOARYM5PrOrJ84euNtVbtevbWVq3M+R9ZLkmjZsBJalbp10RWAN8ClkXEVEScVactSSpXuibD6bUakaTafIsqqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZg0OuIg4KCKujYjbImJtRJxdszFJKlWyL+pTwPsy8+aI2AO4KSJWZeZtlXqTpCKDX8Fl5v2ZeXN//FfAOuDAWo1JUqkq2+D6seUvAm6oUU+SaqixJsPuwDeAizLzii1c7poMI6pXu9b69eur1AJYtmzZKO9n7XqT0tsOvyYDsDNwDfDemVzfNRkWvl7tWkC1w1jvZ+16k9LbDr0mQ0QEcDGwLjP/cWgdSZorJdvgXgr8KfDKiFjdH06u1JckFRv8NZHM/CYQFXuRpKrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNKtnZfteI+E5E3NKPLL+wZmOSVKpkZPnjwCsz85GI2Bn4ZkR8JTO/Xak3SSpSsrN9Ao/0J3fuD2XTMyWpoqJtcBGxKCJWAxuAVZnpyHJJo1E8shwgIvYErgTelZlrNrvMkeUjqvfggw8yNTVVpdbSpUur1apdr+b4c5ic58dYa8ECjCyffgAuAN6/res4snzh6330ox+tNmK8Zq3a9cb8GExKbzv6yPJ9+1duRMRuwInA7UPrSVJtJZ+iHgBcEhGL6LblXZaZV9dpS5LKlXyKeivdWqiSNEruySCpWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmlUccP3Qy+9FhDvaSxqVGq/gzgbWVagjSVWVjixfCrwW+FSddiSpntJXcB8DPgBsLG9FkuoavCZDRJwCnJyZb4+IV9CNKz9lC9dzTYZCY11HYcxrMtTureYaD5Py3N2h12QA/g6YAu4BHgAeAz67rdu4JsMwY11HYcxrMtTubczPj7H2tkOvyZCZ52Xm0sw8BDgN+HpmvmVoPUmqze/BSWpWyaIzT8vM64DratSSpFp8BSepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmFe2qFRH3AL8Cfg08lbMdZSJJc6jGvqjHZ+ZPK9SRpKp8iyqpWaUBl8DKiLipn9wrSaMxeGQ5QEQcmJn3RcR+wCrgXZl5/WbXcWR5IUeWL2wtcGT5QteCeR5ZvvkB+Bu6dRkcWe7I8gWv58jytmplzvPI8ohYEhF7bDoOvBpYM7SeJNVW8inq/sCVEbGpzucy86tVupKkCgYHXGbeDbywYi+SVJVfE5HULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzSoKuIjYMyIuj4jbI2JdRBxbqzFJKlW6JsPHga9m5p9ExC7A4go9SVIVgwMuIp4JHAecCZCZTwBP1GlLksoNHlkeEUcAnwRuoxubdBNwdmY+utn1HFleyJHlC1sLHFm+0LVgnkeWA0cBTwF/1J/+OPChbd1mUkaWT8oo70m5n7Xrjfm5O9ZamfM8shyYAqYy84b+9OXAkQX1JKmqwQGXmQ8A90bEsv6sE+jerkrSKJR+ivou4NL+E9S7gT8vb0mS6igKuMxcTbctTpJGxz0ZJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1KzBARcRyyJi9bTDwxFxTsXeJKnI4H1RM3M9cARARCwC7gOurNOWJJWr9Rb1BOAHmfmjSvUkqVitgDsNWFGpliRVMXhNhqcLdLPgfgL8QWY+uIXLd4g1Gca67kHtemOtVbveJPU21vUidug1GTYdgOXAyplcd8xrMkzKegBjrWVvww9jXUdhR1+TYZPT8e2ppBEqXdl+CXAicEWddiSpntKR5Y8Ce1fqRZKqck8GSc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9Ss0p3t3xMRayNiTUSsiIhdazUmSaVKFp05EHg3cFRmHg4sopvsK0mjUPoWdSdgt4jYCVhMN9lXkkahaGR5RJwNXAT8L91U3zdv4To7xMjymvVqjj+H8Y7LnqSx4DXr1RwxDuMdM75DjywH9gK+DuwL7Ax8EXjLtm4z5pHlNetNyrjsSbmfteuN+bk71lqZ8z+y/FXADzPzocx8km6q70sK6klSVSUB92PgmIhYHBFBtzbqujptSVK5wQGXmTcAlwM3A9/va32yUl+SVKx0TYYPAh+s1IskVeWeDJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWpW6cjys/tx5Wsj4pxKPUlSFSUjyw8H/hI4GnghcEpEPLdWY5JUquQV3POBGzLzscx8CvgGcGqdtiSpXEnArQFeHhF7R8Ri4GTgoDptSVK50jUZzgLeDjwKrAUez8xzNrvOxK3JULu3mms8TMq6B7V7q7mOwqQ8d3foNRk2PwB/C7x9W9eZlDUZ
avc21rUKxrzuQe3exvz8GGtvY1iToWjgZUTsl5kbIuJguu1vx5TUk6SaigIO+EJE7A08CbwjM39R3pIk1VE6svzltRqRpNrck0FSsww4Sc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnN2m7ARcSnI2JDRKyZdt6zImJVRNzZ/9xrbtuUpNmbySu4zwAnbXbeucDXMvMw4Gv9aUkale0GXGZeD/xss7OXA5f0xy8BXl+3LUkqN3Qb3P6ZeX9//AFg/0r9SFI1MxpZHhGHAFdn5uH96V9k5p7TLv95Zm5xO5wjy8s5snxha4Ejyxe6FszhyHLgEGDNtNPrgQP64wcA62dSx5Hlw4x1lLcjy4eZlOfuGEaWD32LehVwRn/8DOBLA+tI0pyZyddEVgDfApZFxFS/ktbfAydGxJ3Aq/rTkjQq2x1Znpmnb+WiEyr3IklVuSeDpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWrW0DUZ3hQRayNiY0TMbj6TJM2ToWsyrAFOBa6v3ZAk1TKTaSLX9xN9p5+3DiAi5qgtSSrnNjhJzRq0JsO0868D3p+ZN27jtq7JMKJ6Y13foXa9mmsowOQ8P8ZaC+ZxTYZp518HHDXT+eiuybDw9ca87kHNemN+DCaltx15TQZJGr1BazJExBsiYgo4FvjPiLhmrhuVpNkqWZPhysq9SFJVvkWV1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ss4aOLP9IRNweEbdGxJURseecdilJAwwdWb4KODwzXwDcAZxXuS9JKrbdgMvM64GfbXbeysx8qj/5bWDpHPQmSUVqbIN7K/CVCnUkqarSkeXnA0cBp+ZWCjmyfFz1atdav359lVpQd8z4pDwGteuNtRbM88hy4Ey6QZiLZzo+2JHlC1+vdi0qjiwf6/2sXW9SehvDyPLtDrzckog4CfgA8MeZ+diQGpI01waNLAf+GdgDWBURqyPiE3PcpyTN2tCR5RfPQS+SVJV7MkhqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZg1dk+FD/XoMqyNiZUQ8Z27blKTZG7omw0cy8wWZeQRwNXBB5b4kqdjQNRkennZyCd3AQkkalcEjyyPiIuDPgF8Cx2fmQ1u5rSPLR1TPkeULX29SetuhR5ZPu+w84MKZ1HFk+cLXG2ut2vXsra1amcNGltf4FPVS4I0V6khSVYMCLiIOm3ZyOXB7nXYkqZ7tjizv12R4BbBPREwBHwROjohlwEbgR8BfzWWTkjSEazJIapZ7MkhqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZg0aWT7vsfRGREbHP3LQnScMNHVlORBwEvBr4ceWeJKmKQSPLe/8EfADHlUsaqaHz4JYD92XmLZX7kaRqtjsuaXMRsRj4a7q3pzO5/tNrMgCPb2lb3kD7AD+tVKt2vUnpbVLuZ+16k9Jb7fu5bNa3mMlcc6atyQD8IbABuKc/PEW3He7ZM6gz65nq81HL3ha+lr2No95Yaw2tN+tXcJn5fWC/Tacj4h7gqMysmdSSVGwmXxNZAXwLWBYRUxFx1ty3JUnlho4sn375IbP49z45i+vOZ63a9Salt0m5n7XrTUpvC34/Z7TwsyTtiNxVS1Kz5iXgIuKkiFgfEXdFxLmFtba669iAWgdFxLURcVtErI2Iswvr7RoR34mIW/p6F1bocVFEfC8irq5Q656I+H5ErI6IGwtr7RkRl0fE7RGxLiKOHVhnWd/PpsPDEXFOYW/v6f//10TEiojYtaDW2X2dtUP62tLzNSKeFRGrIuLO/udeBbXe1Pe2MSKOqtDbR/rH9NaIuDIi9iyo9aG+zuqIWBkRzynpbdplM99FtObHuFv5aHcR8APgd4FdgFuA3y+odxxwJP3XVgp7OwA4sj++B3BHYW8B7N4f3xm4ATimsMf3Ap8Drq5wf+8B9qn0uF4C/EV/fBdgz0rPlQeA3ymocSDwQ2C3/vRlwJkDax0OrAEW022v/i/gubOs8RvPV+AfgHP74+cCHy6o9Xy674ddR/dthtLeXg3s1B//cGFvvz3t+LuBT5T01p9/EHAN3XrM230uz8cruKOBuzLz7sx8Avg8sHxosdz6rmNDat2fmTf3x38FrKP7BRlaLzPzkf7kzv1h8EbOiFgKvBb41NAacyEinkn3BLwYIDOfyMxfVCh9AvCDzPxRYZ2dgN0iYie6cPrJwDrPB27IzMcy8yngG8Cpsymwlefrcro/EPQ/Xz+0Vmauy8z1s+lpO/VW9vcV4NvA0oJaD087uYRZ/C5s4/d8VruIzkfAHQjcO+30FAUhMlci4hDgRXSvukrqLIqI1XRfhl6VmSX1Pkb3YG4s6WmaBFZGxE39HiZDHQo8BPxr//b5UxGxpEJ/pwErSgpk5n3AR+m+fH4/8MvMXDmw3Brg5RGxd78Hz8l0ryBK7Z+Z9/fHHwD2r1BzLrwV+EpJgYi4KCLuBd4MXFBYa9a7iPohAxARuwNfAM7Z7K/OrGXmrzPzCLq/fEdHxOEDezoF2JCZN5X0s5mXZeaRwGuAd0TEcQPr7ET39uFfMvNFwKN0b7UGi4hdgNcB/15YZy+6V0iHAs8BlkTEW4bUysx1dG/TVgJfBVYDvy7pbwv/RjLCgRURcT7dXkqXltTJzPMz86C+zjsL+tm0i+isQnI+Au4+/v9fvaX9eaMQETvThdulmXlFrbr9W7Zr2cKoqRl6KfC6fk+RzwOvjIjPFvZ0X/9zA3Al3eaDIaaAqWmvTi+nC7wSrwFuzswHC+u8CvhhZj6UmU8CVwAvGVosMy/OzBdn5nHAz+m205Z6MCIOAOh/bqhQs5qIOBM4BXhzH8A1XAq8seD2v0f3R+uW/ndiKXBzRDx7Wzeaj4D7LnBYRBza/5U+DbhqHv7d7YqIoNuOtC4z/7FCvX03feoUEbsBJwK3D6mVmedl5tLsvkh9GvD1zBz0SqTvZ0lE7LHpON3G5EGfRGfmA8C9EbFp5+cTgNuG9tY7ncK3p70fA8dExOL+8T2BbtvqIBGxX//zYLrtb5+r0ONVwBn98TOAL1WoWUVEnES3WeR1mflYYa3Dpp1czsDfBeh2Ec3M/TLzkP53YoruA8IHtnfDOT/Qbbu4g+7T1PMLa62g27byZH8nzyqo9TK6twe30r39WA2cXFDvBcD3+nprgAsq/f+9gsJPUek+xb6lP6yt8DgcAdzY39cvAnsV1FoC/A/wzEr/XxfS/TKtAf4N+K2CWv9NF963ACcMuP1vPF+BvYGvAXfSfTL7rIJab+iPPw48CFxT2NtddNvMN/0+zOiTz63U+kL/GNwK/AdwYElvm11+DzP4FNU9GSQ1yw8ZJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc36P/R+G7OqnZ0oAAAAAElFTkSuQmCC\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "pair_index = -1\n", - "\n", - "# Get the response tuple as a single variable\n", - "response = get_training_data(clean_data_df, pair_index)\n", - "\n", - "# Then unpack the tuple\n", - "(shape, code, uniform, training_img) = response\n", - "\n", - "print(shape, code, uniform)\n", - "zoom_img(training_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In summary, we can now:\n", - " \n", - "- grab the greyscale training image;\n", - "- find the median greyscale value;\n", - "- try to decode that value to a shape label / code;\n", - "- return the shape label and code associated with that greyscale image, along with an indicator of whether the image is in view via the `uniform` training image array flag;\n", - "- label the corresponding shape image with the appropriate label." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.2 Real time data collection\n", - "\n", - "In this section, you will start to explore how to collect data in real time as the robot drives over the images, rather than being teleported directly on top of them." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.2.1 Identifying when the robot is over a pattern in real time\n", - "\n", - "If we want to collect data from the robot as it drives slowly over the images we need to be able to identify when it is passing over the images so we can trigger the image sampling.\n", - "\n", - "The following program will slow drive over the test patterns, logging the reflected light sensor values every so often. Start the program using the simulator *Run* button or the simulator `R` keyboard shortcut.\n", - "\n", - "From the traces on the simulator chart, can you identify when the robot passes over the images?" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Record your observations here.*" - ] - }, - { - "cell_type": "code", - "execution_count": 24, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 900 -OAc\n", - "\n", - "say(\"On my way..\")\n", - "\n", - "# Start driving forwards slowly\n", - "tank_drive.on(SpeedPercent(10), SpeedPercent(10))\n", - "\n", - "count = 1\n", - "\n", - "# Drive forward no further than a specified distance\n", - "while int(tank_drive.left_motor.position)<1500:\n", - " \n", - " left_light = colorLeft.reflected_light_intensity_pc\n", - " right_light = colorRight.reflected_light_intensity_pc\n", - " \n", - " # report every fifth pass of the loop\n", - " if not (count % 5):\n", - " print('Light_left: ' + str(left_light))\n", - " print('Light_right: ' + str(right_light))\n", - "\n", - " count = count + 1\n", - "\n", - "say('All done')" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Based on your observations, describe a strategy you might use to capture image sample data when the test images are largely in view.*" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "activity": true - }, - "source": [ - "### 3.2.2 Challenge — capturing image data in real time (optional)\n", - "\n", - "Using your observations regarding the reflected light sensor values as the robot crosses the images, or otherwise, write a program to collect image data from the simulator in real time as the robot drives over them." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Describe your program strategy and record your program design notes here.*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "student": true - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.2.3 Capturing image data in real time\n", - "\n", - "By observation of the reflected light sensor data in the chart, the robot appears to be over the a shape, as the reflected light sensor values drop below about 85%.\n", - "\n", - "From the chart, we might also notice that the training label image (encoded as the solid grey square presented to the right hand sensor) gives distinct readings for each shape.\n", - "\n", - "We can therefore use a drop in the reflected light sensor value to trigger the collection of the image data.\n", - "\n", - "First, let's clear the datalog:" - ] - }, - { - "cell_type": "code", - "execution_count": 25, - "metadata": {}, - "outputs": [], - "source": [ - "# Clear the datalog to give us a fresh start\n", - "roboSim.clear_datalog()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we can write a program to drive the robot forwards slowly and collect the image data when it is over an image:" - ] - }, - { - "cell_type": "code", - "execution_count": 26, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 900 -OAR\n", - "\n", - "say(\"Getting started.\")\n", - " \n", - "# Start driving forwards slowly\n", - "tank_drive.on(SpeedPercent(10), SpeedPercent(10))\n", - "\n", - "# Drive forward no futher than a specified distance\n", - "while int(tank_drive.left_motor.position)<1200:\n", - " \n", - " # Sample the right sensor\n", - " sample = colorRight.reflected_light_intensity_pc\n", - " # If we seem to be over a test label,\n", - " # grab the image data into the datalof\n", - " if sample < 85:\n", - " print(\"image_data both\")\n", - "\n", - "say(\"All done.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "If we review the the images in the datalog, we should see they all contain a fragment at least of the image data (this may take a few moments to run). 
The following code cell grabs images where the `uniform` flag is set on the encoded label image and adds those training samples to a list (`training_images`):" - ] - }, - { - "cell_type": "code", - "execution_count": 27, - "metadata": {}, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "5628a33c30a5438fab7ae001b057e298", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "HBox(children=(HTML(value=''), FloatProgress(value=0.0, max=152.0), HTML(value='')))" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAH0lEQVR4nGP8z4AMmBhI4LJAKEYI9Z8kvSRxGUlxJACRZgMbVprLxAAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAH0lEQVR4nGP8z4AMmBhI4LJAKEYI9Z8kvSRxGUlxJACRZgMbVprLxAAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAH0lEQVR4nGP8z4AMmBhI4LJAKEYI9Z8kvSRxGUlxJACRZgMbVprLxAAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIklEQVR4nGP8z4AMmBhI4LIwMDAwMELY/0nUSxKXkRRHAgCTZgMbUGwI3QAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIUlEQVR4nGP8z4AMmBhI4LIwMDAwQpj/SdVLEpeRFEcCAJRmAxszDo5DAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAIUlEQVR4nGP8z4AMmBhI4LIwMDAwQpj/SdVLEpeRFEcCAJRmAxszDo5DAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAH0lEQVR4nGP8z4AMmBhI4LIwMEIY/0nXSxKXkRRHAgCWZgMb28QbEgAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'square'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAH0lEQVR4nGP8z4AMmBhI4LIwMEIY/0nXSxKXkRRHAgCWZgMb28QbEgAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - 
}, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nG2PQQ4AIAzCqPH/X8aThhF3a2DNhpWz9EMqhSpTu1AqysxED7R2Qqos6aUeV91HDtxzByHbNPJrAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nG2PQQ4AIAzCqPH/X8aThhF3a2DNhpWz9EMqhSpTu1AqysxED7R2Qqos6aUeV91HDtxzByHbNPJrAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nGWPQQ4AMAjCYPH/X2aHmYjMW1MkSsHn4EeGJSPM2H0Bq2I0kyhDLStgrFZz31oObeeNC+mHByQEAyZUAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nGWPQQ4AMAjCYPH/X2aHmYjMW1MkSsHn4EeGJSPM2H0Bq2I0kyhDLStgrFZz31oObeeNC+mHByQEAyZUAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOElEQVR4nF2OQQoAMAzCzOj/v9wdxqTVW4iItGaOEgkLUWajC54ilkFSGXvZd6cmfOunNUE6i3QB9Y0HJMya6TkAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'left facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOklEQVR4nHXOQQoAMAgDwU3x/19OD6USC/U2RDEyOYsvBZWIVECnGrdXVOKk9iB4shf6kZ9W9lPSsAGIRQoeHKYQZgAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'left facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOklEQVR4nHXOQQoAMAgDwU3x/19OD6USC/U2RDEyOYsvBZWIVECnGrdXVOKk9iB4shf6kZ9W9lPSsAGIRQoeHKYQZgAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'left facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOklEQVR4nHXOMQ4AIAhD0Q/h/leug6hlkO2lJRDCJ/kwAMpx09ZO427kEOWARBoETVqhD2nyFN6TAliaPwoe5fTkgAAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'left facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nG2OQQ4AIAjDysL/v4wHRUciJwpjLAov8ccAIB1620Q6gAZZXwCq8VhnbM4m0LuzVC24ITcvAEkLGdO4FGsAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'left facing 
triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nG2OQQ4AIAjDysL/v4wHRUciJwpjLAov8ccAIB1620Q6gAZZXwCq8VhnbM4m0LuzVC24ITcvAEkLGdO4FGsAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'downwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANElEQVR4nGP8z4AMmBhI4LIwMDAwosoiTPuPTS9M+j/UqP8wErtF/6EYhzP+Q/TDZSGmAQDPdAwTiggo+AAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'downwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nJWOuREAMAjDhC/7r+xUPAUpQqfDFoSZIz7wEA1GeNDSdS8RyX4dckXWN5w+tRXgAm3mDRL9IN9jAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'downwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANklEQVR4nJWOuREAMAjDhC/7r+xUPAUpQqfDFoSZIz7wEA1GeNDSdS8RyX4dckXWN5w+tRXgAm3mDRL9IN9jAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'downwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAMUlEQVR4nGP8z4AMmBhI4DIimP9RZP9j0/sfWZKBCUr/x2XRf4QkNmf8h0tCZBHeAAD88AwPIugKGwAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAALUlEQVR4nGP8z4AMmBhwcBlRuIwMjLgVM0IwdllGKIlVlhFmAjZZRjiXEa+bAf2EAiPBfIMLAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAALUlEQVR4nGP8z4AMmBhwcBlRuIwMjLgVM0IwdllGKIlVlhFmAjZZRjiXEa+bAf2EAiPBfIMLAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAMElEQVR4nGP8z4AMmBiwchlRuIxQPlbFjDDl2GQZ4UqwyDIiLMOUZYRzGBkY8boZAP24AiTbCsDqAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAMUlEQVR4nGP8z4AMmBiwcRlRuIwwPjbFjHDlWGQZEaZhysIsYcQmywhnMzIwMOJ1MwD9wwIkgzyx7gAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": 
[ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAALUlEQVR4nGP8z4AMmBgwuYwoXEYEH1MxI5JyDFmIBCNWWYQd/3E5A6GOEa+bAT2jAx+7u6KjAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'upwards facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAALUlEQVR4nGP8z4AMmBgwuYwoXEYEH1MxI5JyDFmIBCNWWYQd/3E5A6GOEa+bAT2jAx+7u6KjAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nHWOUQoAMAhC1fvf2X1Ua0ELEh4lSuMdYUcOZLD6xp+XpWoCGFjZzmeX7jWcq2vzKBn2A32YChvzQ46EAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nHWOUQoAMAhC1fvf2X1Ua0ELEh4lSuMdYUcOZLD6xp+XpWoCGFjZzmeX7jWcq2vzKBn2A32YChvzQ46EAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOElEQVR4nHWOSQ4AMAgChf//mR4qWhPrbSIbFO8xNsRAmOkffl60nR2Di25WilW0zlBJmEKNkY47j5gKG64KO3cAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOElEQVR4nHWOSQ4AMAgChf//mR4qWhPrbSIbFO8xNsRAmOkffl60nR2Di25WilW0zlBJmEKNkY47j5gKG64KO3cAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nHWPQQ4AIAjDyv7/53nAOIyRC2k6CJSZJV6sCyusdjXDc7bF1tliAHkS2t2/MxyJAB/qcN5Y/wgLF+dLDhUAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nHWPQQ4AIAjDyv7/53nAOIyRC2k6CJSZJV6sCyusdjXDc7bF1tliAHkS2t2/MxyJAB/qcN5Y/wgLF+dLDhUAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANUlEQVR4nGXOsQ0AMAgDwbf339mpophAd7yQUOgxixpU2aj7d3v3mVWTt6a130hFDHnC3eAAvrkKF1muikwAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "'diamond'" - ] - }, - "metadata": {}, - 
"output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANUlEQVR4nGXOsQ0AMAgDwbf339mpophAd7yQUOgxixpU2aj7d3v3mVWTt6a130hFDHnC3eAAvrkKF1muikwAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n" - ] - } - ], - "source": [ - "training_images = []\n", - "\n", - "for i in trange(int(len(roboSim.image_data())/2)):\n", - " \n", - " response = get_training_data(roboSim.image_data(), i)\n", - " \n", - " (shape, code, uniform, training_img) = response\n", - " \n", - " # Likely shape\n", - " if uniform:\n", - " display(shape, training_img)\n", - " training_images.append((shape, code, training_img))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Record your own observations here about how \"clean\" the captured training images are.*" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can cast the list of training images in the convenient form of a *pandas* dataframe:" - ] - }, - { - "cell_type": "code", - "execution_count": 28, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
shapecodeimage
0square0<PIL.Image.Image image mode=L size=14x14 at 0x...
1square0<PIL.Image.Image image mode=L size=14x14 at 0x...
2square0<PIL.Image.Image image mode=L size=14x14 at 0x...
\n", - "
" - ], - "text/plain": [ - " shape code image\n", - "0 square 0 " - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "from nn_tools.network_views import quick_progress_tracked_training\n", - "\n", - "\n", - "# Specify some parameters\n", - "hidden_layer_sizes = (40)\n", - "max_iterations = 500\n", - "\n", - "\n", - "# Create a new MLP\n", - "MLP = quick_progress_tracked_training(training_images, training_labels,\n", - " hidden_layer_sizes=hidden_layer_sizes,\n", - " max_iterations=max_iterations,\n", - " report=True,\n", - " jiggled=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can use the following code cell to randomly select images from the training samples and test the network:" - ] - }, - { - "cell_type": "code", - "execution_count": 31, - "metadata": {}, - "outputs": [ - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAO0lEQVR4nG2OQQ4AIAjDysL/v4wHRUciJwpjLAov8ccAIB1620Q6gAZZXwCq8VhnbM4m0LuzVC24ITcvAEkLGdO4FGsAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "MLP predicts 2 compared to label 2; classification is True\n" - ] - } - ], - "source": [ - "from nn_tools.network_views import predict_and_report_from_image\n", - "import random\n", - "\n", - "sample = random.randint(0, len(training_images))\n", - "test_image = training_images[sample]\n", - "test_label = training_labels[sample]\n", - "\n", - "predict_and_report_from_image(MLP, test_image, test_label)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "student": true - }, - "source": [ - "*Record your observations about how well the network performs.*" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.3 Testing the network on a new set of collected data\n", - "\n", - "Let's collect some data again by driving the robot over a second, slightly shorter test track at `y=700` to see if we can recognise the images." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "There are no encoded training label images in this track, so we will either have to rely on just the reflected light sensor value to capture legitimate images for us, or we will need to preprocess the images to discard ones that are only partial image captures." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.3.1 Collecting the test data\n", - "\n", - "The following program will stop as soon as the reflected light value from the left sensor drops below 85. How much of the image can we see?" 
- ] - }, - { - "cell_type": "code", - "execution_count": 32, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 700 -OAR\n", - "\n", - "say('Starting')\n", - "# Start driving forwards slowly\n", - "tank_drive.on(SpeedPercent(5), SpeedPercent(5))\n", - "\n", - "# Sample the left sensor\n", - "sample = colorLeft.reflected_light_intensity_pc\n", - " \n", - "# Drive forward no futher than a specified distance\n", - "while sample>85:\n", - " sample = colorLeft.reflected_light_intensity_pc\n", - "\n", - "say(\"All done.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "That's perhaps a bit optimistic for a sensible attempt at image recognition.\n", - "\n", - "However, recalling that black pixel count for the training images ranged from 49 for the square to 60 for one of the equilateral triangles, we could tag an image as likely to contain a potentially recognisable image if its black pixel count exceeds 45.\n", - "\n", - "To give us some data to work with, let's collect samples for the new test set at `y=700`. First clear the datalog:" - ] - }, - { - "cell_type": "code", - "execution_count": 33, - "metadata": {}, - "outputs": [], - "source": [ - "# Clear the datalog to give us a fresh start\n", - "roboSim.clear_datalog()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "And then grab the data:" - ] - }, - { - "cell_type": "code", - "execution_count": 34, - "metadata": {}, - "outputs": [], - "source": [ - "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 700 -OAR\n", - "\n", - "say(\"Starting\")\n", - "\n", - "# Start driving forwards slowly\n", - "tank_drive.on(SpeedPercent(5), SpeedPercent(5))\n", - "\n", - "# Drive forward no futher than a specified distance\n", - "while int(tank_drive.left_motor.position)<800:\n", - " \n", - " # Sample the right sensor\n", - " sample = colorLeft.reflected_light_intensity_pc\n", - " # If we seem to be over a test label,\n", - " # grab the image data into the datalog\n", - " if sample < 85:\n", - " print(\"image_data both\")\n", - "\n", - "say(\"All done.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.3.2 Generating the test set\n", - "\n", - "We can now generate a clean test set of images based on a minimum required number of black pixels. The following function grabs the test images and also counts the black pixels in the left image." - ] - }, - { - "cell_type": "code", - "execution_count": 35, - "metadata": {}, - "outputs": [], - "source": [ - "def get_test_data(raw_df, pair_index):\n", - " \"\"\"Get test image and label from raw data frame.\"\"\"\n", - " \n", - " # Get the left and right images\n", - " # at specified pair index\n", - " left_img, right_img = get_sensor_image_pair(raw_df,\n", - " pair_index)\n", - " \n", - " # Get the pixel count\n", - " left_pixel_cnt = pd.Series(list(left_img.getdata())).value_counts()\n", - " count = left_pixel_cnt[0] if 0 in left_pixel_cnt else 0\n", - " \n", - " return (count, left_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The following cell creates a filtered list of potentially recognisable images. You may recall seeing a similarly structured code fragment previously when we used the `uniform` flag to select the images. 
However, in this case, we only save an image to a list if we see the black pixel count decreasing.\n", - "\n", - "Having got a candidate image, the `crop_and_pad_to_fit()` function crops it and tries to place it in the centre of the image array." - ] - }, - { - "cell_type": "code", - "execution_count": 36, - "metadata": {}, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "0b84d20a18b94a21b010d472163d5d53", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "HBox(children=(HTML(value=''), FloatProgress(value=0.0, max=132.0), HTML(value='')))" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "51" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAL0lEQVR4nH2OMRIAAAzB8P8/6xpLTVW5wxUVfdZSN/WmAhA8eZS2A5cwmrK9fjcfcswJD7tgqgAAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "---\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAAkAAAALCAAAAACIMvjHAAAALUlEQVR4nGNk+M8AAYwMDFA2E5QHIxn+Q8QQAqic/zDZ/1Cx/zBT/kPVwewAAJO1CAk16KE7AAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "51" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAANUlEQVR4nGXOsQ0AMAgDwbf339mpophAd7yQUOgxixpU2aj7d3v3mVWTt6a130hFDHnC3eAAvrkKF1muikwAAAAASUVORK5CYII=\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "---\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAAkAAAALCAAAAACIMvjHAAAAMElEQVR4nF3MMQ7AMBACweH+/2dcnIvENKBFkCLFILIpyLKrPv5jc2M/i9m++1LFASKQDAkJrC/xAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "49" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAHElEQVR4nGP8z4AMmBhI4DJCqP/YZanIZSTFkQCWDgMX9MMWOgAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "---\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAAcAAAAHCAAAAADhOQgPAAAADElEQVR4nGNgIA8AAAA4AAGPBdpAAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/plain": [ - "51" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAAAAAA6I3INAAAAOUlEQVR4nG3OQQoAMAgDwVX8/5ftQSum1IsMBowle5wvrVZs9HVEbICLcBH+/k0lmUJWoFpN4JZsH1HACR05e6fSAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "---\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAAkAAAALCAAAAACIMvjHAAAALklEQVR4nE2MQQ4AMAjCqvH/X2YHwYxLGxIosakxoSOMCR35o4wbdOZ6X3SGxAM/LwkSu1MLwwAAAABJRU5ErkJggg==\n", - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n" - ] - 
}, - { - "data": { - "text/plain": [ - "[,\n", - " ,\n", - " ,\n", - " ]" - ] - }, - "execution_count": 36, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from nn_tools.sensor_data import crop_and_pad_to_fit\n", - "\n", - "test_images = []\n", - "possible_img = None\n", - "possible_count = 0\n", - "\n", - "for i in trange(int(len(roboSim.image_data())/2)):\n", - " (count, left_img) = get_test_data(roboSim.image_data(), i)\n", - " # On the way in to a shape, we have\n", - " # an increasing black pixel count\n", - " if count and count >= possible_count:\n", - " possible_img = left_img\n", - " possible_count = count\n", - " # We're perhaps now on the way out...\n", - " # Do we have a possible shape?\n", - " elif possible_img is not None and possible_count > 45:\n", - " display(possible_count, left_img)\n", - " print('---')\n", - " possible_img = crop_and_pad_to_fit(possible_img)\n", - " test_images.append(possible_img)\n", - " possible_img = None\n", - " # We have now gone passed the image\n", - " elif count < 35:\n", - " possible_count = 0\n", - " \n", - "test_images" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.3.3 Testing the data\n", - "\n", - "Having got our images, we can now try to test them with the MLP.\n", - "\n", - "Recall that the `codemap` dictionary maps from code values to shape name:" - ] - }, - { - "cell_type": "code", - "execution_count": 37, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'right facing triangle'" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAATgAAAEzCAYAAACluB+pAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAATxklEQVR4nO3de4xmdX3H8ffXBQq7UEEuK7LQpRUntlQRCQUvVEQMInEVawLRFiqtabyBlxgoCZYY2lpNq0mTGiNWUnENRVBKVXarIDVRFHDBXZabiDIILNYLAg2w7rd/nLNkuu5l5vx+M3P2N+9X8mSe63e/z87MZ57nnOd8f5GZSFKLnjHfDUjSbDHgJDXLgJPULANOUrMMOEnNMuAkNaso4CLipIi4IyLujohzazUlSTXE0M/BRcQi4E7gRGAS+C5wembeVq89SRqu5BXc0cDdmXlPZj4JfB5YUactSSpXEnAHAfdNuTzZXydJo7DLbP8DEfE24G0Au++++4sPOeSQKnU3bdrEM55Rbx9JzXoLpbeF8jxr11sovdV+nnfeeedPM3P/GT0oMwedgGOBa6ZcPg84b3uPed7znpe1XHvttdVq1a63UHpbKM+zdr2F0lvt5wncmDPMqZJ4/S5wWEQcGhG7AacBVxXUk6SqBr9FzcyNEfFO4BpgEfDpzFxXrTNJKlS0DS4zvwx8uVIvklSVRzJIapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGaVrsnw6YjYEBFrazUkSbWUvoL7DHBShT4kqbqigMvM64GfVepFkqoavKrW0wUilgNXZ+bh27j96ZHl+++//4svu+yyon9vs0cffZQ999yzSi2Ahx56iMnJySq1JiYmqvZW87mOtVbtevbWVi2A448//qbMPGpGD5rpCOAtT8ByYO107jvmkeUf/ehHE6hyciT1/Nezt7ZqZc79yHJJGjUDTlKzSj8mshL4FjAREZMRcVadtiSpXOmaDKfXakSSavMtqqRmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpo1OOAi4uCIuDYibouIdRFxds3GJKlUybGoG4H3ZebNEbEXcFNErM7M2yr1JklFBr+Cy8wHMvPm/vyvgPXAQbUak6RSVbbB9WPLXwTcUKOeJNVQY02GPYFvABdl5hVbuX3BrcmwbNmyarVq16u5XsRCWVugdr2F0ttOvyYDsCtwDfDe6dx/oazJULNW7Xpjnrlvb/Nfb6y1Mud4TYaICOBiYH1m/uPQOpI0W0q2wb0U+FPglRGxpj+dXKkvSSo2+GMimflNICr2IklVeSSDpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmlVysP3uEfGdiLilH1l+Yc3GJKlUycjyJ4BXZuajEbEr8M2I+EpmfrtSb5JUpORg+wQe7S/u2p/KpmdKUkVF2+AiYlFErAE2AKsz05HlkkajeGQ5QETsDVwJvCsz125xmyPLR1Svdq2lS5dWqQXjHpdtb/NbC+ZhZPnUE3AB8P7t3ceR5fNfr3atmsY8Ltve5rdW5tyPLN+/f+VGROwBnAjcPrSeJNVWshf1QOCSiFhEty3vssy8uk5bklSuZC/qrXRroUrSKHkkg6RmGXCSmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElqVnHA9UMvvxcRHmgvaVRqvII7G1hfoY4kVVU6snwZ8FrgU3XakaR6Sl/BfQz4ALCpvBVJqmvwmgwRcQpwcma+PSJeQTeu/JSt3M81GUZUb6y1atebmJhYEOse1K431lowx2syAH8HTAL3Ag8CjwOf3d5jXJNh/uuNtVbtegtl3YPa9cZaK3OO12TIz
PMyc1lmLgdOA76emW8ZWk+SavNzcJKaVbLozNMy8zrguhq1JKkWX8FJapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWUWHakXEvcCvgF8DG3Omo0wkaRbVOBb1+Mz8aYU6klSVb1ElNas04BJYFRE39ZN7JWk0Bo8sB4iIgzLz/og4AFgNvCszr9/iPo4sH1G9sdaqXa92bzVHoDuyfJg5HVm+5Qn4G7p1GRxZPuJR3mOtNfbexjzKe6y97dQjyyNiSUTstfk88Gpg7dB6klRbyV7UpcCVEbG5zucy86tVupKkCgYHXGbeA7ywYi+SVJUfE5HULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzDDhJzSoKuIjYOyIuj4jbI2J9RBxbqzFJKlW6JsPHga9m5p9ExG7A4go9SVIVgwMuIp4JHAecCZCZTwJP1mlLksoNHlkeEUcAnwRuoxubdBNwdmY+tsX9HFk+onpjrVW7niPL26oFczyyHDgK2Aj8UX/548CHtvcYR5bPf72x1rK34aexjhnfqUeWA5PAZGbe0F++HDiyoJ4kVTU44DLzQeC+iJjorzqB7u2qJI1C6V7UdwGX9ntQ7wH+vLwlSaqjKOAycw3dtjhJGh2PZJDULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzBgdcRExExJopp0ci4pyKvUlSkcHHombmHcARABGxCLgfuLJOW5JUrtZb1BOAH2TmjyrVk6RitQLuNGBlpVqSVMXgNRmeLtDNgvsJ8AeZ+dBWbndNhhHVG2ut2vVck6GtWjDHazJsPgErgFXTua9rMsx/vbHWGntvY16rYKy97exrMmx2Or49lTRCpSvbLwFOBK6o044k1VM6svwxYN9KvUhSVR7JIKlZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWQacpGYZcJKaVXqw/XsiYl1ErI2IlRGxe63GJKlUyaIzBwHvBo7KzMOBRXSTfSVpFErfou4C7BERuwCL6Sb7StIoFI0sj4izgYuA/6Wb6vvmrdzHkeUjqjfWWrXr1RwxDuMe5T3W3nbqkeXAPsDXgf2BXYEvAm/Z3mMcWT7/9cZaq3a9hTIWvHa9sdbKnPuR5a8CfpiZD2fmU3RTfV9SUE+SqioJuB8Dx0TE4ogIurVR19dpS5LKDQ64zLwBuBy4Gfh+X+uTlfqSpGKlazJ8EPhgpV4kqSqPZJDULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1y4CT1CwDTlKzSkeWn92PK18XEedU6kmSqigZWX448JfA0cALgVMi4rm1GpOkUiWv4J4P3JCZj2fmRuAbwKl12pKkciUBtxZ4eUTsGxGLgZOBg+u0JUnlStdkOAt4O/AYsA54IjPP2eI+rskwonq1ay1durRKLRj3egD2Nr+1YI7XZNjyBPwt8Pbt3cc1Gea/Xu1aNY15PQB7m99amcPWZCgaeBkRB2Tmhog4hG772zEl9SSppqKAA74QEfsCTwHvyMxflLckSXWUjix/ea1GJKk2j2SQ1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ss3YYcBHx6YjYEBFrp1z3rIhYHRF39V/3md02JWnmpvMK7jPASVtcdy7wtcw8DPhaf1mSRmWHAZeZ1wM/2+LqFcAl/flLgNfXbUuSyg3dBrc0Mx/ozz8I1BvrKkmVTGtkeUQsB67OzMP7y7/IzL2n3P7zzNzqdjhHlo+r3sTExGhHUo95XLa9zW8tmMWR5cByYO2Uy3cAB/bnDwTumE4dR5bPf70xj6S2t/mvN9ZamcNGlg99i3oVcEZ//gzgSwPrSNKsmc7HRFYC3wImImKyX0nr74ETI+Iu4FX9ZUkalR2OLM/M07dx0wmVe5GkqjySQVKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9QsA05Ssww4Sc0y4CQ1a+iaDG+KiHURsSkiZjafSZLmyNA1GdYCpwLX125IkmqZzjSR6/uJvlOvWw8QEbPUliSVcxucpGYNWpNhyvXXAe/PzBu389gFtyZDzXUPYLxz8hfK2gK16y2U3nbaNRmmXH8dcNR056MvlDUZnLk///Xsra1amXO7JoMkjd6gNRki4g0RMQkcC/xnRFwz241K0kyVrMlwZeVeJKkq36JKapYBJ6lZBpykZhlwkpplwElqlgEnqVkGnKRmGXCSmmXASWqWASepWUNHln8kIm6PiFsj4sqI2HtWu5SkAYaOLF8NHJ6ZLwDuBM6r3JckFdthwGXm9cDPtrhuVWZu7C9+G1g2C71JUpEa2+DeCnylQh1Jqqp0ZPn5wFHAqbmNQjvLyPIxj2oea28L5XnWrrdQetupR5YDZ9INwlw83fHBYx5ZPuZRzWPtbaE8z9r1FkpvYxhZvsOBl1sTEScBHwD+ODMfH1JDkmbboJHlwD8DewGrI2JNRHxilvuUpBkbOrL84lnoRZKq8kgGSc0y4CQ1y4CT1CwDTlKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9SsoWsyfKhfj2FNRKyKiOfMbpuSNHND12T4SGa+IDOPAK4GLqjclyQVG7omwyNTLi4BdjwWWJLm2OCR5RFxEfBnwC+B4zPz4W081pHlI6o31lq169lbW7VgjkeWT7ntPODC6dRxZPn81xtrrdr17K2tWpnDRpbX2It6KfDGCnUkqapBARcRh025uAK4vU47klTPDkeW92syvALYLyImgQ8CJ0fEBLAJ+BHwV7PZpCQN4ZoMkprlkQySmmXASWqWASepWQacpGYZcJKaZcBJapYBJ6lZBpykZhlwkpplwElq1qCR5VNue19EZETsNzvtSdJwQ0eWExEHA68Gfly5J0mqYtDI8t4/AR/AceWSRmroPLgVwP2ZeUvlfiSpmh2OS9pSRCwG/pru7el07v/0mgzAE1vbljfQfsBPK9WqXW+h9LZQnmftegult9rPc2LGj5jOXHOmrMkA/CGwAbi3P22k2w737GnUmfFM9bmoZW/zX8vexlFvrLWG1pvxK7jM/D5wwObLEXEvcFRm1kxqSSo2nY+JrAS+BUxExGREnDX7bUlSuaEjy6fevnwG/94nZ3DfuaxVu95C6W2hPM/a9RZKb/P+PKe18LMk7Yw8VEtSs+Yk4CLipIi4IyLujohzC2tt89CxAbUOjohrI+K2iFgXEWcX1ts9Ir4TEbf09S6s0OOiiPheRFxdoda9EfH9iFgTETcW1to7Ii6PiNsjYn1EHDuwzkTfz+bTIxFxTmFv7+n//9dGxMqI2L2g1tl9nXVD+traz2tEPCsiVkfEXf3XfQpqvanvbVNEHFWht4/039NbI+LKiNi7oNaH+jprImJVRDynpLcpt03/ENGau3G3sWt3EfAD4HeB3YBbgN8vqHcccCT9x1YKezsQOLI/vxdwZ2FvAezZn98VuAE4prDH9wKfA66u8HzvBfar
9H29BPiL/vxuwN6VflYeBH6noMZBwA+BPfrLlwFnDqx1OLAWWEy3vfq/gOfOsMZv/LwC/wCc258/F/hwQa3n030+7Dq6TzOU9vZqYJf+/IcLe/vtKeffDXyipLf++oOBa+jWY97hz/JcvII7Grg7M+/JzCeBzwMrhhbLbR86NqTWA5l5c3/+V8B6ul+QofUyMx/tL+7anwZv5IyIZcBrgU8NrTEbIuKZdD+AFwNk5pOZ+YsKpU8AfpCZPyqsswuwR0TsQhdOPxlY5/nADZn5eGZuBL4BnDqTAtv4eV1B9weC/uvrh9bKzPWZecdMetpBvVX9cwX4NrCsoNYjUy4uYQa/C9v5PZ/RIaJzEXAHAfdNuTxJQYjMlohYDryI7lVXSZ1FEbGG7sPQqzOzpN7H6L6Zm0p6miKBVRFxU3+EyVCHAg8D/9q/ff5URCyp0N9pwMqSApl5P/BRug+fPwD8MjNXDSy3Fnh5ROzbH8FzMt0riFJLM/OB/vyDwNIKNWfDW4GvlBSIiIsi4j7gzcAFhbVmfIioOxmAiNgT+AJwzhZ/dWYsM3+dmUfQ/eU7OiIOH9jTKcCGzLyppJ8tvCwzjwReA7wjIo4bWGcXurcP/5KZLwIeo3urNVhE7Aa8Dvj3wjr70L1COhR4DrAkIt4ypFZmrqd7m7YK+CqwBvh1SX9b+TeSEQ6siIjz6Y5SurSkTmaen5kH93XeWdDP5kNEZxSScxFw9/P//+ot668bhYjYlS7cLs3MK2rV7d+yXctWRk1N00uB1/VHinweeGVEfLawp/v7rxuAK+k2HwwxCUxOeXV6OV3glXgNcHNmPlRY51XADzPz4cx8CrgCeMnQYpl5cWa+ODOPA35Ot5221EMRcSBA/3VDhZrVRMSZwCnAm/sAruFS4I0Fj/89uj9at/S/E8uAmyPi2dt70FwE3HeBwyLi0P6v9GnAVXPw7+5QRATddqT1mfmPFertv3mvU0TsAZwI3D6kVmael5nLsvsg9WnA1zNz0CuRvp8lEbHX5vN0G5MH7YnOzAeB+yJi88HPJwC3De2tdzqFb097PwaOiYjF/ff3BLptq4NExAH910Potr99rkKPVwFn9OfPAL5UoWYVEXES3WaR12Xm44W1DptycQUDfxegO0Q0Mw/IzOX978Qk3Q7CB3f0wFk/0W27uJNub+r5hbVW0m1beap/kmcV1HoZ3duDW+nefqwBTi6o9wLge329tcAFlf7/XkHhXlS6vdi39Kd1Fb4PRwA39s/1i8A+BbWWAP8DPLPS/9eFdL9Ma4F/A36roNZ/04X3LcAJAx7/Gz+vwL7A14C76PbMPqug1hv6808ADwHXFPZ2N902882/D9Pa87mNWl/ovwe3Av8BHFTS2xa338s09qJ6JIOkZrmTQVKzDDhJzTLgJDXLgJPULANOUrMMOEnNMuAkNcuAk9Ss/wPnPSVDpucuOQAAAABJRU5ErkJggg==\n", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "from nn_tools.network_views import class_predict_from_image\n", - "\n", - "# Random sample from the test images\n", - "sample = random.randint(0, len(test_images)-1)\n", - "\n", - "test_img = test_images[sample]\n", - "\n", - "prediction = class_predict_from_image(MLP, test_img)\n", - "\n", - "# How did we do?\n", - "display(codemap[prediction])\n", - "zoom_img(test_img)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 3.3.4 Save the MLP\n", - "\n", - "Save the MLP so we can use it again:" - ] - }, - { - "cell_type": "code", - "execution_count": 38, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "['mlp_shapes_14x14.joblib']" - ] - }, - "execution_count": 38, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from joblib import dump\n", - "\n", - "dump(MLP, 'mlp_shapes_14x14.joblib') \n", - "\n", - "# Load it back\n", - "#from joblib import load\n", - "\n", - "#MLP = load('mlp_shapes_14x14.joblib')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Summary\n", - "\n", - "In this notebook, you have seen how we can collect data in real time from the simulator by sampling images when the robot detects a change in the reflected light levels.\n", - "\n", - "Using a special test track, with paired shape and encoded label images, we were able to collect a set of shape based training patterns that could be used to train an MLP to recognise the shapes.\n", - "\n", - "Investigation of the shape images revealed that simple black pixel counts and bounding box dimensions did not distinguish between the shapes, so we simply trained the network on the raw images.\n", - "\n", - "Running the robot over a test track without and paired encoded label image, we were still able to detect when the robot was over the image based on the black pixel count of the shape image. On testing the MLP against newly collected and shapes, the neural network was able to correctly classify the collected patterns.\n", - "\n", - "In the next notebook, you will explore how the robot may be able to identify the shapes in real time as part of a multi-agent system working in partnership with a pattern recognising agent running in the notebook Python environment." - ] - } - ], - "metadata": { - "jupytext": { - "cell_metadata_filter": "tags,-all", - "formats": "ipynb,.md//md" - }, - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.8" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/content/08. Remote services and multi-agent systems/08.4 Recognising patterns on the move.ipynb b/content/08. Remote services and multi-agent systems/08.4 Recognising patterns on the move.ipynb new file mode 100644 index 00000000..de1d300c --- /dev/null +++ b/content/08. Remote services and multi-agent systems/08.4 Recognising patterns on the move.ipynb @@ -0,0 +1,1329 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "# 4 Recognising patterns on the move\n", + "\n", + "To be really useful a robot needs to recognise things as it goes along, or ‘on the fly’. 
In this notebook, you will train a neural network to use a simple MLP classifier to try to identify different shapes on the background. The training samples themselves, images *and* training labels, will be captured by the robot from the simulator background.\n", + "\n", + "We will use the two light sensors to collect the data used to train the network:\n", + "\n", + "- one light sensor will capture the shape image data;\n", + "- one light sensor will capture the training class data.\n", + "\n", + "To begin with we will contrive things somewhat to collect the data at specific locations on the background. But then you will explore how we can collect images as the robot moves more naturally within the environment.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [ + "alert-warning" + ] + }, + "source": [ + "*There is quite a lot of provided code in this notebook. You are not necessarily expected to be able to create this sort of code yourself. Instead, try to focus on the process of how various tasks are broken down into smaller discrete steps, as well as how small code fragments can be combined to create \"higher level\" functions that perform ever more powerful tasks.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "Before continuing, ensure the simulator is loaded and available:\n", + "\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nbev3devsim.load_nbev3devwidget import roboSim, eds\n", + "%load_ext nbev3devsim" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The background image *Simple_Shapes* contains several shapes arranged in a line, including a square, a circle, four equilateral triangles (arrow heads) with different orientations, a diamond and a rectangle.\n", + "\n", + "Just below each shape is a grey square, whose fill colour is used to distinguish between the different shapes." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%sim_magic -b Simple_Shapes -x 600 -y 900" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 4.1 Evaluating the possible training data\n", + "\n", + "In this initial training pass, we will check whether the robot can clearly observe the potential training pairs. 
Each training pair consists of the actual shape image as well as a solid grey square, where the grey colour is used to represent one of six (6) different training classes.\n", + "\n", + "The left light sensor will be used to sample the shape image data and the right light sensor will be used to collect the simpler grey classification group pattern.\n", + "\n", + "As we are going to be pulling data into the notebook Python environment from the simulator, ensure the local notebook datalog is cleared:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "roboSim.clear_datalog()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The *Simple_Shapes* background we are using in this notebook contains several small regular shapes, with label encoding patterns alongside.\n", + "\n", + "The *x* and *y* locations for sampling the six different images, along with a designator for each shape, are as follows:\n", + "\n", + "- 200 900 square\n", + "- 280 900 right facing triangle\n", + "- 360 900 left facing triangle\n", + "- 440 900 downwards facing triangle\n", + "- 520 900 upwards facing triangle\n", + "- 600 900 diamond" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now start to collect image data from the robot's light sensors. The `-R` switch runs the program once it has been downloaded to the simulator:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we print the message `\"image_data both\"` we can collect data from both the left and the right light sensors at the same time." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%sim_magic_preloaded -b Simple_Shapes -AR -x 520 -y 900 -O\n", + "\n", + "#Sample the light sensor reading\n", + "sensor_value = colorLeft.reflected_light_intensity\n", + "\n", + "# This is essentially a command invocation\n", + "# not just a print statement!\n", + "print(\"image_data both\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can preview the collected image data in the usual way:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "roboSim.image_data()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also collect consecutive rows of data from the dataframe and decode them as left and right images:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nn_tools.sensor_data import get_sensor_image_pair\n", + "from nn_tools.sensor_data import zoom_img\n", + "\n", + "pair_index = -1\n", + "\n", + "left_img, right_img = get_sensor_image_pair(roboSim.image_data(),\n", + " pair_index)\n", + "zoom_img(left_img), zoom_img(right_img)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you don't see a figure image displayed, check that the robot is placed over a figure by reviewing the sensor array display in the simulator. If the image is there, rerun the previous code cell to see if the data is now available. If it isn't, rerun the data collecting magic cell, wait a few seconds, and then try to view the zoomed image display."
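A brief aside on timing: the notebook-side datalog is filled asynchronously, so `roboSim.image_data()` may not yet contain the latest scan immediately after the simulator program has run. As a purely illustrative sketch (the `wait_for_image_data()` helper and its `min_rows`/`timeout` parameters are our own invented names, not part of the module code), we might poll the datalog until the expected number of rows has arrived:

```python
import time

def wait_for_image_data(robo_sim, min_rows=2, timeout=10):
    """Poll the simulator datalog until at least min_rows rows of
    image data are available, or until timeout seconds have passed.
    Returns whatever dataframe is available at that point."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        df = robo_sim.image_data()
        if len(df) >= min_rows:
            return df
        # Give the simulator a moment to push more data across
        time.sleep(0.5)
    return robo_sim.image_data()

# For example:
# image_df = wait_for_image_data(roboSim)
```

In practice, simply waiting a few seconds and rerunning the preview cell achieves much the same thing.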
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can run the previously downloaded program again from a simple line magic that situates the robot at a specific location and then runs the program to collect the sensor data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "_x = 280\n", + "\n", + "%sim_magic -x $_x -y 900 -RAH" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 4.1.1 Investigating the training data samples\n", + "\n", + "Let's start by seeing if we can collect image data samples for each of the shapes." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from tqdm.notebook import trange\n", + "from nbev3devsim.load_nbev3devwidget import tqdma\n", + "\n", + "import time\n", + "\n", + "# Clear the datalog to give us a fresh start\n", + "roboSim.clear_datalog()\n", + "\n", + "# x-coordinate for centreline of first shape\n", + "_x_init = 200\n", + "\n", + "# Distance between shapes\n", + "_x_gap = 80\n", + "\n", + "# Number of shapes\n", + "_n_shapes = 6\n", + "\n", + "# y-coordinate for centreline of shapes\n", + "_y = 900\n", + "\n", + "# Load in the required background\n", + "%sim_magic -b Simple_Shapes\n", + "\n", + "# Generate x coordinate for each shape in turn\n", + "for _x in trange(_x_init, _x_init+(_n_shapes*_x_gap), _x_gap):\n", + " \n", + " # Jump to shape and run program to collect data\n", + " %sim_magic -x $_x -y $_y -R\n", + " \n", + " # Wait a short period to allow time for\n", + " # the program to run and capture the sensor data,\n", + " # and for the data to be passed from the simulator\n", + " # to the notebook Python environment\n", + " time.sleep(1)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We should now be able to access multiple image samples via `roboSim.image_data()`, which returns a dataframe containing as many rows as images we scanned:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "clean_data_df = roboSim.image_data()\n", + "clean_data_df" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The original sensor data is collected as three channel RGB data. 
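Each sensed pixel in that data is an (R, G, B) triple, so it has to be collapsed to a single value before we can treat a scan as a simple greyscale image. The following sketch shows one common way of doing this with PIL; the image size used here is just an illustrative assumption, and the module's own conversion code may differ in detail:

```python
from PIL import Image

def rgb_to_greyscale(rgb_pixels, size=(20, 20)):
    """Build a PIL image from a flat list of (R, G, B) tuples and
    convert it to a single-channel greyscale ('L' mode) image."""
    img = Image.new('RGB', size)
    img.putdata(rgb_pixels)
    # PIL's 'L' conversion applies the ITU-R 601-2 luma weighting:
    # L = 0.299*R + 0.587*G + 0.114*B
    return img.convert('L')

# e.g. a dummy 20x20 block of mid-grey pixels
grey_img = rgb_to_greyscale([(128, 128, 128)] * 400)
```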
By default, the `get_sensor_image_pair()` function, which extracts a pair of consecutive images from the datalog, converts these to greyscale images:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from nn_tools.sensor_data import get_sensor_image_pair\n", + "\n", + "pair_index = -1\n", + "\n", + "left_img, right_img = get_sensor_image_pair(clean_data_df,\n", + " pair_index)\n", + "\n", + "zoom_img(left_img), zoom_img(right_img)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also filter the dataframe to give us a dataframe containing just the data grabbed from the left hand image sensor:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# The mechanics behind how this line of code\n", + "# works are beyond the scope of this module.\n", + "# In short, we identify the rows where the\n", + "# \"side\" column value is equal to \"left\"\n", + "# and select just those rows.\n", + "clean_left_images_df = clean_data_df[clean_data_df['side']=='left']\n", + "clean_left_images_df" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The shape names and classes are defined as follows in the order they appear going from left to right along the test track. We can also derive a map going the other way, from code to shape." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Define the classes\n", + "shapemap = {'square': 0,\n", + " 'right facing triangle': 1,\n", + " 'left facing triangle': 2,\n", + " 'downwards facing triangle': 3,\n", + " 'upwards facing triangle': 4,\n", + " 'diamond': 5\n", + " }\n", + "\n", + "codemap = {shapemap[k]:k for k in shapemap}\n", + "codemap" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 4.1.2 Counting the number of black pixels in each shape\n", + "\n", + "Ever mindful that we are on the look out for features that might help us distinguish between the different shapes, let's check a really simple measure: the number of black filled pixels in each shape.\n", + "\n", + "If we cast the pixel data for the image in central focus areas of the the image array to a *pandas* *Series*, we can use the *Series* `.value_counts()` method to count the number of each unique pixel value." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "*Each column in a `pandas` dataframe is a `pandas.Series` object. 
Casting a list of data to a `Series` provides us with many convenient tools for manipulating and summarising that data.*"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": false
+   },
+   "outputs": [],
+   "source": [
+    "from nn_tools.sensor_data import generate_image, sensor_image_focus\n",
+    "import pandas as pd\n",
+    "\n",
+    "for index in range(len(clean_left_images_df)):\n",
+    "    print(index)\n",
+    "    # Get the central focal area of the image\n",
+    "    left_img = sensor_image_focus(generate_image(clean_left_images_df, index))\n",
+    "    \n",
+    "    # Count of each pixel value\n",
+    "    pixel_series = pd.Series(list(left_img.getdata()))\n",
+    "    # The .value_counts() method tallies occurrences\n",
+    "    # of each unique value in the Series\n",
+    "    pixel_counts = pixel_series.value_counts()\n",
+    "    \n",
+    "    # Display the count and the image\n",
+    "    display(codemap[index], left_img, pixel_counts)\n",
+    "    print('\\n')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Observing the black (`0` value) pixel counts, we see that they do not uniquely identify the shapes. For example, the left and right facing triangles and the diamond all have 51 black pixels. A simple pixel count does not provide a way to distinguish between the shapes."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "### 4.1.3 Activity — Using bounding box sizes as a feature for distinguishing between shapes\n",
+    "\n",
+    "When we previously trained a neural network to recognise fruit data, we used the dimensions of a bounding box drawn around each fruit as the input features to our network.\n",
+    "\n",
+    "Will the bounding box approach used there also allow us to distinguish between the shape images?\n",
+    "\n",
+    "Run the following code cell to convert the raw data associated with an image to a data frame, and then prune the rows and columns around the edges that only contain white space.\n",
+    "\n",
+    "The dimensions of the dataframe, which is to say, the `.shape` of the dataframe, given as the 2-tuple `(rows, columns)`, correspond to the bounding box of the shape."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "activity": true
+   },
+   "outputs": [],
+   "source": [
+    "from nn_tools.sensor_data import df_from_image, trim_image\n",
+    "\n",
+    "index = -1\n",
+    "\n",
+    "# The sensor_image_focus function crops\n",
+    "# to the central focal area of the image array\n",
+    "left_img = sensor_image_focus(generate_image(clean_left_images_df, index))\n",
+    "\n",
+    "trimmed_df = trim_image( df_from_image(left_img, show=False), reindex=True)\n",
+    "\n",
+    "# dataframe shape\n",
+    "trimmed_df.shape"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "Using the above code, or otherwise, find the size of the bounding box for each shape captured in the `roboSim.image_data` dataset.\n",
+    "\n",
+    "You may find it useful to use the provided code as the basis of a simple function that will:\n",
+    "\n",
+    "- take the index number for a particular image data scan;\n",
+    "- generate the image;\n",
+    "- find the size of the bounding box.\n",
+    "\n",
+    "Then you can iterate through all the rows in the `clean_left_images_df` dataset, generate the corresponding image and its bounding box dimensions, and then display the image and the dimensions.\n",
+    "\n",
+    "*Hint: you can use a `for` loop defined as `for i in range(len(clean_left_images_df)):` to iterate through each row of the data frame and generate an appropriate index number, `i`, for each row.*\n",
+    "\n",
+    "Based on the shape dimensions alone, can you distinguish between the shapes?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "student": true
+   },
+   "outputs": [],
+   "source": [
+    "# Your code here"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "student": true
+   },
+   "source": [
+    "*Record your observations here, identifying the bounding box dimensions for each shape (square, right facing triangle, left facing triangle, downwards facing triangle, upwards facing triangle, diamond). 
Are the shapes distinguishable from their bounding box sizes?*"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "#### Example solution\n",
+    "\n",
+    "*Click the arrow in the sidebar or run this cell to reveal an example solution.*"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "Let's start by creating a simple function inspired by the supplied code that will display an image and its bounding box dimensions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "activity": true
+   },
+   "outputs": [],
+   "source": [
+    "def find_bounding_box(index):\n",
+    "    \"\"\"Find bounding box for a shape in an image.\"\"\"\n",
+    "    img = sensor_image_focus(generate_image(clean_left_images_df, index))\n",
+    "    trimmed_df = trim_image( df_from_image(img, show=False), show=False, reindex=True)\n",
+    "\n",
+    "    # Show image and shape\n",
+    "    display(img, trimmed_df.shape)\n",
+    "\n",
+    "find_bounding_box(0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "We can then call this function for each image data record in the `clean_left_images_df` dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "activity": true
+   },
+   "outputs": [],
+   "source": [
+    "for i in range(len(clean_left_images_df)):\n",
+    "    find_bounding_box(i)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "activity": true
+   },
+   "source": [
+    "Inspecting the results from my run (yours may be slightly different), several of the shapes appear to share the same bounding box dimensions:\n",
+    "\n",
+    "- the left and right facing triangles and the diamond have the same dimensions (`(11, 9)`).\n",
+    "\n",
+    "The square is clearly separated from the other shapes on the basis of its bounding box dimensions, but the other shapes all have dimensions that may be hard to distinguish between."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.1.4 Decoding the training label image\n",
+    "\n",
+    "The grey filled squares alongside the shape images are used to encode a label describing the associated shape.\n",
+    "\n",
+    "The grey levels are determined by the following algorithm, in which we use the numerical class values to derive the greyscale value:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from numpy import nan\n",
+    "\n",
+    "greymap = {nan: 'unknown'}\n",
+    "\n",
+    "# Generate greyscale value\n",
+    "for shape in shapemap:\n",
+    "    key = int(shapemap[shape] * 255/len(shapemap))\n",
+    "    greymap[key] = shape\n",
+    "    \n",
+    "greymap"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's see if we can decode the labels from the solid grey squares.\n",
+    "\n",
+    "To try to make sure we are using actual shape image data, we can identify valid images in our training set by checking whether *all* the pixels in the right hand image are the same value."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "left_img, right_img = get_sensor_image_pair(clean_data_df, -1)\n",
+    "\n",
+    "# Generate a set of distinct pixel values\n",
+    "# from the right hand image.\n",
+    "# Return True if there is only one value\n",
+    "# in the set. 
That is, all the values are the same.\n", + "len(set(right_img.getdata())) == 1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The following function can be used to generate a greyscale image from a row of the dataframe, find the median pixel value within that image, and then try to decode it. We also return a flag (`uniform`) that identifies if the all the pixels in the right hand encoded label image are the same." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def decode_shape_label(img, background=255):\n", + " \"\"\"Decode the shape from the greyscale image.\"\"\"\n", + " # Get the image greyscale pixel data\n", + " # The pandas Series is a convenient representation\n", + " image_pixels = pd.Series(list(img.getdata()))\n", + " \n", + " # Find the median pixel value\n", + " pixels_median = int(image_pixels.median())\n", + " \n", + " shape = None\n", + " code= None\n", + " #uniform = len(set(img.getdata())) == 1\n", + " # There is often more than one way to do it!\n", + " # The following makes use of Series.unique()\n", + " # which identifies the distinct values in a Series\n", + " uniform = len(image_pixels.unique()) == 1\n", + " \n", + " if pixels_median in greymap:\n", + " shape = greymap[pixels_median]\n", + " code = shapemap[greymap[pixels_median]]\n", + " \n", + " return (pixels_median, shape, code, uniform)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can apply that function to each row of the dataframe by iterating over pairs of rows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "shapes = []\n", + "\n", + "# The number of row pairs is half the number of rows\n", + "num_pairs = int(len(clean_data_df)/2)\n", + "\n", + "for i in range(num_pairs):\n", + " \n", + " # Retrieve a pair of images \n", + " # from the datalog dataframe:\n", + " left_img, right_img = get_sensor_image_pair(roboSim.image_data(), i)\n", + " \n", + " #Decode the label image\n", + " (grey, shape, code, uniform) = decode_shape_label(right_img)\n", + " \n", + " # Add the label to a list of labels found so far\n", + " shapes.append(shape)\n", + "\n", + " # Display the result of decoding\n", + " # the median pixel value\n", + " print(f\"Grey: {grey}; shape: {shape}; code: {code}; uniform: {uniform}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also use the `decode_shape_label()` function as part of another function that will return a shape training image and it's associated label from a left and right sensor row pair in the datalog dataframe:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_training_data(raw_df, pair_index):\n", + " \"\"\"Get training image and label from raw data frame.\"\"\"\n", + " \n", + " # Get the left and right images\n", + " # at specified pair index\n", + " left_img, right_img = get_sensor_image_pair(raw_df,\n", + " pair_index)\n", + " response = decode_shape_label(right_img)\n", + " (grey, shape, code, uniform) = response\n", + " return (shape, code, uniform, left_img)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To use the `get_training_data()` function, we pass it the datalog dataframe and the index of the desired image pair:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + 
"pair_index = -1\n", + "\n", + "# Get the response tuple as a single variable\n", + "response = get_training_data(clean_data_df, pair_index)\n", + "\n", + "# Then unpack the tuple\n", + "(shape, code, uniform, training_img) = response\n", + "\n", + "print(shape, code, uniform)\n", + "zoom_img(training_img)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In summary, we can now:\n", + " \n", + "- grab the greyscale training image;\n", + "- find the median greyscale value;\n", + "- try to decode that value to a shape label / code;\n", + "- return the shape label and code associated with that greyscale image, along with an indicator of whether the image is in view via the `uniform` training image array flag;\n", + "- label the corresponding shape image with the appropriate label." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 4.2 Real time data collection\n", + "\n", + "In this section, you will start to explore how to collect data in real time as the robot drives over the images, rather than being teleported directly on top of them." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 4.2.1 Identifying when the robot is over a pattern in real time\n", + "\n", + "If we want to collect data from the robot as it drives slowly over the images we need to be able to identify when it is passing over the images so we can trigger the image sampling.\n", + "\n", + "The following program will slowly drive over the test patterns, logging the reflected light sensor values every so often. Start the program using the simulator *Run* button or the simulator `R` keyboard shortcut.\n", + "\n", + "From the traces on the simulator chart, can you identify when the robot passes over the images?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "student": true + }, + "source": [ + "*Record your observations here.*" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 900 -OAc\n", + "\n", + "say(\"On my way..\")\n", + "\n", + "# Start driving forwards slowly\n", + "tank_drive.on(SpeedPercent(10), SpeedPercent(10))\n", + "\n", + "count = 1\n", + "\n", + "# Drive forward no further than a specified distance\n", + "while int(tank_drive.left_motor.position)<1500:\n", + " \n", + " left_light = colorLeft.reflected_light_intensity_pc\n", + " right_light = colorRight.reflected_light_intensity_pc\n", + " \n", + " # report every fifth pass of the loop\n", + " if not (count % 5):\n", + " print('Light_left: ' + str(left_light))\n", + " print('Light_right: ' + str(right_light))\n", + "\n", + " count = count + 1\n", + "\n", + "say('All done')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "student": true + }, + "source": [ + "*Based on your observations, describe a strategy you might use to capture image sample data when the test images are largely in view.*" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "activity": true + }, + "source": [ + "### 4.2.2 Challenge — capturing image data in real time (optional)\n", + "\n", + "Using your observations regarding the reflected light sensor values as the robot crosses the images, or otherwise, write a program to collect image data from the simulator in real time as the robot drives over them." 
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "student": true
+   },
+   "source": [
+    "*Describe your program strategy and record your program design notes here.*"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "student": true
+   },
+   "outputs": [],
+   "source": [
+    "# Your code here"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.2.3 Capturing image data in real time\n",
+    "\n",
+    "From observation of the reflected light sensor data in the chart, the robot appears to be over a shape when the reflected light sensor values drop below about 85%.\n",
+    "\n",
+    "From the chart, we might also notice that the training label image (encoded as the solid grey square presented to the right hand sensor) gives distinct readings for each shape.\n",
+    "\n",
+    "We can therefore use a drop in the reflected light sensor value to trigger the collection of the image data.\n",
+    "\n",
+    "First, let's clear the datalog:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Clear the datalog to give us a fresh start\n",
+    "roboSim.clear_datalog()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can write a program to drive the robot forwards slowly and collect the image data when it is over an image:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 900 -OAR\n",
+    "\n",
+    "say(\"Getting started.\")\n",
+    " \n",
+    "# Start driving forwards slowly\n",
+    "tank_drive.on(SpeedPercent(10), SpeedPercent(10))\n",
+    "\n",
+    "# Drive forward no further than a specified distance\n",
+    "while int(tank_drive.left_motor.position)<1200:\n",
+    "    \n",
+    "    # Sample the right sensor\n",
+    "    sample = colorRight.reflected_light_intensity_pc\n",
+    "    # If we seem to be over a test label,\n",
+    "    # grab the image data into the datalog\n",
+    "    if sample < 85:\n",
+    "        print(\"image_data both\")\n",
+    "\n",
+    "say(\"All done.\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we review the images in the datalog, we should see that they all contain at least a fragment of the image data (this may take a few moments to run). 
The following code cell grabs images where the `uniform` flag is set on the encoded label image and adds those training samples to a list (`training_images`):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "training_images = []\n",
+    "\n",
+    "for i in trange(int(len(roboSim.image_data())/2)):\n",
+    "    \n",
+    "    response = get_training_data(roboSim.image_data(), i)\n",
+    "    \n",
+    "    (shape, code, uniform, training_img) = response\n",
+    "    \n",
+    "    # Likely shape\n",
+    "    if uniform:\n",
+    "        display(shape, training_img)\n",
+    "        training_images.append((shape, code, training_img))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "student": true
+   },
+   "source": [
+    "*Record your own observations here about how \"clean\" the captured training images are.*"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can cast the list of training images into the convenient form of a *pandas* dataframe:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "training_df = pd.DataFrame(training_images,\n",
+    "                           columns=['shape', 'code', 'image'])\n",
+    "\n",
+    "training_df.head(3)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can get training image and training label lists as follows:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "training_images = training_df['image'].to_list()\n",
+    "training_labels = training_df['code'].to_list()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We are now in a position to try to use the data collected by travelling over the test track to train the neural network."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 4.3 Training an MLP to recognise the patterns\n",
+    "\n",
+    "In an earlier activity, we discovered that the bounding box method we had previously used to distinguish fruits did not provide a set of features we could use to distinguish the different shapes.\n",
+    "\n",
+    "So let's use a \"naive\" training approach and simply train the network on the 14 x 14 pixels in the centre of each sensor image array.\n",
+    "\n",
+    "We can use the `quick_progress_tracked_training()` function we used previously to train an MLP using the scanned shape images. 
We can optionally use the `jiggled=True` parameter to add some variation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from nn_tools.network_views import quick_progress_tracked_training\n",
+    "\n",
+    "\n",
+    "# Specify some parameters\n",
+    "# (a single hidden layer containing 40 neurons)\n",
+    "hidden_layer_sizes = (40,)\n",
+    "max_iterations = 500\n",
+    "\n",
+    "\n",
+    "# Create a new MLP\n",
+    "MLP = quick_progress_tracked_training(training_images, training_labels,\n",
+    "                                      hidden_layer_sizes=hidden_layer_sizes,\n",
+    "                                      max_iterations=max_iterations,\n",
+    "                                      report=True,\n",
+    "                                      jiggled=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can use the following code cell to randomly select images from the training samples and test the network:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from nn_tools.network_views import predict_and_report_from_image\n",
+    "import random\n",
+    "\n",
+    "# Randomly select a training sample\n",
+    "# (random.randint() includes both endpoints)\n",
+    "sample = random.randint(0, len(training_images)-1)\n",
+    "test_image = training_images[sample]\n",
+    "test_label = training_labels[sample]\n",
+    "\n",
+    "predict_and_report_from_image(MLP, test_image, test_label)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "student": true
+   },
+   "source": [
+    "*Record your observations about how well the network performs.*"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 4.4 Testing the network on a new set of collected data\n",
+    "\n",
+    "Let's collect some data again by driving the robot over a second, slightly shorter test track at `y=700` to see if we can recognise the images."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There are no encoded training label images in this track, so we will either have to rely on just the reflected light sensor value to capture legitimate images for us, or we will need to preprocess the images to discard ones that are only partial image captures."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.4.1 Collecting the test data\n",
+    "\n",
+    "The following program will stop as soon as the reflected light value from the left sensor drops below 85. How much of the image can we see?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 700 -OAR\n",
+    "\n",
+    "say('Starting')\n",
+    "# Start driving forwards slowly\n",
+    "tank_drive.on(SpeedPercent(5), SpeedPercent(5))\n",
+    "\n",
+    "# Sample the left sensor\n",
+    "sample = colorLeft.reflected_light_intensity_pc\n",
+    "\n",
+    "# Keep driving until the reflected light value drops below 85\n",
+    "while sample>85:\n",
+    "    sample = colorLeft.reflected_light_intensity_pc\n",
+    "\n",
+    "say(\"All done.\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "At the point the program stops, probably only the leading edge of the shape is in view, which is perhaps a bit optimistic as a basis for a sensible attempt at image recognition.\n",
+    "\n",
+    "However, recalling that the black pixel count for the training images ranged from 49 for the square to 60 for one of the equilateral triangles, we could tag a captured image as potentially recognisable if its black pixel count exceeds 45.\n",
+    "\n",
+    "To give us some data to work with, let's collect samples for the new test set at `y=700`. 
First clear the datalog:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Clear the datalog to give us a fresh start\n",
+    "roboSim.clear_datalog()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And then grab the data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%sim_magic_preloaded -b Simple_Shapes -x 100 -y 700 -OAR\n",
+    "\n",
+    "say(\"Starting\")\n",
+    "\n",
+    "# Start driving forwards slowly\n",
+    "tank_drive.on(SpeedPercent(5), SpeedPercent(5))\n",
+    "\n",
+    "# Drive forward no further than a specified distance\n",
+    "while int(tank_drive.left_motor.position)<800:\n",
+    "    \n",
+    "    # Sample the left sensor\n",
+    "    sample = colorLeft.reflected_light_intensity_pc\n",
+    "    # If we seem to be over a test image,\n",
+    "    # grab the image data into the datalog\n",
+    "    if sample < 85:\n",
+    "        print(\"image_data both\")\n",
+    "\n",
+    "say(\"All done.\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.4.2 Generating the test set\n",
+    "\n",
+    "We can now generate a clean test set of images based on a minimum required number of black pixels. The following function grabs a test image pair and also counts the black pixels in the left image."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def get_test_data(raw_df, pair_index):\n",
+    "    \"\"\"Get test image and black pixel count from raw data frame.\"\"\"\n",
+    "    \n",
+    "    # Get the left and right images\n",
+    "    # at specified pair index\n",
+    "    left_img, right_img = get_sensor_image_pair(raw_df,\n",
+    "                                                pair_index)\n",
+    "    \n",
+    "    # Get the pixel count\n",
+    "    left_pixel_cnt = pd.Series(list(left_img.getdata())).value_counts()\n",
+    "    count = left_pixel_cnt[0] if 0 in left_pixel_cnt else 0\n",
+    "    \n",
+    "    return (count, left_img)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The filtering cell below creates a filtered list of potentially recognisable images. You may recall seeing a similarly structured code fragment previously, when we used the `uniform` flag to select the images. However, in this case we only save an image to the list when the black pixel count starts to decrease.\n",
+    "\n",
+    "Having got a candidate image, the `crop_and_pad_to_fit()` function crops it and tries to place it in the centre of the image array."
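+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before running the filtering cell, we can quickly check what the `get_test_data()` helper returns for the most recent image pair in the datalog. (This is just a minimal check; it assumes that at least one image pair was captured by the data collection run above.)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Quick check: black pixel count for the most\n",
+    "# recently captured image pair in the datalog\n",
+    "(count, left_img) = get_test_data(roboSim.image_data(), -1)\n",
+    "\n",
+    "# Report the count and preview the image\n",
+    "print(f\"Black pixel count: {count}\")\n",
+    "zoom_img(left_img)"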
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from nn_tools.sensor_data import crop_and_pad_to_fit\n",
+    "\n",
+    "test_images = []\n",
+    "possible_img = None\n",
+    "possible_count = 0\n",
+    "\n",
+    "for i in trange(int(len(roboSim.image_data())/2)):\n",
+    "    (count, left_img) = get_test_data(roboSim.image_data(), i)\n",
+    "    # On the way in to a shape, we have\n",
+    "    # an increasing black pixel count\n",
+    "    if count and count >= possible_count:\n",
+    "        possible_img = left_img\n",
+    "        possible_count = count\n",
+    "    # We're perhaps now on the way out...\n",
+    "    # Do we have a possible shape?\n",
+    "    elif possible_img is not None and possible_count > 45:\n",
+    "        display(possible_count, possible_img)\n",
+    "        print('---')\n",
+    "        possible_img = crop_and_pad_to_fit(possible_img)\n",
+    "        test_images.append(possible_img)\n",
+    "        possible_img = None\n",
+    "    # We have now gone past the image\n",
+    "    elif count < 35:\n",
+    "        possible_count = 0\n",
+    "    \n",
+    "test_images"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.4.3 Testing the data\n",
+    "\n",
+    "Having got our images, we can now try to test them with the MLP.\n",
+    "\n",
+    "Recall that the `codemap` dictionary maps from code values to shape name:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from nn_tools.network_views import class_predict_from_image\n",
+    "\n",
+    "# Random sample from the test images\n",
+    "sample = random.randint(0, len(test_images)-1)\n",
+    "\n",
+    "test_img = test_images[sample]\n",
+    "\n",
+    "prediction = class_predict_from_image(MLP, test_img)\n",
+    "\n",
+    "# How did we do?\n",
+    "display(codemap[prediction])\n",
+    "zoom_img(test_img)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4.4.4 Saving the MLP\n",
+    "\n",
+    "Save the MLP so we can use it again:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from joblib import dump\n",
+    "\n",
+    "dump(MLP, 'mlp_shapes_14x14.joblib') \n",
+    "\n",
+    "# Load it back\n",
+    "#from joblib import load\n",
+    "\n",
+    "#MLP = load('mlp_shapes_14x14.joblib')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 4.5 Summary\n",
+    "\n",
+    "In this notebook, you have seen how we can collect data in real time from the simulator by sampling images when the robot detects a change in the reflected light levels.\n",
+    "\n",
+    "Using a special test track, with paired shape and encoded label images, we were able to collect a set of shape based training patterns that could be used to train an MLP to recognise the shapes.\n",
+    "\n",
+    "Investigation of the shape images revealed that simple black pixel counts and bounding box dimensions did not distinguish between the shapes, so we simply trained the network on the raw images.\n",
+    "\n",
+    "Running the robot over a test track without any paired encoded label images, we were still able to detect when the robot was over the image based on the black pixel count of the shape image. On testing the MLP against the newly collected shape images, the neural network was able to correctly classify the collected patterns.\n",
+    "\n",
+    "In the next notebook, you will explore how the robot may be able to identify the shapes in real time as part of a multi-agent system working in partnership with a pattern recognising agent running in the notebook Python environment."
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "tags,-all", + "formats": "ipynb,.md//md" + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/content/08. Remote services and multi-agent systems/08.4 Messaging in multi-agent systems.ipynb b/content/08. Remote services and multi-agent systems/08.5 Messaging in multi-agent systems.ipynb similarity index 97% rename from content/08. Remote services and multi-agent systems/08.4 Messaging in multi-agent systems.ipynb rename to content/08. Remote services and multi-agent systems/08.5 Messaging in multi-agent systems.ipynb index 5a826ad7..c82105bb 100644 --- a/content/08. Remote services and multi-agent systems/08.4 Messaging in multi-agent systems.ipynb +++ b/content/08. Remote services and multi-agent systems/08.5 Messaging in multi-agent systems.ipynb @@ -4,13 +4,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# 4 Messaging in multi-agent systems\n", + "# 5 Messaging in multi-agent systems\n", "\n", "In the previous notebooks in this session, you have seen how we can pull data collected in the simulator into the notebook's Python environment, and then analyse it in that environment at our convenience.\n", "\n", - "In particular, we could convert the raw data to an image based representation, as well as presenting in as raw data to a pre-trained multilayer perceptron (MLP) or a pre-trained convolutional neural network (CNN).\n", + "In particular, we could convert the raw data to an image based representation, as well as presenting it as raw data to a pre-trained multilayer perceptron (MLP) or a pre-trained convolutional neural network (CNN).\n", "\n", - "We could also capture and decode test labels for the images, allowing is to train a classifier neural network purely using information retrieved from the simulated robot.\n", + "We could also capture and decode test labels for the images, allowing us to train a classifier neural network purely using information retrieved from the simulated robot.\n", "\n", "To simplify data collection matters in the original experiments, we \"teleported\" the robot to specific sampling locations, rather than expecting it to explore the environment and try to detect images on its own.\n", "\n", @@ -23,7 +23,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 4.1 ROS — the Robot Operating System\n", + "## 5.1 ROS — the Robot Operating System\n", "\n", "*ROS*, the *Robot Operating System*, provides one possible architecture for implementing a dynamic message passing architecture. In a ROS environment, separate *nodes* publish details of one or more *services* they can perform along with *topics* that act act as the nodes address that other nodes can subscribe. Nodes then pass messages between each other in order to perform a particular task. 
The ROS architecture is rather elaborate for our needs, however, so we shall use a much simpler and more direct approach." ] @@ -52,7 +52,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 4.1.1 Communicating between the notebook and the robot\n", + "### 5.1.1 Communicating between the notebook and the robot\n", "\n", "A simple diagram helps to explain the architecture we are using.\n", "\n", @@ -110,7 +110,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 4.1.2 Defining a simple message handler\n", + "### 5.1.2 Defining a simple message handler\n", "\n", "Inside the robot, a simple mechanism is already defined that allows the robot to send a message to the Python environment, but there is nothing defined on the Python end to handle it.\n", "\n", @@ -452,11 +452,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 4.1.3 Passing state\n", + "### 5.1.3 Passing state\n", "\n", "Passing messages is all very well, but can we go a step further? Can we pass *data objects* between the robot and the Python environment, and back again?\n", "\n", - "Let's start by adding another level of indirection to out program. In this case, let's create a simple agent that takes a parsed message object, does something to it (which in this case isn't very interesting!) and passes a modified object back:" + "Let's start by adding another level of indirection to our program. In this case, let's create a simple agent that takes a parsed message object, does something to it (which in this case isn't very interesting!) and passes a modified object back:" ] }, { @@ -705,9 +705,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 4.1.4 Extending the message parser\n", + "### 5.1.4 Extending the message parser\n", "\n", - "Let's now look at how we might retrieve real time sensor data in out message passing system.\n", + "Let's now look at how we might retrieve real time sensor data in our message passing system.\n", "\n", "As well as the `PY::` message processor, the robot also has a special `IMG_DATA` message processor. Printing the message `IMG_DATA` to the simulator output window causes a special message to be passed to the Python environment. This message starts with the phrase `IMG_DATA::`, followed by the sensor data.\n", "\n", @@ -839,7 +839,7 @@ "activity": true }, "source": [ - "### 4.1.5 Activity — Reviewing the inter-agent message protocol and communication activity \n", + "### 5.1.5 Activity — Reviewing the inter-agent message protocol and communication activity \n", "\n", "At this point, let's quickly recap on the messaging protocol we have defined by way of another sequence diagram:" ] @@ -902,7 +902,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 4.2 Putting the pieces together — a multi-agent system\n", + "## 5.2 Putting the pieces together — a multi-agent system\n", "\n", "With our message protocol defined, let's see if we can now create a multi-agent system where the robot collects some image data and passes it to the Python agent. The Python agent should then decode the image data, present it to a pre-trained multi-layer perceptron neural network, and identify a presented shape. The Python agent should then inform the robot about the shape the robot of the object it can see." ] @@ -911,7 +911,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### 4.2.1 The image classifier agent\n", + "### 5.2.1 The image classifier agent\n", "\n", "To perform the recognition task, we need to implement our agent. 
The agent will take the image data and place it in a two row dataframe in the correct form. Then it will generate an image pair from the dataframe, and present the left-hand shape image to the neural network. The neural network will return a shape prediction and this will be passed in a message back to the robot.\n", "\n", @@ -1092,7 +1092,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 4.3 Summary\n", + "## 5.3 Summary\n", "\n", "In this notebook, you have seen how we can create a simple protocol that allows the passage of messages between the robot and Python agent in a simple multi-agent system. The Python agent picks up the message received from the robot, parses it and decodes it as an image. The image is then classified by an MLP and the agent responds to the robot with a predicted image classification.\n", "\n", @@ -1129,6 +1129,19 @@ "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false } }, "nbformat": 4, diff --git a/content/08. Remote services and multi-agent systems/08.5 Conclusion.ipynb b/content/08. Remote services and multi-agent systems/08.6 Conclusion.ipynb similarity index 86% rename from content/08. Remote services and multi-agent systems/08.5 Conclusion.ipynb rename to content/08. Remote services and multi-agent systems/08.6 Conclusion.ipynb index 878a2227..d7488597 100644 --- a/content/08. Remote services and multi-agent systems/08.5 Conclusion.ipynb +++ b/content/08. Remote services and multi-agent systems/08.6 Conclusion.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# 5 Conclusion\n", + "# 6 Conclusion\n", "\n", "Phew... you made it... Well done:-)\n", "\n", @@ -38,7 +38,7 @@ "\n", "As well as sequential programs, you also saw how we could use rule based systems to create rich programs that react to particular events, from the simple conversational agent originally implemented many decades ago in the form of *Eliza*, to more elaborate rule based systems created using the `durable-rules` framework.\n", "\n", - "You then learned how we could use simple neural networks to perform a range of classification tasks. *MLPs*, which is to say, *multilayer perceptrons*, can be quite quick to train, but may struggle when it comes to all but the simplest or most well behaved classification tasks. If you worked through the optinal materials, you will also have seen how *CNNs*, or *convolutional neural networks*, offer far more robust behaviour, particularly when it comes to image based recognition tasks. However, they are far more expensive to train in many senses of the word — in terms of training data required, computational effort and time. Trying to make sense of how neural networks actually perform their classification tasks is a significant challenge, but you saw how certain visualisation techniques could be used to help us peer inside the \"mind\" of a neural network.\n", + "You then learned how we could use simple neural networks to perform a range of classification tasks. *MLPs*, which is to say, *multilayer perceptrons*, can be quite quick to train, but may struggle when it comes to all but the simplest or most well behaved classification tasks. 
If you worked through the optional materials, you will also have seen how *CNNs*, or *convolutional neural networks*, offer far more robust behaviour, particularly when it comes to image based recognition tasks. However, they are far more expensive to train in many senses of the word — in terms of training data required, computational effort and time. Trying to make sense of how neural networks actually perform their classification tasks is a significant challenge, but you saw how certain visualisation techniques could be used to help us peer inside the \"mind\" of a neural network.\n", "\n", "Finally, you saw how we could start to consider the robot+Python notebook computational environment as a *multi-agent* system, in which we programmed the robot and Python agents separately, and then created a simple message passing protocol to allow them to communicate. Just as complex emergent behaviours can arise from multiple interacting rules in a rule based system, or the combined behaviour of the weighted connections between neural network neurons, so too might we create complex behaviours from the combined behaviour of agents in a simple multi-agent system.\n", "\n", @@ -53,7 +53,7 @@ "student": true }, "source": [ - "*Jot down a few notes here to reflect on what you enjoyed, what you learned, and what you found particularly challenging studying this block. Are there any things you could have done differently that would have made it easier or more rewarding?*" + "*Jot down a few notes here to reflect on what you enjoyed, what you learned, what you found particularly challenging studying this block and what surprised you. Are there any things you could have done differently that would have made it easier or more rewarding?*" ] }, { @@ -84,7 +84,20 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": false, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": false } }, "nbformat": 4,