Merge pull request #10 from sajal2692/blog
blog: add corrective rag with langgraph post
sajal2692 committed Feb 29, 2024
2 parents d7b301a + 3152b7b commit b9ce3d6
Showing 7 changed files with 516 additions and 101 deletions.
74 changes: 18 additions & 56 deletions src/content/blog/building-image-classifier-fastai.md
---
title: "Building an Image Classifier Really Fast Using Fastai"
author: "Your Name" # Replace with the actual author's name
author: "Sajal Sharma" # Replace with the actual author's name
pubDatetime: 2022-10-28T00:00:00Z
slug: building-image-classifier-fastai
featured: false
tags:
- Computer Vision
- fastai
description: "In this post, I demonstrate how to quickly build an image classifier using the fastai library, a powerful tool for practical deep learning. The project involves classifying images of fruit as either rotten or fresh."
canonicalURL: "" # Add if the article is published elsewhere
canonicalURL: "" # Add if the article is published elsewhere
---

## Table of contents

I recently started the [fast.ai](https://course.fast.ai/Lessons/lesson1.html) course to build up my practical deep learning skills. In order to better retain what I learn, I'm going to be writing a series of posts/notebooks, implementing my own models based on the course content. This notebook is written based on what I learned from the first week of the course.

In this notebook we'll build an image classifier using [fastai](https://docs.fast.ai), a deep learning library built on top of PyTorch that provides both high-level and low-level components to quickly build state-of-the-art models for common deep learning domains.

We'll build a model that can classify images of fruit into a binary category: rotten or not. You can imagine such a model being used inside a refrigerator to detect whether the produce kept in it has gone bad.

When I started learning ML in 2016, building such models was a non-trivial task. Libraries to build deep neural networks were still in their infancy (PyTorch was introduced in late 2016), and building accurate image classification models required a certain degree of specialized knowledge. All that has changed and, as you'll notice in the notebook, we can build an image classifier using just a few lines of code.

Let's get started!


```python
import os
# !pip install -Uqq fastai duckduckgo_search
```

We'll need the `duckduckgo_search` package to quickly search for and download images for our dataset.

<br />


```python
from duckduckgo_search import ddg_images
from fastcore.all import *

def search_images(term, max_images=40):
    # print the search term, then return the image URLs found by DuckDuckGo
    print(f"Searching for '{term}'")
    return L(ddg_images(term, max_results=max_images)).itemgot('image')

urls = search_images('rotten fruit', max_images=1)
urls[0]
```

```output
Searching for 'rotten fruit'
'https://i.pinimg.com/originals/13/2e/48/132e481c0ef6f1516de2b5b80a553b6a.jpg'
```



Let's download this image and open it.


```python
from fastdownload import download_url
dest = 'rotten.jpg'
download_url(urls[0], dest, show_progress=False)

from fastai.vision.all import *  # provides Image, download_images, verify_images, DataBlock, etc.
im = Image.open(dest)
im.to_thumb(256,256)
```




![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_8_0.png)

Let's do the same for fresh fruit.

```python
download_url(search_images('fresh fruit', max_images=1)[0], 'fresh.jpg', show_progress=False)
Image.open('fresh.jpg').to_thumb(256,256)
```

```output
Searching for 'fresh fruit'
```

![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_10_1.png)

Now that we know the DuckDuckGo image search is working fine, we can download images of both rotten and fresh fruit and store them in their respective directories. We use `time.sleep` to avoid spamming the search API.


```python
searches = 'rotten', 'fresh'
path=Path('rotten_or_fresh')
import time

for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    # download images for a few related search terms per category
    for term in ('fruit', 'apple', 'banana', 'vegetables'):
        download_images(dest, urls=search_images(f'{o} {term}'))
        time.sleep(10)  # pause between queries to avoid spamming the search API
    resize_images(path/o, max_size=400, dest=path/o)
```

```output
Searching for 'rotten fruit'
Searching for 'rotten apple'
Searching for 'rotten banana'
Searching for 'rotten vegetables'
Searching for 'fresh fruit'
Searching for 'fresh apple'
Searching for 'fresh banana'
Searching for 'fresh vegetables'
```


## Training our model

We have our images, and the next step is to train a model. Again, it blows my mind how simple this is with fastai. I'll briefly explain what the code blocks below are doing.

First, we check that all image files can be opened correctly using the fastai vision utility `verify_images`. Any file that can't be opened is unlinked from our path so that it is not used in model training.


```python
# validate images
failed=verify_images(get_image_files(path))
failed.map(Path.unlink)  # remove any files that failed verification
len(failed)
```

```output
0
```



Next, we'll use another building block from the fastai library, the `DataBlock` class, which we can use to represent our training data, the labels, data splitting criteria, and any data transformations.

`blocks=(ImageBlock, CategoryBlock)` is used to specify what kind of data is in the DataBlock. We have images and categories, hence a tuple of the ImageBlock and CategoryBlock classes.

`get_items` takes the function `get_image_files` as its parameter. `get_image_files` is used to find the paths of our input images.

`splitter=RandomSplitter(valid_pct=0.2, seed=42)` specifies that we want to randomly split our input data into training and validation sets, using 20% of the data for validation.

`get_y=parent_label` specifies that the label for an image file is its parent (the directory that the file belongs to).

`item_tfms=[Resize(192, method='squish')]` specifies the transformation performed on each file. Here we are resizing each image to 192x192 pixels by squishing it. Another option could be to `crop` the image.
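
As an aside, here's a minimal sketch of what the cropping alternative could look like. It isn't used in this post, and the `min_scale` value is just an illustrative choice; the DataBlock we actually use follows right after.

```python
# A hedged sketch of the cropping alternative (not used in this post).
# RandomResizedCrop takes a random crop of each image every epoch; min_scale is illustrative.
dls_crop = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[RandomResizedCrop(192, min_scale=0.5)]
).dataloaders(path)

dls_crop.show_batch(max_n=6)
```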


```python
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path)

dls.show_batch(max_n=6)
```



![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_16_0.png)




Above you can see a batch of images from our DataBlock, along with their labels. This is a nice way to quickly check that the samples in our data are correct (both images and labels).

To train our model, we will fine-tune resnet18, one of the most widely used computer vision models, on our dataset.


```python
clf = vision_learner(dls, resnet18, metrics=error_rate)
clf.fine_tune(5)
```

_Training output: per-epoch tables showing train loss, validation loss, and error rate._


## Using the model

It's finally time to use our model and see how it does at predicting whether a fruit is rotten or not.


```python
Image.open('rotten.jpg').to_thumb(256,256)
```
![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_20_0.png)

<br />

```python
is_rotten,_,probs = clf.predict(PILImage.create('rotten.jpg'))
print(f"This fruit/vegetable is: {is_rotten}.")
print(f"Probability it's rotten: {probs[1]:.4f}")
```

```output
This fruit/vegetable is: rotten.
Probability it's rotten: 1.0000
```

```python
Image.open('fresh.jpg').to_thumb(256,256)
```





![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_22_0.png)





```python
is_rotten,_,probs = clf.predict(PILImage.create('fresh.jpg'))
print(f"This fruit/vegetable is: {is_rotten}.")
print(f"Probability it's fresh: {probs[0]:.4f}")
```

```output
This fruit/vegetable is: fresh.
Probability it's fresh: 1.0000
```

Let's see if our model can predict whether a given image is of a rotten orange or a fresh orange. We didn't explicitly download images of fresh or rotten oranges for our training set, so this is a good test of generalization to "unseen data".


```python
download_url(search_images('fresh orange', max_images=1)[0], 'fresh orange.jpg', show_progress=False)
Image.open('fresh orange.jpg').to_thumb(256,256)
```

![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_25_1.png)

```python
is_rotten,_,probs = clf.predict(PILImage.create('fresh orange.jpg'))
print(f"This fruit/vegetable is: {is_rotten}.")
print(f"Probability it's fresh: {probs[0]:.4f}")
```

```output
This fruit/vegetable is: fresh.
Probability it's fresh: 0.9748
```

```python
download_url(search_images('rotten orange', max_images=1)[0], 'rotten orange.jpg', show_progress=False)
Image.open('rotten orange.jpg').to_thumb(256,256)
```


![rotten_or_not](@assets/images/blog/building-image-classifier-fastai/is-the-fruit-rotten-or-not_27_1.png)


```python
is_rotten,_,probs = clf.predict(PILImage.create('rotten orange.jpg'))
print(f"This fruit/vegetable is: {is_rotten}.")
print(f"Probability it's rotten: {probs[1]:.4f}")
```

```output
This fruit/vegetable is: rotten.
Probability it's rotten: 0.9899
```

Not bad at all. The model seems to generalize fine, though a more accurate measure of generalizability would involve creating a separate test set and calculating performance metrics on it.
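
As a rough sketch of that idea (not part of the original notebook), we could download a handful of extra images into a separate test folder, label them by folder name as before, and score the trained learner on them. The folder name, search terms, and image counts below are illustrative assumptions.

```python
# A hedged sketch: score the trained learner on a small held-out test set.
# Folder name, search terms, and counts are illustrative assumptions.
test_path = Path('rotten_or_fresh_test')

for o in 'rotten', 'fresh':
    dest = test_path/o
    dest.mkdir(exist_ok=True, parents=True)
    # slightly different queries to reduce overlap with the training images
    download_images(dest, urls=search_images(f'{o} produce', max_images=20))
    resize_images(dest, max_size=400, dest=dest)

# drop any files that fail to open, as we did for the training data
failed = verify_images(get_image_files(test_path))
failed.map(Path.unlink)

# label the test images by their parent folder and compute loss + error rate
test_dl = clf.dls.test_dl(get_image_files(test_path), with_labels=True)
loss, err = clf.validate(dl=test_dl)
print(f"Held-out error rate: {err:.4f}")
```

Ideally the test images would come from a different source than the training data, so the score isn't inflated by near-duplicates returned for the same queries.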

## Summary

There you have it! With a few lines of code, we have created our own image classification model by fine-tuning an off-the-shelf model with fastai. The high-level APIs that the library provides make the process of building an initial model a breeze.
If you want to run the notebook for yourself, you can check it out on Kaggle [here](https://www.kaggle.com/code/sajalsharma26/is-the-fruit-rotten-or-not). I urge you to try building your own classification model on images from a DuckDuckGo search.

I'll be going over the rest of the fastai course in the coming weeks. Even though I have only completed the first two weeks so far, I highly recommend it for anyone interested in machine learning, especially for people with a coding background.

## Resources

- fastai Course: https://course.fast.ai/
- Notebook on Kaggle: https://www.kaggle.com/code/sajalsharma26/is-the-fruit-rotten-or-not
