Minor grammar correction - Update 09-1.md (#843)
ritog committed Sep 26, 2023
1 parent 724bfd2 commit a250ef4
Showing 1 changed file with 1 addition and 1 deletion: docs/en/week09/09-1.md
@@ -62,7 +62,7 @@ The answer about whether it helps is not clear. People interested in this are ei
<center><img src="{{site.baseurl}}/images/week09/09-1/7akkfhv.png" width="400px"/></center>
**Fig 3:** Structure of Convolutional RELU with Group Sparsity

- As can be seen above, you are start with an image, you have an encoder which is basically Convolution RELU and some kind of scaling layer after this. You train with group sparsity. You have a linear decoder and a criterion which is group by 1. You take the group sparsity as a regulariser. This is like L2 pooling with an architecture similar to group sparsity.
+ As can be seen above, you start with an image, you have an encoder which is basically Convolution RELU and some kind of scaling layer after this. You train with group sparsity. You have a linear decoder and a criterion which is group by 1. You take the group sparsity as a regulariser. This is like L2 pooling with an architecture similar to group sparsity.

You can also train another instance of this network. This time, you can add more layers and have a decoder with the L2 pooling and sparsity criterion, train it to reconstruct its input with pooling on top. This will create a pretrained 2-layer convolutional net. This procedure is also called Stacked Autoencoder. The main characteristic here is that it is trained to produce invariant features with group sparsity.

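The diff above concerns a passage describing a group-sparsity regulariser (the "criterion which is group by 1", i.e. an L1 norm across groups of the L2 norms within each group, akin to L2 pooling). A minimal pure-Python sketch of that penalty follows; the function name, grouping scheme, and flat feature layout are illustrative assumptions, not taken from the notes.

```python
import math

def group_sparsity_penalty(features, group_size):
    """Group-lasso-style penalty: sum over groups of the L2 norm
    of each group (an assumed flat layout of feature activations).

    Whole groups are pushed toward zero together, which is what
    encourages the invariant features described in the notes.
    """
    assert len(features) % group_size == 0, "features must split evenly into groups"
    penalty = 0.0
    for i in range(0, len(features), group_size):
        group = features[i:i + group_size]
        # L2 norm within the group, summed (L1) across groups
        penalty += math.sqrt(sum(z * z for z in group))
    return penalty
```

In training, this penalty would be added (with some weight) to the reconstruction loss of the encoder–decoder pair; because the gradient is shared within a group, units in the same group switch on and off together.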
