Joaquim Castilla edited this page Sep 23, 2020 · 1 revision

Guidelines for reviewers:

  • Try to pick up all of the files from the same week. Most of the time, the EN files from a given week are homogeneous in writers, topics, and style, and we want to keep that structure consistent in the [YOUR_LANG] files as well. Especially if the [YOUR_LANG] translations are authored by different people, keep the translation of terms, particularly technical and mathematical ones, consistent throughout the files.
  • Keep the translation of terms consistent with the #Terms table below. If you're unsure about a translation, ask on Slack; have no fear :)
  • After completing the review, create another Pull Request. Write the PR number in the Notes column of the workload distribution, like so: "Reviewed (#000)". When the PR is approved, write "Review approved" in the same column.

Workload distribution

| File name | Translator | Start date | Publish date | Reviewers | PR | Notes |
|---|---|---|---|---|---|---|
| README.md | | | | | | |
| index.md | | | | | | |
| 01.md | | | | | | |
| 01-1.md | | | | | | |
| 01-2.md | | | | | | |
| 01-3.md | | | | | | |
| lecture01.sbv | | | | | | |
| practicum01.sbv | | | | | | |
| 02.md | | | | | | |
| 02-1.md | | | | | | |
| 02-2.md | | | | | | |
| 02-3.md | | | | | | |
| lecture02.sbv | | | | | | |
| practicum02.sbv | | | | | | |
| 03.md | | | | | | |
| 03-1.md | | | | | | |
| 03-2.md | | | | | | |
| 03-3.md | | | | | | |
| lecture03.sbv | | | | | | |
| practicum03.sbv | | | | | | |
| 04.md | | | | | | |
| 04-1.md | | | | | | |
| practicum04.sbv | | | | | | |
| 05.md | | | | | | |
| 05-1.md | | | | | | |
| 05-2.md | | | | | | |
| 05-3.md | | | | | | |
| lecture05.sbv | | | | | | |
| practicum05.sbv | | | | | | |
| 06.md | | | | | | |
| 06-1.md | | | | | | |
| 06-2.md | | | | | | |
| 06-3.md | | | | | | |
| 07.md | | | | | | |
| 07-1.md | | | | | | |
| 07-2.md | | | | | | |
| 07-3.md | | | | | | |
| 08.md | | | | | | |
| 08-1.md | | | | | | |
| 08-2.md | | | | | | |
| 08-3.md | | | | | | |
| 09.md | | | | | | |
| 09-1.md | | | | | | |
| 09-2.md | | | | | | |
| 09-3.md | | | | | | |
| 10.md | | | | | | |
| 10-1.md | | | | | | |
| 10-2.md | | | | | | |
| 10-3.md | | | | | | |
| 11.md | | | | | | |
| 11-1.md | | | | | | |
| 11-2.md | | | | | | |
| 11-3.md | | | | | | |
| 12.md | | | | | | |
| 12-1.md | | | | | | |
| 12-2.md | | | | | | |
| 12-3.md | | | | | | |
| 13.md | | | | | | |
| 13-1.md | | | | | | |
| 13-2.md | | | | | | |
| 13-3.md | | | | | | |
| 14.md | | | | | | |
| 14-1.md | | | | | | |
| 14-2.md | | | | | | |
| 14-3.md | | | | | | |

Terminology conventions

Rules

Terms

Non-technical

| Term | Translation |
|---|---|
| Class | |
| Lecture | |
| Practicum | |

Technical

| Term | Translation |
|---|---|
| activation | |
| activation function | |
| adaline | |
| affine transformation | |
| (artificial) neural network | |
| autoencoder | |
| autonomous vehicles | |
| backpropagation | |
| batch normalization | |
| bias | |
| chain rule | |
| computer vision | |
| contrast normalization | |
| convolution | |
| cost function | |
| cybernetic | |
| deep-learning | |
| dropout | |
| embedding | |
| energy-based model | |
| ensemble | |
| feature | |
| fire (of neuron) | |
| fully connected layer | |
| fully connected network | |
| gradient | |
| gradient descent | |
| hidden layer | |
| hierarchical representation | |
| image classification | |
| image segmentation | |
| inference | |
| Jacobian matrix | |
| Jupyter Notebook | |
| label | |
| lane tracking | |
| latent space | |
| layer | |
| layers | |
| lecture part A | |
| logistic regression | |
| loss function | |
| Nash equilibrium | |
| natural language understanding | |
| natural language processing | |
| nearest neighbor | |
| non-maximum suppression | |
| norm | |
| object detection | |
| one-hot | |
| parameter | |
| pattern recognition | |
| perceptron | |
| pooling | |
| practicum | |
| recurrent neural networks | |
| reflection | |
| regularization | |
| rotation | |
| scaling | |
| self-supervised learning | |
| scalar | |
| semantic segmentation | |
| shearing | |
| softmax, soft (arg)max | |
| speech recognition | |
| stochastic gradient descent | |
| supervised learning | |
| tensor | |
| translation | |
| trajectory | |
| unsupervised learning | |
| visual cortex | |
| weight | |
| weighted sum | |

Do not translate

| Term | Explanation |
|---|---|
| actor critic | |
| CNN | Convolutional Neural Network |
| GAN | Generative Adversarial Network |
| GPU | Graphics Processing Unit |
| ReLU | Rectified Linear Unit |