YOLO v1 - why use Adam as the optimizer? #141

Open

oonisim opened this issue Feb 26, 2023 · 1 comment
Comments

oonisim commented Feb 26, 2023

train.py sets the model's training optimizer to Adam:

def main():
    model = Yolov1(split_size=7, num_boxes=2, num_classes=20).to(DEVICE)
    optimizer = optim.Adam(
        model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY
    )

According to the v1 paper, the training uses momentum and decay, which suggests SGD + Momentum.

We train the network for about 135 epochs on the training and validation data sets from PASCAL VOC 2007 and 2012. When testing on 2012 we also include the VOC 2007 test data for training. Throughout training we use a batch size of 64, a momentum of 0.9 and a decay of 0.0005.

Please clarify why you chose to use Adam instead of SGD + momentum.
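
For reference, a direct mapping of the paper's settings to PyTorch would look roughly like the following (a sketch only, reusing the LEARNING_RATE constant from train.py; the paper's learning-rate schedule is omitted here):

    optimizer = optim.SGD(
        model.parameters(),
        lr=LEARNING_RATE,     # the paper ramps up to 1e-2 and then decays it; schedule omitted
        momentum=0.9,         # "a momentum of 0.9"
        weight_decay=5e-4,    # "a decay of 0.0005"
    )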

lmalkam commented Feb 28, 2023

While the original YOLOv1 paper used SGD with momentum and weight decay, it's worth noting that the choice of optimizer is a hyperparameter and is not set in stone.

Adam is an adaptive optimizer that can converge faster than SGD with momentum in some cases. It keeps running estimates of the first and second moments of each parameter's gradient and scales each parameter's step size accordingly, which helps when gradient magnitudes vary significantly across parameters.

In contrast, SGD with momentum uses a single global learning rate and only accumulates an exponential moving average of past gradients as a velocity term; it does not adapt the step size per parameter. Adam can therefore be a convenient choice for large, complex networks like YOLOv1, since it often needs less learning-rate tuning to converge.
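
Roughly, the two update rules differ as sketched below (a simplified illustration that ignores Adam's bias correction and weight decay; the function names are illustrative, not from this repo):

    import torch

    def sgd_momentum_step(param, grad, v, lr=1e-2, mu=0.9):
        # One global learning rate; the velocity is an exponential average of past gradients.
        v = mu * v + grad
        return param - lr * v, v

    def adam_step(param, grad, m, s, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Per-parameter step size, scaled by running estimates of the gradient's
        # first moment (m) and second moment (s); bias correction omitted for brevity.
        m = beta1 * m + (1 - beta1) * grad
        s = beta2 * s + (1 - beta2) * grad ** 2
        return param - lr * m / (s.sqrt() + eps), m, s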

Additionally, Adam has been reported to outperform SGD in some settings, especially for deep models with complex architectures, so the best choice of optimizer ultimately depends on the specific problem and the architecture of the network.

@oonisim
