[WIP] feat: Weights and Biases #1513

Open · wants to merge 19 commits into base: dev
Conversation

Ihab-Asaad (Contributor)
No description provided.

@IgnatovFedor IgnatovFedor self-requested a review January 6, 2022 12:38
@@ -3,7 +3,7 @@
"class_name": "basic_classification_reader",
"x": "Twit",
"y": "Class",
- "data_path": "{DOWNLOADS_PATH}/sentiment_twitter_data"
+ "data_path": "{DOWNLOADS_PATH}/sentiment_twitter_data/modified_data"
Collaborator: revert changes

Comment on lines 65 to 66
"save_path": "{MODEL_PATH}/new_model",
"load_path": "",
Collaborator: revert changes

@@ -74,7 +74,7 @@
"filters_cnn": 256,
"optimizer": "Adam",
"learning_rate": 0.01,
- "learning_rate_decay": 0.1,
+ "learning_rate_decay": 0.01,
Collaborator: revert changes

@@ -100,14 +100,17 @@
]
},
"train": {
- "epochs": 100,
+ "epochs": 10,
Collaborator: revert changes

Comment on lines +110 to +113
"inputs": [
"y_onehot",
"y_pred_probas"
]
Collaborator suggested change:
"inputs": [
"y_onehot",
"y_pred_probas"
]
"inputs": ["y_onehot", "y_pred_probas"]

@@ -0,0 +1,378 @@
# Copyright 2019 Neural Networks and Deep Learning lab, MIPT
Collaborator suggested change:
# Copyright 2019 Neural Networks and Deep Learning lab, MIPT
# Copyright 2022 Neural Networks and Deep Learning lab, MIPT

requirements.txt Outdated
Comment on lines 27 to 29
wandb==0.12.7
pybind11==2.2
fasttext
Collaborator: Remove these additional requirements from the library's general requirements, and move the wandb requirement to deeppavlov/requirements/wandb.txt.
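Following that suggestion, the wandb pin would move into its own optional requirements file. A sketch of what the proposed deeppavlov/requirements/wandb.txt could contain, using the version pinned in the diff above (the exact path is the reviewer's; the file contents are an assumption):

```
wandb==0.12.7
```

pybind11 and fasttext would stay out of this file, since the reviewer notes they are not used by WandbLogger.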

Comment on lines 37 to 40
There are three types of logging:
- StdLogger: for logging report about current training and validation processes to stdout.
- TensorboardLogger: for logging to tensorboard.
- WandbLogger: for logging to WandB.
Collaborator: Remove the description of the logging types from the docstring.


"""

@abstractmethod
Collaborator: Remove __init__, as it is empty (the method is also incorrect because it has no self argument).

"""
raise NotImplementedError

@abstractmethod
Collaborator: No need to make get_report abstract.
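Taken together, the comments above suggest a base class with no empty __init__, an abstract __call__, and get_report as a plain overridable method. A hypothetical sketch of that shape (class and method names are assumed from the diff, not the PR's actual code):

```python
# Sketch of the logger base class after applying the review comments:
# no empty __init__, and get_report is a regular method, not abstract.
from abc import ABC, abstractmethod


class TrainLogger(ABC):
    """Base class for training loggers (StdLogger, TensorboardLogger, WandbLogger)."""

    @abstractmethod
    def __call__(self, trainer, report: dict) -> None:
        """Emit a training/validation report to the logger's backend."""

    def get_report(self, trainer, data_type: str = "valid") -> dict:
        # Default report builder; subclasses override only when they need
        # backend-specific shaping of the report.
        return {"data_type": data_type}


class StdLogger(TrainLogger):
    """Minimal concrete example: print reports to stdout."""

    def __call__(self, trainer, report: dict) -> None:
        print(report)
```

Because __call__ is the only abstract method, subclasses stay small, and the base class itself cannot be instantiated.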

requirements.txt Outdated
@@ -23,4 +23,4 @@ tqdm==4.62.0
click==7.1.2
uvicorn==0.11.7
sacremoses==0.0.35
uvloop==0.14.0
uvloop==0.14.0
Collaborator suggested change: keep uvloop==0.14.0 as is (this hunk only changes the trailing newline at the end of the file).

Comment on lines 2 to 3
pybind11==2.2
fasttext
Collaborator: Remove, because it isn't used by WandbLogger.

@@ -50,7 +49,7 @@ class FitTrainer:
evaluation_targets: data types on which to evaluate trained pipeline (default is ``('valid', 'test')``)
show_examples: a flag used to print inputs, expected outputs and predicted outputs for the last batch
in evaluation logs (default is ``False``)
- tensorboard_log_dir: path to a directory where tensorboard logs can be stored, ignored if None
+ logger : list of dictionaries of possible loggers from deeppavlov.configs files.
Collaborator suggested change:
logger : list of dictionaries of possible loggers from deeppavlov.configs files.
logger: list of dictionaries with train and evaluation loggers configuration
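The docstring describes logger as a list of logger-configuration dictionaries. A hypothetical illustration of how such a list might look inside a config's train section, built from keys that appear elsewhere in this review (log_dir, API_Key, init); the "name" key and exact layout are assumptions, not the PR's confirmed schema:

```
"train": {
  "logger": [
    {"name": "StdLogger"},
    {"name": "TensorboardLogger", "log_dir": "{MODEL_PATH}/logs"},
    {"name": "WandbLogger", "API_Key": "...", "init": {"project": "my_project"}}
  ]
}
```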

Comment on lines 67 to 68
log.info(
f'{self.__class__.__name__} got additional init parameters {list(kwargs)} that will be ignored:')
Collaborator: Revert the change; the original line is shorter than 120 characters.

Comment on lines 70 to 71
self._chainer = Chainer(
chainer_config['in'], chainer_config['out'], chainer_config.get('in_y'))
Collaborator: Revert the change; the original line is shorter than 120 characters.

Comment on lines 413 to 415
log.warning(
f"Using {self.__class__.__name__} for a pipeline without batched training"
)
Collaborator: revert changes


# Run the at-train-exit model-saving logic
if self.validation_number < 1:
- log.info('Save model to capture early training results')
+ log.info("Save model to capture early training results")
Collaborator: revert changes

Comment on lines 423 to 425
from deeppavlov.core.common.logging.wandb_logger import WandbLogger
from deeppavlov.core.common.logging.std_logger import StdLogger
from deeppavlov.core.common.logging.tensorboard_logger import TensorboardLogger
Collaborator: Remove these imports.


def save(self) -> None:
if self._loaded:
- raise RuntimeError('Cannot save already finalized chainer')
+ raise RuntimeError("Cannot save already finalized chainer")
Collaborator: revert changes

Comment on lines 171 to 201
if self.stdlogger_idx is not None:
self.std_logger = StdLogger(stdlogging=True)

if self.tensorboard_idx is not None:
self.tensorboard_logger = TensorboardLogger(
log_dir=self.logger[self.tensorboard_idx]["log_dir"]
)

if self.wandblogger_idx is not None:
if WandbLogger.login(
API_Key=self.logger[self.wandblogger_idx].get("API_Key", None),
relogin=True,
):
if self.log_every_n_epochs > 0 or self.val_every_n_epochs > 0:
self.wandb_logger = WandbLogger(
log_on="epochs",
commit_on_valid=self.val_every_n_epochs > 0,
**self.logger[self.wandblogger_idx].get("init", None),
)
if self.wandb_logger.init_succeed == False:
self.wandblogger_idx = None
elif self.log_every_n_batches > 0 or self.val_every_n_batches > 0:
self.wandb_logger = WandbLogger(
log_on="batches",
commit_on_valid=self.val_every_n_batches > 0,
**self.logger[self.wandblogger_idx].get("init", None),
)
if self.wandb_logger.init_succeed == False:
self.wandblogger_idx = None
else:
self.wandblogger_idx = None
Collaborator: Remove these lines from nn_trainer.py and move them to the __init__ methods of the corresponding classes.
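One way to read that comment: the epochs/batches branching moves out of nn_trainer.py and into WandbLogger.__init__, so the trainer only passes its intervals and the wandb "init" dict. A minimal sketch under that assumption; attribute names (log_on, commit_on_valid, init_succeed) come from the diff above, while the actual wandb.login/wandb.init calls are stubbed out here:

```python
# Hypothetical refactor: WandbLogger decides its own logging granularity
# instead of nn_trainer.py doing it.
class WandbLogger:
    def __init__(self, log_every_n_epochs: int = 0, val_every_n_epochs: int = 0,
                 log_every_n_batches: int = 0, val_every_n_batches: int = 0,
                 **init_kwargs):
        # Epoch-level logging wins if any epoch interval is set;
        # otherwise fall back to batch-level logging.
        if log_every_n_epochs > 0 or val_every_n_epochs > 0:
            self.log_on = "epochs"
            self.commit_on_valid = val_every_n_epochs > 0
        elif log_every_n_batches > 0 or val_every_n_batches > 0:
            self.log_on = "batches"
            self.commit_on_valid = val_every_n_batches > 0
        else:
            self.log_on = None
            self.commit_on_valid = False
        # In the real class, wandb.login(...) and wandb.init(**init_kwargs)
        # would run here; init_succeed reflects whether setup worked.
        self.init_succeed = self.log_on is not None
        self._init_kwargs = init_kwargs
```

With this shape, the trainer body shrinks to a single WandbLogger(...) construction plus a check of init_succeed, which is what the reviewer asks for.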
