AttributeError: 'function' object has no attribute '__func__' #29

Open
Paul-B98 opened this issue Jun 13, 2023 · 5 comments
@Paul-B98
Contributor

Paul-B98 commented Jun 13, 2023

I tried to run the demo example from the README for fine-tuning the CodeT5+ model:

from codetf.trainer.codet5_trainer import CodeT5Seq2SeqTrainer
from codetf.data_utility.codexglue_dataset import CodeXGLUEDataset
from codetf.models import load_model_pipeline
from codetf.performance.evaluation_metric import EvaluationMetric
from codetf.data_utility.base_dataset import CustomDataset

model_class = load_model_pipeline(model_name="codet5", task="pretrained",
            model_type="plus-220M", is_eval=True)

dataset = CodeXGLUEDataset(tokenizer=model_class.get_tokenizer())
train, test, validation = dataset.load(subset="text-to-code")

train_dataset = CustomDataset(train[0], train[1])
test_dataset = CustomDataset(test[0], test[1])
val_dataset = CustomDataset(validation[0], validation[1])

evaluator = EvaluationMetric(metric="bleu", tokenizer=model_class.tokenizer)

# peft can be in ["lora", "prefixtuning"]
trainer = CodeT5Seq2SeqTrainer(train_dataset=train_dataset,
                               validation_dataset=val_dataset,
                               peft="lora",
                               pretrained_model_or_path=model_class.get_model(),
                               tokenizer=model_class.tokenizer)
trainer.train()

However, I got the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[1], line 25
     19 # peft can be in ["lora", "prefixtuning"]
     20 trainer = CodeT5Seq2SeqTrainer(train_dataset=train_dataset, 
     21                                 validation_dataset=val_dataset, 
     22                                 peft="lora",
     23                                 pretrained_model_or_path=model_class.get_model(),
     24                                 tokenizer=model_class.tokenizer)
---> 25 trainer.train()

File ~/.conda/envs/codetf/lib/python3.8/site-packages/codetf/trainer/base_trainer.py:54, in BaseTrainer.train(self)
     53 def train(self):
---> 54     self.trainer.train()

File ~/.conda/envs/codetf/lib/python3.8/site-packages/transformers/trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1640     self.model_wrapped = self.model
   1642 inner_training_loop = find_executable_batch_size(
   1643     self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1644 )
-> 1645 return inner_training_loop(
   1646     args=args,
   1647     resume_from_checkpoint=resume_from_checkpoint,
   1648     trial=trial,
   1649     ignore_keys_for_eval=ignore_keys_for_eval,
   1650 )

File ~/.conda/envs/codetf/lib/python3.8/site-packages/accelerate/utils/memory.py:132, in find_executable_batch_size.<locals>.decorator(*args, **kwargs)
    130     raise RuntimeError("No executable batch size found, reached zero.")
    131 try:
--> 132     return function(batch_size, *args, **kwargs)
    133 except Exception as e:
    134     if should_reduce_batch_size(e):

File ~/.conda/envs/codetf/lib/python3.8/site-packages/transformers/trainer.py:1756, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   1754         model = self.accelerator.prepare(self.model)
   1755     else:
-> 1756         model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
   1757 else:
   1758     # to handle cases wherein we pass "DummyScheduler" such as when it is specified in DeepSpeed config.
   1759     model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
   1760         self.model, self.optimizer, self.lr_scheduler
   1761     )

File ~/.conda/envs/codetf/lib/python3.8/site-packages/accelerate/accelerator.py:1182, in Accelerator.prepare(self, device_placement, *args)
   1180     result = self._prepare_megatron_lm(*args)
   1181 else:
-> 1182     result = tuple(
   1183         self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
   1184     )
   1185     result = tuple(self._prepare_one(obj, device_placement=d) for obj, d in zip(result, device_placement))
   1187 if tpu_should_fix_optimizer or self.mixed_precision == "fp8":
   1188     # 2. grabbing new model parameters

File ~/.conda/envs/codetf/lib/python3.8/site-packages/accelerate/accelerator.py:1183, in <genexpr>(.0)
   1180     result = self._prepare_megatron_lm(*args)
   1181 else:
   1182     result = tuple(
-> 1183         self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
   1184     )
   1185     result = tuple(self._prepare_one(obj, device_placement=d) for obj, d in zip(result, device_placement))
   1187 if tpu_should_fix_optimizer or self.mixed_precision == "fp8":
   1188     # 2. grabbing new model parameters

File ~/.conda/envs/codetf/lib/python3.8/site-packages/accelerate/accelerator.py:1022, in Accelerator._prepare_one(self, obj, first_pass, device_placement)
   1020     return self.prepare_data_loader(obj, device_placement=device_placement)
   1021 elif isinstance(obj, torch.nn.Module):
-> 1022     return self.prepare_model(obj, device_placement=device_placement)
   1023 elif isinstance(obj, torch.optim.Optimizer):
   1024     optimizer = self.prepare_optimizer(obj, device_placement=device_placement)

File ~/.conda/envs/codetf/lib/python3.8/site-packages/accelerate/accelerator.py:1308, in Accelerator.prepare_model(self, model, device_placement, evaluation_mode)
   1306 model._original_forward = model.forward
   1307 if self.mixed_precision == "fp16" and is_torch_version(">=", "1.10"):
-> 1308     model.forward = MethodType(torch.cuda.amp.autocast(dtype=torch.float16)(model.forward.__func__), model)
   1309 elif self.mixed_precision == "bf16" and self.distributed_type != DistributedType.TPU:
   1310     model.forward = MethodType(
   1311         torch.autocast(device_type=self.device.type, dtype=torch.bfloat16)(model.forward.__func__), model
   1312     )

AttributeError: 'function' object has no attribute '__func__'
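
The failing line assumes model.forward is a bound method (which carries __func__); by this point something, most likely the PEFT wrapping, has already replaced forward with a plain function on the instance. A minimal sketch of the Python mechanics, independent of CodeTF (the Tiny module and the lambda wrapper are illustrative stand-ins, not PEFT's actual code):

import torch

class Tiny(torch.nn.Module):
    def forward(self, x):
        return x

model = Tiny()
assert hasattr(model.forward, "__func__")  # bound method: __func__ exists

# Simulate a wrapper replacing forward with a plain function on the instance:
model._original_forward = model.forward
model.forward = lambda x: model._original_forward(x)

print(type(model.forward))  # <class 'function'>
model.forward.__func__      # AttributeError, as in the traceback above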

Logging Output:

/home/paul/.conda/envs/codetf/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/paul/.conda/envs/codetf/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA SETUP: CUDA runtime path found: /home/paul/.conda/envs/tf/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /home/paul/.conda/envs/codetf/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/home/paul/.conda/envs/codetf/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/paul/.conda/envs/codetf did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
Found cached dataset code_x_glue_tc_text_to_code (/home/paul/.cache/huggingface/datasets/code_x_glue_tc_text_to_code/default/0.0.0/059898ce5bb35e72c699c69af37020002b38b251734ddaeedef30ae7e6292717)
100%|██████████| 3/3 [00:00<00:00, 13.97it/s]
trainable params: 884736 || all params: 223766784 || trainable%: 0.3953830788397978
/home/paul/.conda/envs/codetf/lib/python3.8/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(

Deps:

Package                  Version
------------------------ ----------
absl-py                  1.4.0
accelerate               0.20.3
aiohttp                  3.8.4
aiosignal                1.3.1
antlr4-python3-runtime   4.9.3
anyio                    3.7.0
argon2-cffi              21.3.0
argon2-cffi-bindings     21.2.0
arrow                    1.2.3
asttokens                2.2.1
async-lru                2.0.2
async-timeout            4.0.2
attrs                    23.1.0
Babel                    2.12.1
backcall                 0.2.0
beautifulsoup4           4.12.2
bitsandbytes             0.39.0
bleach                   6.0.0
certifi                  2023.5.7
cffi                     1.15.1
charset-normalizer       3.1.0
click                    8.1.3
colorama                 0.4.6
comm                     0.1.3
datasets                 2.12.0
debugpy                  1.6.7
decorator                5.1.1
defusedxml               0.7.1
dill                     0.3.6
evaluate                 0.4.0
exceptiongroup           1.1.1
executing                1.2.0
fastjsonschema           2.17.1
filelock                 3.12.1
fqdn                     1.5.1
frozenlist               1.3.3
fsspec                   2023.6.0
huggingface-hub          0.14.1
idna                     3.4
importlib-metadata       6.6.0
importlib-resources      5.12.0
iopath                   0.1.10
ipykernel                6.23.1
ipython                  8.12.2
isoduration              20.11.0
jedi                     0.18.2
Jinja2                   3.1.2
joblib                   1.2.0
json5                    0.9.14
jsonpointer              2.3
jsonschema               4.17.3
jupyter_client           8.2.0
jupyter_core             5.3.0
jupyter-events           0.6.3
jupyter-lsp              2.2.0
jupyter_server           2.6.0
jupyter_server_terminals 0.4.4
jupyterlab               4.0.2
jupyterlab-pygments      0.2.2
jupyterlab_server        2.22.1
lxml                     4.9.2
MarkupSafe               2.1.3
matplotlib-inline        0.1.6
mistune                  2.0.5
multidict                6.0.4
multiprocess             0.70.14
nbclient                 0.8.0
nbconvert                7.4.0
nbformat                 5.9.0
nest-asyncio             1.5.6
nltk                     3.8.1
notebook_shim            0.2.3
numpy                    1.21.6
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
omegaconf                2.3.0
overrides                7.3.1
packaging                23.1
pandas                   1.3.5
pandocfilters            1.5.0
parso                    0.8.3
peft                     0.3.0
pexpect                  4.8.0
pickleshare              0.7.5
Pillow                   9.5.0
pip                      23.0.1
pkgutil_resolve_name     1.3.10
platformdirs             3.5.3
portalocker              2.7.0
prometheus-client        0.17.0
prompt-toolkit           3.0.38
psutil                   5.9.5
ptyprocess               0.7.0
pure-eval                0.2.2
pyarrow                  12.0.0
pycparser                2.21
Pygments                 2.15.1
pyparsing                3.0.7
pyrsistent               0.19.3
python-dateutil          2.8.2
python-json-logger       2.0.7
pytz                     2023.3
PyYAML                   6.0
pyzmq                    25.1.0
regex                    2023.6.3
requests                 2.31.0
responses                0.18.0
rfc3339-validator        0.1.4
rfc3986-validator        0.1.1
rouge-score              0.1.2
sacrebleu                2.3.1
safetensors              0.3.1
salesforce-codetf        1.0.1.1
scikit-learn             1.0.2
scipy                    1.10.1
Send2Trash               1.8.2
setuptools               67.8.0
six                      1.16.0
sniffio                  1.3.0
soupsieve                2.4.1
stack-data               0.6.2
tabulate                 0.9.0
terminado                0.17.1
threadpoolctl            3.1.0
tinycss2                 1.2.1
tokenizers               0.13.3
tomli                    2.0.1
torch                    1.13.1
torchvision              0.14.1
tornado                  6.3.2
tqdm                     4.63.0
traitlets                5.9.0
transformers             4.30.1
tree-sitter              0.20.1
typing_extensions        4.6.3
uri-template             1.2.0
urllib3                  2.0.3
wcwidth                  0.2.6
webcolors                1.13
webencodings             0.5.1
websocket-client         1.5.3
wheel                    0.38.4
xxhash                   3.2.0
yarl                     1.9.2
zipp                     3.15.0

System:

OS: Ubuntu 22.04.2 LTS (WSL)
GPU: RTX 4070 TI
CUDA 11.8

If you need any further information to assist, feel free to ask!

@Paul-B98
Contributor Author

Even after updating the dependencies to the source from GitHub, I got the same error:

pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git

@Luxios22

same issue here

@ElisaMetz

same issue here too

@Paul-B98
Contributor Author

I got it working by changing fp16 to False in the codet5.yaml training config file. While I don't think this is the right solution, it could be helpful if we could provide custom configs without having to clone the repo and edit them.
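
For reference, a hedged sketch of the kind of guard that would avoid the crash at accelerator.py:1308 without turning fp16 off; later accelerate releases ship a similar hasattr check. patch_forward_fp16 is a hypothetical helper name, not part of accelerate's public API:

from types import MethodType
import torch

def patch_forward_fp16(model: torch.nn.Module) -> torch.nn.Module:
    # Mirrors the failing line from the traceback, but guards against
    # forward having already been replaced with a plain function.
    autocast = torch.cuda.amp.autocast(dtype=torch.float16)
    if hasattr(model.forward, "__func__"):
        # Ordinary bound method: unwrap, decorate, re-bind to the instance.
        model.forward = MethodType(autocast(model.forward.__func__), model)
    else:
        # Plain function (e.g. installed by a wrapper): decorate it directly;
        # re-binding with MethodType here would misplace `self`.
        model.forward = autocast(model.forward)
    return model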

@bdqnghi
Contributor

bdqnghi commented Jun 18, 2023

We will release a new stable version very soon to fix all of these bugs. Thanks for keeping us updated on the issues!
