
[BUG] No UI available #634

Closed
2 tasks done
dejankocic opened this issue May 9, 2024 · 8 comments
Labels
bug Something isn't working

Comments

@dejankocic

Prerequisites

  • I have read the documentation.
  • I have checked other issues for similar problems.

Backend

Local

Interface Used

CLI

CLI Command

autotrain app --port 8080 --host 127.0.0.1

UI Screenshots & Parameters

[screenshot attached]

Error Logs

autotrain app --port 8080 --host 127.0.0.1
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGet{Compute,Graphics,MPSCompute}RunningProcesses. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGetMemoryInfo. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
INFO | 2024-05-09 15:07:07 | autotrain.app::33 - Starting AutoTrain...
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, train_split, username, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, model_max_length, valid_split, token, rejected_text_column, data_path, scheduler, push_to_hub, trainer, warmup_ratio, prompt_text_column, weight_decay, max_grad_norm, use_flash_attention_2, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, lora_r, dpo_beta, project_name, seed, merge_adapter, disable_gradient_checkpointing, lora_dropout, model_ref, max_prompt_length, add_eos_token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, username, epochs, train_split, weight_decay, max_grad_norm, image_column, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: max_target_length, batch_size, username, train_split, epochs, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, valid_split, peft, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio, weight_decay, max_grad_norm, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, quantization, lora_r, project_name, seed, lora_dropout, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: id_column, username, train_split, num_trials, model, time_limit, numerical_columns, valid_split, task, project_name, seed, data_path, push_to_hub, categorical_columns, target_columns, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: adam_weight_decay, pre_compute_text_embeddings, validation_epochs, epochs, text_encoder_use_attention_mask, username, image_path, scale_lr, adam_epsilon, adam_beta1, checkpoints_total_limit, dataloader_num_workers, checkpointing_steps, prior_preservation, scheduler, lr_power, push_to_hub, validation_images, local_rank, logging, revision, resume_from_checkpoint, tokenizer_max_length, num_class_images, max_grad_norm, model, class_image_path, xl, validation_prompt, rank, class_prompt, adam_beta2, prior_generation_precision, warmup_steps, project_name, seed, class_labels_conditioning, prior_loss_weight, num_validation_images, center_crop, allow_tf32, sample_batch_size, num_cycles, token, tokenizer
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, tags_column, lr, valid_split, token, project_name, seed, tokens_column, max_seq_length, data_path, scheduler, push_to_hub, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
INFO | 2024-05-09 15:07:10 | autotrain.app::157 - AutoTrain started successfully
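The corrupted nvidia-ml-py warnings at the top of the log suggest their own remedy; a sketch of the reinstall, exactly as the warning text recommends:

```shell
# Force-reinstall the NVML bindings and nvitop, as the warning suggests
pip3 install --force-reinstall nvidia-ml-py nvitop
```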

Additional Information

After the application starts successfully, no UI is available.

Running environment: WSL

@dejankocic dejankocic added the bug Something isn't working label May 9, 2024
@abhishekkrthakur
Member

Is the screenshot from 127.0.0.1:8080?

@dejankocic
Author

> Is the screenshot from 127.0.0.1:8080?

Yes, the screenshot is correct and it is from 127.0.0.1:8080.

@dejankocic
Author

Any ideas?

@abhishekkrthakur
Member

Did you export HF_TOKEN before running the autotrain app command?

@dejankocic
Author

I had put HF_TOKEN in my .bashrc, but after your question I explicitly exported it before running the autotrain app command.
The result is the same: no UI.
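For reference, a minimal sketch of the sequence being described here, with the token value as a placeholder:

```shell
# Export the Hugging Face token in the current shell (placeholder value)
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx

# Then launch the app in the same session
autotrain app --port 8080 --host 127.0.0.1
```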

@abhishekkrthakur
Member

We made an update to the UI. Can you update autotrain and see if you still have this issue?
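For anyone following along, updating would look something like this (assuming the PyPI package name is autotrain-advanced):

```shell
# Upgrade AutoTrain to the latest release
pip install -U autotrain-advanced
```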

@dejankocic
Author

Hello @abhishekkrthakur, it seems to be working now. I am training locally with some test data and hope to have more to experiment with soon.

Thx,
D.

@abhishekkrthakur
Member

Good to know. I'll close this issue in that case :)
