[BUG] No UI available #634
Comments
Is the screenshot from |
Yes, the screenshot is correct, and it is from |
Any idea ... |
Did you export HF_TOKEN before |
I had put HF_TOKEN in my .bashrc, but after your question I explicitly exported it prior to running |
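For context, an entry in ~/.bashrc only takes effect in newly started shells, so exporting the token in the current shell is what makes it visible to the autotrain process. A minimal sketch, assuming the HF_TOKEN variable name from this thread and a placeholder token value:

```shell
# Export the token in the current shell so child processes inherit it;
# a ~/.bashrc entry alone only applies to shells started afterwards.
export HF_TOKEN="hf_xxx"  # hypothetical placeholder value

# Confirm a child process can see it before launching `autotrain app`:
sh -c 'test -n "$HF_TOKEN" && echo "HF_TOKEN is set"'
```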
We made an update to the UI. Can you update autotrain and see if you still have this issue?
Hello @abhishekkrthakur, it seems to be working now. I am doing the training locally with some test data and hope to have more to play with soon. Thanks, |
Good to know. I'll close this issue in that case :)
Prerequisites
Backend
Local
Interface Used
CLI
CLI Command
autotrain app --port 8080 --host 127.0.0.1
UI Screenshots & Parameters
Error Logs
autotrain app --port 8080 --host 127.0.0.1
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGet{Compute,Graphics,MPSCompute}RunningProcesses. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGetMemoryInfo. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
INFO | 2024-05-09 15:07:07 | autotrain.app::33 - Starting AutoTrain...
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, train_split, username, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, model_max_length, valid_split, token, rejected_text_column, data_path, scheduler, push_to_hub, trainer, warmup_ratio, prompt_text_column, weight_decay, max_grad_norm, use_flash_attention_2, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, lora_r, dpo_beta, project_name, seed, merge_adapter, disable_gradient_checkpointing, lora_dropout, model_ref, max_prompt_length, add_eos_token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, username, epochs, train_split, weight_decay, max_grad_norm, image_column, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: max_target_length, batch_size, username, train_split, epochs, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, valid_split, peft, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio, weight_decay, max_grad_norm, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, quantization, lora_r, project_name, seed, lora_dropout, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: id_column, username, train_split, num_trials, model, time_limit, numerical_columns, valid_split, task, project_name, seed, data_path, push_to_hub, categorical_columns, target_columns, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: adam_weight_decay, pre_compute_text_embeddings, validation_epochs, epochs, text_encoder_use_attention_mask, username, image_path, scale_lr, adam_epsilon, adam_beta1, checkpoints_total_limit, dataloader_num_workers, checkpointing_steps, prior_preservation, scheduler, lr_power, push_to_hub, validation_images, local_rank, logging, revision, resume_from_checkpoint, tokenizer_max_length, num_class_images, max_grad_norm, model, class_image_path, xl, validation_prompt, rank, class_prompt, adam_beta2, prior_generation_precision, warmup_steps, project_name, seed, class_labels_conditioning, prior_loss_weight, num_validation_images, center_crop, allow_tf32, sample_batch_size, num_cycles, token, tokenizer
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, tags_column, lr, valid_split, token, project_name, seed, tokens_column, max_seq_length, data_path, scheduler, push_to_hub, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
INFO | 2024-05-09 15:07:10 | autotrain.app::157 - AutoTrain started successfully
Additional Information
After the application starts successfully, no UI is available.
Running environment: WSL
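When the app logs "started successfully" but the browser shows nothing, one quick check is whether anything is actually listening on the host/port passed to `autotrain app` (127.0.0.1:8080 here); under WSL, a server bound inside the distro may also not be reachable from a Windows browser. A bash sketch of the port probe:

```shell
# Probe 127.0.0.1:8080 using bash's built-in /dev/tcp pseudo-device.
# The subshell opens fd 3 on the connection and closes it on exit.
if (exec 3<>/dev/tcp/127.0.0.1/8080) 2>/dev/null; then
  echo "something is listening on 127.0.0.1:8080"
else
  echo "nothing is listening on 127.0.0.1:8080"
fi
```

If the port is open inside WSL but the Windows browser still cannot reach it, the issue is likely WSL networking rather than autotrain itself.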