
Dev_Administrator Procedures

dtuantran edited this page Oct 3, 2023 · 25 revisions

Administrator procedures

Re-build the Docker containers

If you change from one branch to another (e.g. from the Python 2 version to Python 3), you need to re-build the containers using:

docker-compose build --no-cache
docker-compose up -d

Delete all contents of database

In codalab-competitions folder, on the CodaLab instance:

docker-compose up -d
docker-compose exec django bash
python manage.py reset_db
python manage.py migrate
python manage.py syncdb --migrate
exit
docker-compose restart
docker-compose up -d

Add yourself to "staff" and "superuser"

This gives access to health/simple_status and more.

docker-compose exec django bash
# python manage.py shell_plus
>>> user = ClUser.objects.get(username="your username")
>>> user.is_superuser = True
>>> user.is_staff = True
>>> user.save()

Erase user

/!\ Always check the user's email address to ensure you are not erasing an account at the demand of another user /!\

docker-compose exec django bash
# python manage.py shell_plus
>>> user = ClUser.objects.get(username="user name")    # or (email="user email")
>>> user.delete()

Change username

Hint: check that the request was sent from the email address linked to the user account.

docker-compose exec django bash
# python manage.py shell_plus
>>> user = ClUser.objects.get(username="new_username") # should raise DoesNotExist, confirming 'new_username' is not already taken
>>> user = ClUser.objects.get(username="old_username")    # or (email="user email"), good to check if the person who contacted us owns the account
>>> user.username = 'new_username'
>>> user.save()

Confirm user's email by hand

Note that if the user couldn't receive the registration email, they may not be able to receive future emails either.

docker-compose exec django bash
# python manage.py shell_plus
>>> email = EmailAddress.objects.get(email="user email")
>>> email.verified = True
>>> email.save()

Some platform admin functionalities

How to flush the default queue?

Log into each worker to check if they are stuck:

docker logs -f compute_worker

If one worker is stuck then restart it:

docker stop compute_worker
docker rm compute_worker
then re-launch it with the full compute worker start command from the wiki

Go to RabbitMQ interface: https://<CODALAB-URL>/admin_monitoring_links/

In the "Queues" tab, find the queue and select it

This one is the default queue (virtual host: "/"):

default_queue

Then, we can flush all jobs by clicking on "Purge Messages"


To determine which workers are linked to this queue, click on its name, then open the "Consumers" tab
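If the management UI is unreachable, the same purge can be done through RabbitMQ's management HTTP API (a sketch; the host, port 15672, and guest credentials in the commented usage are assumptions that depend on your .env configuration):

```python
from urllib.parse import quote

def purge_queue_url(host, vhost, queue, port=15672):
    # The RabbitMQ management API purges a queue with:
    #   DELETE /api/queues/{vhost}/{queue}/contents
    # The vhost "/" must be percent-encoded as %2F.
    return "http://{}:{}/api/queues/{}/{}/contents".format(
        host, port, quote(vhost, safe=""), quote(queue, safe=""))

# Hypothetical usage (credentials and host are assumptions):
# import urllib.request, base64
# req = urllib.request.Request(
#     purge_queue_url("localhost", "/", "default_queue"), method="DELETE")
# auth = base64.b64encode(b"guest:guest").decode()
# req.add_header("Authorization", "Basic " + auth)
# urllib.request.urlopen(req)
```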

Delete all submissions from one user

From shell_plus:

import logging
from django.conf import settings  # usually already available in shell_plus
logger = logging.getLogger(__name__)

def delete_submission_files(submission):
    from apps.web.utils import BundleStorage
    url = submission.s3_file
    if not url:
        logger.error("Received an invalid url to convert to a key: {}".format(url))
        return
    # Remove the beginning of the URL (before bucket name) so we just have the path(key) to the file
    sub_file_key = url.split("{}/".format(settings.AWS_STORAGE_PRIVATE_BUCKET_NAME))[-1]
    # Path could also be in a format <bucket>.<url> so check that as well
    sub_file_key = sub_file_key.split("{}.{}/".format(settings.AWS_STORAGE_PRIVATE_BUCKET_NAME, settings.AWS_S3_HOST))[-1]
    BundleStorage.delete(sub_file_key)
    data_file_attrs = [
        'inputfile',
        'runfile',
        'output_file',
        'private_output_file',
        'stdout_file',
        'stderr_file',
        'history_file',
        'scores_file',
        'coopetition_file',
        'detailed_results_file',
        'prediction_runfile',
        'prediction_output_file',
        'prediction_stdout_file',
        'prediction_stderr_file',
        'ingestion_program_stdout_file',
        'ingestion_program_stderr_file'
    ]
    for data_attr in data_file_attrs:
        attr_obj = getattr(submission, data_attr)
        storage = attr_obj.storage
        if attr_obj.name:
            logger.info("Attempting to delete storage file: {}".format(attr_obj.name))
            storage.delete(attr_obj.name)
my_subs = CompetitionSubmission.objects.filter(participant__user__username='username')  # or CompetitionSubmission.objects.filter(participant__user__email="user email")
print(my_subs)  # Ensure they're all yours and look correct
for submission in my_subs:
    delete_submission_files(submission)
# Once completed successfully
my_subs.delete()

Edit the upper bound limit of the max submission size for competitions

If you need to restrict the range within which organizers can set the maximum size of user submissions for their competitions, here is the procedure:

  • Be sure to be connected with an admin account
  • Go into the Admin Competitions Manager page from the top right corner menu
  • Change the value of the Upper bound of the max submission size that you want to update (multi-select is possible).
  • Save your changes

Important: if you decrease the Upper bound of the max submission size below a competition's current Max submission size, participants will still be able to submit files up to the current Max submission size. To enforce the new limit, ask the organizer to lower the Max submission size, or change it directly from shell_plus and then notify the organizer of the change.
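The interaction described above can be sketched as a small check (a sketch only; the shell_plus fix in the comments uses a field name, max_submission_size, that is an assumption about the CodaLab model):

```python
def needs_adjustment(max_submission_size, upper_bound):
    """True when a competition's current limit exceeds the new
    upper bound, i.e. the organizer (or an admin via shell_plus)
    still has to lower it for the new bound to take effect."""
    return max_submission_size > upper_bound

# Hypothetical shell_plus fix (model and field names are assumptions):
# comp = Competition.objects.get(pk=<competition id>)
# if needs_adjustment(comp.max_submission_size, new_upper_bound):
#     comp.max_submission_size = new_upper_bound
#     comp.save()
```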

Storage analytics

The storage analytics page is accessible at <codalab-url>/health/storage. It displays the last analysis. The analysis runs every Sunday at 2am and takes hours to complete.

If you want to trigger the analysis task manually you can do the following:

  • Start the app: docker-compose up -d
  • Bash into the django container and start a python console:
    • docker-compose exec django bash
    • python manage.py shell_plus
  • Manually start the task:
    • from apps.web.tasks import create_storage_analytics_snapshot
    • eager_results = create_storage_analytics_snapshot.apply_async()
    • If you check the logs of the app you should see "Task create_storage_analytics_snapshot started" coming from the worker_site container
    • At any time you can check the status of the task: eager_results.status
    • You can also have a more detailed progress status by checking the amount of submissions with a computed sub_size attribute:
      • from apps.web.models import CompetitionSubmission
      • error_count=CompetitionSubmission.objects.filter(sub_size=-1).count()
      • not_done_count=CompetitionSubmission.objects.filter(sub_size=0).count()
      • done_count=CompetitionSubmission.objects.filter(sub_size__gt=0).count()
    • If you have to restart the task, don't worry: it will only compute the size of the submissions that haven't been computed yet.
  • Once the task is over, you should be able to see the results at <codalab-url>/health/storage
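The three counts queried above can be folded into a single progress figure; a minimal sketch (errored submissions, sub_size == -1, count as processed):

```python
def analytics_progress(done_count, not_done_count, error_count):
    """Fraction of submissions whose sub_size has been computed,
    from the three counts queried in shell_plus above."""
    total = done_count + not_done_count + error_count
    if total == 0:
        return 1.0  # nothing to process
    return (done_count + error_count) / total

# e.g. in shell_plus, after running the three .count() queries:
# print("progress: {:.0%}".format(
#     analytics_progress(done_count, not_done_count, error_count)))
```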

SSL certificate

Renewed SSL certificate should be installed before the expiration.

In case it's not done on time:

Simply copy the certificate to the right place, as specified in the .env file. Put SSL certificates in ./certs/; they are mapped to /app/certs in the container.

To renew in case it's needed:

sudo openssl genrsa -out private.key 2048
sudo openssl req -new -out cert-codalab.csr -key private.key

Then copy/paste the content of cert-codalab.csr in GoDaddy (in "renew certificate")

5 minutes later, you'll be able to download a zip containing the new certificate.

  • Concatenate the main certificate and [...]bundle[...].crt together into [...].chained.crt
  • Rename the main certificate to codalab.crt (or whatever name is used in the .env file)

Then restart docker-compose.
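To catch an expiring certificate before the deadline, a small check can be scripted with Python's standard ssl module (a sketch; the hostname in the commented usage is a placeholder for your instance):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """not_after is the 'notAfter' string from getpeercert(),
    e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Hypothetical usage against a live server (hostname is a placeholder):
# import socket
# ctx = ssl.create_default_context()
# with socket.create_connection(("<CODALAB-URL>", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="<CODALAB-URL>") as tls:
#         print(days_until_expiry(tls.getpeercert()["notAfter"]))
```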

Increase submission size limit of a competition

Go to the "Admin competitions manager":


Then click on the value of the limit for the competition, enter the desired value and click on the Save button.

Admins URL

Remark: for RabbitMQ and Flower, the port depends on the configuration of the instance (.env file). You can find these links under "Admin Monitoring Links" in your profile section.
