thibaudcolas/django_admin_tests

Sample Django project with automated accessibility CI tests for Django, based on Pa11y and Lighthouse, inspired by wagtail-tooling. View latest report

Using the demo

  1. Clone this repo.
  2. pip install -r requirements.txt
  3. ./manage.py migrate (may not be necessary)
  4. ./manage.py runserver

You can create your own superuser or use the existing one:

  • Username: admin
  • Password: correcthorsebatterystaple
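
If you'd rather create your own account, Django's built-in command will prompt you for credentials:

./manage.py createsuperuser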

There is already a database included with data. If you want to add more data, there is a management command for fetching data from the Spotify API:

./manage.py import_data <artist_id_1> <artist_id_2> ...

To use this you need to set up a Spotify app on their website and set the following environment variables:

  • SPOTIPY_CLIENT_ID
  • SPOTIPY_CLIENT_SECRET

Imports can take a while, so it's a good idea to fetch only one artist at a time to avoid rate limits and other issues. Not every album or track will be downloaded: only whatever is on the first page of results for each artist.
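
For example, to import a single artist with credentials set in the environment (all angle-bracket values are placeholders):

export SPOTIPY_CLIENT_ID=<your client ID>
export SPOTIPY_CLIENT_SECRET=<your client secret>
./manage.py import_data <artist_id>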

Contents

  • Demo site set up by @knyghty for Django development.
  • Automated accessibility tests of the Django admin with Axe via Pa11y, for a range of predefined scenarios.
  • Automated Lighthouse accessibility reports for all page-level scenarios.
  • Bespoke report generation based on the test results.

Understanding the report

The report is based on Axe and HTML CodeSniffer, which can only find 30 to 40% of accessibility issues. It's very useful to run multiple tools together to find as much as possible, but they often report the same issues.

As of now, the tests always run on the whole page without any filtering of errors, which means page-level errors are likely to be reported on multiple pages. This is especially true for pages relying on browser automation, which are likely to show all "page load" errors as well as those triggered by additional interactions.

The Lighthouse reports are provided for reference – a score of 100% doesn’t mean a page is accessible. Lighthouse’s accessibility audits are based on Axe, and as such shouldn’t contain any issues not identified elsewhere.

Local setup

Requirements: nvm, Python 3.9.

# First, clone the repository and install the Node and Python dependencies.
git clone git@github.com:thibaudcolas/django_admin_tests.git
cd django_admin_tests
nvm use
npm install
virtualenv -p python3.9 .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
# Then, set up the demo site
./manage.py migrate
./manage.py loaddata fixtures.json
./manage.py runserver
# Prepare for running the test suite.
mkdir -p pa11y/lighthouse pa11y/screenshots

Running the test suite

To run the test suite:

npm run test
# And generate the report:
node pa11y/report.js
  • pa11y.json contains the list of all reported issues.
  • report.html is a high-level overview generated from the list of issues and the scenarios.
  • screenshots/ contains the screenshots for all scenarios (including sub-state scenarios).
  • lighthouse/ contains the Lighthouse reports for all scenarios (including sub-state scenarios, but without any browser interaction taken into account).
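
As a quick sanity check, something like the following counts the reported issues, assuming pa11y.json is a JSON array of issue objects and jq is installed (adjust the path to wherever the file is written in your checkout):

jq length pa11y.json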

Scope for audits

  • Django admin (to be refined)
  • Django output
  • High-profile third-party packages
  • Docs

Improving the Pa11y test suite

The test suite is an unusual Pa11y setup:

  • pa11y/scenarios.js contains all of the test cases, with more metadata than strictly needed, to facilitate generating custom reports based on that metadata.
    • The scenarios also inherit from their parents, grouping related scenarios for ease of understanding.
  • pa11y/test.js runs the tests. The run starts by logging into the admin (only once), and each test with Pa11y is followed by a test with Lighthouse.
    • The test runs produce a single JSON file with all of the issues decorated with additional metadata from the scenarios, as well as Lighthouse results.
  • pa11y/report.js runs on the generated test results, separately from the test runs. This makes it easy to iterate on the report format (see the example below).
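
In practice, this split means the slow browser runs only need to happen once while iterating on the report format:

npm run test          # slow: runs Pa11y and Lighthouse for every scenario
node pa11y/report.js  # fast: regenerates the report from the existing JSON results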