Currently, the biggest known accessibility issue we've got is that not every user can add annotations in legislation processes (issue #4831).
However, there are other accessibility issues that make it harder to use our site. Even if (on average) we had one or two issues per page, since we've got hundreds of pages, that would mean hundreds of accessibility issues.
There are many automated tools checking for accessibility issues. However, most of them have two important limitations: they can't check pages that are only available to signed-in users with specific roles, and they tend to report false positives. So, while these tools might be useful up to a point, they're pretty limited.
The axe accessibility testing engine solves both issues, since it can be executed while running our test suite (meaning we can sign in users with any role before running it) and it's built with a "zero false positives" philosophy in mind.
Since right now running axe in our test suite reports too many errors, we could take the following approach.
First, run the test suite taking only critical issues into account, by applying the following patch:
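The patch itself isn't reproduced above; conceptually, it makes the suite ignore every axe violation whose impact is below critical. A minimal sketch of that filtering in plain Ruby, assuming violations have already been parsed from axe-core's JSON results (the helper name `critical_violations` is ours, not part of any gem):

```ruby
require "json"

# Keep only violations whose "impact" field is "critical".
# axe-core reports one of four impact levels for each violation:
# "minor", "moderate", "serious" and "critical".
def critical_violations(violations)
  violations.select { |violation| violation["impact"] == "critical" }
end

# Example with the shape axe-core uses in its JSON results:
report = JSON.parse('[
  {"id": "label", "impact": "critical"},
  {"id": "color-contrast", "impact": "serious"}
]')

puts critical_violations(report).map { |violation| violation["id"] }
```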
With this, only critical errors are reported. We can have a look at the errors we get and then tackle them one at a time. For example, we could start checking that all form inputs are associated with labels (pull request #5509). That's done by replacing the previous patch with this one:
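That second patch isn't reproduced here either; the idea is to restrict the report to a single axe-core rule, in this case `label`, which checks that every form element has an associated label. Sketched in the same plain-Ruby style (the helper name is ours, for illustration only):

```ruby
# Keep only violations of one specific axe-core rule, identified by
# its rule id. The "label" rule checks that form elements have labels.
def violations_for_rule(violations, rule_id)
  violations.select { |violation| violation["id"] == rule_id }
end

report = [
  { "id" => "label", "impact" => "critical" },
  { "id" => "image-alt", "impact" => "critical" }
]

puts violations_for_rule(report, "label").size
```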
We could then solve those issues, run the test suite with the critical-issues patch once again, and continue until we've solved all critical issues. After that, repeat the process for the serious issues.
One important thing to take into account: the fact that the report generated by axe doesn't contain any false positives also means it doesn't report all possible issues. We still need to keep working on accessibility issues that axe doesn't detect.
Another important question is: what should we do regarding automated testing once we solve a certain problem? Testing accessibility on every page under every condition is great, but it would make our test suite much slower. Should we only execute these tests before a release, to check for regressions? And when should we run them on pull requests adding new features?
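One possible answer to the scheduling question, sketched with RSpec's standard metadata filtering (the `:accessibility` tag and the `RUN_ACCESSIBILITY_TESTS` variable are hypothetical names, not something our suite defines today): tag the accessibility specs and exclude them by default, enabling them only before a release or on demand.

```ruby
# spec/rails_helper.rb (sketch)
RSpec.configure do |config|
  # Skip examples tagged :accessibility unless explicitly requested,
  # e.g. before a release: RUN_ACCESSIBILITY_TESTS=1 bundle exec rspec
  unless ENV["RUN_ACCESSIBILITY_TESTS"]
    config.filter_run_excluding :accessibility
  end
end

# A spec would then opt in with the tag:
# describe "Budgets", :accessibility do ... end
```

This keeps the regular suite fast while still letting us run the full accessibility checks whenever we want a regression sweep.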