Your Website Can Pass Accessibility Scans (Checkers) Despite Being Inaccessible

I received an email from someone asking about a freelance service. The service promised to manually go through his website, fix accessibility issues, and ensure each page would pass both a WAVE and an Axe scan. In my reply, one of the things I noted was to keep in mind that you can get scan errors down to zero without truly remediating for accessibility, and he wrote back asking how that is possible.

Well, it’s possible because scans are based solely on rulesets. They’re very powerful but also very simple, in that they’re only looking for certain things. A scan has a defined set of rules: if your code passes those rules, no accessibility issues are returned, and if it doesn’t, you get errors and/or alerts. Keep in mind that this is pure automation, so a scan can only catch what automation is able to catch.

With a scan, it’s looking through your code for scenarios of the form “if this exists, return an error.” But you can code your website in a way that triggers no rule, so no error is returned. I’ve got a blog post where someone showed all the different ways he could build completely different websites and still get a 100% perfect score on Google Lighthouse. This is not to dismiss scans; it’s simply to bring awareness to their limitations.
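To make the “if this exists, return an error” idea concrete, here is a minimal sketch of how one scanner rule might work. This is my own simplified illustration, not how WAVE, Axe, or Lighthouse are actually implemented; the rule name and markup are hypothetical.

```python
# A toy version of a single scanner rule: flag any <img> with no alt
# attribute. Real tools run many such rules, but the shape is the same:
# a defined pattern in the code triggers an error; anything else passes.
from html.parser import HTMLParser

class MissingAltRule(HTMLParser):
    """Flags any <img> element that has no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.errors.append("img missing alt attribute")

# alt="x" satisfies the rule even though "x" tells a screen reader
# user nothing about the image -- a false negative.
page = '<img src="chart.png" alt="x">'
rule = MissingAltRule()
rule.feed(page)
print(rule.errors)  # [] -- zero errors returned, still inaccessible
```

The rule can only check that the attribute exists; it has no way to judge whether the alt text is meaningful, which is exactly the gap a human reviewer fills.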

Scans are limited in more ways than one, but in this instance they’re limited in the sense that they can only flag accessibility issues their rulesets cover, and the number of rules we can have is limited because many accessibility issues simply require manual work. They require someone to manually inspect the page and determine whether or not an accessibility issue exists.

One example of this is text embedded within an image. A scan is not going to know whether text is embedded within an image; it only knows that an image exists. So if I have a bunch of text within an image, let’s say it’s an infographic with 100 words embedded in it, a scan is not going to know that, and it can’t then ask whether that meaning is conveyed in the alternative text description or in a longer description outside of the alternative text value.
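As a hypothetical illustration (the file name and alt text below are made up), both versions of this markup pass an automated “images must have alt text” rule, but only a human reviewing the page can tell that the first one conveys none of the infographic’s content:

```html
<!-- Passes automated checks: an alt attribute exists,
     but none of the ~100 words in the infographic are conveyed. -->
<img src="growth-infographic.png" alt="Infographic">

<!-- Also passes, and actually accessible: the alt text summarizes
     the image, and a longer description carries the full content. -->
<img src="growth-infographic.png"
     alt="Infographic: five stages of plant growth, described below.">
<p>Stage one: the seed germinates ... (full text of the infographic)</p>
```

A scan sees both as identical with respect to its rule; the difference only shows up under manual review.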

There are several other examples where automation is just limited in what can be caught. The primary concern is going to be false negatives, where a scan clears you but accessibility issues remain. And so that’s what I was getting at in my reply: there are ways to work around scans so that we aren’t triggering their rulesets, and because we’re not breaking their rules, no errors are returned. But that doesn’t mean we’ve made our website accessible. It simply means we’ve found which errors are being returned and worked our way around them.

It’s important to know that we can have a perfect score and yet have a completely inaccessible website. I’ll link to this article in the description, which does a really good job of showing that it’s easy to work around these rulesets and still get a perfect score. Again, this doesn’t dismiss scans; they’re very helpful, but it’s just important to know their limitations. When you’re aware of their limitations, then you have better context for how to view them, use them, and understand the different services that are being offered.