Unraveling Sales Copy for Automated Website Accessibility “Testing” (Scan)

I am screen-recording the silktide.com accessibility sales page, and specifically I’m looking at the section titled How effective is automated web accessibility testing? The reason I’m recording this video is that I want to highlight the difference between the perception that can be created by a sales page versus reality.

So it starts: let’s start by getting the elephant in the room out of the way. Automated accessibility testing alone will not make your website 100% accessible. Actually, an automated scan won’t make your website accessible at all. You have to manually remediate the website to make it accessible.

And then I also disagree with the word testing; I think it’s misplaced here. That’s because, within the context of web accessibility and user testing services, user testing involves professionals with disabilities inspecting a website manually and looking for any practical or technical issues that exist on the website.

So that’s testing in the truest sense of the word. An automated scan is just a scan that returns results based on rule sets. So there’s a significant difference between a scan and testing.
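To make that concrete, here’s a minimal sketch of what a rule-based check looks like under the hood; the rule, class name, and sample HTML are all hypothetical and only use the Python standard library, not any real scanner’s rule set.

```python
# Hypothetical sketch: a "scan" is just mechanical rules run against markup.
# This rule flags <img> tags missing a non-empty alt attribute.
from html.parser import HTMLParser

class AltTextRule(HTMLParser):
    """Flags <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.violations.append(attrs.get("src", "<unknown>"))

html = '<img src="logo.png"><img src="chart.png" alt="Sales chart">'
rule = AltTextRule()
rule.feed(html)
print(rule.violations)  # → ['logo.png']
```

Note what the rule cannot do: it can tell that alt text exists, but it cannot tell whether that text is a meaningful description of the image — that judgment still requires a human reviewer.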

The next paragraph continues: always combine automated testing with manual auditing. I think this also paints the wrong picture. When you perform a manual audit, everything is being reviewed manually, regardless of the use of an automated scan.

You almost always use an automated scan because it does help. It’s very efficient: it helps you quickly find accessibility issues and reduces human error, but you still have to manually review the issues it flags.

But you’re not combining the automated scan results with your manual audit of the other accessibility issues outside of the issues that can be flagged by an automated scan. No, you are performing the audit and you are manually reviewing everything.

You use an automated scan to reduce the incidence of error and to find some issues more quickly, but you’re still manually reviewing everything, so you’re not really combining them.

An automated scan is simply a tool that is used when manually auditing a website. Then it says the reason for this is that some WCAG criteria are ambiguous. I would have included the word success; I would have said WCAG success criteria. But continuing on, the real reason that you must manually audit a website is that scans are extremely limited in what they can flag or detect.

With WCAG 2.1 conformance level AA, there are 50 success criteria. An automated scan (and it depends on the scan, and it depends on who you ask) may flag or partially flag issues under maybe 12 to 14 of those success criteria.

So you’re really looking at only about 25% of accessibility issues under WCAG 2.1 AA that can effectively be flagged or partially flagged with a scan.

So that’s the real reason that you use an automated scan. The WCAG success criteria actually aren’t ambiguous. They’re fairly straightforward, especially once you get past the technical jargon.

What the success criteria require is straightforward. Now, when it comes to application, there may be a few instances where applying a success criterion can be subjective, but I would hardly call them ambiguous.