Manual Testing: What Automation Misses
Introduction
The UIC Digital Accessibility Engineering team relies heavily on automated testing to scale accessibility checks across digital assets. But automation isn’t enough. Some issues can only be caught through manual testing—especially with screen readers.
As we continue to improve automated testing methods, we’ve put together this short guide highlighting areas that still require manual review. You don’t need to be an expert in assistive technology; even basic testing can uncover barriers automation misses.
Here are a few key items our team focuses on during manual testing:
Reading Order and Navigation
- Check that a screen reader announces text in a logical order.
- Watch for mispronunciations; they often point to a missing or incorrect language setting (the page's lang attribute).
- Ensure alerts and live updates don't interrupt the reading flow.
- Use Tab/Shift+Tab to test keyboard focus order.
- Avoid relying on color alone (e.g., “click the green button”).
- Confirm tables are used for data, not layout.
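A few of these checks can be pre-screened with a short script that queues pages for manual review. The sketch below, using only Python's standard-library html.parser, flags tables that contain no header cells, a common sign that a table is being used for layout rather than data. This is a heuristic of our own devising, not the team's tooling: a flagged table still needs a human to confirm the problem, and a table with headers can still be misused.

```python
from html.parser import HTMLParser

class LayoutTableChecker(HTMLParser):
    """Flags <table> elements that close without any <th> cells,
    a common sign of a layout table. A heuristic pre-screen only;
    flagged tables still need manual review."""
    def __init__(self):
        super().__init__()
        self.has_th = []         # one flag per currently open <table>
        self.suspect_tables = 0  # tables that closed with no <th>

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.has_th.append(False)
        elif tag == "th" and self.has_th:
            self.has_th[-1] = True

    def handle_endtag(self, tag):
        if tag == "table" and self.has_th:
            if not self.has_th.pop():
                self.suspect_tables += 1

checker = LayoutTableChecker()
checker.feed("<table><tr><td>nav</td><td>content</td></tr></table>"
             "<table><tr><th>Name</th></tr><tr><td>Ada</td></tr></table>")
print(checker.suspect_tables)  # 1: the first table has no header cells
```

Reading order, focus order, and color reliance have no comparable shortcut; those remain purely manual checks.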
Images
- Alt text should be short but meaningful.
- Confirm it describes the image's purpose clearly.
- Avoid extra detail that adds no value.
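Whether alt text is meaningful is a human judgment, but a script can surface the obvious candidates: images with no alt attribute at all, or alt text that is just a filename. The sketch below is an illustrative heuristic, not the team's scanner; note that an empty alt ("") is deliberately left alone, since it is the correct markup for decorative images.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags whose alt text is missing or looks like a
    filename. Empty alt ("") is valid for decorative images, so it is
    not flagged. Flagged items are candidates for manual review."""
    def __init__(self):
        super().__init__()
        self.flagged = []  # (src, reason) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        src = attrs.get("src", "?")
        if alt is None:
            self.flagged.append((src, "missing alt attribute"))
        elif alt.lower().endswith((".jpg", ".jpeg", ".png", ".gif", ".svg")):
            self.flagged.append((src, "alt looks like a filename"))

checker = AltTextChecker()
checker.feed('<img src="chart.png">'
             '<img src="logo.png" alt="logo.png">'
             '<img src="seal.png" alt="University of Illinois Chicago seal">')
for src, reason in checker.flagged:
    print(f"{src}: {reason}")  # flags the first two images only
```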
Links
- Link text must describe the destination or purpose.
- Don’t use raw URLs or “click here.”
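The worst link-text patterns are predictable enough to pre-screen. The sketch below flags anchors whose visible text is boilerplate ("click here", "read more") or a raw URL; the vocabulary list is an assumption of this example, and anything it misses, or any flag it raises, still goes to a manual pass.

```python
from html.parser import HTMLParser

VAGUE = {"click here", "here", "read more", "more", "link"}  # illustrative list

class LinkTextChecker(HTMLParser):
    """Flags <a> elements whose visible text is vague boilerplate or a
    raw URL. A starting point for manual review, not an exhaustive check."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text = ""
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = ""

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            t = self.text.strip()
            if t.lower() in VAGUE or t.lower().startswith(("http://", "https://", "www.")):
                self.flagged.append(t)

checker = LinkTextChecker()
checker.feed('<a href="/report">Click here</a>'
             '<a href="/report">Read the 2025 accessibility report</a>'
             '<a href="https://uic.edu">https://uic.edu</a>')
print(checker.flagged)  # ['Click here', 'https://uic.edu']
```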
Forms and Interaction
- All fields and controls must work with only a keyboard.
- Labels should be announced correctly.
- Validation errors should be spoken by the screen reader.
- Autocomplete suggestions should be reachable and selectable with the keyboard alone.
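Whether a label is announced correctly can only be confirmed with a screen reader, but a script can flag controls that have no programmatic label at all. The sketch below matches label for attributes against control ids; it deliberately ignores wrapping labels and aria-label, so anything it flags is a candidate for manual checking, not a confirmed failure. It is an illustrative heuristic, not the team's actual tooling.

```python
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    """Matches <label for="..."> against form-control ids and reports
    controls with no matching label. Ignores wrapping <label> elements
    and aria-label, so flagged controls still need a manual check."""
    def __init__(self):
        super().__init__()
        self.label_targets = set()  # ids referenced by <label for=...>
        self.control_ids = []       # ids of visible form controls

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.label_targets.add(attrs["for"])
        elif tag in ("input", "select", "textarea") and attrs.get("type") != "hidden":
            self.control_ids.append(attrs.get("id"))

    def unlabeled(self):
        return [i for i in self.control_ids if i not in self.label_targets]

checker = LabelChecker()
checker.feed('<label for="email">Email</label><input id="email">'
             '<input id="phone">')
print(checker.unlabeled())  # ['phone']
```

Keyboard operability and spoken validation errors have no scripted shortcut; test those by unplugging the mouse and listening.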
Modified on August 21, 2025