Introduction

Auditing a site for accessibility helps ensure that all users can access its content and functionality. Even with automated testing, manual testing remains essential for WCAG compliance, since automated tools typically detect only a portion of accessibility issues, often estimated at 20-30%.

Automated testing can catch obvious code-level problems, but it often misses context-specific issues such as:

  • Keyboard navigation
  • Screen reader compatibility
  • Meaningful alt text
  • Clarity of visual focus indicators 

Combining automated testing with manual testing produces a more complete and inclusive evaluation of a site's accessibility.


Expected Workflow for a Team

A good strategy combines automated and manual testing with a clear workflow. This ensures the site meets accessibility standards and works for everyone.

1. Automated Scan

To begin, an automated scan is typically performed as a 'first pass' to quickly detect and prioritize obvious issues. Surfacing the most common problems early reduces the time needed for manual review.
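As a rough illustration of what such a first pass can look like, here is a minimal TypeScript sketch using the open-source axe-core engine. This is an assumption for illustration only: AAArdvark runs its own scanner, and the firstPassScan helper is hypothetical.

    import axe from 'axe-core';

    // Scan the current page and log a prioritized summary of violations.
    // The 'incomplete' bucket is roughly analogous to warnings: axe could
    // not decide automatically, so a human needs to review those checks.
    async function firstPassScan(): Promise<void> {
      const results = await axe.run(document, {
        // Limit the run to WCAG 2.0 A/AA rules for a quick first pass.
        runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
      });

      for (const violation of results.violations) {
        console.log(`[${violation.impact}] ${violation.id}: ${violation.help}`);
        console.log(`  affected nodes: ${violation.nodes.length}`);
      }

      console.log(`${results.incomplete.length} checks need manual review`);
    }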

2. Review and Fix Automated Issues

After an automated scan is performed on a site, the next step is to review the reported issues and fix them. Developers or accessibility specialists can analyze the results, verify each finding, and implement code, content, or design fixes where necessary.

Fixing or remediating accessibility issues often involves updating the site’s HTML to include semantic elements (like headings, buttons, and landmarks), adjusting ARIA roles, and improving color contrast in the CSS.
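For instance, a frequent remediation is swapping a click-only div for a native button that exposes its state through ARIA. The TypeScript sketch below is a hypothetical illustration (the createMenuToggle helper and its markup are assumptions, not AAArdvark output):

    // Build an accessible menu toggle using a native button element.
    // A native <button> is focusable, keyboard-operable, and exposes an
    // implicit role, unlike a <div> with a click handler.
    function createMenuToggle(menu: HTMLElement): HTMLButtonElement {
      const toggle = document.createElement('button');
      toggle.type = 'button';
      toggle.textContent = 'Menu';
      toggle.setAttribute('aria-expanded', 'false');
      toggle.setAttribute('aria-controls', menu.id);

      toggle.addEventListener('click', () => {
        const open = toggle.getAttribute('aria-expanded') !== 'true';
        toggle.setAttribute('aria-expanded', String(open));
        menu.hidden = !open; // keep the visible state in sync with ARIA
      });

      return toggle;
    }

Color contrast fixes, by comparison, typically live entirely in the CSS and are easiest to verify with a contrast checker.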

The scanner can also return issues marked as Warnings. These need to be reviewed by the team and either marked as “Not an issue” if they don’t apply, or confirmed and addressed if they do.

Use caution when marking a parent issue as “Not an issue.” We recommend doing this only when your team has confirmed the issue doesn’t apply to this particular case.

Check out our guide on the issue levels.

3. Manual Audit

Once the site has been scanned and automated issues have been identified and reviewed, a manual audit typically follows. Team members use assistive technologies and techniques, such as screen readers and keyboard-only navigation, to simulate real user experiences on the site. Manual testing helps uncover complex or context-sensitive issues that automated testing can’t always pinpoint.

A few issues that are generally tested for in this process are:

  • Logical tab order (see the sketch below)
  • Meaningful focus indicators
  • Proper use of ARIA
  • Content comprehension
  • Consistent, accessible behavior of interactive components
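As a quick starting point for the tab order check mentioned above, a small script can list the page's focusable elements in DOM order before stepping through them with the Tab key. This is a hypothetical TypeScript helper (the listTabStops name and selector list are assumptions), and it only approximates tab order when no positive tabindex values reorder focus:

    // Log focusable elements in DOM order as a tab-order sanity check.
    const FOCUSABLE =
      'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

    function listTabStops(root: ParentNode = document): void {
      const stops = Array.from(root.querySelectorAll<HTMLElement>(FOCUSABLE))
        // Skip elements that are disabled or not currently rendered.
        .filter((el) => !el.hasAttribute('disabled') && el.offsetParent !== null);

      stops.forEach((el, i) => {
        const label = el.getAttribute('aria-label') ?? el.textContent?.trim() ?? '';
        console.log(`${i + 1}. <${el.tagName.toLowerCase()}> ${label}`);
      });
    }

Nothing replaces actually pressing Tab through the page, but a listing like this makes gaps, such as a div acting as a button, easy to spot.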

See our notes on manual auditing in AAArdvark below.

4. Ongoing Monitoring

Once remediation is performed on a site, it is up to those in charge of the site to continue monitoring it to maintain compliance. Accessibility is not a one-time task; it requires ongoing attention and upkeep, especially on an active site where new content and features are added regularly.

We recommend running automated scans regularly and conducting manual spot-checks to catch issues before they affect visitors. By combining these approaches, teams can be confident that their digital experiences remain compliant and inclusive.
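One way to automate those regular scans is a scheduled job in continuous integration. The sketch below assumes Playwright together with the @axe-core/playwright package; the monitor helper and page URLs are placeholders, and AAArdvark's own scheduled scans can serve the same purpose:

    import { chromium } from 'playwright';
    import { AxeBuilder } from '@axe-core/playwright';

    // Placeholder list of key pages to check on a schedule.
    const PAGES = ['https://example.com/', 'https://example.com/contact'];

    async function monitor(): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      let violations = 0;

      for (const url of PAGES) {
        await page.goto(url);
        const results = await new AxeBuilder({ page }).analyze();
        violations += results.violations.length;
        console.log(`${url}: ${results.violations.length} violations`);
      }

      await browser.close();
      // Fail the CI job so the team is alerted to regressions.
      if (violations > 0) process.exit(1);
    }

    monitor();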


Performing a Manual Audit

A manual audit starts in AAArdvark by adding the site in the Workspace dashboard and navigating to it.

Workspace dashboard displaying the sites added and the ability to add a new one.

Clicking on the site to be tested leads the tester to the Site Dashboard. Once everything is set up, such as pages and initial scans, Visual Mode can be used to add manual issues.

Site dashboard with a summary of the site statistics and issues. An arrow points to the left-hand navigation to locate the Visual Mode menu item.

In Visual Mode, the tester is free to manually record any issues they encounter during testing for the team to review and address.

Recording a manual issue in Visual Mode, with the ability to enter the failure and select the severity, success criterion, recommended solution, and add a comment.

In addition to recording manual issues, testers and developers can head to the Issues list to go over manual issues that require review. Filtering the issues by Type: Manual and Status: Pending will display all of the issues ready to be reviewed.

Issues filtered by Manual and Pending, specifically ready for a tester or developer to review.

Once an issue has been correctly addressed and is compliant with WCAG standards, it can be marked as resolved.


AAArdvark Features for Manual Testing

AAArdvark provides several features that streamline the manual testing process:

Accessibility Tester Role

The Accessibility Tester role, which can be assigned to a user in a Team, allows the tester to manually record issues, comment on them and update their status, assign issues to other users, run scans, and review and resolve manual issues.

A display of workspace teams with the number of users and sites assigned to them.

Creating Manual Issues via Visual Mode

Visual Mode is a powerful tool for spotting and documenting issues that automation might miss. With it, the tester can record issues manually as they encounter them, capturing anything automated scans overlooked.

Record Issues button for manually recording an issue in Visual Mode.

Reviewing Manual Issues and Addressing Them

Once manual issues are recorded, they will appear in the Issues list alongside automated issues. After addressing an issue, testers and developers can flag it as Pending Review, allowing the team to coordinate reviews.

Accessibility testers will need to verify fixes after they have been implemented. Once a fix is approved, the tester can mark the issue as resolved to close it.

Overview of issues found on a site, including image-related issues.

Filters for Issues

Filtering allows testers and developers to narrow issues by priority, severity, issue type, status, assignment, success criteria, and page.

When manually auditing and reviewing issues, filtering by Type: Manual and Status: Pending Review allows testers and developers to track an issue's progress and determine whether it has been resolved.

A manual issue is reported and marked as pending so the team can review.

Overall, these features combine to give teams a refined and streamlined process for accessibility testing and remediation in AAArdvark.


Still stuck?

File a support ticket with our five-star support team to get more help.
