AODA compliance and engineering teams: what Ontario businesses need to know
Ontario's accessibility law has teeth, and enforcement is tightening. Here is what engineering teams building web apps need to understand about AODA, WCAG 2.2, and the cost of doing nothing.
If you are building software in Ontario, you have heard of AODA — the Accessibility for Ontarians with Disabilities Act. You may have heard the compliance dates, skimmed the requirements, and filed it under "someone else's problem." For a lot of engineering teams, that is where it stayed.
That is changing. Enforcement is getting more specific, fines are getting harder to ignore, and procurement teams at large organizations are starting to require accessibility conformance reports before they sign contracts. If your product has a web-facing surface and you sell to Ontario businesses or the Ontario public sector, AODA compliance is not optional — it is a sales prerequisite.
This post is a practical guide for engineering teams, not a legal brief. We will cover what AODA actually requires from a technical standpoint, what WCAG 2.2 means in practice, where most teams fall short, and how to close the gap without a six-month remediation project.
What AODA requires, in engineering terms
AODA's technical requirements live inside the Integrated Accessibility Standards Regulation (IASR). The relevant section for software teams is the "Information and Communications" standard, which requires public-facing web content to conform to WCAG 2.0 Level AA (excluding the live-captioning and pre-recorded audio-description criteria) as the current legal baseline.
In practice, the industry has moved to WCAG 2.2, and regulators expect you to meet the spirit of the most current standard even if the letter of the law references an older version. If you are starting compliance work today, target WCAG 2.2 Level AA. It carries forward every 2.0 AA criterion except the now-obsolete 4.1.1 Parsing and adds new ones on top, so meeting it covers the legal baseline as well.
The WCAG success criteria that matter most for typical web applications:
Perceivable. Every non-text element (images, icons, charts, video) needs a text alternative. Color cannot be the only way to convey information. Contrast ratios must meet minimum thresholds (4.5:1 for normal text, 3:1 for large text and UI components).
Operable. Every interactive element must be reachable and usable with a keyboard alone. Focus order must make sense. No content should flash more than three times per second. Users need enough time to read and interact with content.
Understandable. Labels and instructions must be clear. Error messages must identify the problem and suggest a fix. Navigation must be consistent across pages.
Robust. Your HTML must parse correctly. ARIA roles and properties must be used according to spec. Content must work with assistive technology — screen readers, switch devices, voice control.
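Several of these criteria reduce to arithmetic that a scanner can verify mechanically. As an illustration, the contrast thresholds under Perceivable come straight from the WCAG relative-luminance formula. A minimal sketch in JavaScript (assumes 8-bit sRGB channels; function names are ours, not part of any spec):

```javascript
// Relative luminance per WCAG 2.x: linearize each sRGB channel,
// then apply the standard R/G/B weights.
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05).
// Ranges from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(fg, bg) {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort(
    (a, b) => b - a
  );
  return (l1 + 0.05) / (l2 + 0.05);
}

// Black on white hits the 21:1 maximum; #767676 gray on white lands
// just past the 4.5:1 threshold for normal text at Level AA.
```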
That is a lot of criteria. The good news is that most of them map to concrete, testable rules that a scanner can check automatically. The bad news is that most codebases have never been scanned.
Where engineering teams actually fall short
In our experience building inklu (an automated WCAG scanning and remediation tool), the violations we see most often in web applications are not exotic edge cases. They are basics:
Missing or empty alt text on images. This is the single most common violation we detect. Every <img> needs an alt attribute. Decorative images need alt="" (empty, not absent). If your app renders user-uploaded images without alt text, every one of those images is a violation.
Insufficient color contrast. Designers pick colors that look good on their calibrated monitors. Users with low vision or color blindness experience something different. A contrast ratio of 3.9:1 might look fine in Figma — WCAG requires 4.5:1 for normal text. This is a math problem, not a taste problem, and it is testable.
Interactive elements without keyboard access. A <div onClick={...}> is invisible to a keyboard user. It has no focus state, no role, no keyboard event handler. Buttons should be <button>. Links should be <a>. Custom components need ARIA roles, tabIndex, and onKeyDown handlers.
Form inputs without associated labels. A <label> must be programmatically linked to its <input> via htmlFor / id or by nesting. Placeholder text is not a label. Screen readers cannot identify unlabeled fields.
Missing document language. Your <html> tag needs a lang attribute. Without it, screen readers cannot select the correct pronunciation rules.
These five categories account for the majority of automated findings on most web apps. They are all fixable without redesigning anything — they are code-level issues, not design-level issues.
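Four of those five fixes are visible directly in the markup (contrast lives in your stylesheet). A sketch of what the corrected code looks like, with illustrative element names and values:

```html
<!-- Illustrative sketch, not a prescription. -->
<html lang="en"> <!-- document language: screen readers pick pronunciation rules from this -->
  <body>
    <!-- Informative image: describe what it conveys. -->
    <img src="chart.png" alt="Q3 revenue by region" />
    <!-- Decorative image: empty alt, so assistive tech skips it. -->
    <img src="divider.png" alt="" />

    <!-- A real <button>, not a click-handling <div>: focusable and
         keyboard-operable by default, with the correct role for free. -->
    <button type="button">Save</button>

    <!-- Label programmatically linked to its input via for/id.
         The placeholder is a hint, not a substitute for the label. -->
    <label for="email">Email address</label>
    <input id="email" type="email" placeholder="you@example.com" />
  </body>
</html>
```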
The cost of not fixing it
AODA violations carry fines of up to $100,000 per day for corporations. In practice, enforcement has historically been complaint-driven and fines have been modest. That is changing in two ways.
First, procurement. Large Ontario organizations — provincial agencies, municipalities, universities, hospitals, and increasingly private-sector enterprises — are adding WCAG conformance to their vendor evaluation criteria. If your SaaS product cannot produce an accessibility conformance report, you lose the deal. Not because of a fine — because of a checkbox on the procurement form.
Second, the legal landscape in the US (ADA Title II and III) is creating precedent that Canadian courts and regulators are watching. Thousands of web accessibility lawsuits have been filed in the US, and the number grows each year. Ontario has not seen the same litigation volume, but the regulatory framework exists for it.
The cheapest time to fix accessibility is before you ship the code. The second cheapest time is now, before a procurement team asks for your VPAT (Voluntary Product Accessibility Template) or a regulator files a complaint.
How to start (without a six-month project)
Step one: scan. Run an automated scanner against your production app and your codebase. This gives you a baseline count of violations by severity and WCAG criterion. Most teams are surprised by the number — hundreds or thousands of violations on a medium-sized app is normal, not alarming. The point is to know the number.
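Turning raw scanner output into that baseline is a small amount of code. As a sketch, here is how you might tally failing elements per severity from an axe-core-style results object (assumes the standard axe shape: a violations array of { id, impact, nodes } entries; the function name is ours):

```javascript
// Count failing elements per impact level ("critical", "serious",
// "moderate", "minor") from an axe-core results object.
function summarizeBySeverity(results) {
  const counts = {};
  for (const violation of results.violations) {
    const impact = violation.impact || "unknown";
    // Each node is one element on the page that fails this rule.
    counts[impact] = (counts[impact] || 0) + violation.nodes.length;
  }
  return counts;
}
```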
Step two: fix the automatable stuff. The violations listed above — alt text, contrast, keyboard access, labels, document language — can be fixed with targeted code changes. Many of these fixes are small: adding an attribute, changing a color value, swapping a <div> for a <button>. inklu generates these fixes as GitHub pull requests, so the fix workflow is the same as any other code change: review, approve, merge.
Step three: prioritize the manual stuff. The violations that scanners cannot catch — confusing navigation flows, complex ARIA widget behavior, cognitive load issues — need human testing, ideally with people who use assistive technology daily. This is where you spend your accessibility budget after the automatable work is done.
Step four: integrate into CI. Once the initial backlog is resolved, the goal is to prevent regression. A scanner running on every push or PR catches new violations before they reach production. This turns accessibility from a periodic audit into a continuous property of the codebase.
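As a rough sketch of step four, here is a minimal GitHub Actions job that builds the app, serves it locally, and fails the check when axe-core's open-source CLI finds violations. The build and serve commands, port, and output directory are placeholders for your own pipeline:

```yaml
# Sketch: accessibility gate on every pull request.
name: accessibility
on: [pull_request]
jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      # Serve the built app in the background (placeholder commands).
      - run: npx serve dist & sleep 2
      # --exit makes the CLI return a non-zero code on violations,
      # which fails the CI check.
      - run: npx @axe-core/cli http://localhost:3000 --exit
```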
What inklu does in this picture
inklu handles steps one, two, and four. It scans your codebase against WCAG 2.2 using axe-core (the industry-standard open-source rule engine) plus our own proprietary rules, generates AI-powered code fixes, and opens pull requests on your GitHub repo. Your team reviews and merges the fixes like any other PR.
We also generate formatted compliance reports for AODA, ADA Title II, and the European Accessibility Act — the documentation your compliance team or procurement contact needs.
If you want to see what inklu finds on your own code, book a demo and we will set your team up with a 14-day evaluation. Or join the waitlist if you want to try it on your own when self-serve signup launches.
Book a demo at inklu.io or email hello@inklu.io.