Implementors can do a variety of things to improve their reported consistency with ACT Rules that don't involve genuinely improving their consistency with those rules. I think it would be useful for us to have a policy that describes what is, and what is not, acceptable. For example:
1. Do implementations need to run in their default configuration? Can they turn off certain behaviours, such as stricter reporting on accessibility support?
2. Can implementors report on beta versions of their product, rather than released versions? If so, how does that relate to Q1?
3. Can one implementation include results from another? For example, could Accessibility Insights include results from axe-core?
4. Can implementors report examples with open issues as untested or cantTell?
These aren't things we can really enforce, but I think it's good to have them documented and to ask implementors to agree to them.
Another topic on this: can implementors use ACT test cases as training data? This was discussed during TPAC 2023, where we concluded that this would be allowed, but that it is recommended not to.