Taking Accessibility to the Next Dimension: Thoughts About Canvas 3D
@kliehm
HTML WG & HTML Accessibility Task Force.
Originally people thought canvas was just about painting boring images on a bitmap...
... or for adding wet floor effects for photos.
Then people started getting creative, for example in this JavaScript emulator of the original Space Invaders engine that you can now play in your browser.
The lesson learned: given the tools, people will do anything with a canvas!
Like building a rich text editor in canvas: Bespin was published by Mozilla and is now part of the Jetpack code editor.
Therefore it is important that there is fallback content.
Fallback content, a.k.a. the shadow DOM, is always exposed to assistive technologies, unlike other elements' fallback content, which is only displayed if the browser doesn't support the element. Shadow DOM elements must be keyboard accessible, and the corresponding part of the bitmap must be highlighted with a focus ring, and probably a caret position as well in the case of editable content.
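As a minimal sketch of how that can work (the element IDs, coordinates, and drawing code are my own illustration; `drawFocusIfNeeded()` is the focus-ring API that browsers eventually shipped for this purpose), keyboard-accessible fallback content paired with a focus ring on the bitmap might look like this:

```html
<canvas id="editor" width="400" height="100">
  <!-- Fallback content: exposed to assistive technologies
       even while the canvas itself is being rendered. -->
  <label for="name">Name:</label>
  <input id="name" type="text">
</canvas>

<script>
  const canvas = document.getElementById('editor');
  const ctx = canvas.getContext('2d');
  const input = document.getElementById('name');

  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // Paint the visual representation of the input field.
    ctx.strokeRect(10, 10, 200, 30);
    // Draw a focus ring on the bitmap whenever the fallback
    // element has keyboard focus.
    ctx.beginPath();
    ctx.rect(10, 10, 200, 30);
    ctx.drawFocusIfNeeded(input);
  }

  input.addEventListener('focus', draw);
  input.addEventListener('blur', draw);
  draw();
</script>
```

The input stays a real, focusable DOM element for assistive technologies, while the canvas mirrors its focus state visually.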
In a 3D context one of the strongest use cases is in games. This is a demo of the 3D jump & run game Infinite Journey created in WebGL that runs in your browser (if you are lucky enough to have the right chipset).
In games the interaction is limited to a small number of objects, mainly player and non-player characters as well as items. They are under the control of the players or the game vendors, and they are well defined.
However, the main challenge in games is reaction time. Games for blind people feature several mechanics to support this: audio cues and earcons (the audio equivalent of icons), a sound radar to locate enemies and friends, sometimes tactile feedback (vibration) to identify objects, and speech synthesis to announce the names of objects. There is a problem with hardware acceleration, though: rendering goes directly to the GPU, bypassing the accessibility API.
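As a rough sketch of the audio-cue idea (the object shapes and the earcon buffer are invented for illustration), the Web Audio API's `PannerNode` can place a sound at an object's position relative to the player:

```js
// Sketch: play a spatialized earcon so a blind player can
// locate an object by ear. The {x, y, z} object shape is
// an assumption, not an API from any particular game.
const audioCtx = new AudioContext();

function playEarcon(buffer, player, object) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;

  // Position the earcon relative to the player; HRTF panning
  // gives convincing 3D placement on stereo headphones.
  const panner = audioCtx.createPanner();
  panner.panningModel = 'HRTF';
  panner.positionX.value = object.x - player.x;
  panner.positionY.value = object.y - player.y;
  panner.positionZ.value = object.z - player.z;

  source.connect(panner).connect(audioCtx.destination);
  source.start();
}
```

The same pipeline could drive a sound radar: sweep over nearby objects and play a short earcon per object, varying pitch or timbre by object type.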
In 3D social worlds like Second Life the challenges are different.
There is user-generated content. A lot of it! About 40% of the objects don't have alternative or descriptive texts; on some islands it's as high as 85%. To prevent clutter, users get summaries of the number of people and objects present, and only objects within a 10-20 meter range are announced. Screen-reader support is crucial because users heavily customize their software.
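A hedged sketch of that summarize-and-filter approach (the object list, the 20-meter range, and the `announcer` live region are all assumptions for illustration):

```js
// Sketch: announce only nearby objects, summarized by type,
// via an ARIA live region in the page, e.g.
// <div id="announcer" aria-live="polite"></div>
const RANGE_METERS = 20;

const player = { x: 0, y: 0, z: 0 };
const worldObjects = [
  { type: 'avatar', x: 5,  y: 0, z: 3 },
  { type: 'avatar', x: 12, y: 0, z: -8 },
  { type: 'tree',   x: 90, y: 0, z: 40 }, // out of range
];

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function summarizeNearby(viewer, objects) {
  const counts = {};
  for (const obj of objects) {
    if (distance(viewer, obj) <= RANGE_METERS) {
      counts[obj.type] = (counts[obj.type] || 0) + 1;
    }
  }
  return Object.entries(counts)
    .map(([type, n]) => `${n} ${type}${n > 1 ? 's' : ''}`)
    .join(', ') || 'nothing nearby';
}

document.getElementById('announcer').textContent =
  summarizeNearby(player, worldObjects); // "2 avatars"
```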
Eelke Folmer has done a lot of research on accessibility in 3D worlds, and IBM has published games for blind and vision-impaired people. I'd like to encourage you to contact them.
But it doesn't stop at games and social worlds. For example, there are medical applications: a CT scan of a hand is about 20-30 MB, a full-body scan about 700 MB. I can imagine tools to process this information in the browser.
And of course architects, among others, use 3D models.
Therefore my plea would be to reach out to the Khronos Working Group and vice versa.
Because bolt-on accessibility is ugly,
Whereas inclusive, integrated accessibility is beautiful.