The PhET Interactive Simulations project has been actively working to enhance the accessibility of simulations for a broad range of students, including those with disabilities. The PhET project’s recent focus in accessibility has been to add keyboard navigation and screen reader based auditory descriptions to the simulation feature set. For these features, the PhET project needed to develop a way of making visual information available to assistive technologies.
For graphical rendering, PhET uses Scenery, a custom scene graph for HTML5. Scenery can display interactive graphics using Canvas, SVG, and the DOM. Since Canvas and SVG elements do not expose any information about their internal structure, assistive technologies have no way of reading what is inside them; a screen reader can only interpret the single Canvas or SVG element itself. Because of this, PhET simulations have been inaccessible to assistive technology.
To increase the accessibility of the PhET simulations, the structure of the Canvas and SVG content needs to be publicly available to assistive technology. Additionally, this structure needs to be formatted so that assistive technology can easily interpret the included information using modern accessibility conventions. To accomplish this, we generate a structure that matches the scene graph, with elements that provide custom accessibility information for assistive technology.
This is a challenge because every visual interface element or component that serves a pedagogical purpose must have representation in this structure. The structure must also be as dynamic as the visual representation so that the user gets a fully interactive experience, regardless of the method of access.
To allow assistive technology to communicate with PhET simulations, the PhET project’s custom scene graph, Scenery, generates a parallel HTML structure (a parallel DOM) that lies on top of the Scenery visual content. Elements in this parallel DOM are representations of elements in the screen view, and any visual node in the simulation can be represented there. In this way, both the browser and assistive technologies have access to the accessible representations of the elements on the screen. Since the parallel DOM is written in HTML, users can navigate to the various elements in the browser with a keyboard or other assistive technology. For example, this could include using Tab, or other keys handled by a screen reader, to move focus to various landmarks defined in the document.
The parallel DOM is generated dynamically on a node by node basis so that each visual element can have a customizable HTML element in the DOM structure. This allows us to have full control over the HTML elements and their associated attributes for accessibility. Assistive technologies can easily interpret the parallel DOM according to modern HTML accessibility conventions.
The following is what the parallel DOM might look like for a single screen of a PhET simulation. This is the prototype for accessible content of the Net Force screen of the PhET Forces and Motion: Basics simulation.
<!-- Accessibility HTML for the Net Force screen of FAMB -->
<div aria-labelledby='netForceLabel'>

  <!-- Title and description for the whole screen -->
  <h2 id='netForceLabel' aria-describedby='netForceDescription'>Net Force</h2>
  <p id='netForceDescription'>There is a heavily loaded cart on wheels sitting on a track...</p>

  <!-- Left puller group. A button is used to enter the nested list so that the user can quickly navigate to this element and understand that using the button will begin a drag and drop mode. -->
  <input tabindex="0" type='button' value='Left pullers' aria-labelledby="leftPullerGroupDescription" id="leftPullerGroupButton">
  <p id="leftPullerGroupDescription">Left pullers standing near rope. Press enter to select a puller for drag and drop.</p>
  <ul id="leftPullerGroup" hidden>
    <li tabindex="0" draggable="true" aria-grabbed="false">Left group, Large puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Left group, medium puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Left group, first small puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Left group, second small puller standing near rope</li>
  </ul>

  <!-- Right puller group. A button is used to enter the nested list so that the user can quickly navigate to this element and understand that using the button will begin a drag and drop mode. -->
  <input tabindex="0" type='button' value='Right pullers' aria-labelledby="rightPullerGroupDescription" id="rightPullerGroupButton">
  <p id="rightPullerGroupDescription">Right pullers standing near rope. Press enter to select a puller for drag and drop.</p>
  <ul id="rightPullerGroup" hidden>
    <li tabindex="0" draggable="true" aria-grabbed="false">Right group, Large puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Right group, medium puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Right group, first small puller standing near rope</li>
    <li tabindex="0" draggable="true" aria-grabbed="false">Right group, second small puller standing near rope</li>
  </ul>

  <!-- List of knots along the left side of the rope, using aria-dropeffect to signify that these are potential locations for a puller -->
  <h4 id="leftKnotGroupDescription" hidden>Left knots. Press enter to place selected puller on knot.</h4>
  <ul tabindex="0" id="leftKnotGroup" aria-labelledby='leftKnotGroupDescription' hidden>
    <li tabindex="0" aria-dropeffect="move">First knot, closest to the cart</li>
    <li tabindex="0" aria-dropeffect="move">Second knot</li>
    <li tabindex="0" aria-dropeffect="move">Third knot</li>
    <li tabindex="0" aria-dropeffect="move">Fourth knot, farthest from the cart</li>
  </ul>

  <!-- List of knots along the right side of the rope, using aria-dropeffect to signify that these are potential locations for a puller -->
  <h4 id="rightKnotDescription" hidden>Right knots. Press enter to place selected puller on knot.</h4>
  <ul tabindex="0" id="rightKnotGroup" aria-labelledby='rightKnotDescription' hidden>
    <li tabindex="0" aria-dropeffect="move">First knot, closest to the cart</li>
    <li tabindex="0" aria-dropeffect="move">Second knot</li>
    <li tabindex="0" aria-dropeffect="move">Third knot</li>
    <li tabindex="0" aria-dropeffect="move">Fourth knot, farthest from the cart</li>
  </ul>

  <!-- GO button with auditory description -->
  <input tabindex="0" type='button' value='Go' aria-disabled='true' aria-describedby='goButtonDescription'>
  <p id='goButtonDescription'>Select to start pullers</p>

  <!-- PAUSE button with auditory description -->
  <input tabindex="0" type='button' value='Pause' aria-disabled='true' aria-describedby='pauseButtonDescription'>
  <p id='pauseButtonDescription'>Select to pause pullers</p>

  <!-- Accessible visibility checkboxes, nested in a fieldset for some accessibility benefit in legend announcement and implicit arrow key navigation -->
  <fieldset>
    <legend>Visibility Controls</legend>
    <input type='checkbox' id='sumOfForcesCheckbox'>
    <label for='sumOfForcesCheckbox'>Sum of Forces</label><br>
    <input type='checkbox' id='valuesCheckbox'>
    <label for='valuesCheckbox'>Values</label>
  </fieldset>

  <!-- Accessible RESET ALL button with an auditory description -->
  <input type='reset' value='Reset all' aria-describedby='resetAllDescription'>
  <p id='resetAllDescription'>Select to reset screen</p>

  <!-- Accessible TOGGLE SOUND button with an auditory description -->
  <input type='button' value='Toggle sound' aria-describedby='toggleSoundDescription'>
  <p id='toggleSoundDescription'>Select to toggle sound</p>

  <!-- Element used to alert the user that an action or event has occurred in the simulation -->
  <p><span id="ariaActionElement" aria-live="polite" aria-atomic="true"></span></p>
</div>
Note that the above example is a snapshot of the parallel DOM. As the user interacts with the simulation, scripting in the simulation code changes the various DOM element attributes so that the document continues to represent the dynamic simulation.
The example above illustrates that the parallel DOM is composed of standard HTML elements that represent Scenery nodes. For instance, groups of pullers for a game of tug-of-war are represented by list elements inside of an unordered list. Buttons are represented by input elements of type button.
The above example also shows how WAI-ARIA (Web Accessibility Initiative - Accessible Rich Internet Applications) is used to provide accessibility information for the rich application content in a PhET simulation. For example, the parallel DOM above contains pullers and knots. In the simulation, a drag and drop interface is used to place a puller on a knot position on the rope for a game of tug-of-war. While the developer is responsible for defining the drag and drop behavior in the sim code, WAI-ARIA attributes can be used in the parallel DOM to let the user know that drag and drop is defined in the interface. For instance, one of the pullers has the following DOM representation:
<li tabindex="0" draggable="true" aria-grabbed="false">Right group, first small puller standing near rope</li>
The 'draggable' attribute lets the user know that the element has defined drag and drop behavior. The 'aria-grabbed' attribute further lets the user know that the element has not yet been selected for dragging. For example, NVDA will read the above element aloud as:
“Left group, Large puller standing near rope. Draggable one of four”
Similarly, each knot position on the rope has an ARIA attribute that describes the operation that should occur when the puller is released from dragging.
<li tabindex="0" aria-dropeffect="move">First knot, closest to the cart</li>
A screen reader such as NVDA will read aloud the above list item as:
"First knot, closest to the cart. Drop target one of four”
Each element in the parallel DOM example above is dynamically generated by view code. Since the scene graph provides a parent-child relationship for the visual elements, Scenery is able to structure the parallel DOM based on the hierarchical relationships that are already present within the scene graph. While rendering the visual content, Scenery also assembles the separate accessibility HTML, which lies on top of the Canvas and SVG visual elements.
The following example requires some background knowledge of JavaScript and Scenery. If you are just getting started with development using Scenery, please see PhET's Scenery Documentation for more information on creating visual elements for the display.
The following block of code illustrates how such accessibility code might be defined in a PhET simulation.
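Because the exact listing varies between simulations, the block below is a representative sketch rather than code from a shipping simulation; the node, the chosen DOM element, and the AccessiblePeer constructor call are illustrative, and exact signatures may differ across Scenery versions.

// A sketch of defining accessible content for a Scenery node. The rectangle,
// button element, and attributes below are illustrative choices.
var exampleNode = new scenery.Rectangle( 0, 0, 100, 50, { fill: 'blue' } );

exampleNode.setAccessibleContent( {
  createPeer: function( accessibleInstance ) {

    // Dynamically create the DOM element that represents this node in the
    // parallel DOM, and set its attributes.
    var domElement = document.createElement( 'input' );
    domElement.type = 'button';
    domElement.value = 'Example button';
    domElement.tabIndex = 0;

    // Add an event listener to the element. Further scripting could be added
    // here to implement behavior for the keydown event.
    domElement.addEventListener( 'keydown', function( event ) {
      // respond to key presses here
    } );

    // Wrap the DOM element in an accessible peer for this instance.
    return new scenery.AccessiblePeer( accessibleInstance, domElement );
  }
} );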
As shown in the code example above, the accessible content is added to the node through the setAccessibleContent function. The function takes an object containing a createPeer function, which receives an AccessibleInstance. This object structure is the boilerplate Scenery needs to build the hierarchical structure of the parallel DOM. Inside the createPeer function, we can see the standard ways of dynamically creating DOM elements and setting various attributes with JavaScript. In addition, an example of adding an event listener to the element is shown. Further scripting could be added here to implement behavior for the keydown event.
A set of nodes can also have a specified focus order. This defines the order of navigation for a group of nodes. If not provided, the default focus order is the rendering order defined in the parent node's children array. The following is an example of how the accessible order can be defined for the children of a parent node. While navigating with an assistive technology, exampleChild2 will receive focus before exampleChild1.
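The following is a sketch of how that might be written, assuming the order is assigned through an accessibleOrder property on the parent node; the node names and shapes are illustrative.

// Parent node with two children; names and shapes are illustrative.
var exampleParent = new scenery.Node();
var exampleChild1 = new scenery.Rectangle( 0, 0, 50, 50, { fill: 'red' } );
var exampleChild2 = new scenery.Rectangle( 60, 0, 50, 50, { fill: 'green' } );

// Rendering order: exampleChild1 is added before exampleChild2.
exampleParent.addChild( exampleChild1 );
exampleParent.addChild( exampleChild2 );

// Focus order for assistive technology: exampleChild2 receives focus first.
exampleParent.accessibleOrder = [ exampleChild2, exampleChild1 ];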
In addition to setting attributes of the accessible HTML element, one can also create a custom focus highlight for a Scenery node. This is set through an optional key called 'focusHighlight' on the object passed into setAccessibleContent. The focus highlight can be any custom Scenery Node or Shape. If no focus highlight is passed into the node's accessibleContent, the default highlight is a pinkish rectangle defined by the node's bounds.
The following is an example of setting a rectangular node's focus highlight to a circular shape. This is a working example: the rectangle can receive focus, and pressing the accessible button with the keyboard changes its color.
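A sketch along those lines, with the same caveats about illustrative names and exact signatures, is shown below.

// A rectangle whose focus highlight is a circle rather than the default
// bounds-based rectangle. Names, sizes, and colors are illustrative.
var focusableRectangle = new scenery.Rectangle( 0, 0, 100, 100, { fill: 'blue' } );

focusableRectangle.setAccessibleContent( {

  // Custom focus highlight: a circular Scenery node centered on the rectangle.
  focusHighlight: new scenery.Circle( 75, { centerX: 50, centerY: 50 } ),

  createPeer: function( accessibleInstance ) {
    var domElement = document.createElement( 'input' );
    domElement.type = 'button';
    domElement.value = 'Change rectangle color';

    // Pressing the accessible button with the keyboard toggles the fill color.
    domElement.addEventListener( 'click', function() {
      focusableRectangle.fill = ( focusableRectangle.fill === 'blue' ) ? 'magenta' : 'blue';
    } );

    return new scenery.AccessiblePeer( accessibleInstance, domElement );
  }
} );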
Implementing the parallel DOM has been successful in that we are able to generate the DOM and use it to provide assistive technologies with an interface for information. However, there are still many challenges and questions that we face with this implementation.
Cross-platform Compatibility: Accessibility HTML can behave very differently across platforms. Achieving consistent behavior across the PhET project’s supported platforms is one of our largest immediate challenges. Initial tests of the parallel DOM show that browsers and assistive technologies interpret the accessible content very differently. ARIA is still quite new, and some combinations of browsers and screen readers do not yet handle ARIA events correctly. We are hopeful that using native HTML with judicious use of WAI-ARIA will produce more predictable accessibility behavior. Browsers are also continuing to improve their support for WAI-ARIA, so accessible HTML may behave more predictably as browser-side accessibility matures.
Before the parallel DOM, the PhET project considered other methods of adding accessible content to interactive simulations. The parallel DOM approach was chosen due to its comparative ease of implementation and maintenance.
ARIA Support on SVG Elements: The PhET project’s first implementation of accessibility used ARIA-enhanced SVG markup with a Canvas sub-DOM structure. The Canvas sub-DOM was similar to the parallel DOM in that it included native HTML elements for accessibility, but the elements were nested inside of the Canvas element. With this approach, accessibility features would be supported by SVG in some browsers, and by the Canvas sub-DOM in others. The SVG and Canvas implementations would be completely unrelated, and we would have to do at least double the work to implement and maintain both the SVG and Canvas layers.
Custom Support with Global Key Listeners: Another attempted implementation used custom data structures with a focus rectangle drawn in SVG. Global key event listeners provided custom keyboard accessibility support. Dynamic text was read aloud with a single 'aria-live' element whose text changed based on the user's interactions and the state of the scene. This strategy worked well in browsers that support 'aria-live'. However, elements were not exposed to assistive technology with ARIA markup, so features such as customization and screen-reader-specific navigation strategies were unavailable.
Comparatively, the parallel DOM approach allows us to minimize complexity with a simplified structure: the visual SVG on top, the Canvas layer on the bottom, and a parallel DOM tree on the side that handles all accessibility. The display is entirely separate from the accessibility content, and we do not need to maintain accessible content in multiple layers. Similarly, the parallel DOM represents visual elements with mostly native HTML. Because we have a method of exposing the internals of the SVG and Canvas elements, we can generate standard HTML for these input controls following W3C standards for accessible content. Keyboard navigation and accessibility should behave well across PhET's supported browsers.