How GitHub approaches UX – The GitHub Blog
https://github.blog/engineering/user-experience/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

Design system annotations, part 2: Advanced methods of annotating components
https://github.blog/engineering/user-experience/design-system-annotations-part-2-advanced-methods-of-annotating-components/
Fri, 09 May 2025
How to build custom annotations for your design system components or use Figma’s Code Connect to help capture important accessibility details before development.

In part one of our design system annotation series, we discussed the ways in which accessibility can get left out of design system components from one instance to another. Our solution? Using a set of “Preset annotations” for each of our Primer components. These allow designers to include specific pre-set details that aren’t already built into the component or visually communicated in the design itself.

That being said, Preset annotations are unique to each design system — and while ours may be a helpful reference for how to build them — they’re not something other organizations can utilize unless they’re also using the Primer design system.

Luckily, you can build your own. Here’s how. 

How to make Preset annotations for your design system

Start by assessing components to understand which ones would need Preset annotations—not all of them will. Prioritize components that would benefit most from having a Preset annotation, and build that key information into each one. Next, determine what properties should be included. Only include key information that isn’t conveyed visually, isn’t in the component properties, and isn’t already baked into a coded component. 

The start of a list of Primer components with notes for those which need Preset annotations. There are notes pointing to ActionBar, ActionMenu, and Autocomplete with details about what information should be documented in their Preset.

Prioritizing components

When a design system has 60+ components, knowing where to start can be a challenge. Which components need these annotations the most? Which ones would have the highest impact for both design teams and our users? 

When we set out to create a new set of Preset annotations based on our proof of concept, we decided to use ten Primer components that would benefit the most. To help pick them, we used an internal tool called Primer Query that tracks all component implementations across the GitHub codebase as well as any audit issues connected to them. Here is a video breakdown of how it works, if you’re curious. 

We then prioritized new Preset annotations based on the following criteria:

  1. Components that align to organization priorities (i.e. high value products and/or those that receive a lot of traffic).
  2. Components that appear frequently in accessibility audit issues.
  3. Components with React implementations (as our preferred development framework).
  4. Most frequently implemented components. 

Mapping out the properties

For each component, we cross-referenced multiple sources to figure out what component properties and attributes would need to be added in each Preset annotation. The things we were looking for may only exist in one or two of those places, and thus are less likely to be accounted for all the way through the design and development lifecycle. The sources include:

Component documentation on Primer.style

Design system docs should contain usage guidance for designers and developers, and accessibility requirements should be a part of this guidance as well. Some of the guidance and requirements get built into the component’s Figma asset, while some only end up in the coded component. 

Look for any accessibility requirements that are not built into either Figma or code. If it’s built in, putting the same info in the Preset annotation may be redundant or irrelevant.

Coded demos in Storybook 

Our component sandbox helped us see how each component is built in React or Rails, as well as what the HTML output is. We looked for any code structure or accessibility attributes that are not included in the component documentation or the Figma asset itself—especially when they may vary from one implementation to another. 

Component properties in the Figma asset library

Library assets provide a lot of flexibility through text layers, image fills, variants, and elaborate sets of component properties. We paid close attention to these options to understand what designers can and can’t change. Accessibility attributes, requirements, and usage guidance from other sources that aren’t built into the Figma component are all worthwhile additions to a Preset annotation.

Other potential sources 

  • Experiences from team members: The designers, developers, and accessibility specialists you work with may have insight into things that the docs and design tools may have missed. If your team and design system have been around for a while, their insights may be more valuable than those you’ll find in the docs, component demos, or asset libraries. Take some time to ask which components have had challenging bugs and which get intentionally broken when implemented.
  • Findings from recent audits: Design system components themselves may have unresolved audit issues and remediation recommendations. If that’s the case, those issues are likely present in Storybook demos and may be unaccounted for in the component documentation. Design system audit issues may have details that both help create a Preset annotation and offer insights about what should not be carried over from existing resources.

What we learned from creating Preset annotations

Preset annotations may not be for every team or organization. However, they are especially well suited for younger design systems and those that aren’t well adopted. 

Mature design systems like Primer have frequent updates. This means that without close monitoring, the design system components themselves may fall out of sync with how a Preset annotation is built. This can end up causing confusion and rework after development starts, so it may be wise to make sure there’s some capacity to maintain these annotations after they’ve been created. 

For newer teams at GitHub, new members of existing teams, and team members who were less familiar with the design system, the built-in guidance and links to documentation and component demos proved very useful. Those who are more experienced are also able to fine-tune the Presets and how they’re used.

If you don’t already have extensive experience with the design system components (or peers to help build them), it can take a lot of time to assess and map out the properties needed to build a Preset. It can also be challenging to name a component property succinctly enough that it doesn’t get truncated in Figma’s properties panel. If the context is not self-evident, some training or additional documentation may help.

It’s not always clear that you need a Preset annotation

There may be enough overlap between the Preset annotation for a component and annotation types that aren’t specific to the design system that it isn’t obvious which one to use.
For example, the GitHub Annotation Toolkit has components to annotate basic <textarea> form elements in addition to a Preset annotation for our <TextArea> Primer component:

Comparison between a Form Element annotation for the textarea HTML element and a Preset annotation for the TextArea Primer component.

In many instances, this flexibility may be confusing because you could use either annotation. For example, the Primer <TextArea> Preset has built-in links to specific Primer docs; the non-Preset version doesn’t, though you could always add the links manually. While there’s some overlap between the two, using either one is better than none.

One way around this confusion is to add Primer-specific properties to the default set of annotations. This would allow you to do things like toggle a boolean property on a normal Button annotation and have it show links and properties specific to your design system’s button component. 

Our Preset creation process may unlock automation

A number of existing Figma plugins advertise the ability to scan a design file to help with annotations. That being said, the results are often mixed and contain an unmanageable amount of noise and false positives. One of the reasons these issues happen is that these public plugins are design system agnostic.

Current automated annotation tools aren’t able to understand that any design system components are being used without bespoke programming or thorough training of AI models. For plugins like this to be able to label design elements accurately, they first need to understand how to identify the components on the canvas, the variants used, and the set properties. 

A Figma file showing an open design for Releases with an expanded layer tree highlighting a Primer Button component in the design. To the left of the screenshot are several git-lines and a Preset annotation for a Primer Button with a zap icon intersecting it. The git-line trails and the direction of the annotation give the feeling of flying toward the layer tree, which visually suggests this Primer Button layer can be automatically identified and annotated.

With that in mind, perhaps the most exciting insight is that the process of mapping out component properties for a Preset annotation—the things that don’t get conveyed in the visual design or in the code—is also something that would need to be done in any attempt to automate more usable annotations. 

In other words, if a team uses a design system and wants to automate adding annotations, the tool they use would need to understand their components. In order for it to understand their components well enough to automate accurately, these hidden component properties would need to be mapped out. The task of creating a set of Preset annotations may be a vital stepping stone to something even more streamlined. 

A promising new method: Figma’s Code Connect 

While building our new set of Preset annotations, we experimented with other ways to enhance Primer with annotations. Though not all of those experiments worked out, one of them did: adding accessibility attributes through Code Connect. 

Primer was one of the early adopters of Figma’s new Code Connect feature in Dev Mode. As Lukas Oppermann, our staff systems designer, puts it: “With Code Connect, we can actually move the design and the code a little bit further apart again. We can concentrate on creating the best UX for the designers working in Figma with design libraries and, on the code side, we can have the best developer experience.”

To that end, Code Connect allows us to bypass much of our Preset annotations, as well as the downsides of some of our other experiments. It does this by adding key accessibility details directly into the code that developers can export from Figma.

GitHub’s Octicons are used in many of our Primer components. They are decorative by default, but they sometimes need alt text or aria-label attributes depending on how they’re used. In the IconButton component, that button uses an Octicon and needs an accessible name to describe its function. 

When using a basic annotation kit, this may mean adding stamps for a Button and Decorative Image as well as a note in the margins that specifies what the aria-label should be. When using Preset annotations, there are fewer things to add to the canvas and the annotation process takes less time.

With Code Connect set up, Lukas added a hidden layer in the IconButton Figma component. It has a text property for aria-label which lets designers add the value directly from the component properties panel. No annotations needed. The hidden layer doesn’t disrupt any of the visuals, and the aria-label property gets exported directly with the rest of the component’s code.

An IconButton component with a code-review icon. On the left is a screenshot of the component’s properties panel, with an aria-label value of: Start code review. On the right is the Code Connect output showing usable React code for an IconButton that includes the parameter: aria-label=Start code review.
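For illustration, a Code Connect mapping along these lines could be what exposes that hidden text property in the exported code. This is a minimal sketch, not Primer’s actual configuration; the Figma file URL, node ID, and property names are placeholders:

import React from 'react'
import figma from '@figma/code-connect'
import {IconButton} from '@primer/react'
import {ZapIcon} from '@primer/octicons-react'

// The URL below is a placeholder; it would point at the IconButton
// component in your own Figma library.
figma.connect(IconButton, 'https://www.figma.com/file/ABC123/Primer-Web?node-id=1-2', {
  props: {
    // Reads the text property a designer fills in on the hidden layer
    ariaLabel: figma.string('aria-label'),
  },
  // The aria-label then travels with the rest of the component's exported code
  example: ({ariaLabel}) => <IconButton icon={ZapIcon} aria-label={ariaLabel} />,
})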

It takes time to set up Code Connect with each of your design system components. Here are a few tips to help:

  • Consistency is key. Make sure that the properties you create and how you place hidden layers is consistent across components. This helps set clear expectations so your teams can understand how these hidden layers and properties function. 
  • Use a branch of your design system library to experiment. Hiding attributes like aria-label is quite simple compared to other complex information that Preset annotations are capable of handling. 
  • Use visual regression testing (VRT). Adding complexity directly to a component comes with increased risk of things breaking in the future, especially for those with many variants. Figma’s merge conflict UI is helpful, but may not catch everything.

As we continue to innovate with annotations and make our components more accessible, we are aiming to release our GitHub Annotation Toolkit in the near future. Stay tuned!

Further reading

Accessibility annotation kits are a great resource, provided they’re used responsibly. Eric Bailey, one of the contributors to our forthcoming GitHub Annotation Toolkit, has written extensively about how annotations can highlight and amplify deeply structural issues when you’re building digital products.

Design system annotations, part 1: How accessibility gets left out of components
https://github.blog/engineering/user-experience/design-system-annotations-part-1-how-accessibility-gets-left-out-of-components/
Fri, 09 May 2025
The Accessibility Design team created a set of annotations to bridge the gaps that design systems alone can’t fix and to proactively address accessibility issues within Primer components.


When it comes to design systems, every organization tends to be at a different place in their accessibility journey. Some have put a great deal of work into making their design system accessible while others have a long way to go before getting there. To help on this journey, many organizations rely on accessibility annotations to make sure there are no access barriers when a design is ready to be built. 

However, it’s a common misconception (especially for organizations with mature design systems) that accessible components will result in accessible designs. While design systems are fantastic for scaling standards and consistency, they can’t prevent every issue with our designs or how we build them. Access barriers can still slip through the cracks and make it into production.

This is the root of the problem our Accessibility Design team set out to solve. 

In this two-part series, we’ll show you exactly how accessible design system components can produce inaccessible designs. Then we’ll demonstrate our solution: integrating annotations with our Primer components. This allows us to spend less time annotating, increases design system adoption, and reaches teams who may not have accessibility support. And in our next post, we’ll walk you through how you can do the same for your own components.

Let’s dig in.

What are annotations and their benefits? 

Annotations are notes included in design projects that help make the unseen explicit by conveying design intent that isn’t shown visually. They improve the usability of digital experiences by providing a holistic picture for developers of how an experience should function. Integrating annotations into our design process helps our teams work better together by closing communication gaps and preventing quality issues, accessibility audit issues, and expensive re-work. 

Some of the questions annotations help us answer include:

  • How is assistive technology meant to navigate a page from one element to another?
  • What’s the alternative text for informative images and buttons without labels?
  • How does content shift depending on viewport size, screen orientation, or zoom level?
  • Which virtual keyboard should be used for a form input on mobile?
  • How should focus be managed for complex interactions?

Our answers to questions like these—or the lack thereof—can make or break the experience of the web for a lot of people, especially users with disabilities. Some annotation tools are built specifically to help with this by guiding designers to include key details about web standards, platform functionality, and accessibility (a11y).

Most public annotation kits are well suited for teams who are creating new design system components, teams who aren’t already using a design system, or teams who don’t have specialized accessibility knowledge. They usually help annotate things like:

  • Controls such as buttons and links
  • Structural elements such as headings and landmarks
  • Decorative images and informative descriptions 
  • Forms and other elements that require labels and semantic roles 
  • Focus order for assistive technology and keyboard navigation

GitHub’s Annotation Toolkit

One of our top priorities is to meet our colleagues where they’re at. We wanted all our designers to be able to use annotations out of the box because we believe they shouldn’t need to be a certified accessibility specialist in order to get things built in an accessible way. 

A browser window showing the Web Accessibility Annotation Kit in the cvs-health/annotations repository.

To this end, last year we began creating an internal Figma library—the GitHub Annotation Toolkit (which we aim to release to the public soon). Our toolkit builds on the legacy of the former Inclusive Design team at CVS Health. Their two open source annotation kits help make documentation that’s easy to create and consume, and are among the most widely used annotation libraries in the Figma Community. 

While they add clarity, annotations can also add overhead. If teams are only relying on specialists to interpret designs and technical specifications for developers, the hand-off process can take longer than it needs to. To create our annotation toolkit, we rebuilt its predecessor from the ground up to avoid that overhead, making extensive improvements and adding inline documentation to make it more intuitive and helpful for all of our designers—not just accessibility specialists. 

Design systems can also help reduce that overhead. When you audit your design systems for accessibility, there’s less need for specialist attention on every product feature, since you’re using annotations to add technical semantics and specialist knowledge into every component. This means that designers and developers only need to adhere to the usage guidelines consistently, right?

The problems with annotations and design system components

Unfortunately, it’s not that simple. 

Accessibility is not binary

While design systems can help drive more accessible design at scale, they are constantly evolving and the work on them is never done. The accessibility of any component isn’t binary. Some may have a few severe issues that create access barriers, such as being inoperable with a keyboard or missing alt text. Others may have a few trivial issues, such as generic control labels. 

Most of the time, it’s a misnomer to claim that your design system is “fully accessible.” There’s always more work to do—it’s just a question of how much. The Web Content Accessibility Guidelines (WCAG) are a great starting point, but their “Success Criteria” aren’t tailored to the unique context of your website, product, or audience.

While the WCAG should be used as a foundation to build from, it’s important to understand that it can’t capture every nuance of disabled users’ needs because your users’ needs are not every user’s needs. It would be very easy to believe that your design system is “fully accessible” if you never look past WCAG to talk to your users. If Primer has accessible components, it’s because we feel that direct participation and input from daily assistive technology users is the most important aspect of our work. Testing plans with real users—with and without disabilities—is where you really find what matters most. 

Accessible components do not guarantee accessible designs

Arranging a series of accessible components on a page does not automatically create an accurate and informative heading hierarchy. There’s a good chance that without additional documentation, the heading structure won’t make sense visually—nor as a medium for navigating with assistive technology.

A page wireframe showing a linear layout of an H1 title, an H2 in a banner below it, and a row of several cards below with headings of H4. The caption reads: this accessible card has an H4, breaking the page structure by skipping heading levels. Next to the wireframe is a diagram showing the page structure as a tree view, highlighting the level skipping from H2 to H4.

It’s great when accessible components are flexible and responsive, but what about when they’re placed in a layout that the component guidance doesn’t account for? Do they adapt to different zoom levels, viewport sizes, and screen orientations? Do they lose any functionality or context when any of those things change?

Component usage is contextual. You can add an image or icon to your design, but the design system docs can’t write descriptive text for you. You can use the same image in multiple places, but the image description may need to change depending on context. 

Similarly, forms built using the same input components may do different things and require different error validation messages. It’s no wonder that adopting design system components doesn’t get rid of all audit issues.

Design system components in Figma don’t include all the details

Annotation kits don’t include components for specific design systems because almost every organization is using their own. When annotation kits are adopted, teams often add ways to label their design system components. 

This labeling lets developers know they can use something that’s already been built, and that they don’t need to build something from scratch. It also helps identify any design system components that get ‘detached’ in Figma. And it reduces the number of things that need to be annotated. 

Let’s look at an example:

A green Primer button with a lightning bolt icon and a label that says: this button does something. To the right is a set of Figma component properties that control the button’s visual appearance.

If we’re using this Primer Button component from the Primer Web Figma library, there are a few important things that we won’t know just by looking at the design or the component properties:

  • Functional differences when components are implemented. Is this a link that just looks visually like a button? If so, a developer would use the <LinkButton> React component instead of <Button>.
  • Accessible labels for folks using assistive technology. The icon may need alt text. In some cases, the button text might need some visually-hidden text to differentiate it from similar buttons. How would we know what that text is? Without annotations, the Figma component doesn’t have a place to display this.
  • Whether user data is submitted. When a design doesn’t include an obvious form with input fields, how do we convey that the button needs specific attributes to submit data? 

It’s risky to leave questions like this unanswered, hoping someone notices and guesses the correct answer. 
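To make the first of those questions concrete, here’s a minimal sketch of the distinction in Primer React (the handler and URL are hypothetical):

import React from 'react'
import {Button, LinkButton} from '@primer/react'

function Example() {
  return (
    <>
      {/* A true button performs an action on the current page... */}
      <Button onClick={() => console.log('saved')}>Save changes</Button>
      {/* ...while a LinkButton looks the same but navigates, so assistive
          technology announces it as a link */}
      <LinkButton href="/releases/new">Draft a new release</LinkButton>
    </>
  )
}

Nothing in the Figma component properties tells a developer which of the two to reach for; that is exactly the gap an annotation fills.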

A solution that streamlines the annotation process while minimizing risk

When creating new components, a set of detailed annotations can be a huge factor in how robust and accessible they are. Once the component is built, design teams can start to add instances of that component in their designs. When those designs are ready to be annotated, those new components shouldn’t need to be annotated again. In most cases, it would be redundant and unnecessary—but not in every case. 

There are some important details in many Primer components that may change from one instance to another. If we used the CVS Health annotation kit out of the box, we would be able to capture those variations, but we wouldn’t be able to avoid redundant and unnecessary annotations. So as we built our own annotation toolkit, we created a set of annotations for each Primer component that does both of those things at once.

An annotated Primer Brand accordion with six Stamps and four Detail notes in the margins.

This accordion component has been thoroughly annotated so that an engineer has everything they need to build it the first time. These annotations include heading levels, semantics for <details> and <summary> elements, landmarks, and decorative icons. All of this is built into the component, so we don’t need to annotate most of this when adding the accordion to our new designs.

However, there are two important things we need to annotate, as they can change from one instance to another:

  1. The optional title at the top.
  2. The heading level of each item within the accordion.

If we don’t specify these things, we’re leaving it to chance that the page’s heading structure will break or that the experience will be confusing for people to understand and navigate the page. The risks may be low for a single button or basic accordion, but they grow with pattern complexity, component nesting, interaction states, duplicated instances, and so on. 

An annotated Primer Brand accordion with one Stamp and one Detail note in the margins.

Instead of annotating what’s already built into the component or leaving these details to chance, we can add two quick annotations. One Stamp to point to the component, and one Details annotation where we fill in some blanks to make the heading levels clear. 

Because the prompts for specific component details are pre-set in the annotation, we call them Preset annotations.

A mosaic of Preset annotations for various Primer components.

Introducing our Primer A11y Preset annotations

With this proof of concept, we selected ten frequently used Primer components for the same treatment and built a new set of Preset annotations to document these easily missed accessibility details—our Primer A11y Presets. 

Those Primer components tend to contribute to more accessibility audit issues when key details are missing on implementation. Issues for these components relate to things like a lack of proper labels, error validation messages, or missing HTML or ARIA attributes.

IconButton Preset annotation, with guidance toggled on.

Each of our Preset annotations is linked to component docs and Storybook demos. This will hopefully help developers get straight to the technical info they need without designers having to find and add links manually. We also included guidance for how to fill out each Preset, as well as how to use the component in an accessible way. This helps designers get support inline without leaving their Figma canvas. 

Want to create your own? Check out Design system annotations, part 2

Button components in Google’s Material Design, Shopify’s Polaris, IBM’s Carbon, and our Primer design system are all very different from one another. Because Preset annotations are based on specific components, they only work if you’re also using the design system they’re made for.

In part 2 of this series, we’ll walk you through how you can build your own set of Preset annotations for your design system, as well as some different ways to document important accessibility details before development starts.

You may also like: 

If you’re more of a visual learner, you can watch Alexis Lucio explore Preset annotations during GitHub’s Dev Community Event to kick off Figma’s Config 2024. 

Building a more accessible GitHub CLI
https://github.blog/engineering/user-experience/building-a-more-accessible-github-cli/
Fri, 02 May 2025
How do we translate web accessibility standards to command line applications? This is GitHub CLI’s journey toward making terminal experiences for all developers.


At GitHub, we’re committed to making our tools truly accessible for every developer, regardless of ability or toolset. The command line interface (CLI) is a vital part of the developer experience, and the GitHub CLI is our product that brings the power of GitHub to your terminal.

When it comes to accessibility, the terminal is fundamentally different from a web browser or a graphical user interface, with a lineage that predates the web itself. While standards like the Web Content Accessibility Guidelines (WCAG) provide a clear path for making web and graphical applications accessible, there is no equivalent, comprehensive standard for the terminal and CLIs. The W3C offers some high-level guidance for non-web software, but it stops short of prescribing concrete techniques, leaving much open to interpretation and innovation.

This gap has challenged us to think creatively and purposefully about what accessibility should look like in the terminal. Our recent Public Preview is focused on addressing the needs of three key groups: users who rely on screen readers, users who need high contrast between background and text, and users who require customizable color options. Our work aims to make the GitHub CLI more inclusive for all, regardless of how you interact with your terminal. Run gh a11y in the latest version of the GitHub CLI to enable these features, or read on to learn about our path to designing and implementing them.

Understanding the terminal landscape

Text-based and command-line applications differ fundamentally from graphical or web applications. On a web page, assistive technologies like screen readers make use of the document object model (DOM) to infer the structure and context of the page. Web pages can be designed such that the DOM’s structure is friendly to these technologies without impacting the visual design of the page. By contrast, a CLI’s primary output is plain text, without hidden markup. A terminal emulator acts as the “user agent” for text apps, rendering characters as directed by the server application. Assistive technologies access this matrix of characters, analyze its layout, and try to infer structure. As the WCAG2ICT guidance notes, accessibility in this space means ensuring that all text output is available to assistive technologies, and that structural information is conveyed in a way that’s programmatically determinable—even if no explicit markup is present.

In our quest to improve the GitHub CLI’s usability for blind, low-vision, and colorblind users, we found ourselves navigating a landscape with lots of guidance, but few concrete techniques for implementing accessible experiences. We studied how assistive technology interacts with terminals: how screen readers review output, how color and contrast can be customized, and how structural cues can be inferred from plain text. Our recent Public Preview contains explorations into various use cases in these spaces. 

Rethinking prompts and progress for screen readers

One of the GitHub CLI’s strengths as a command-line application is its rich prompting experience, which gives our users an interactive interface to enter command options. However, this rich interactive experience poses a hurdle for speech synthesis screen readers: non-alphanumeric visual cues and constant screen redraws used for visual or other effects can be tricky to correctly interpret as speech.


A demo video, with sound, of a screen reader reading the legacy prompter.

To reduce confusion and make it easier for blind and low vision users to confidently answer questions and navigate choices, we’re introducing a prompting experience that allows speech synthesis screen readers to accurately convey prompts to users. Our new prompter is built using Charm’s open source charmbracelet/huh prompting library.

A demo of a screen reader correctly reading a prompt.

Another use case where the terminal is redrawn for visual effect is when showing progress bars. Our existing implementation uses a “spinner” made by redrawing the screen to display different braille characters (yes, we appreciate the irony) to give the user the indication that their command is executing. Speech synthesis screen readers do not handle this well:

A demo of a screen reader and an old spinner.

This has been replaced with a static text progress indicator (with a relevant message to the action being taken where possible, falling back to a general “Working…” message). We’re working on identifying other areas we can further improve the contextual text.

A demo video of the new progress indicator experience.

Color, contrast, and customization

Color is more than decoration in the terminal: It’s a vital tool for highlighting information, signaling errors, and guiding workflows. But color can also be a barrier—if contrast between the color of the terminal background and the text displayed on it is too low, some users will have difficulty discerning the displayed information. Unlike in a web browser, a terminal’s background color is not set by the application. That task is handled by the user’s terminal emulator. In order to maintain contrast, it is important that a command line application take this variable into account. Our legacy color palette used for rendering Markdown did not take the terminal’s background color into account, leading to low contrast in some cases.

A screenshot of the legacy Markdown palette.

The colors themselves also matter. Different terminal environments have varied color capabilities (some support 4-bit color, some 8-bit, some 24-bit, etc.). No matter the capability, terminals enable users to customize their color preferences, choosing how different hues are displayed. However, most terminals only support changing a limited subset of colors: namely, the sixteen colors in the ANSI 4-bit color table. The GitHub CLI has made extensive efforts to align our color palettes to 4-bit colors so our users can completely customize their experience using their terminal preferences. We built on top of the accessibility foundations pioneered by Primer when deciding which 4-bit colors to use.

A screenshot showing the improved Markdown palette.
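To illustrate the difference, here is a minimal sketch in TypeScript (not the CLI’s actual Go source) of why sticking to the 4-bit table preserves user customization:

const reset = '\x1b[0m'

// SGR code 31 selects the 4-bit "red"; the terminal decides the exact hue,
// so the user's palette customizations and contrast choices are respected.
console.log(`\x1b[31merror:${reset} something went wrong`)

// A 24-bit escape (38;2;R;G;B) hard-codes an exact RGB value,
// bypassing the user's palette entirely.
console.log(`\x1b[38;2;215;58;73merror:${reset} something went wrong`)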

Building for the CLI community

Our improvements aim to support a wide range of developer needs, from blind users who need screen readers, to low vision users who need high contrast, to colorblind users who require customizable color options. But this Public Preview does not mark the end of our team’s commitment to enabling all developers to use the GitHub CLI. We intend to make it easier for our extension authors to implement the same accessibility improvements that we’ve made to the core CLI. This will allow users to have a cohesive experience across all GitHub CLI commands, official or community-maintained, and so that more workflows can be made accessible by default. We’re also looking into experiences to customize the formatting of tables output by commands to be more easily read/interpreted by screen readers. We’re excited to continue our accessibility journey.

We couldn’t have come this far without collaboration with our friends at Charm and our colleagues on the GitHub Accessibility team. 

A call for feedback

We invite you to help us in our goal to make the GitHub CLI an experience for all developers:

  • Try it out: Update the GitHub CLI to v2.72.0 and run gh a11y in your terminal to learn more about enabling these new accessible features.
  • Share your experience: Join our GitHub CLI accessibility discussion to provide feedback or suggestions.
  • Connect with us: If you have a lived experience relevant to our accessibility personas, reach out to the accessibility team or get involved in our discussion panel.

Looking forward

Adapting accessibility standards for the command line is a challenge—and an opportunity. We’re committed to sharing our approach, learning from the community, and helping set a new standard for accessible CLI tools.

Thank you for building a more accessible GitHub with us.

Want to help us make GitHub the home for all developers? Learn more about GitHub’s accessibility efforts.

Considerations for making a tree view component accessible
https://github.blog/engineering/user-experience/considerations-for-making-a-tree-view-component-accessible/
Tue, 28 Jan 2025
A deep dive on the work that went into making the component that powers repository and pull request file trees.


Tree views are a core part of the GitHub experience. You’ve encountered one if you’ve ever navigated through a repository’s file structure or reviewed a pull request.

Browsing files on Primer's design repository. A tree view showing the repositories directory structure occupies a quarter of the screen. The other three quarters are taken up by the content of the content subdirectory. The tree view shows expanded and collapsed directories, as well as files nested at multiple levels of depth.

On GitHub, a tree view is the list of folders and the files they contain. It is analogous to the directory structure your operating system uses as a way of organizing things.

Tree views are notoriously difficult to implement in an accessible way. This post is a deep dive into some of the major considerations that went into how we made GitHub’s tree view component accessible. We hope that it can be used as a reference and help others.

Start with Windows

It’s important to have components with complex interaction requirements map to something people are already familiar with using. That way, the component responds to the keypresses people will try when navigating and taking action on our tree view instances.

We elected to adopt Windows File Explorer’s tree view implementation, given how widely Windows is used by desktop screen reader users.

A Windows 11 File Explorer window showing a tree view and a list of subdirectories that one of its folders contains. The tree view demonstrates how the C drive contains multiple nested folders to organize its content.

Navigating and taking actions on items in Windows’ tree view using NVDA and JAWS helped us get a better understanding of how things worked, including factors such as focus management, keyboard shortcuts, and expected assistive technology announcements.

Then maybe reference the APG

The ARIA Authoring Practices Guide (APG) is a bit of an odd artifact. It looks official but is no longer recognized by the W3C as a formal document.

This is to say that the APG can serve as a helpful high-level resource for things to consider for your overall approach, but its suggestions for code necessitate deeper scrutiny.

Build from a solid, semantic foundation

At its core, a tree view is a list of lists. Because of this, we used ul and li elements for parent and child nodes:

<ul>
  <li>
    <ul>
      <li>.github/</li>
      <li>source/</li>
      <li>test/</li>
    </ul>
  </li>
  <li>.gitignore</li>
  <li>README.md</li>
</ul>

There are a few reasons for doing this, but the main considerations are:

  • Better assurance that a meaningful accessibility tree is generated,
  • Lessening the work we need for future maintenance, and consequential re-verification that our updates continue to work properly, and
  • Better guaranteed interoperability between different browsers, apps, and other technologies.

NOTE: GitHub currently does not virtualize its file trees. We would need to revisit this architectural decision if this ever changes.

Better broad assistive technology support

The more complicated an interactive pattern is, the greater the risk that there are bugs or gaps with assistive technology support.

Given the size of the audience GitHub serves, it’s important that we consider more than just majority share assistive technology considerations.

We found that utilizing semantic HTML elements also performed better for some less-common assistive technologies. This was especially relevant with some lower-power devices, like an entry-level Android smartphone from 2021.

Better Forced Color Mode support

Semantic HTML elements also map to native operating system UI patterns, meaning that Forced Color Mode’s heuristics will recognize them without any additional effort. This is helpful for people who rely on the mode to see screen content.

This heuristic mapping does not occur with semantically neutral div or span elements, and would have to be manually recreated and maintained.

Use a composite widget

A composite widget allows a component that contains multiple interactive elements to only require one tab stop unless someone chooses to interact with it further.

Consider a file tree for a repository that contains 500+ files in 20+ directories. Without a composite widget treatment, someone may have to press Tab far too many times to bypass the file tree component and get what they need.

Think about wrapping it in a landmark

Like using a composite widget, landmark regions help some people quickly and efficiently navigate through larger overall sections of the page. Because of this, we wrapped the entire file tree in a nav landmark element.

This does not mean every tree view component should be a landmark, however! We made this decision for the file tree because it is frequently interacted with as a way to navigate through a repository’s content.

Go with a roving tabindex approach

A roving tabindex is a technique that uses tabindex="-1" applied to each element in a series, and then updates the tabindex value to use 0 instead in response to user keyboard input. This allows someone to traverse the series of elements, as focus “roves” to follow their keypresses.

<li tabindex="-1">File 1</li>
<li tabindex="-1">File 2</li>
<li tabindex="0">File 3</li>
<li tabindex="-1">File 4</li>

The roving tabindex approach performed better than utilizing aria-activedescendant, which had issues with VoiceOver on macOS and iOS.
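For illustration, here is a minimal sketch of how the rove might work for Up and Down, assuming the visible tree items are collected in DOM order (this is not GitHub’s actual implementation):

function handleKeydown(event: KeyboardEvent, nodes: HTMLElement[]) {
  const current = nodes.findIndex(node => node.tabIndex === 0)
  if (current === -1) return

  let next = current
  if (event.key === 'ArrowDown') next = Math.min(current + 1, nodes.length - 1)
  else if (event.key === 'ArrowUp') next = Math.max(current - 1, 0)
  else return

  event.preventDefault()
  nodes[current].tabIndex = -1 // the old stop leaves the tab sequence
  nodes[next].tabIndex = 0 // exactly one node stays tabbable
  nodes[next].focus() // focus "roves" to follow the keypress
}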

Enhance with ARIA

We use a considered set of ARIA declarations to build off our semantic foundation.

Note that while we intentionally started with semantic HTML, there are certain ARIA declarations that are needed. The use of ARIA here is necessary and intentional, as it expands the capabilities of HTML to describe something that HTML alone cannot describe—a tree view construct.

Our overall approach follows what the APG suggests, in that we use the following:

  • role="tree" is placed on the parent ul element, to communicate that it is a tree view construct.
  • role="treeitem" is placed on the child li elements, to communicate that they are tree view nodes.
  • role="group" is declared on child ul elements, to communicate that they contain branch and leaf nodes.
  • aria-expanded is declared on directories, with a value of true to communicate that the branch node is in an opened state and a value of false to communicate that it is in a collapsed state instead.
  • aria-selected is used to indicate if branch or leaf nodes have been chosen by user navigation, and can therefore have user actions applied to them.

We also made the following additions:

  • aria-hidden="true" is applied to SVG icons (folders, files, etc.) to ensure its content is not announced.
  • aria-current="true" is placed on the selected node to better support when a node is deep linked to via URL.

NOTE: We use “branch node” and “leaf node” as broad terms that can apply to all tree view components we use on GitHub. For the file tree, branch nodes would correspond to directories and subdirectories, and leaf nodes would correspond to files.
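Putting the semantic foundation and these ARIA declarations together, a simplified sketch of the resulting markup (not our exact DOM) looks something like this:

<nav aria-label="File tree">
  <ul role="tree">
    <li role="treeitem" aria-expanded="true" aria-selected="false">
      src/
      <ul role="group">
        <li role="treeitem" aria-selected="true" aria-current="true">
          <svg aria-hidden="true"><!-- file icon --></svg>
          app.js
        </li>
      </ul>
    </li>
  </ul>
</nav>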

Support expected navigation techniques

The following behaviors are what people will try when operating a tree view construct, so we support them:

Keyboard keypresses

  • Tab: Places focus on the entire tree view component, then moves focus to the next focusable item on the view.
  • Enter:
    • If a branch node is selected: Displays the directory’s contents.
    • If a leaf node is selected: Displays the leaf node’s contents.
  • Down: Moves selection to the next node that can be selected without opening or closing a node.
  • Up: Moves selection to the previous node that can be selected without opening or closing a node.
  • Right:
    • If a branch node is selected and in a collapsed state: Expands the selected collapsed branch node and does not move selection.
    • If a branch node is selected and in an expanded state: Moves selection to the directory’s first child node.
  • Left:
    • If a branch node is selected and in an expanded state: Collapses the selected expanded branch node and does not move selection.
    • If a branch node is selected and in a collapsed state: Moves selection to the node’s parent directory.
    • If a leaf node is selected: Moves selection to the leaf node’s parent directory.
  • End: Moves selection to the last node that can be selected.
  • Home: Moves selection to the first node that can be selected.

We also support typeahead selection, as we are modeling Windows File Explorer’s tree view behaviors. Here, we move selection to the node closest to the currently selected node whose name matches what the user types.
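A rough sketch of that matching logic, assuming node names and the selected index are tracked elsewhere (we search forward from the current node and wrap around):

function findTypeaheadMatch(names: string[], currentIndex: number, typed: string): number {
  // Check each node after the current one, wrapping past the end, and
  // return the first whose name starts with the typed characters.
  for (let offset = 1; offset <= names.length; offset++) {
    const index = (currentIndex + offset) % names.length
    if (names[index].toLowerCase().startsWith(typed.toLowerCase())) {
      return index
    }
  }
  return currentIndex // no match: selection stays put
}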

Middle clicking

Nodes in tree view constructs are tree items, not links. Because of this, tree view nodes do not support the behaviors you get when using an anchor element, such as opening its URL in a new tab or window.

We use JavaScript to listen for middle clicks and Control+Enter keypresses to replicate this behavior.
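A minimal sketch of that listener (the node element and its target URL are assumed to come from elsewhere):

function addOpenInNewTabSupport(node: HTMLElement, url: string) {
  // 'auxclick' fires for non-primary buttons; button 1 is the middle button
  node.addEventListener('auxclick', event => {
    if (event.button === 1) window.open(url, '_blank')
  })

  // Control+Enter mirrors the shortcut that anchor elements get for free
  node.addEventListener('keydown', event => {
    if (event.ctrlKey && event.key === 'Enter') window.open(url, '_blank')
  })
}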

Consider states

Loading

Tree views on GitHub can take time to retrieve their content, and we may not always know how much content a branch node contains.

A directory called 'src' that is selected and in an expanded state. It contains a single leaf node that contains a loading spinner with a label of 'Loading…'.

Live region announcements are tricky to get right, but integral to creating an equivalent experience. We use the following announcements:

  • If there is a known number of nodes that load, we enumerate the incoming content with an announcement that reads, “Loading {x} items.”
  • If there is an unknown number of nodes that load, we instead use a more generic announcement of, “Loading…”
  • If no nodes load, we use an announcement message that reads, “{branch node name} is empty.”
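A sketch of how such messages can be surfaced, assuming a persistent, visually hidden live region already exists in the page:

function announce(message: string) {
  // Updating the text content of an aria-live region causes screen
  // readers to speak the new message without moving focus.
  const region = document.querySelector('[aria-live="polite"]')
  if (region) region.textContent = message
}

announce('Loading 12 items.') // known count
announce('Loading…') // unknown count
announce('src is empty.') // branch node with no children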

Additionally, we manage focus for loading content:

  • If focus is placed on a placeholder loading node when the content loads in: Move focus from the placeholder node to the first child node in the branch node.
  • If focus is on a placeholder loading node but the branch node does not contain content: Move focus back to the branch node. Additionally, we remove the branch node’s aria-expanded declaration.

Errors

Circumstances can conspire to interfere with a tree view component’s intended behavior. Examples of this could be a branch node failing to retrieve content or a partial system outage.

In these scenarios, the tree view component will use a straightforward dialog component to communicate the error.

Fix interoperability issues

As previously touched on, complicated interaction patterns run the risk of compatibility issues. Because of this, it’s essential to test your efforts with actual assistive technology to ensure everything works as intended.

We made the following adjustments to provide better assistive technology support:

Use aria-level

Screen readers can report on the depth of a nested list item. For example, a li element placed inside of a ul element nested three levels deep can announce itself as such.

We found that we needed to explicitly declare the level on each li element to recreate this behavior for a tree view. For our example, we’d also need to set aria-level="3" on the li element.
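For example, a hedged sketch of just the relevant attribute:

<!-- Without the explicit aria-level, some screen readers lose track of
     depth; declaring it restores the "level 3" announcement. -->
<li role="treeitem" aria-level="3">README.md</li>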

This fix addressed multiple forms of assistive technology we tested with.

Explicitly set the node’s accessible name on the li element

A node’s accessible name is typically set by the text string placed inside the li element:

<li>README.md</li>

However, we found that VoiceOver on macOS and iOS did not support this. This may be because of the relative complexity of each node’s inner DOM structure.

We used aria-labelledby to get around this problem, with a value that pointed to the id set on the text portion of each node:

<li aria-labelledby="readme-md">
  <div>
   <!-- Icon -->
  </div>
  <div id="readme-md">
    README.md
  </div>
</li>

This guarantees that:

  • the node’s accessible name is announced when focus is placed on the li element,
  • and that the announcement matches what is shown visually.

Where we’d like to go from here

There are a couple of areas we’re prototyping and iterating on to better serve our users:

Using native anchor elements

Browsers apply a lot of behaviors to anchor elements, such as the ability to copy the URL.

We’d like to replace the JavaScript that listens for middle clicks with a more robust native solution, without sacrificing interoperability and assistive technology support.

Supporting multiple actions per node

Tree view constructs were designed assuming a user will only ever navigate to a node and activate it.

GitHub has use cases that require actions other than activating the node, and we’re exploring how to accomplish that. This is exciting, as it represents an opportunity to evolve the tree view construct on the web.

Always learning

An accessible tree view is a complicated component to make, and it requires a lot of effort and testing to get right. However, this work helps to ensure that everyone can use a core part of GitHub, regardless of device, circumstance, or ability.

We hope that highlighting the considerations that went into our work can help you on your accessibility journey.

Share your experience: We’d love to hear from you if you’ve run into issues using our tree view component with assistive technology. This feedback is invaluable to helping us continue to make GitHub more accessible.

How to make Storybook Interactions respect user motion preferences
https://github.blog/engineering/user-experience/how-to-make-storybook-interactions-respect-user-motion-preferences/
Wed, 20 Nov 2024
With this custom addon, you can ensure your workplace remains accessible to users with motion sensitivities while benefiting from Storybook’s Interactions.


Recently, while browsing my company’s Storybook, I came across something that seemed broken: a flickering component that appeared to be re-rendering repeatedly. The open source tool that helps designers, developers, and others build and use reusable components was behaving weirdly. As I dug in, I realized I was seeing the unintended effects of the Storybook Interactions addon, which allows developers to simulate user interactions within a story.

Storybook Interactions can be a powerful tool, enabling developers to simulate and test user behaviors quickly. But if you’re unfamiliar with Interactions—especially if you’re just looking to explore available components—the simulated tests jumping around on the screen can feel disorienting.

This can be especially jarring for users who have the prefers-reduced-motion setting enabled in their operating system. When these users encounter a story that includes an interaction, their preferences are ignored and they have no option to disable or enable it. Instead, the Storybook Interaction plays immediately on page load. These rapid screen movements can cause disorientation for users or, in some cases, can even trigger a seizure.

At this time, Storybook does not have built-in capabilities to toggle interactions on or off. Until this feature can be baked in, I am hoping this blog will provide you with an alternative way to make your work environment more inclusive. Now, let’s get into building an addon that respects users’ motion preferences and allows them to toggle interactions on and off.

Goals

  1. Users with prefers-reduced-motion enabled MUST have interactions off by default.
  2. Users with prefers-reduced-motion enabled MUST have a way to toggle the feature on or off without altering their operating system user preferences.
  3. All users SHOULD have a way to toggle the feature on or off without altering their user preferences.

Let’s get started

Step 1: Build a Storybook addon

Storybook allows developers to create custom addons. In this case, we will create one that will allow users to toggle Interactions on or off, while respecting the prefers-reduced-motion setting.

Add the following code to a file in your project’s .storybook folder:

import React, {useCallback, useEffect} from 'react'

import {IconButton} from '@storybook/components'
import {PlayIcon, StopIcon} from '@storybook/icons'

export const ADDON_ID = 'toggle-interaction'
export const TOOL_ID = `${ADDON_ID}/tool`

export const INTERACTION_STORAGE_KEY = 'disableInteractions'

export const InteractionToggle = () => {
  const [disableInteractions, setDisableInteractions] = React.useState(
    window?.localStorage.getItem(INTERACTION_STORAGE_KEY) === 'true',
  )

  useEffect(() => {
    const reducedMotion = matchMedia('(prefers-reduced-motion)')

    if (window?.localStorage.getItem(INTERACTION_STORAGE_KEY) === null && reducedMotion.matches) {
      window?.localStorage?.setItem(INTERACTION_STORAGE_KEY, 'true')
      setDisableInteractions(true)
    }
  }, [])

  const toggleMyTool = useCallback(() => {
    window?.localStorage?.setItem(INTERACTION_STORAGE_KEY, `${!disableInteractions}`)
    setDisableInteractions(!disableInteractions)
    // Refreshes the page to cause the interaction to stop/start
    window.location.reload()
  }, [disableInteractions, setDisableInteractions])

  return (
    <IconButton
      key={TOOL_ID}
      aria-label="Disable Interactions"
      onClick={toggleMyTool}
      defaultChecked={disableInteractions}
      aria-pressed={disableInteractions}
    >
      {disableInteractions ? <PlayIcon /> : <StopIcon />}
      Interactions
    </IconButton>
  )
}

Code breakdown

This addon stores user preferences for Interactions using window.localStorage. When the addon first loads, it checks whether the preference is already set and, if so, it defaults to the user’s preference.

const [disableInteractions, setDisableInteractions] = React.useState(
  window?.localStorage.getItem(INTERACTION_STORAGE_KEY) === 'true',
)

This useEffect hook checks if a user has their motion preferences set to prefers-reduced-motion and ensures that Interactions are turned off if the user hasn’t already set a preference in Storybook.

useEffect(() => {
    const reducedMotion = matchMedia('(prefers-reduced-motion)')

    if (window?.localStorage.getItem(INTERACTION_STORAGE_KEY) === null && reducedMotion.matches) {
      window?.localStorage?.setItem(INTERACTION_STORAGE_KEY, 'true')
      setDisableInteractions(true)
    }
  }, [])

When a user clicks the toggle button, preferences are updated and the page is refreshed to reflect the changes.

const toggleMyTool = useCallback(() => {
  window?.localStorage?.setItem(INTERACTION_STORAGE_KEY, `${!disableInteractions}`)
  setDisableInteractions(!disableInteractions)
  // Refreshes the page to cause the interaction to stop/start
  window.location.reload()
}, [disableInteractions, setDisableInteractions])

Step 2: Register your new addon with Storybook

In your .storybook/manager file (for example, .storybook/manager.tsx), register your new addon. The imports below assume Storybook 7 or later; adjust them to match your setup:

import React from 'react'
import {addons, types} from '@storybook/manager-api'

import {ADDON_ID, TOOL_ID, InteractionToggle} from './src/InteractionToggle'

addons.register(ADDON_ID, () => {
  addons.add(TOOL_ID, {
    title: 'toggle interaction',
    type: types.TOOL,
    match: ({viewMode, tabId}) => viewMode === 'story' && !tabId,
    render: () => <InteractionToggle />,
  })
})

This adds the toggle button to the Storybook toolbar, which will allow users to change their Storybook Interaction preferences.

Step 3: Add functionality to check user preferences

Finally, create a function that checks whether Interactions should be played and add it to your interaction stories:

import {INTERACTION_STORAGE_KEY} from './.storybook/src/InteractionToggle'

export const shouldInteractionPlay = () => {
  const disableInteractions = window?.localStorage?.getItem(INTERACTION_STORAGE_KEY)
  return disableInteractions === 'false' || disableInteractions === null
}


export const SomeComponentStory = {
  render: SomeComponent,
  play: async context => {
    if (shouldInteractionPlay()) {
      // ...run your interactions here
    }
  },
}

Wrap-up

With this custom addon, you can keep your work environment accessible to users with motion sensitivities while still benefiting from Storybook’s Interactions. For those with prefers-reduced-motion enabled, Interactions will be off by default, and all users will be able to toggle them on or off.

The post How to make Storybook Interactions respect user motion preferences appeared first on The GitHub Blog.

Exploring the challenges in creating an accessible sortable list (drag-and-drop) https://github.blog/engineering/user-experience/exploring-the-challenges-in-creating-an-accessible-sortable-list-drag-and-drop/ Tue, 09 Jul 2024 19:06:50 +0000

Drag-and-drop is a highly interactive and visual interface. We often use drag-and-drop to perform tasks like uploading files, reordering browser bookmarks, or even moving a card in solitaire. It can be hard to imagine completing most of these tasks without a mouse and even harder without using a screen or visual aid. This is why the Accessibility team at GitHub considers drag-and-drop a “high-risk pattern,” often leading to accessibility barriers and a lack of effective solutions in the industry.

Recently, our team worked to develop a solution for a more accessible sortable list, which we refer to as ‘one-dimensional drag-and-drop.’ In our first step toward making drag-and-drop more accessible, we scoped our efforts to explore moving items along a single axis.

An example of a sortable to-do list. This list has 7 items, all of which are in a single column. One of the items is hovering over the list to indicate it is being dragged to another position.

Based on our findings, here are some of the challenges we faced and how we solved them:

Challenge: Screen readers use arrow keys to navigate through content

One of the very first challenges we faced involved setting up an interaction model for moving an item through keyboard navigation. We chose to use arrow keys as we wanted keyboard operation to feel natural for a visual keyboard user. However, this choice posed a problem for users who relied on screen readers to navigate throughout a webpage.

To note: Arrow keys are commonly used by screen readers to help users navigate through content, like reading text or navigating between cells in a table. Consequently, when we tested drag-and-drop with screen reader users, the users were unable to use the arrow keys to move the item as intended. The arrow keys ignored drag-and-drop actions and performed typical screen reader navigation instead.

In order to override these key bindings we used role='application'.

Take note: I cannot talk about using role='application' without also providing a warning. role='application' should almost never be used. It is important to use role='application' sparingly and to scope it to the smallest possible element. The role='application' attribute alters how a screen reader operates, making it treat the element and its contents as a single application.

With that caution in mind, we applied the role to the drag-and-drop trigger to restrict the scope of the DOM impacted by role='application'. Additionally, we exclusively added role='application' to the DOM when a user activates drag-and-drop, and we remove it when the user completes or cancels out of drag-and-drop. By employing role='application', we are able to override or reassign the screen reader’s arrow commands to correspond with our drag-and-drop commands.
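To make this concrete, here is a minimal sketch of the approach. The component and handler names are hypothetical and the logic is simplified; this is not our production code:

import React, {useState} from 'react'

// Scope role='application' to the trigger, and only while a drag is active,
// so arrow keys reach our handlers instead of performing screen reader
// navigation.
function DragTrigger({onMove}) {
  const [dragging, setDragging] = useState(false)

  const handleKeyDown = event => {
    if (!dragging) return
    if (event.key === 'ArrowUp' || event.key === 'ArrowDown') {
      event.preventDefault()
      onMove(event.key === 'ArrowUp' ? -1 : 1) // reassigned arrow commands
    } else if (event.key === 'Enter' || event.key === 'Escape') {
      setDragging(false) // completing or canceling removes role='application'
    }
  }

  return (
    <button
      role={dragging ? 'application' : undefined}
      onClick={() => setDragging(true)}
      onKeyDown={handleKeyDown}
    >
      Move item
    </button>
  )
}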

Remember, even if implemented thoughtfully, it’s crucial to rely on feedback from daily screen reader users to ensure you have implemented role='application' correctly. Their feedback and experience should be the determining factor in assessing whether or not using role='application' is truly accessible and necessary.

Challenge: NVDA simulates mouse events when a user presses Enter or Space

Another challenge we faced was determining whether a mouse or keyboard event was triggered when an NVDA screen reader user activated drag-and-drop.

When a user uses a mouse to drag-and-drop items, the expectation is that releasing the mouse button (triggering an onMouseUp event) will finalize the drag-and-drop operation. Whereas, when operating drag-and-drop via the keyboard, Enter or Escape is used to finalize the drag-and-drop operation.

To note: When a user activates a button with the Enter or Space key while using NVDA, the screen reader simulates an onMouseDown and onMouseUp event rather than an onKeyDown event.

Because most NVDA users rely on the keyboard to operate drag-and-drop instead of a mouse, we had to find a way to make sure our code ignored the onMouseUp event triggered by an NVDA Enter or Space key press.

We accomplished this by using two HTML elements to separate keyboard and mouse functionality:

  1. A Primer IconButton to handle keyboard interactions.
  2. An invisible overlay to capture mouse interactions.

A screenshot of the drag-and-drop trigger. Added to the screenshot are two informational text boxes. The first explains the purpose of the invisible overlay, stating: "An empty <div> overlays the iconButton to handle mouse events. This is neither focusable nor visible to keyboard users. Associated event handlers: onMouseDown". The second explains the purpose of the visible iconButton, stating: "The iconButton can only be activated by keyboard users. This is not clickable for mouse users. Associated event handlers: onMouseDown, onKeyDown."
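In simplified form, that structure looks something like the sketch below. The names and styles are illustrative, not our production code:

import React from 'react'

// The visible button handles keyboard activation. The invisible overlay sits
// on top of it and captures real mouse events. NVDA's simulated mouse events
// target the focused button, which has no mouse handlers here, so they can't
// accidentally finalize a keyboard drag.
function DragHandle({onKeyboardActivate, onMouseActivate}) {
  return (
    <div style={{position: 'relative'}}>
      <button
        onKeyDown={event => {
          if (event.key === 'Enter' || event.key === ' ') onKeyboardActivate()
        }}
      >
        Drag
      </button>
      {/* Hidden from assistive technology and not focusable */}
      <div
        aria-hidden="true"
        style={{position: 'absolute', inset: 0}}
        onMouseDown={onMouseActivate}
      />
    </div>
  )
}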

Challenge: Announcing movements in rapid succession

Once we had our keyboard operations working, our next big obstacle was announcing movements to users, in particular announcing rapid movements of a selected item.

To prepare for external user testing we tested our announcements ourselves with screen readers. Because we are not native screen reader users, we moved items slowly throughout the page and the announcements sounded great. However, users typically move items rapidly to complete tasks quickly, so our screen reader testing did not reflect how users would actually interact with our feature.

To note: It was not until user testing that we discovered that when a user moved an item in rapid succession the aria-live announcements would lag or sometimes announce movements that were no longer relevant. This would disorient users, leading to confusion about the item’s current position.

To solve this problem, we added a small debounce to our move announcements. We tested various debounce speeds with users and landed on 100ms to ensure that we did not slow down a user’s ability to interact with drag-and-drop. Additionally, we used aria-live='assertive' to ensure that stale positional announcements are interrupted by the new positional announcement.

// debounce and announce are assumed helpers here: for example, lodash's
// debounce and an announce utility that writes to an aria-live region
export const debounceAnnouncement = debounce((announcement: string) => {
  announce(announcement, {assertive: true})
}, 100)

Take note: aria-live='assertive' is reserved for time-sensitive or critical notifications. aria-live='assertive' interrupts any ongoing announcements the screen reader is making and can be disruptive for users. Use aria-live='assertive' sparingly and test with screen reader users to ensure your feature implements it correctly.

Challenge: First-time user experience of a new pattern

To note: During user testing we discovered that some of our users found it difficult to operate drag-and-drop with a keyboard. Oftentimes, drag-and-drop is not keyboard accessible or screen reader accessible. As a result, users with disabilities might not have had the opportunity to use drag-and-drop before, making the operations unfamiliar to them.

This problem was particularly challenging to solve because we wanted to make sure our instruction set was easy to find but not a constant distraction for users who frequently use the drag-and-drop functionality.

To address this, we added a dialog with a set of instructions that would open when a user activated drag-and-drop via the keyboard. This dialog has a “don’t show this again” checkbox preference for users who feel like they have a good grasp on the interaction and no longer want a reminder.

A screenshot of the instruction dialog. The dialog contains a table with three rows. Each row contains two columns: the first column is the movement and the second column is the keyboard command. At the bottom is the “don’t show this again” checkbox.

Challenge: Moving items with voice control in a scrollable container

One of the final big challenges we faced was operating drag-and-drop using voice control assistive technology. We found that using voice control to drag-and-drop items in a non-scrollable list was straightforward, but when the list became scrollable it was nearly impossible to move an item from the top of the list to the bottom.

To note: Voice control displays an overlay of numbers next to interactive items on the screen when requested. These numbers are references to items a user can interact with. For example, if a user says, “Click item 5” and item 5 is a button on a web page the assistive technology will then click the button. These number references dynamically update as the user scrolls through the webpage. Because references are updated as a user scrolls, scrolling the page while dragging an item via a numerical reference will cause the item to be dropped.

We found it critical to support two modes of operation in order to ensure that voice control users are able to sort items in a list. The first mode of operation, traditional drag-and-drop, has been discussed previously. The second mode is a move dialog.

The move dialog is a form that allows users to move items in a list without having to use traditional drag-and-drop:

A screenshot of the Move Dialog. At the top of the dialog is a span specifying the item being moved, followed by a form to move items, then a preview of the movement.

The form includes two input fields: action and position. The action field specifies the movement or direction of the operation, for example, “move item before” or “move item after,” and the position field specifies where the item should be moved.

Below the input fields we show a preview of where an item will be moved based on the input values. This preview is announced using aria-live and provides a way for users to preview their movements before finalizing them.
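As a rough sketch, a move dialog form along these lines might look like the following. The names are hypothetical and the markup is far simpler than our actual dialog:

import React, {useState} from 'react'

// An action field, a position field, and an aria-live preview that lets
// users verify the movement before finalizing it.
function MoveDialogForm({itemName, itemCount, onMove}) {
  const [action, setAction] = useState('before')
  const [position, setPosition] = useState(1)

  return (
    <form
      onSubmit={event => {
        event.preventDefault()
        onMove(action, position)
      }}
    >
      <label>
        Action
        <select value={action} onChange={event => setAction(event.target.value)}>
          <option value="before">Move item before</option>
          <option value="after">Move item after</option>
        </select>
      </label>
      <label>
        Position
        <input
          type="number"
          min={1}
          max={itemCount}
          value={position}
          onChange={event => setPosition(Number(event.target.value))}
        />
      </label>
      {/* Preview of the pending movement, announced by screen readers */}
      <p aria-live="polite">
        {itemName} will be moved {action} position {position}.
      </p>
      <button type="submit">Move</button>
    </form>
  )
}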

During our testing we found that the move dialog was the preferred mode of operation for several of our users who do not use voice control assistive technology. Our users felt more confident when using the move dialog to move items and we were delighted to find that our accessibility feature provided unexpected benefits to a wide range of users.

In summary, creating an accessible drag-and-drop pattern is challenging, and it is important to leverage feedback and consider a diverse range of user needs. If you are working to create an accessible drag-and-drop, we hope our journey helps you understand the nuances and pitfalls of this complex pattern.

A big thanks to my colleagues, Alexis Lucio, Matt Pence, Hussam Ghazzi, and Aarya BC, for their hard work in making drag-and-drop more accessible. Your dedication and expertise have been invaluable. I am excited about the progress we made and what we will achieve in the future!

Lastly, if you are interested in testing the future of drag-and-drop with us, consider joining our Customer Research Panel.

The post Exploring the challenges in creating an accessible sortable list (drag-and-drop) appeared first on The GitHub Blog.

How we’re building more inclusive and accessible components at GitHub https://github.blog/engineering/user-experience/how-were-building-more-inclusive-and-accessible-components-at-github/ Tue, 07 May 2024 17:00:20 +0000 We’ve made improvements to the way users of assistive technology can interact with and navigate lists of issues and pull requests and tables across GitHub.com.

One of GitHub’s core values is Diverse and Inclusive. It is a guiding thought for how we operate, reminding us that GitHub serves a developer community that spans a wide range of geography and ability.

Putting diversity and inclusivity into practice means incorporating a wide range of perspectives into our work. To that point, disability and accessibility are an integral part of our efforts.

This consideration has been instrumental in crafting resilient, accessible components at GitHub. These components, in turn, help to guarantee that our experiences work regardless of how they are interacted with.

Using GitHub should be efficient and intuitive, regardless of your device, circumstance, or ability. To that point, we have been working on improving the accessibility of our lists of issues and pull requests, as well as our information tables.

Our lists of issues and pull requests are some of the most high-traffic experiences we have on GitHub. For many, they are the “homepage” of their open source projects, a jumping-off point for conducting and managing work.

Our tables help to communicate complicated information relationships and facilitate taking action on them with confidence. These experiences are workhorses, helping to communicate information about branches, repositories, secrets, attestations, configurations, internal documentation, etc.

Nothing about us without us

Before we discuss the particulars of these updates, I would like to call attention to the most important aspect of the work: direct participation of, and input from daily assistive technology users.

Disabled people’s direct involvement in the inception, design, and development stages is indispensable. It’s crucial for us to go beyond compliance and weave these practices into the core of our organization. Only by doing so can we create genuinely inclusive experiences.

With this context established, we can now talk about how this process manifests in component work.

Improvements we’re making to lists of issues and pull requests

A list of nine GitHub issues. The issues topics are a blend of list component work and general maintenance tasks. Each issue has a checkbox for selecting it, a status icon indicating that it is an open issue, a title, metadata about its issue number, author, creation date, and source repository. These issues also have secondary information including labels, tallies for linked pull requests and comments, avatars for issue assignees, and overflow actions. Additionally, some issues have a small badge that indicates the number of tasks the issue contains, as well as how many of them are completed. Above the list of issues is an area that lists the total number of issues, allows you to select them all, control how they are sorted, change the information display density, and additional overflow actions.

Lists of issues and pull requests will continue to support methods of navigation via assistive technology that you may already be familiar with—making experiences consistent and predictable is a huge and often overlooked aspect of the work.

In addition, these lists will soon be updated to also have:

  • A dedicated subheading for quickly navigating to the list itself.
  • A dedicated subheading per issue or pull request.
  • List and list item screen reader keyboard shortcut support.
  • Arrow keys and Home/End to quickly move through each list item.
  • Focus management that allows using Tab to explore individual list item content.
  • Support for Space keypresses for selecting list items, and Enter for navigating to the issue or pull request the list item links to.

This allows a wide range of assistive technologies to efficiently navigate and act on these experiences.
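While the Primer implementation is more involved, the core focus-management idea resembles a roving tabindex. Here is an illustrative sketch, not the actual component code:

// Arrow keys and Home/End move focus between list items, and only one item
// stays in the tab order at a time.
function onListKeyDown(event, items) {
  const current = items.indexOf(document.activeElement)
  if (current === -1) return

  let next = current
  if (event.key === 'ArrowDown') next = Math.min(current + 1, items.length - 1)
  else if (event.key === 'ArrowUp') next = Math.max(current - 1, 0)
  else if (event.key === 'Home') next = 0
  else if (event.key === 'End') next = items.length - 1
  else return

  event.preventDefault()
  items[current].tabIndex = -1 // previous item leaves the tab order
  items[next].tabIndex = 0 // next item becomes the single tab stop
  items[next].focus()
}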

Improvements we’re making to tables

A table titled, ‘Active branches’. It has six columns and seven rows. The columns are titled ‘branches’, ‘updated’, ‘check status’, ‘behind/ahead’, ‘pull request’, and ‘actions’. Each row lists a branch name and its associated metadata. The branch names use a GitHub user name/feature name pattern. The user names include people who worked on the table component, including Mike Perrotti, Josh Black, Eric Bailey, and James Scholes. They also include a couple of subtle references to disability advocates Alice Wong and Patty Berne. The branches are sorted by last updated order, and after the table is a link titled, ‘View more branches’.

We are in the process of replacing one-off table implementations with a dedicated Primer component.

Primer-derived tables help provide consistency and predictability. This is important for expected table navigation, but also applies for other table-related experiences, such as loading content, sorting and pagination requests, and bulk and row-level actions.

At the time of this blog post’s publishing, there are 75 bespoke tables that have been replaced with the Primer component, spread across all of GitHub.

This quiet success is due entirely to close collaboration with both our disabled partners and our design system experts. This collaboration helped to ensure that:

  1. The new table experiences were seamlessly integrated.
  2. In doing so, the underlying assistive technology experience was improved and enhanced.

Progress over perfection

Meryl K. Evans’ Progress Over Perfection philosophy heavily influenced how we approached this work.

Accessibility is never done. Part of our dedication to this work is understanding that it will grow and change to meet the needs of the people who rely on it. This means making positive, iterative change based on feedback from the community GitHub serves.

More to come

Tables will continue to be updated, and the lists should be released publicly soon. Beyond that, we’re excited about the changes we’re making to improve GitHub’s accessibility. This includes both our services and also our internal culture.

We hope that these components, and the process that led to their creation, help you as both part of our developer community and as people who build the world’s software.

Please visit accessibility.github.com to learn more and share feedback on our accessibility community discussion page.

The post How we’re building more inclusive and accessible components at GitHub appeared first on The GitHub Blog.

Exploring developer happiness, inclusion, and productivity at GitHub’s Design Conference https://github.blog/engineering/user-experience/exploring-developer-happiness-inclusion-and-productivity-at-githubs-design-conference/ Wed, 19 Jul 2023 17:00:44 +0000

As a design organization, we have the opportunity to make a significant impact on designing the platform for all developers. How does the emergence of creative AI impact our work? How can we achieve an inclusive experience for a spectrum of all abilities? What does designing for developer happiness look like? These are questions the GitHub Design team asked ourselves in our first-ever internal Design Conference, LGTM.

Why did we call the conference LGTM? LGTM is a common comment on pull requests meaning “looks good to me.” We’re constantly influenced by designing for developers, and we’ve all approved a pull request or two (or many) with “LGTM 👍!” The phrase has seeped into the very DNA of how we collaborate throughout our team.

Customers are at the center of everything we do on Design. As we aim to fundamentally transform the way people build software, the takeaways from this event reinforced our dedication to delivering a great product for all developers, everywhere.

Designing for productivity and AI

AI is already starting to transform the software development lifecycle, including innovations in the way we design. During the LGTM Design Conference, we explored the opportunities and implications for designers as AI continues to reshape the way we all interact with technology.

We were fortunate to be joined by several talented Design and Engineering leaders, Sally Woellner, Kathryn Gonzalez, and Amelia Wattenberger, in a fireside chat hosted by Martin Woodward. Their conversation touched on the most exciting opportunities unlocked by AI, including new ways to be creative, breaking down barriers for anyone to express their creativity, working alongside design systems to ship consistent output, and saying goodbye to the mundane work we all would rather not do.

Design systems and AI powered design tools are more about helping raise the floor versus limiting the ceiling of what a designer can do. Being able to give more time to the things that do actually give you space to think differently and have more creativity in your work… that’s where we’re going to see more creativity arise.

– Kathryn Gonzalez

Perhaps most interesting was the conversation around pushing AI beyond the showy and chat-box oriented ways it’s currently showing up and toward something more transformational and tailored to each person. Across the GitHub Design team, we’ve actively been exploring how this concept might come to life throughout the GitHub platform, whether it’s on pull requests, the dashboard, or GitHub Copilot. AI will certainly enable more innovation and interesting use-cases, and we’re excited to see how it takes shape.

…in many ways the greatest hype is now that the public is super aware of the capabilities of AI and everyone’s rallying around it to try and figure out exactly how it fits into our life and what we should be doing with it.

– Sally Woellner

Beyond the fireside chat, our own designers hosted talks on the basics of conversational AI and how to most effectively write prompts and set context to maximize the value of AI’s response. As designers, we are responsible for shaping the experience to deliver a particular intent. Tobias Ahlin pushed further into the theme of leveraging AI to unlock creativity. He used the example of Pablo Picasso’s hyperbolic and dark quote, “I have discovered photography. Now I can kill myself. I have nothing else to learn,” which we know was far from true. Picasso ultimately pushed art in a direction where photography could not go. We’re now at a critical juncture where creativity will start to look different. And the question is, how? AI tools such as GitHub Copilot and Midjourney are enabling us to quickly try out divergent ideas and explore new problem areas, and are opening new doors for creativity. We’re all at the forefront of exploring this future, and we can positively impact what the journey holds.

It’s clear that we won’t just see the same thing, and more of it, we’ll see something completely new too.

– Tobias Ahlin

In the end, it’s imperative that we keep putting the customer at the center of everything we do. As we interact with AI and build new AI tools, we need to continue talking about customer needs and jobs to be done.

This is wonderful technology, it’s super powerful, mind blowing, and it’s important that you still remain focused on what the job is to be done for the user. Think about the technology as another tool to deliver an outcome and the outcome matters more than the technology itself. Next, it’s important to think about the context. What data is available? What data would I use or rely on to deliver this particular outcome?

– Skylar Anderson

You can watch some of the talks on YouTube:

Designing for everyone

As designers, we aim to create a joyful, usable, and productive experience for everyone, including people with disabilities. During the conference, we heard from a diverse group of internal and external accessibility experts, as well as a panel of GitHub customers with lived experience using a variety of assistive technologies. Great accessibility and great design are two sides of the same coin, and more often than not, focusing on inclusive experiences informed by data results in a better experience for everyone.

Christina Mallon reminded us that disability is caused by a mismatch between a person and a human-made experience. It’s the world around us that wasn’t built with disability in mind, and everyone experiences disability at multiple points in their lives.

I only ever feel disabled when I can’t do something and when that wasn’t designed with accessibility in mind.

– Christina Mallon

Designing complex interactions for users with a wide range of abilities is challenging, and as we learned during the panel, one individual’s needs can even be contradictory to others’. It’s easy to fall into traps where accessibility checks pass but the resulting outcome isn’t a good experience for anyone, especially for those outside of mainstream disabilities. We explored the idea that ability is a spectrum and that universal design, as it currently exists, doesn’t work. Designing for all developers cannot be solved by designing for the average of all people’s experiences.

… we make the biggest mistake of our lives in thinking of accessibility as a dichotomy, where either you’re a disabled person or you’re not; ability is a spectrum. It’s hard to make things that work for everybody at all times. It’s difficult, it’s really difficult. But, that’s not an excuse for us to not start walking towards it. We should start walking towards it.

– Ather Sharif

Technology surfaced as a critical opportunity to create better experiences for people with disabilities. Voice-to-text options, like GitHub Copilot Voice and Google’s auto-complete functionality, were cited as tools greatly improving efficiency where completing work would otherwise take a lot longer with certain disabilities. Catharine McNally, who is deaf, further reinforced the opportunity technology presents by sharing her own hacks, such as using smart lights to flash repeatedly during a tornado watch and using ChatGPT to serve as a homegrown accessibility accommodation for meeting notes. We explored the possibility of leveraging AI and ML to proactively surface accessibility settings to users who exhibit certain behaviors and patterns.

What if instead of users adapting to new systems, the systems started to adapt to the user?

– Ather Sharif

A resounding takeaway was the importance of designing for accessibility and inclusion from the very beginning, with the perspectives of people with disabilities in the room. With the incredible speed at which AI tools are evolving, the industry collectively has an opportunity to be a part of the solution and design the blueprints of accessible AI.

Learn more about GitHub’s commitment to accessibility at https://accessibility.github.com/.

You can watch some of the talks on YouTube:

Designing for joyful developer experiences

GitHub’s Design team is constantly finding new ways to put developers first in everything we do, and one of those ways is sparking joy at the right moments. Sometimes this looks like entirely new and innovative solutions, and sometimes this looks like tiny incremental changes created with a whole lot of passion. We’re fueled by the constant flywheel of feedback from our customers and experiments.

Aja Shamblee led a session on how our Brand and Marketing Design team is drawing inspiration from cinema, and how continual learnings are contributing back to our flywheel to generate momentum and scale. We learned that the recent GitHub Copilot X campaign was inspired by the original poster from Ridley Scott’s Alien!

Every choice in lighting, set design, costume, hair and makeup, even angle shot, is meant to take you into the narrative, build a character, and spur some emotion through visual styles. Cinema as a principle gives us the opportunity to make objective aesthetic choices that evoke very deliberate reactions.

– Aja Shamblee

Aja highlighted how these core principles impacted the success of the recent Galaxy and GitHub Copilot X wins and revealed how they are being applied to in-progress Universe explorations. We believe brand is a product. And like any product, brands must evolve, adapt, and shift to highlight the heroes of GitHub, while maintaining a clear connection to our community.

Cameron Foxly expanded on the idea of cinematic storytelling with GitHub’s unique character-centered brand and what we lovingly refer to as “The Octocat Universe.” Here we are, 15 years out from Mona the Octocat’s original appearance, and not only is she still our logo, but she’s deeply loved by the community and her story is constantly evolving and adding new characters. Most recently, GitHub Copilot has emerged as the newest character in the Octocat Universe, and like the AI product it represents, GitHub Copilot is invented by Mona to be her AI assistant, powered by the collective knowledge of the Octocat community. The team is exploring new and interesting ways to bring GitHub Copilot to life across the product through illustration and animation as a way to create moments of joy. It’s been our team’s commitment to storytelling that has kept Mona’s magic alive for so long.

Illustration of Mona and Copilot working together to filter incoming notifications. Several green and pink shapes are floating out of a portal with a cloud of purple behind them.

Illustration with Hubot, Mona, and Copilot overseeing confirmation of their task list being completed. Mona is holding a clipboard and looking at a glowing green checkmark coming out of a rock that looks like a cauldron.

Grace Vorreuter and Jared Bauer from our Customer Research team shared how to measure and quantify delight by going beyond CSAT and NPS to satisfy customers’ emotional needs. We explored the hierarchy of customer needs, focusing on the deep delight that moves products from being good to great.

You can watch some of the talks on YouTube:

Designing a brand for designers, who design for developers

For our first LGTM Design Conference, we wanted to create a unique identity that was accessible, sparked joy, and was “GitHubby” to its core. The Brand team had fun creating everything from motion to slides to merch. Throughout the event, we saw the brand show up in everything from custom Zoom backgrounds to pixel art in Figma.

The team’s discovery process started with developing mild, medium, and hot looks to calibrate and set a direction. This “spice scale” is an integral part of Brand Studio’s process. As they explored various directions, it became clear that a simple thumbs up 👍 would be central to the identity.

Get a behind the scenes look into how the brand was made.

Moodboard with a collage of brightly colored images and patterns featuring the phrase LGTM.

LGTM Design Conference themes and sessions

LGTM Design Conference featured 16+ unique sessions across four themes: AI and innovation, accessibility and inclusion, designing for delight, and building GitHub using GitHub. The event had 35+ speakers, including many of our talented Design Hubbers, and we had the pleasure of hosting several external speakers, including Daniel Burka, John Maeda, Kathryn Gonzalez, Sally Woellner, Margaux Joffe, Riley Cran, and Christina Mallon.

You can watch some of the talks on our YouTube playlist.

Day 1 sketch Notes by Darby Thomas in a color watercolor painting style featuring highlights, images and quotes from sessions. At the center of the sketch is the LGTM thumb logo surrounded by blues, pinks, oranges, and yellows.

And, what’s any good conference without swag? The LGTM swag turned out too good not to share, so we made it available to everyone. Visit the GitHub Shop and order your favorite items!

The LGTM Design Conference looked good to us, and we hope it does to you as well.

The post Exploring developer happiness, inclusion, and productivity at GitHub’s Design Conference appeared first on The GitHub Blog.

Accessibility considerations behind code search and code view https://github.blog/engineering/user-experience/accessibility-considerations-behind-code-search-and-code-view/ Thu, 06 Jul 2023 19:00:51 +0000 A look at how we improved the readability of code on GitHub.

GitHub prides itself on being the home for all developers, including developers with disabilities. Accessibility is a core priority for all new projects at GitHub, so it was top of mind when we started our project to rework code search and the code view at GitHub. With the old code view, some developers preferred to look at raw code rather than code view due to accessibility barriers, and that’s what we set out to fix. This blog post will shed light on our process and three major tasks that required thoughtful design and implementation—code search and query builder, the file tree, and navigating code by keyboard.

Process

We worked with a team of experts at GitHub dedicated to accessibility, since while we are confident in our development skills, accessibility has many nuances and requires a large depth of knowledge to get right. This team helped us immensely in three main ways:

  1. External auditing
    In this project, we performed accessibility auditing on all of our features. This meant that another team of auditors reviewed our webpage with all the changes implemented to find accessibility errors. They used tools including screen readers, color contrast tools, and more, to identify areas that users with disabilities may find hard to use. Once those issues were identified, the accessibility team would take a look at those issues and suggest a proper solution to the problem. Then, it was up to us to implement the solution and collaborate with the team where we needed additional assistance.
  2. Design reviews
    The user interface for code search and code view were both entirely redesigned to support our new goals—to allow users to search, navigate, and understand code in a way they weren’t able to before. As a part of the design process, a team of accessibility designers reviewed the Figma mockups to determine proper HTML markup and tab order. Then, they included detailed explanations of what interactions should look like in order to meet our accessibility goals.
  3. Office hours
    The accessibility team at GitHub hosts weekly meetings where engineers can sign up and discuss how to solve problems with their features with the team and a consultant. The consultant is incredibly helpful and knowledgeable about how to properly address issues because he has lived experience with screen readers and accessibility. During these meetings, we were able to discuss complicated issues, such as proper filtering for code search, making an accessible file tree, and navigating code on a keyboard, along with other issues like tab order and focus management across the whole feature.

Implementation details

This has been written about frequently on our blog—from one post on the details of QueryBuilder, our implementation of an accessible finder component, to details about what navigating code search accessibly looks like. Those two posts are great reads and I strongly recommend checking them out, but they’re not the only thing that we’ve worked on. Thanks to @lindseywild, @smockle, @kendallgassner, @owenniblock, and @khiga8 for their guidance and help resolving issues with code search. This is still a work in progress, especially in areas like the filtering behavior on the search pages.

Two areas that also required careful consideration were managing focus after changing the sort order of search results and how to announce the result of the search for screen reader users.

Changing sort—changing focus?

When we first implemented this sort dropdown for non-code search types, if you navigated to it with your keyboard and then selected a different sort option, the whole page would reload and your focus moved back to the header. Our preferred, accessible behavior is that, when a dropdown menu is closed, focus returns to the button that opened the menu. However, our problem was that the “Sort” dropdown didn’t merely perform client-side operations like most dropdowns where you select an option. In this case, once a user selects an option, we do a full page navigation to perform the new search with the sort order added. This meant that the focus was being placed back on the header by default after the page navigation, instead of returning to the sort dropdown. For sighted mouse users, this is a nonissue; for sighted keyboard users, it is unexpected and annoying; for blind users, this is confusing and makes the product hard to use. To fix this, we had to make sure we returned focus to the dropdown after reloading the page.

Screenshot of search results. I have searched “react” and the “Repositories” filter by item is selected. There are 2.2M results, found in 1 second. The "Sort By" dropdown is open with “Best match” selected. The other options are “Most stars”, “Fewest stars”, “Most forks”, “Fewest forks”, “Recently updated”, and “Least recently updated”.
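One way to implement this, sketched with hypothetical names rather than GitHub’s actual code, is to persist which control triggered the navigation and restore focus once the new page loads:

const FOCUS_KEY = 'restore-focus-to'

// Before navigating, remember which control should regain focus
function onSortOptionSelect(dropdownButton, searchUrl) {
  sessionStorage.setItem(FOCUS_KEY, dropdownButton.id)
  window.location.assign(searchUrl)
}

// After the new page loads, return focus to that control
document.addEventListener('DOMContentLoaded', () => {
  const id = sessionStorage.getItem(FOCUS_KEY)
  if (id) {
    sessionStorage.removeItem(FOCUS_KEY)
    document.getElementById(id)?.focus()
  }
})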

Announcing search results

When a sighted user types in the search bar and presses ‘Enter’, they quickly receive feedback that the search has completed, and whether or not they found any results, by looking at the page. For screen reader users, though, giving feedback that there were results, how many, or that there was an error requires more thought. One solution could be to place focus on the first result. That has a number of problems, though.

  1. The user will not receive feedback about the number of search results and may think there’s only one.
  2. The user may miss important context like the Single sign on banner.
  3. The user will have to tab through more elements if they want to get back.

Another solution could be to use aria-live announcements to read out the number of results and success. This has its own problems.

  1. We already have an announcement on page navigation to read out the new title and these two announcements would compete or cause a race condition.
  2. There isn’t a way to programmatically force announcements to be read in order on page load.
  3. Some screen reader users have aria-live announcements turned off since they can be annoying.

After some discussion, we decided to move focus to the header with the number of search results and read it out after a search, then allow users to explore for errors, banners, or results on their own once they know how many results they received.
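In code, that pattern can be as small as the sketch below; the selector is hypothetical:

// Move focus to the results heading after a search completes, so screen
// readers announce the result count without competing aria-live messages.
function focusResultsHeading() {
  const heading = document.querySelector('#search-results-heading') // hypothetical id
  if (heading) {
    heading.setAttribute('tabindex', '-1') // focusable via script, but not in the tab order
    heading.focus()
  }
}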

A screenshot of search results showing 0 results and an error message saying, “Invalid repository name.” I have searched “repo:asdfghijkl.” The “Code” filter item is selected. There is also some help text on the page that says “Your search did not match any code. However, we found 544k packages that matched your search query. Alternatively, try one of the tips below.”

Tree navigation

We knew when redesigning the code viewing experience that we wanted to include the tree panel to allow users to quickly switch between folders or files, because understanding code often requires looking in multiple places. We started the project by making our own tree to the ARIA spec, but it was too verbose. To fix this, our accessibility and design teams created the TreeView, which is open source. This was implemented to support various generic list elements and use proper HTML structure to make navigating through the tree a breeze, including typeahead (the ability to type a bit and have focus move to the first matching element), proper announcements for asynchronous loading of deeply nested items, and proper IDs for all elements that are guaranteed to be unique (that is, not constructed from their contents which may be the same). The valid markup for a tree view is very specific and developing it requires careful reviews for common issues, like invalid child item types under a role="group" element or using nested list elements without considering screen reader support, which is unlike most TreeView implementations on the internet. For more information about the design details for the tree view, check out the markdown documentation. Thanks to @colebemis, @mperrotti, and @ericwbailey for the work on this.
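For reference, here is a pared-down sketch of the markup shape a tree view needs, simplified from the ARIA Authoring Practices rather than taken from Primer’s actual output:

// Every child of a role="group" element must itself be a treeitem.
const FileTree = () => (
  <ul role="tree" aria-label="Files">
    <li role="treeitem" aria-expanded="true">
      src
      <ul role="group">
        <li role="treeitem">index.ts</li>
        <li role="treeitem">utils.ts</li>
      </ul>
    </li>
  </ul>
)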

Reading and navigating code by keyboard

Before the new code view, reading code on GitHub wasn’t a great experience for screen reader users. The old experience presented code as a table—a hidden detail for sighted users but a major hurdle for screen readers, since code isn’t a table, and the semantics didn’t make sense in that context. For example, in code, whitespace has meaning. Whether stylistic or functional, that whitespace gets lost for screen reader users when the code is presented as a table. In addition, table semantics force users into line-by-line navigation, instead of character-by-character or word-by-word navigation. For many users, these hurdles meant that they mostly used GitHub just enough to be able to access the raw code or use a code editor for reading code. Since we want to support all our users, we knew we needed to totally rethink the way we structured code in the DOM.

This problem became even more complicated when we introduced symbols and the symbols panel. Mouse users are able to click on a “symbol” in the code—a special code element, like a function name—to see extra details about it, including references and definitions, in the symbol panel. They can then explore the code more deeply, navigating between lines of code where that symbol is found to investigate and understand the code better. This ability to dive deep has been game changing for many developers. However, for keyboard users, it doesn’t work. At best, a keyboard user can use the “Open symbols panel” button in the code header and then filter all symbol definitions for one they are interested in, but this doesn’t allow users to access symbol references when no definitions are found in a file. In addition, this flow really isn’t the same—if we want to support all developers, then we need to offer keyboard users a way to navigate through the code and select symbols they are interested in.

In addition, for many performance reasons mentioned in the post “Crafting a better, faster code view,” we introduced virtualization to the code viewer which created its own accessibility problems—not having all elements in the DOM can interfere with screen readers and overriding the cmd / ctrl + f shortcut is generally bad practice for screen readers as well. In addition, virtualization posed a problem for selecting text outside of the virtualization window.

This is when we came up with the solution of using a cursor from a <textarea> or a <div> that has the property contentEditable. This solution is modeled after Monaco, the text editor that powers VS Code. <textarea> elements, when not marked as readOnly, and contentEditable <div> elements have a built-in cursor that allows screen reader users to navigate code in their preferred manner using their built-in screen reader settings. While a contentEditable <div> would support syntax highlighting and the deep interactions we needed, screen readers don’t support them well1, which defeats the purpose of the cursor. As a result, we decided to go with the <textarea>. However, <textarea> elements do not support syntax highlighting or deeper interactions like selecting symbols, which meant we needed to use both the <textarea> as a hidden element and syntax-highlighted elements aligned perfectly above it. Since we hide the text element, we need to add a visual “fake” cursor to show and keep it in sync with the “real” <textarea> cursor.

While the <textarea> and cursor help with our goals of allowing screen reader users and keyboard users to navigate code and symbols easily, they also ended up helping us with some of the problems we had run into with virtualization. One example of this was the cmd + f problem that we talk about more in depth in the blog post here. Another problem this solved was the drag and select behavior (or select all) for long files. Since the <textarea> is just one DOM node, we are able to load the whole file contents and select the contents directly from the <textarea> instead of the virtualized syntax highlighted version.

Unfortunately, while the <textarea> solved many problems we had, it also introduced a few other tricky ones. Since we have two layers of text, one hidden unless selected and one visual, we need to make sure that the scroll states are aligned. To do this, we have written observers to watch when one scrolls and mimic the scroll on the other. In addition, we often need to override the default <textarea> behaviors for some events—such as the middle click on mouse which was taking users all the way to the bottom of the code. Along with that issue, different browsers handle <textarea>s differently and making sure our solution behaves properly on all browsers has proven to be time intensive to say the least. In addition, we found that some browsers, like Firefox, allow users to customize their font size using Text zoom, which would apply to the formatted text but not the <textarea>. This led to “ghost text” issues with selection. We were able to resolve that by measuring the height of the text that is rendered and passing that to the <textarea>, though there are still some issues with certain plugins that modify text. We are still working to resolve these specific issues as well. In addition, the <textarea> currently does not work with our Wrap lines view option, which we are working to fix. Thanks especially to @joycezhu, @andrialexandrou, @adamshwert, and @jbrown1618 who put in a ton of work here.
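As a rough illustration of that scroll syncing, far simpler than the production logic:

// Keep the hidden <textarea> and the highlighted overlay scrolled together.
// Assigning an unchanged scroll position doesn't re-fire 'scroll', so
// mirroring in both directions settles instead of looping.
function syncScroll(textarea, overlay) {
  const mirror = (from, to) => () => {
    to.scrollTop = from.scrollTop
    to.scrollLeft = from.scrollLeft
  }
  textarea.addEventListener('scroll', mirror(textarea, overlay))
  overlay.addEventListener('scroll', mirror(overlay, textarea))
}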

Always iterating

We have taken huge strides to improve accessibility for all developers, but we recognize that we aren’t all the way there. It can be challenging to accommodate all developers, but we are committed to improving and iterating on what we have until we get it right. We want our tools to support everyone to access and explore code. We still have work—a lot of work—to do across GitHub, but we are excited for this journey.

To learn more about accessibility at GitHub, go to accessibility.github.com.


  1. For more information about contentEditable and screen readers, see this article written by @joycezhu from our Accessibility Engineering team. 

The post Accessibility considerations behind code search and code view appeared first on The GitHub Blog.

Design’s journey towards accessibility https://github.blog/engineering/user-experience/designs-journey-towards-accessibility/ Wed, 17 May 2023 15:30:00 +0000 Design can have a significant impact on delivering accessible experiences to our users. It takes a cultural shift, dedicated experts, and permission to make progress over perfection in order to build momentum. We’ve got a long way to go, but we’re starting to see a real shift in our journey to make GitHub a true home for all developers.

As a design organization we have the opportunity to make a significant impact on making GitHub inclusive for all developers. Designing complex interactions for users with a wide range of abilities is challenging. It’s easy to fall into traps where checks pass but the resulting experience isn’t actually a good experience for anyone, including those with disabilities. We are early on our path to accessibility, but we’ve also come a long way.

As a Hubber of 7+ years, I’ve seen a shift from my time as a manager leading a small scrappy design systems team, incrementally finding wins to improve the accessibility of our UI, to leading an org that has dedicated specialists helping to incorporate accessibility earlier into our design process–which is often referred to as “shifting left.” We’re seeing progress and momentum.

When I think about the circumstances that needed to start this shift, several key elements stand out: we needed a cultural shift in the organization with support of leadership, we needed full-time accessibility specialists (it couldn’t just be volunteers), and we needed to give people permission to make incremental progress without feeling they had to achieve perfect results.

The path towards delivering successful outcomes for accessibility is a journey. Like most meaningful change it takes time, a growth mindset, and some trial and error. I’d like to share GitHub design’s journey so far.

Follow the energy

Every design team has that person that asks, usually after a presentation of a potentially great design, “does that pass color contrast?” I’ve been that person. Turns out, it’s not the most effective way to get people energized about tackling accessibility. So, what does?

One of the more effective tactics I’ve found is to follow where the energy is. This can be the energy of passionate people that start to move an effort forward, or the energy an impactful and exciting project draws. For example, in 2017 we shipped a visual refresh to our marketing pages which included updates to making colors more vibrant. Naturally, there was a lot of energy and excitement around this project. We were already working on a color system to support this launch and took the opportunity to also improve the color contrast on marketing pages and GitHub UI via the new color system.

When testing out a new process, I’ve also found it most effective to start with those that are most excited, passionate, and open-minded–they generate the infectious momentum and energy. We’ve kicked off two new pilots this year: our Accessibility Design Bootcamp and Design Champions. The Accessibility Design Bootcamp is a bespoke training curriculum that is tailored to specific teams with a high focus on cultivating a strong relationship with the GitHub Accessibility team. The Design Champions program is designed to empower our individual contributors (ICs) to dedicate time for accessibility through training, process improvements, and accessible design consultation across GitHub.

The Design Champions program required people who were self-starters and passionate about accessibility, so we deliberately asked for people to opt into the first cohort, and worked with managers to ensure the individuals had support and bandwidth to participate. In just a few weeks since the pilot launched, we’ve already seen new processes, tools and approaches prototyped across our design teams. This is energizing for the accessibility team, and the momentum is creating interest from potential future Design Champions.

Building leverage with design systems

When I joined GitHub in 2015, I was very happy that we had the start of a design system with Primer. At the time Primer only included CSS for a small subset of UI patterns, but it was open sourced, packaged up and distributed via npm, and included the most important and highly used components.

Over time the design systems team developed Primer’s CSS architecture to be more reusable by untangling it from specific features and views, updated naming conventions to make it more intuitive, and added flexible single purpose class selectors called utilities. This got us far for a while before we turned to React and Rails UI components, which helped us provide behaviors along with markup and styles. UI components also give us greater ability to provide complete and accessible interaction patterns for feature teams to leverage out of the box.

As we developed UI components, we also developed our primitives layer and moved our color system, typography, and spacing into design tokens. In early 2020, we successfully shipped a visual design refresh to GitHub, primarily leveraging design tokens and style changes. This update not only revitalized the platform’s aesthetics but also prompted a broader adoption of Primer styles and primitives throughout GitHub’s UI. Building on this momentum, we unveiled Dark Mode and introduced a revamped color system at GitHub Universe in 2020, ensuring seamless theming capabilities. To accommodate the diverse needs and preferences of developers, we promptly followed up with the addition of dimmed and high contrast modes.

The architectural investment enabled us to automatically propagate design changes throughout the entire design system, eliminating the need for tedious manual adjustments. Recently, the Primer team was able to make another significant step on our accessibility strategy, and updated our color system to address thousands of color contrast issues in our default light and dark themes.

Primer played a crucial role in our accessibility journey, enabling incremental progress from the start and laying essential foundations for our ongoing efforts. It has now become a critical component of our strategy to deliver inclusive experiences to developers. While a design system cannot guarantee universal inclusivity in everything you ship, it does give you a huge amount of leverage in delivering and maintaining a user experience that can be enjoyed by a broader set of users.

A cultural shift

The GitHub Design Org is home to our product designers, researchers, brand and marketing, and design systems teams. While we work embedded in squads with other functions like engineering and product, most design Hubbers also feel a strong identity to the GitHub Design Org, so although a cultural shift needs to happen at the company level, there are things we can do to influence our own microculture within the GitHub Design Org.

As the leader of our org, I made sure to be vocal about accessibility as a priority; however, the message needs to be repeated and brought into IC and project team communication. For design, that meant bringing this into discussions with my leadership team, ensuring we had clarity, and asking them to bring this into conversations with their managers and their ICs so that the message would cascade down.

Talking about accessibility needs to be backed up with programs, practices, and processes. Some ways we approached this were:

  • Hosting accessibility design office hours. Our designers working on accessibility host weekly office hours to give people a “live” opportunity to ask questions and get feedback. We hold regular design reviews with leadership and updated our review templates to include questions about accessibility to ensure it was part of every design review.
  • Highlighting in org-wide meetings. We hold monthly design “GitTogethers” where we often have ICs presenting on their work, and last November we hosted an “Accessibility Takeover” featuring multiple presentations from our accessibility designers and design engineers, showcasing examples of collaboration, and how to engage with the team. We also invited our Head of Accessibility, Ed Summers, to kick off the meeting to highlight the priority of this work.
  • Program management. It became clear we needed to think about shifting accessibility left across the entire GitHub Design Org and develop a holistic program. With our design leaders focused on delivering updates to our product, we decided to hire an Accessibility Program Manager who joined earlier this year. They’ve been able to work with design leadership as well as our counterparts in engineering and product to develop programs that focus on where design can make a difference in helping generate accessibility enablement at scale. Two such programs are our Design Champions program and Accessibility Training Bootcamp. Both are putting in place education, integration, and systematic approaches to scaling up accessibility across GitHub.
  • Inclusive workplace practices. As we’ve learned more about workplace accessibility we’ve incrementally updated our practices, such as improving how we run our GitTogethers, how we communicate in written and synchronous communication, and how we create inclusive presentations. Regardless of the abilities of who we currently work with, adopting inclusive workplace practices builds our practice, empathy, and understanding, and sets us up better for onboarding people using assistive technologies in the future.

Throughout all of these updates, we’ve celebrated wins to the point of almost over-communicating to keep the energy levels up while we tackle some difficult challenges.

All developers, not average developers

Designing an interface for many different user needs cannot be solved by designing the average of all people’s experiences. One size does not fit all, yet in the field of design we’re often making trade-offs in pursuit of finding a solution that works for the most people. That’s dangerous because what works for most people can completely exclude some people.

GitHub is used by millions of developers around the world, and it has been built and designed by hundreds, even thousands, of Hubbers over 15+ years. When we make changes, they can have big impacts on customers who have grown accustomed to interaction patterns and workflows. Even in pursuit of improving our UI for people using a variety of assistive technologies, we could still end up with solutions that do not provide a good user experience. Improving individual features may not be enough; I think we’ll increasingly find the need to provide greater control to our customers at the application-wide level, similar to how an operating system gives customization options that influence your experience using a computer.

Regardless, we have to be careful not to make assumptions and challenge our biases. We need to explore, test, and seek feedback from real people with a range of abilities using a variety of assistive technologies. If you have a suggestion for improvement, a feature request, or just want to share your thoughts, let us know in Accessibility Feedback Community.

It’s important to hire people with disabilities, too. It has been clear to me how important a diverse team is from my experience working on design systems, where we are in danger of embedding our bias into systems the whole company uses to deliver experiences to our customers. We’re more likely to build inclusive experiences if we better represent the people we are designing for.

Progress over perfection

I strongly believe in creating an environment where people have permission to make progress over perfection. This can feel uncomfortable, especially when we understand the impact of getting things wrong. Embracing others’ enthusiasm is key; encouraging a safe environment to experiment, where people can learn and make mistakes, will result in faster progress overall. Consider everyone throughout the company as part of the accessibility team, not just the dedicated employees, if you want to build momentum on the journey towards accessibility.

If you’d like to learn more about accessibility at GitHub, check out accessibility.github.com.

The post Design’s journey towards accessibility appeared first on The GitHub Blog.
