Gemini App Gets Remade ‘Audio Overview’ Player
A report this week from 9to5 Google’s Abner Li brings news that Google has added what Li describes as a “nice quality-of-life update” to the Gemini app on iOS and Android: the ability to generate audio overviews, replete with native playback controls. The feature is in version 16.27 of the Google app on Android, as well as Gemini on iOS.
“Previously, tapping on a generated Audio Overview opened the file in your browser with a long URL,” Li wrote of the interface changes for Audio Overview. “You could listen in that Chrome tab or download (and use the Files app) for an unwieldy experience. Now, the Android and iOS app, like gemini.google.com, uses a native player.”
Li’s story includes screenshots (on Android) showing users asking Gemini to “Generate Audio Overview” of various PDF files. In broad scope, this functionality strikes me as conceptually similar to how Forbes, for instance, includes a button on webpages that people can click or tap to have an article read aloud to them. It effectively turns news stories into audiobooks—which, as Dr. Victor Pineda told me last year, were originally conceived by people in the Blind community. Ipso facto, the Gemini app’s new Audio Overview feature is, at its core, an accessibility feature. Beyond Blind and low vision people, the overviews may very well be a boon to, say, strong auditory learners in grasping information. Likewise, it’s plausible someone with limited range of motion may find audio content more accessible than manually scrolling the aforementioned PDF document. Whatever the reason(s), it’s obvious the existence of Generate Audio Overview is as much about accessibility as it is purported convenience.
For its part, OpenAI has a Voice Mode for ChatGPT. I covered its Read Aloud feature last year, with my story including an interview with OpenAI’s Joanne Jang. She told me all about Read Aloud, as well as OpenAI’s philosophy on prioritizing accessibility for all.
‘Signing Into Streaming Accounts Is a Major Pain’
Ryan Christoffel this week wrote for 9to5 Mac about an issue he described as Apple TV’s “biggest problem”: signing into streaming accounts. Apple, he said, is cognizant of the issue and is attempting to ameliorate things by building a new API for developers.
“If you’re a linear TV user, switching to an Apple TV 4K with streaming apps can require a huge learning curve. Different accounts, different apps and credentials, and of course entering those credentials on your TV,” Christoffel wrote. “Even for the most tech-savvy, it’s a clunky experience entering streaming app credentials using a TV remote.”
Apple’s new framework for tvOS 26, called Automatic Sign-In, is characterized by the company as “[letting] people sign in to your app once—on one device—and access it across each of their Apple devices,” adding the upshot of the API is it “eliminates the need to re-enter usernames and passwords, so people can enjoy your app seamlessly from any screen.” Obviously, the user-facing benefits are contingent upon companies such as ESPN, Hulu, and others actually adopting the API in the same way, for example, Netflix supports the native playback controls on tvOS rather than building a custom setup.
“You’ll be able to log in to Netflix once [with Automatic Sign-In] on your iPhone, then automatically get logged in on iPad, Apple TV 4K, and so on,” Christoffel said.
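To illustrate the underlying idea—and only the idea, as I’m not reproducing Apple’s actual Automatic Sign-In API surface here—the sketch below shows how an app could approximate the same effect with a synchronizable iCloud Keychain item: save a sign-in token once, let the keychain propagate it across a user’s devices, and read it back elsewhere instead of prompting again. The service identifier is a hypothetical placeholder, and this assumes iCloud Keychain availability on the target platform.

```swift
import Foundation
import Security

// A minimal sketch of the concept, not Apple's Automatic Sign-In API
// itself: a keychain item flagged kSecAttrSynchronizable syncs via
// iCloud Keychain, so a token saved on iPhone can be read on other
// signed-in devices. "com.example.streamer" is a hypothetical identifier.
enum SharedCredentialStore {
    static let service = "com.example.streamer.sign-in-token"

    // Save the token once, replacing any stale copy first.
    static func save(token: String) -> Bool {
        var query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrSynchronizable as String: kCFBooleanTrue!,
        ]
        SecItemDelete(query as CFDictionary)
        query[kSecValueData as String] = Data(token.utf8)
        return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
    }

    // On another device, fetch the synced token instead of prompting
    // the user to retype credentials with a TV remote.
    static func loadToken() -> String? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrSynchronizable as String: kCFBooleanTrue!,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne,
        ]
        var item: AnyObject?
        guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
              let data = item as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }
}
```

The point of the sketch is the shape of the flow—sign in once, sync silently, never retype—which is exactly the shape Apple says Automatic Sign-In gives developers for free.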
Many streaming services have, to their credit, offered QR codes to help expedite the sign-in process. I use this method occasionally, and it’s been fine, if not ideal. While there is a cogent argument for the accessibility of QR codes in presenting information to disabled people, Apple’s solution vis-à-vis its new Automatic Sign-In API should prove markedly more accessible. The idea behind it is conceptually identical to how, say, AirPods pairing works: do so once and it’ll spread across the galaxy, so to speak. Moreover, the framework is yet one more example of a de-facto accessibility feature; Christoffel rightly frames the problem as tedious, annoying, and inconvenient, but as ever, it’s an accessibility issue too. Manually signing in to each and every streaming service—even with QR codes in tow—can be an arduous journey for many in the disability community. It isn’t a trivial distinction because, as I often say, it’s the little things that end up making the biggest difference in shaping the overall user experience—good or bad. For those with disabilities, implementation details like sign-in can have an outsized effect on how someone can use a product.
In other words, there’s a thin line between convenience and accessibility.
Finally, here’s a tvOS tip that’s helped me with accessibility. Thanks to Joe Rossignol at MacRumors, I recently learned it’s possible to change the keyboard layout from a linear view to a grid view. Having letters, numbers, et al., packed together in a compact space is much easier to maneuver; I prefer the grid view to the seemingly infinite expanse of the (default) linear view. (And of course, I use my iPhone as a keyboard too.)
Forthcoming PlayStation Software Update Adds Multi-Device Pairing to DualSense Controllers
Sony on Wednesday announced its DualSense controllers will receive the ability to be automatically paired to multiple devices simultaneously. The software update, available in beta later this week, was detailed by Sony Interactive Entertainment’s vice president of product management Shuzo Kikuchi in a post published on Sony’s PlayStation blog.
“Many PS5 peripherals, including the DualSense wireless controller, are designed to support a variety of devices beyond PS5 including PC, Mac, and mobile devices. We believe enabling compatibility of our peripherals across multiple platforms creates a more flexible and seamless gaming experience,” Kikuchi wrote in the introduction. “As part of this effort, we’re excited to announce that the latest PS5 system update beta will preview a new feature that allows DualSense wireless controllers and DualSense Edge wireless controllers to be paired across multiple devices simultaneously, making it easier to switch between them without needing to pair each time.”
Kikuchi notes users have heretofore been required to re-pair their controller(s) each time they wanted to use one with another device. With the update, that tedium will be gone; users will be able to pair up to four devices at the same time and easily switch between them from the controller itself. “For example, you can take your controller which you use with your PS5, then seamlessly switch connection to a PC to play PC games, or connect it to a smartphone to enjoy Remote Play from your PS5,” Kikuchi said. “With this enhanced flexibility, you can enjoy gaming more freely across multiple devices.”
The pairing and switching process involves a combination of button presses on the controller itself.
For its part, Apple has supported PlayStation controllers for several years on its panoply of platforms. In addition, the company announced at WWDC last month that visionOS 26 supports Sony’s PlayStation VR2 Sense controllers, after a long stretch of rumors to that effect.
Today’s news from Sony is welcome, particularly in an accessibility context. The big win here is alluded to in Kikuchi’s piece, as users needn’t manually pair their PlayStation controller with their other device(s). As with setting up AirPods, pair once and it propagates to one’s other kit. For a disabled person who has any number of cognitive, motor, or visual conditions that could cause friction with bespoke pairing, this feature of ostensible convenience transcends into something arguably more meaningful: it breaks a barrier and makes gaming more accessible to all. It goes to show how a seemingly mundane implementation detail—pairing accessories—can in actuality have a profound role in shaping a positive user experience for people.
Electronic Arts’ New ‘FC26’ Includes ‘First-Ever’ High Contrast Mode, More Accessibility Features
Electronic Arts (EA) this week published so-called “pitch notes” for the latest edition of its forthcoming EA Sports soccer title, FC26. The game comes out on September 25.
The post’s authors are a trio of FC26 designers and producers: Keegan Sabatino, Gigo Navarro, and Thomas Caleffi. They write the team’s overarching goal with the new version of the game is “[making] gameplay better,” which has been achieved, they said, by focusing on three key areas: (1) competitive and authentic gameplay; (2) gameplay fundamentals; and (3) features and updates inspired by listening to user feedback.
“FC26 is powered by your feedback, and we aim to make it feel more responsive, rewarding, and enjoyable,” the team said of the game’s muse during development.
Most pertinent to me, of course, is accessibility. The team writes it has “worked closely with our Accessibility Design Council to introduce a range of new accessibility features” to FC26, with the marquee feature being a high contrast mode. EA touts FC26 as “one of the first-ever sports titles” to include such functionality. The all-new high contrast mode is described as “[increasing] the visual separation between footballers, the pitch, and other key gameplay elements, making action clearer and more visible to those who need it,” with EA adding the mode should prove especially helpful to Blind and low vision gamers who “find it challenging to track fast-paced gameplay.” Moreover, high contrast mode is highly customizable; according to EA, users can choose whether to apply the increased contrast to a player’s entire body or just their kit, for example. Additionally, there’s a Pitch Saturation Slider that people can fiddle with to “adjust the intensity of the pitch colors, helping you focus on the most important parts of the game.” The feature’s raison d’être is characterized by EA as “[reducing] visual clutter and [making] gameplay more comfortable for everyone.”
Beyond high contrast mode, EA notes other accessibility features for FC26 include an “accessibility boot flow screen” that, similar to how Apple devices work, gives people the chance to toggle on whatever accommodation(s) they need to accessibly play the game, as well as improved, more configurable captions (or subtitles, as EA says).
FC26 will be compatible with PlayStation 4 and 5, Xbox Series X and S, Nintendo Switch and Switch 2, and PC, as well as cloud gaming services such as Amazon’s Luna.
A Brief (and Belated) Review of Beats Studio Buds+
Here’s my short take on Beats Studio Buds+: the transparent ones look fucking cool.
My longer take is I bought the $170 earbuds after seeing them discounted during last year’s Prime Day. I’ve used them with good regularity over the last year, alternating between them and my AirPods Pro, and they’re very nice. Beats being an Apple-owned subsidiary, its products seem to more or less achieve feature parity with the canonical AirPods line. With notably few exceptions—the Beats lack support for head gestures for accepting or declining phone calls, as one example—the Studio Buds+ stand toe-to-toe with AirPods by offering 90% of the latter’s features. There’s active noise cancellation. There’s Transparency mode. There’s the same custom chip. There’s the one-tap, iCloud-backed setup and pairing process. The case is nice, replete with the cool-as-hell transparent aesthetic. There’s USB-C for charging. And yes, audio quality is top-notch.
At first blush, Beats Studio Buds+ are great—yet the devil lies in the accessibility details.
Accessibility obviously means different things to different people, but in my experience, the Beats Studio Buds+ fall short in terms of their hardware accessibility. The biggest pain point for me is getting them in and out of my ears without dropping them. Beats’ earbuds have decidedly shorter “stems” than AirPods, which is bad because it makes grabbing them more precarious. My lackluster fine-motor skills are such that I find there’s less to grab onto with the Studio Buds+; this means I have to be much more deliberate and careful when taking them out of the case and putting them in my ears or vice-versa. It’s not a matter of can versus can’t—in absolute terms, I can get them in and out of my ears. I’ve grown accustomed to the design of the Studio Buds+ such that I know how best to finagle them for my needs. The crux of the issue is I shouldn’t have to handle them so gingerly. By contrast, the stems on the AirPods (of any surname) strewn about my office—ah, the life of a regular gadget reviewer—are a de-facto accessibility feature. What grab bars are to my shower, the stems are to my AirPods. The markedly lengthier stems give me something to grab onto as I get the earbuds in and out of my ears and back into the case. The saving grace for the Studio Buds+ is, again like AirPods, the case is magnetic, so the earbuds are “sucked in” by physics when I go to put them away.
Then there’s the charging story. I won’t rehash my oft-cited argument, but suffice it to say, USB-C continues to belie accessibility in a functional sense. It’s surely nice to have One Cable To Rule Them All, but said cable oneness means little if you struggle to plug in the cable to charge the Studio Buds+. Ironically, this problem is exacerbated by my pair’s otherwise badass transparent finish, as it visually obfuscates the port in a similar way to how many people find Liquid Glass to be visually obfuscating (and thus less legible) in iOS 26, et al. As a practical matter, my hand-eye coordination sometimes struggles with finding the USB-C port on my Studio Buds+; it happens often enough that, once again, I beg of Apple/Beats to throw standardization by the wayside and make a magnetic USB-C port. I’m no materials engineer, so maybe this isn’t feasible, but my point stands: a magnetic port would make charging much more accessible.
Score another point for AirPods and wireless charging, which the Studio Buds+ lack.
Finally, a cursory note on AirPods Max. Apple sent me a pair of the “new” ones shortly after the company announced back in late March that iOS 18.4 would bring lossless audio and ultra-low latency audio to the high-end headphones. The AirPods Max have USB-C now and, despite sounding incredible, suffer from the same inaccessibility issues as the Studio Buds+ (and my old pair of Lightning AirPods Max, for that matter). Not to mention AirPods Max remain, five years into life, heavy on the head that wears them.
Anyway, back to Studio Buds+. USB-C aside, they’re a terrific alternative to AirPods.
Google Introduces ‘Gemini Drops’ for Its AI Chatbot
Abner Li reports for 9to5 Google this week that Google has begun doing what it calls monthly “Gemini Drops” for its AI assistant. On its Gemini Drops website, the company describes the drops as “[making] it easier to keep up with what’s being released” and encourages users to “check here regularly to find feature announcements, product tips, and see how our community is using Gemini to create, research, and do more.”
Gemini Drops are modeled after the Pixel Drops for Google’s flagship smartphones.
The Gemini Drops page features things like model updates such as Gemini 2.5 Pro, as well as accessibility-oriented features like Gemini Live Captions, but as Li points out, the latest addition is something Google calls the Productivity Planner Gem. Google describes the Planner Gem as “[bringing] your emails, calendar, and more all in one place for your easiest prioritization yet.” According to Li, the software can do things like give you a morning brief on your task(s), plan and schedule your day, suggest which projects to tackle today, and more. The conceit here is obvious: Google is positioning Gemini as a tool with which people can be more productive in their work. Chatbots like Gemini that rely on artificial intelligence are particularly well-suited for such jobs. If nothing else, a computer’s greatest strength lies in its ability to automate. It’s why, for example, keyboard shortcuts exist, and why we type documents on computers instead of using typewriters or—as I did long ago—painstakingly writing them by hand.
But Gemini (and by extension, Planner Gem) is good for something else: accessibility.
To use Gemini to, say, automatically collate a list of one’s to-do items for the day can be a boon for disabled people who, for instance, may have a cognitive condition that makes sorting through information inaccessible. Likewise, someone with visual and/or fine-motor disabilities may find it hard to comb through their task manager, however tidy it is, to find the top-line items to complete on a given day. Again, utilizing a tool like the Planner Gem vis-à-vis Gemini to apply automation to this sort of digital housekeeping can be worth its weight in gold in an accessibility context. It does the work for a person in a way that isn’t born of sheer apathy or laziness—or even convenience. On the contrary, it taps into computers’ greatest strength to make being productive a more accessible endeavor. As I often say, this isn’t trivial, especially to disabled people.
It’s for these very reasons I love using Gemini for web searches. While the AI hasn’t totally supplanted using Google Search in Safari, it has subsumed a lot of it for me. Having Gemini present me with information in a singular space, spanning various prompts, is far more accessible than juggling a trillion and one browser tabs. Of course I must be vigilant about possible errors and hallucinations, but that’s okay; nothing found through Google Search is, or ever was, a paragon of truth anyway. One must be vigilant either way. Better yet, I like the warm and fuzzy feelings I get from using Gemini in this manner to keep my finger on the pulse of the hottest technology in AI.
The advent of the Planner Gem comes on the heels of OpenAI releasing ChatGPT Agent. The tool is similarly scoped to the Planner Gem, with OpenAI describing the Agent as “[completing] complex online tasks on your behalf [by] seamlessly [switching] between reasoning and action—conducting in-depth research across public websites, uploaded files, and connected third-party sources (like email and document repositories), and performing actions such as filling out forms and editing spreadsheets—all while keeping you in control.” My pal (and former editorial overlord) Federico Viticci at MacStories posted a piece over the weekend with his early thoughts on the software.
Katarina Mogus Talks Diversity and Inclusion in Tech, Being Nerdy and Girly, More in Interview
For Katarina Mogus, her advocacy for more women in tech parallels that of more accessibility in tech. Intersectionality notwithstanding, the industry sorely needs both.
“[It’s important] for the younger generation,” she said to me in a recent telephone interview. “Maybe I started creating videos because there wasn’t someone for me to look at and to be inspired by. I love having a female, girly spin to all of my videos and I feel like that’s what has made my platform so successful: because I’ve opened up the doors to a whole new market of women who may like certain features or want things explained to them differently, or want things to look a certain way or match their office and things like that. It’s great for the industry to have female tech creators, especially in such a male-dominated field, because we bring such unique thoughts and unique ideas to everything. The majority of my viewership is women, so a lot of brands love that, and it’s opening up a whole new area and ideas for brands to continue to create products and features that a lot of the population really enjoy and love.”
Mogus, a Toronto native who worked pre-pandemic for her hometown Toronto Raptors and Toronto Maple Leafs of the NBA and NHL respectively, is a full-time content creator who boasts 2.5 million followers on TikTok, another 332,000 on Instagram, and 146,000 more on YouTube. When COVID put the world in a vice grip back in 2020 and professional sports shut down, Mogus knew she “had to pivot” career-wise to meet the moment. She started making videos in which she offered advice to businesses on how to best succeed in an online-only format; the videos went viral “right away,” Mogus said, which landed her work running social media for companies—all the while, she remained committed to churning out her educational content. Naturally, people began asking Mogus what kit she used to create her videos. As a nerd herself, she was more than happy to spill the technical tea to those looking to walk a similar path.
“All my videos [on TikTok] were surrounding the Apple ecosystem… all you need is the iPhone you have at home with you to create amazing content [and] amazing product photography [and] your laptop,” Mogus said. “Since it was a pandemic, people were also wondering ‘What desk setup do we need?’ I became the hub for all people who were working from home—small businesses, creative entrepreneurs, or anyone who wanted to really thrive using their technology during the pandemic—and my videos kept going from there, continuous with all the technology.”
Her tech-focused videos quickly gained the attention of captains of industry such as Apple, as well as Google, Meta, and Samsung. Since the pandemic, Mogus has worked with all these heavyweights in covering their products; she’s even worked with luxury automakers like Ferrari and Porsche in a similar capacity, which satiates her keen interest in cars. Even L’Oreal has tapped Mogus’ expertise in showcasing what she called its “beauty tech innovation.” Her nerdiness is strong, as she explained she’s “always been the friend” people go to for tech questions and troubleshooting. She has a “passion” for being helpful and an educator, describing herself as “really handy” and someone who “knew all the tips and tricks” long before she was a professional creator.
Like accessibility in mainstream tech media, Mogus sees women in tech as still being outliers for the most part. She has humbly anointed herself a trailblazer of sorts, as she said there weren’t any educational TikTok videos when she started posting there; it was all “dancing videos and stuff,” Mogus said. In the years since, the tide has turned such that an increasing number of women are following the trail she blazed. One such person is Justine Ezarik, known as iJustine, whom Mogus “really looks up to” and now considers a close friend. Mogus called Ezarik “an amazing woman who opens up doors for other women in this industry,” adding she models herself after iJustine’s example.
“That’s what I continuously strive to do as well,” Mogus said. “I’m really inspired by women founders and entrepreneurs or other women who work in tech who are executives, and I love to highlight them and showcase them on my podcast.”
Mogus is host of Future Obsession.
That Mogus caught Apple’s eye has a bit of a six degrees of separation effect for me. As a fellow member of the technology media, I’ve known Mogus for years now, personally and professionally, and have seen her from afar when covering Apple events. Likewise with the aforementioned iJustine, whom I also know and have interviewed in the past.
Like Ezarik, Mogus is an ally of disabled people and an advocate for accessibility.
My conversation with Mogus happened shortly after WWDC. When I asked what announcement(s) excited her most, Mogus told me she “loves” Apple’s new Liquid Glass design language and how it “streamlines” design across Apple’s expanse of an ecosystem. Moreover, Mogus is appreciative of Apple’s willingness to allow users to customize, say, iOS with widgets and more. Some of her earliest TikTok material focused on this personalization, with Mogus praising Apple for “listening to its users” by empowering people to get creative without necessarily relying on third-party tools. Elsewhere, Mogus spoke effusively about the overhauled multitasking paradigm in iPadOS 26, telling me she “really loves now we can have multiple different windows on the screen and have a clean and clear multitasking view and kind of turn your iPad into a mini MacBook.” Nonetheless, Mogus called her M4 MacBook Pro “my baby,” along with her iPhone 16 Pro Max. She’s bullish on artificial intelligence too, using Google Gemini and ChatGPT daily to help boost her productivity.
When asked about feedback on her work, Mogus said it’s positive from brands and followers alike. The main thing, she told me, is her visibility as a young woman in tech, showing off her technical acumen. Similarly, brands have returned to solicit her thoughtful coverage time and time again because of the way she “[makes] videos that are more fun and more accessible for more people to understand” without all the technical minutiae. In a nutshell, Mogus can play both sides of the ball with equal skill.
“A lot of creators are very spec-focused or the way they explain it might be too difficult for someone who’s just starting out or doesn’t know what to do with their device,” she said. “I think I make it really easy for people to understand, and brands really appreciate that too. At the same time, when I first started creating content, a lot of brands may have doubted me because I am a young female showing up with a pink suit on and a really cute pink camera. I feel like I had to prove myself to these brands that ‘Hey, I can create amazing content [and] educate my audience and the broader audiences on what’s important in technology: what’s new, what’s newest in innovation.’ I’ve built trust with [different brands] because I’ve built such a large community over the past five years.”
Looking towards the future of her work, Mogus said she’s focused now on continuing to push for greater diversity and inclusion in the tech media landscape. One conduit for that is her aforementioned Future Obsession podcast, of which she described as a show “[talking] about all the latest innovations and tech trends and what’s happening in the industry with amazing executives from companies like Spotify, Apple, Meta, Google and female founders” and added “I’m highlighting female voices and female executives, not only talking about what’s new in tech and all the latest innovations and trends, but also hearing from them for their inspiration and their growth.”
“[The show is] my passion right now,” Mogus said of working on Future Obsession. “I’m really leaning into it, and I’ve had a lot of incredible responses and a lot of exciting episodes coming up where I’m highlighting these incredible females.”
Google’s ‘Fast Pair’ Feature May See Scarce Adoption, but Its Accessibility Gains Are Abundant
Ben Schoon reports for 9to5 Google this week that Google has confirmed its “Fast Pair” feature, introduced in 2022, is available on only four devices. Schoon writes the supported devices are all streaming boxes: the Chromecast with Google TV, Chromecast with Google TV (HD), Google TV Streamer, and Walmart Onn 4K Plus.
According to Schoon, Fast Pair, which he describes as “a feature that works across Android devices, ChromeOS, and compatible headphones to allow for easy pairing through an on-screen pop-up, as well as the ability to quickly switch headphones between devices,” is optional for developers. He notes Google told Android Authority device makers have discretion over supporting Fast Pair in their product(s), adding Google is “actively working with its partners to enable support on more devices.”
Fast Pair, conceptually speaking, is identical to Apple’s longstanding setup for things like AirPods, HomePod, and even Beats devices. In an accessibility context, I’ve long banged the drum that what’s ostensibly an implementation detail—a means to an end—is actually of great importance. To wit, rather than having to, say, find the Bluetooth section of Settings to find and pair a device, users can simply, for example, open the lid of their AirPods’ case close to their iPhone to initiate pairing. Better still, Apple uses iCloud (because most people are signed in) to propagate the pairing to a user’s constellation of devices. The ultimate accessibility wins are twofold: (1) a person with a cognitive and/or visual disability needn’t remember how and where to pair something, then be forced to skim a list to find the correct product and then, motor-wise, tap the Connect button; and (2) that, in Apple’s case, the pairing setup involves iCloud on the backend means a person needn’t tediously and repetitiously work so hard to pair their AirPods with their Apple Watch and MacBook, for instance. Put another way, hitting the Connect button once when pairing one’s AirPods is a shortcut that consolidates what’s normally a multi-step task into one. Most people, including folks at Apple, would say this is convenience—and it is, but it’s just as aptly about accessibility.
The bottom line? Apple’s accessory setup game is a de-facto accessibility feature.
For the longest time, Apple used a private API to build the works-like-magic setup flow for accessories like AirPods. In releasing iOS 18 last year, however, the company finally extended the function to App Store developers; with the then-new AccessorySetupKit framework, device makers can make the setup process of their own products as accessible as Apple’s blessed first-party kit. That isn’t at all trivial, especially for those people who voraciously add smart home stuff to their dwelling.
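As a rough illustration of what adopting the framework looks like, here’s a minimal AccessorySetupKit sketch. The accessory name, service UUID, and image are hypothetical stand-ins, not from any shipping product.

```swift
import AccessorySetupKit
import CoreBluetooth
import UIKit

// A minimal AccessorySetupKit sketch. "Acme Earbuds," the Bluetooth
// service UUID, and the image name are hypothetical placeholders.
final class EarbudsSetup {
    private let session = ASAccessorySession()

    func beginSetup() {
        // Describe how the accessory advertises itself over Bluetooth LE.
        let descriptor = ASDiscoveryDescriptor()
        descriptor.bluetoothServiceUUID = CBUUID(string: "FE2C") // hypothetical

        let item = ASPickerDisplayItem(
            name: "Acme Earbuds",
            productImage: UIImage(named: "acme-earbuds")!,
            descriptor: descriptor
        )

        // Activate the session, then present the system's one-tap pairing
        // sheet—the same style of flow AirPods get out of the box.
        session.activate(on: DispatchQueue.main) { [weak self] event in
            guard let self else { return }
            switch event.eventType {
            case .activated:
                self.session.showPicker(for: [item]) { error in
                    if let error { print("Picker failed: \(error)") }
                }
            case .accessoryAdded:
                print("Paired: \(event.accessory?.displayName ?? "accessory")")
            default:
                break
            }
        }
    }
}
```

The accessibility payoff is in what the user never does: no digging through Bluetooth settings, no skimming a device list, no manual Connect button—the system sheet appears and one tap finishes the job.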
Back to Google. I got an Onn 4K Pro from Walmart a few months ago to play around with, and didn’t know until reading Schoon’s story that it doesn’t support Fast Pair. The streaming box’s setup process is easy enough, especially using the Google Home app on your phone, but Fast Pair would make it that much better. It’s good to see Google, like Apple, lead by example by integrating Fast Pair with Pixel Buds. Especially for something like third-party earbuds, Fast Pair support would be a huge accessibility win for many people.
How FIFA Made The Club World Cup Accessible to All
In late May, FIFA announced this year’s FIFA Club World Cup would be “the most inclusive tournament ever” as tournament matches would be augmented with audio descriptions for Blind and low vision people. The ADC, or audio descriptive commentary, was first used by FIFA during the 2014 World Cup in Brazil, followed by the 2022 World Cup in Qatar and the 2023 Women’s World Cup held in Australia and New Zealand.
FIFA worked with disability inclusion company AccessibAll in producing the ADC.
Natalie Gross, FIFA’s accessibility coordinator, explained to me in an interview earlier this week—after the Club World Cup ended—that part of her purview is to “make the fan experience more accessible,” both digitally and in person at live events. Why her work matters, she told me, is everyone—disabled people included—uses computers to find information on things, especially sports. There are “thousands and thousands of people getting online” to inform themselves, but what happens when the information isn’t accessible to them? No accessibility means no information, and that’s a major problem. It can be “alarming,” Gross said, for those in the disability community to face barriers like no screen reader support, captioning, or whatever other accommodation(s) they need. In a nutshell, her job entails ensuring those technologies are available to all.
“When it comes to tech and accessibility, those are the things that we’re [at FIFA] looking at digital accessibility,” Gross said. “That’s part of what I do… part of what I do is go through our [FIFA] app and make sure it’s accessible for everyone and they’re able to navigate through it without asking for help. That’s where we stand as far as accessibility with technology… I’m looking at whether folks are able to access it by themselves.”
The impetus for enriching the Club World Cup with audio descriptions began with FIFA’s fan operations team ruminating over ways in which to make the fan experience better, with Gross saying FIFA as an organization is “always researching and thinking about how to make the fan experience more enjoyable for everyone.” With audio descriptions’ steady rise in popularity, Gross said FIFA thought it befitting to enhance the tournament with them. She emphasized FIFA is “leading in that area” because there aren’t many people who are doing audio descriptions for live events. By contrast, audio descriptions are usually done in “post,” referring to the post-production process during which audio descriptive tracks are inserted by film editors. “I think part of what’s so cool about FIFA is we do try to get ahead of things,” Gross said. “We do try to say, ‘Okay, if we’re thinking about every single fan, what does that look like for us to give this person an experience versus them just showing up?’ That was the brain behind why we wanted to include audio descriptions. We want people to come in and say, ‘Okay, I didn’t miss anything.’”
As to AccessibAll’s role, Gross said the company proved instrumental by bringing in commentators and training them on how to provide proper audio descriptions. First and foremost, people had to have an earnest interest in soccer (or football or fútbol). It does neither announcer nor listener any favors if the person giving the descriptions is either (a) apathetic about soccer or, commensurately, (b) knows next to nothing about the game. Gross, herself a fashionista who was behind the Not Your Inspiration line, likened someone who doesn’t understand (or like) soccer giving audio descriptions to someone who’s not at all interested in the fashion world doing the same work for a TV show like Netflix’s Emily in Paris. Again, such mismatches make no sense whatsoever.
“We had that same concept to doing audio description for live sports,” Gross said.
In technical terms, Gross told me the in-stadium audio descriptions were delivered by way of an app. Fans downloaded it prior to arriving at the venue; once there, it found their location and automatically connected to the feed. As a troubleshooting matter, users had the ability to enter a short code to connect if they had issues, with Gross telling me FIFA’s app is “fairly easy to use.” Gross acknowledged the audio descriptions weren’t available to fans watching at home on television, but said accessibility there is something FIFA is “thinking through and having conversations around” for future tourneys.
Feedback on the audio descriptions has been positive, according to Gross. She told me many of her friends who, like her, are part of the Blind community raved about the experience. Gross said her friends reported being “so excited for once to know what was going on on the pitch and to be included.” Better yet, many of her friends aren’t soccer fans, but when they went to matches, they ended up having a really good time.
“Not only is FIFA making an experience more enjoyable for people who are already fans, but we have the opportunity to create fans who may not have understood what was going on,” Gross said of the impact of FIFA’s efforts. “They weren’t excited about it [before], but now that they can get involved and hear everything that’s going on, there’s an opportunity for us to build new fans. That makes me excited about the future.”
Looking towards the future, Gross demurred when I asked if FIFA was setting a standard with its audio descriptions. Instead, she said it’s more apt to say FIFA is “setting an example” and setting a tone for bringing more accessibility to live sporting events. A common misconception, she added, is that people with disabilities don’t like sports; especially for the Blind and low vision community, people (like yours truly) would like sports more were they accessible. Tech is uniquely suited to address inaccessibility, as Gross told me a disabled fan is just like any other if given the right access. Attending live sporting events should be fun, Gross said, and it’s that fun “vibe” she hopes will be caught by other institutions who see FIFA’s work. NBC Sports did work on accessibility for last year’s Olympic and Paralympic Games in Paris; I posted a feature story last September on the network’s efforts to further disability inclusion, which included interviews with NBC Sports senior producer Alexa Pritting and others. Elsewhere, ESPN’s Sam Boden covered the nexus of accessibility and live sports last year. (Full disclosure: Boden asked for my participation, but my insights didn’t make the final story.)
Gross noted working on accessibility is, as ever, an evergreen endeavor, saying “there’s always room for improvement.” She said FIFA is already holding internal discussions about said improvements ahead of next summer’s World Cup here in the United States. Technology is “constantly changing,” Gross said, so what next year will entail isn’t yet known. Nonetheless, Gross said she and her team are “excited to jump into the work and see what [accessibility is] going to look like for our tournament next year.”
Chelsea beat Paris Saint-Germain 3-0 in this year’s Club World Cup final in New York.
Uber Makes Getting Groceries More Accessible by Adding More SNAP-Friendly Stores to UberEats
San Francisco-based Uber on Wednesday announced in a press release what it describes as “the arrival of even more grocery and convenience selection for consumers shopping with SNAP benefits across the country.” Grocers such as Safeway, Smart & Final, and more are amongst a “growing list” of SNAP-friendly stores.
SNAP, or food stamps, is one social service facing cuts by the Trump administration.
“The increasing selection reinforces UberEats’ commitment to expand food accessibility and support communities in need,” Uber said. “Eligible customers can now use their SNAP EBT benefits to order groceries from participating locations through the UberEats app, with payments powered by Forage, the mission-driven payments company. In the months ahead, Uber teams will work to add even more retailers who will accept SNAP EBT payments on UberEats orders, including those that are new to UberEats.”
As Uber’s statement makes clear, today’s news is a big deal for accessibility. It’s one area where technology shines: the smartphone’s ubiquity, combined with on-demand services like Uber’s, means those in the disability community who are perhaps relatively immobile (or homebound altogether) can still shop for the things they need for daily sustenance with just a few taps on their iPhone or other smartphone. Better yet, one’s groceries are made even more accessible by virtue of being less expensive with the aforementioned SNAP benefits. Overall, it’s an example of the quintessential win-win situation: not only does Uber get more business, but customers are happy and the whole experience can stand as a shining example of technology at its best. In this case, what’s ostensibly seen as a mere nicety for convenience’s sake transcends into becoming something much more meaningful and, frankly, life-changing. To reiterate, services like UberEats—or Amazon or Instacart, for that matter—can be bona fide lifesavers for a not-insignificant portion of the population. Moreover, it’s proof positive assistive technologies have relevance far beyond a discrete, often esoteric suite of features included in a computer’s operating system.
“We believe that everyone deserves the convenience of delivery, and with this growing list of merchant partners we can continue to provide access to the fresh food that families and individuals need, especially those without access to reliable transportation for whom food delivery can be a lifeline,” Hashim Amin, Uber North America’s head of grocery and retail delivery, said in a statement for the press release.
Last year, I interviewed Ofek Lavian for my old Forbes column. Lavian, himself an Uber alum, is co-founder and chief executive officer of San Francisco-based payments company Forage, whose scope is focused on, apropos here, “enabling SNAP EBT for merchants and grocery savings for shoppers.” Lavian was quoted in Uber’s announcement, saying in part in a statement Forage is “proud to power the technology behind it, helping more families use their benefits with confidence and convenience.”
According to Uber, there’s a new SNAP icon in the UberEats app for easy identification. The company also says EBT cardholders are eligible for no delivery fees on their first three SNAP-eligible orders, as well as a limited free trial of its Uber One subscription service.
How Tech Teaches Neurodivergent Teens to Drive
The allure of autonomous vehicles notwithstanding, I long ago made my peace with not being able to drive as someone with profoundly low vision. Nonetheless, I do get around pretty well, and I’m thankful to be living in such a technological golden age that suitable substitutes like Waymo are relatively readily available. At the very least, my checkbook is even more thankful, because not having a driver’s license means I save a helluva lot of money on car payments, auto insurance, and—particularly here in the Bay Area—gas. Yet the acceptance of my life’s circumstances doesn’t mean I don’t sometimes wish I could drive, or even learn how in the first place. I absolutely do.
It was these feelings that led me to Nicklaus Children’s Hospital’s Drive Program and to interviewing its coordinator, Dr. Christina Potter. Located in Miami, Nicklaus Children’s—renamed in 2015 from Miami Children’s Hospital after golf legend Jack Nicklaus, winner of a record 18 majors—established its Drive Program in 2023 in an effort to “prepare neurodiverse individuals for a driving exam.” The Drive Program uses virtual reality to do what Nicklaus Children’s described as “[providing] students with an experience that simulates a driving exam.” Students receive real-time feedback, and a behavior analyst is present to help guide them through various situations whilst closely monitoring their stress and anxiety levels. There’s also classroom time, during which students are taught foundational pieces like the rules of the road, signs, and more.
Dr. Potter, an academic researcher and experimental psychologist by training who works as Nicklaus Children’s manager of IT and digital technologies, explained to me the impetus behind launching the Drive Program was “simple but powerful”: the idea, she told me, is to “help young people, especially those who face challenges like autism, anxiety, or ADHD, to gain the confidence and skills they need to become safe and independent drivers.” Conventional driving schools, Dr. Potter went on to tell me, aren’t always a good fit for many neurodivergent people; they “don’t offer the flexibility or patience or support that [neurodivergent people] really need to succeed,” she said.
“We saw an opportunity to fill that gap in a way that aligned with our mission at Nicklaus Children’s,” Dr. Potter said.
Nicklaus Children’s had been playing with virtual reality before, using the technology in its surgical areas and noticing “major benefits” for patients coping with anxiety, according to Dr. Potter. Driving in Miami, she said, can be harrowing at times with lots of traffic and the roads being a “madhouse.” Dr. Potter acknowledged there’s a lot of anxiety surrounding driving in general, but noted those feelings can be heightened in neurodiverse people. She also said while other hospitals, such as Children’s Hospital of Philadelphia, do teach teens to drive, her team found a program specifically designed for, and catered to, the neurodiverse community really doesn’t exist anywhere else.
“Research shows individuals on the spectrum tend to experience heightened levels of anxiety anyway, particularly around driving,” Dr. Potter said. “We thought we might be able to use VR as the tool to help them learn driving skills in a safe and controlled way.”
The Drive Program, Dr. Potter told me, has been designed to “provide our students with a strong foundation and safe and confident driving—both in knowledge and in real world application.” Teaching begins with the fundamentals, of course: students are taught the basic rules of the road, reading road signs, and more. The curriculum is classroom-based and built using Florida’s driving manual. She emphasized, though, it’s the way in which concepts are taught that sets Nicklaus Children’s efforts apart. Program staff are sent to become certified driving instructors, then use the VR-based simulator—which comes replete with a headset, steering wheel, and pedals—as a proxy for a real car. This combination of strategies, according to Dr. Potter, “[reinforces and supports] students with different learning styles to meet them where they’re at.”
Dr. Potter stressed the Drive Program does not issue licenses to people. Rather, the program is intentionally designed to facilitate the reduction of anxiety and “provide [students] with the framework to be successful at a traditional driving school.”
When asked how exactly the Drive Program actually helps a neurodivergent person, Dr. Potter said it’s a “mix” of both the technology and, more pointedly, the coaching and instruction. The tech is cool, to be sure, but Dr. Potter told me the first thing they tell new recruits is they’re safe and the staff is there to support them. The beauty of the Drive Program’s simulator, she went on to tell me, is it enables students to learn and make mistakes in a safe, emotionally enriching, and carefully controlled environment.
“Through repetition, guided instruction, and supported coaching, over time our students are building confidence and competence behind the wheel,” Dr. Potter said.
She added: “No matter where they’re at, you know, we meet them where they are emotionally and we’re very patient and empathetic and understanding… if they crash the car, which they do, or if they veer off road, nobody gets hurt. We use that [experience] as an opportunity with our students to help them learn and grow without having that fear someone can get hurt. Really, we just practice, practice, practice.”
Feedback on the Drive Program has been positive. Dr. Potter noted comments from students and parents alike are “wildly optimistic,” adding the Drive Program has grown into something that has exceeded everyone’s expectations. Many students who have successfully completed the Drive Program have gone on to attain their driver’s license from the DMV. It’s been equally rewarding for parents to watch their children learn such crucial skills. And as a data scientist, Dr. Potter said the team has copious amounts of data on the Drive Program; anxiety in parents has been reduced by as much as 50%, while for students the reduction is almost 70% from the beginning of the program to its conclusion.
“It’s confidence not just for students, but for the parent as well,” Dr. Potter said.
Looking towards the future, Dr. Potter said she and her team at Nicklaus Children’s hope to grow the Drive Program, as well as evangelize it. She believes the Drive Program is “breaking down barriers to mobility and inclusion” not only for the neurodiverse community, but for anyone experiencing driving-related stressors. It helps tremendously with greater independence, as well as with opportunity and social connection. The Drive Program has a multitude of benefits that go beyond cars.
“There’s a lot that we can do with the technology to mimic those scenarios to help a broader range of individuals to reduce driving-related anxiety—even for folks who already have their driver’s license,” Dr. Potter said in pondering the Drive Program’s future. “Providing those opportunities in a more broad approach is one area we want to go down. Another would be working with local high schools, as that’s something we haven’t done yet. We’ve shown we have this proof-of-concept that works really well with individuals on the spectrum, folks who have heightened levels of anxiety [and] folks who have ADHD. We know our curriculum and our process will work with a broader audience. We want to get the word out and have as big of an impact as we can.”
She continued: “The Drive Program is a program that benefits the entire community. It’s for anyone who has heightened anxiety. It gives them access to a supportive and adaptive driver training program, opening doors to greater independence and greater opportunities and connection. These are life-changing milestones. Helping individuals to fully participate in their communities is really what we’re trying to achieve here.”
Apple’s Hearing Health, Sleep Apnea Features Expand to More Places Around the World
Marcus Mendes reports today for 9to5 Mac that Apple’s hearing health features for AirPods Pro, including the hearing test and hearing aid functionality, are expanding to more countries. The sleep apnea feature on Apple Watch is seeing similar availability growth.
“Both features were first introduced last year, but they require approval from local health regulators before being rolled out regionally,” Mendes wrote.
Argentina, Australia, Taiwan, and Thailand are but a few locales that received support for both the hearing health and sleep apnea features, according to Mendes. In a delightful little aside, I learned Christmas Island is a real place on Earth and it is one of 15 new places to gain access to both Apple’s hearing health and sleep apnea software.
Apple CEO Tim Cook took to X earlier today to share the news of the expansion. I highlighted his announcement in a post of my own. To reiterate my point, it’s not at all trivial or banal that Cook boasts about this ostensibly ho-hum bit of Apple news; on the contrary, it’s a huge deal because of Cook’s stature. He’s one of the highest profile chief executives on the planet—that Cook took time to talk about what’s essentially an accessibility feature is notable. While the hearing health and sleep apnea features have been around for several months here in the United States, any bit of the limelight is worth a thousand suns when it comes to accessibility and, by extension, the disability community. Cook (and others on Apple’s leadership team) post about accessibility at other times too, but still, every moment is a big one. The resonance cannot be overstated.
The hearing health features for AirPods Pro 2 were released to much fanfare last October—alongside the much-ballyhooed (and much-maligned) Apple Intelligence, no less—with iOS 18.1. I posted a first look, as well as an interview with Apple accessibility leader Sarah Herrlinger about building the features a couple months later, in December. As to sleep apnea, I’ve covered that before too, in a story published in February 2023.
Accessible Street-Crossing App Oko Gets Acquired
It isn’t often I cover M&A news, but today happens to be one of those rare times.
Texas-based transportation safety company Synapse on Tuesday announced its plan to acquire Oko. Oko, an Apple Design Award winner and last year’s App of the Year, is an iPhone app which uses the device’s camera(s) and artificial intelligence to help Blind and low vision people accessibly (and safely!) cross streets. In its announcement, Synapse describes Oko as developed by “a team of accessibility and mobility experts,” with technical attributes including “[harnessing] location data, audio cues, and accessible interface design to help guide users through the most dangerous aspects of crossing intersections.” Oko collaborated with the Blind community to build its app.
Once the deal is officially completed, Oko will become a free app in an effort to “make its services accessible to as many pedestrians as possible,” according to Synapse.
Synapse is owner of Polara, which specializes in building accessible pedestrian signal (APS) technology. The deal with Oko enables Synapse to integrate Oko’s technologies with the existing APS infrastructure by way of its consumer-oriented PedApp in an effort to make street-crossing more accessible to the Blind and low vision community.
“This acquisition is a natural extension of our mission to increase roadway safety, accessibility, and efficiency with state-of-the-art software,” Josh LittleSun, Synapse’s chief technology officer, said in a statement included in the company’s press release. “The fusion of Oko’s smart navigation technology with Polara’s trusted PedApp moves us closer to a future where pedestrian equity and safety are built into every crosswalk.”
Synapse’s vice president of intersection sales, Matthew Baker, agrees with LittleSun.
“This is the kind of life-changing innovation we’re proud to bring into the Polara family,” Baker said. “By eliminating subscription fees, we’re making Oko accessible to all and speeding up adoption in communities that need it the most.”
In an email to me, Oko founder Willem Van de Mierop said he believes the acquisition “can be seen as one of the most exciting deals for the accessibility space.” He also noted Oko recently hit a milestone, as it has helped people navigate 10 million streets.
Catching Up with CSD Chief Executive Chris Soukup
When I recently caught up with Chris Soukup over email amidst Disability Pride Month and ahead of the Americans with Disabilities Act’s (ADA) 35th birthday later this month, the chief executive officer of Communication Service for the Deaf (CSD) explained to me plainly and succinctly he believes disability to be “an inherent part of the human condition.” How those in the disability community experience the world around them, he went on to rightfully tell me, “is continuously evolving throughout our lives.”
“Disability Pride Month is a celebration of how these experiences vary, the beauty of our collective intersection, and our commitment to unity and inclusion,” Soukup said. “The ADA and laws similar to it provide our society with important guardrails to ensure that no one is inadvertently or intentionally left behind as progress and innovation propel us into the future.”
As I wrote in my profile of Soukup and CSD last October, CSD, established in 1975, is a self-described “Deaf-led social impact organization” which exists to “create a better world for Deaf and hard-of-hearing people.” In a nutshell, CSD is devoted to the betterment of the Deaf and hard-of-hearing community; in January, the organization announced it was supporting efforts for the Los Angeles-area Deaf community whose lives were upended by the wildfires that utterly ravaged the Southern California region.
Deaf people, Soukup said, still bear the brunt of bias by society writ large. Discrimination in the workplace, as well as unemployment altogether, remain “pervasive issues” for the community, he said. Not only are securing and sustaining a job problematic for Deaf job-seekers, Soukup went on to say, but opportunities for career advancement are equally few and far between for Deaf workers. And although technology has unquestionably broken down barriers and subsequently increased access to communication and information, the reality is there still exist “significant challenges,” according to Soukup. Many Deaf people, he said, remain reliant upon intermediaries—people like sign language interpreters, for one—to facilitate proper communication. It’s oftentimes difficult, Soukup said, to ensure “qualified individuals are always available to facilitate between a signing Deaf person and a non-signer.”
Soukup believes a crucial part of gaining greater accessibility is good storytelling.
“[It’s] so important,” he said. “Amplifying and spotlighting people with disabilities that are thriving and achieving their goals is so important. There is not enough attention in the media. People who identify as being disabled make up at least 25% of the population. We need to be seen, and our successes should be celebrated as an opportunity to transform how people perceive disability.”
When asked to elaborate more on the ADA’s significance, Soukup told me the law (and any extending legislation) is of crucial import because it provides “important protection” to all disabled Americans. He reiterated a popular refrain in the disability community, which is that the ADA is seen as the “floor” rather than the ceiling. The ADA, Soukup said, is a “baseline” atop which society can “[build] a world that is designed to be more inclusive from the beginning.” The disability community, he added, would “love” to see lawmakers go further with regulation by “[continuing] to close gaps and eliminate barriers that make it harder for people with disabilities to lead fulfilling lives.”
Soukup’s last sentiment was a good segue into talking about the future. In peering into his proverbial crystal ball, he said it’s his hope society reaches a point at which we “normalize disability.” In addition, he spoke of the domino effect disability has on the lives of everyone, disabled or not, saying “we recognize that when we speak about disability, we are talking about everyone: ourselves, our families, [and] our loved ones.”
“Embracing disability as an ordinary part of what it means to be human expands our thinking and challenges some of the implicit bias that we carry,” Soukup said of his hopes and dreams for disability’s societal future. “A Deaf person can be an incredible physician, airline pilot, college professor, or entrepreneur. Our work is closer to being achieved when the general public internalizes and embraces all of these possibilities.”
Waymo, Autism Society of America Partner on Giving Riders on the Spectrum Greater Accessibility
Following up on my story from last Friday on Waymo introducing teen accounts, I came across another report involving Waymo. This one sees the company working with the Autism Society of America to help people on the spectrum gain fuller independence.
The work was detailed in a story by Phillip Palmer at the ABC7 affiliate in Los Angeles.
“Waymo has partnered with the Autism Society to highlight how a driverless vehicle can actually offer a consistent, safe and predictable way to travel for young adults on the spectrum,” Palmer wrote in describing the pair’s partnership. “By working with the Autism Society of America in the very early stages of development, they can avoid any challenges that might come as the company grows.”
The allure of Waymo to autistic people is pretty much precisely what it is for the Blind and low vision: driverless cars afford those in our respective communities greater agency and autonomy with which to travel. Palmer notes only a third of people on the spectrum have driver’s licenses, per a study published in the Journal of Autism. Many autistic people do, by contrast, hold college degrees and jobs; having access to Waymo increases their independence (and self-esteem). Indeed, the Autism Society’s chief marketing officer, Kristyn Roth, told Palmer in part “having this autonomy [via Waymo] is something that builds confidence and it uplifts people.”
For its part, Waymo is committed to tackling these kinds of accessibility issues.
“What are the moments where the existing services and transportation office options are not working? Because we take those problems and we actually design product of future solutions around their specific problems, so that we’re not just assuming these are the problems that you faced,” said Orlee Smith, senior product manager at Waymo.
Waymo, and its underlying technologies, are not above reproach; both should be scrutinized. Yet in a world where a lot of people are suspicious of artificial intelligence—and make no mistake, driverless cars are effectively AI-powered robots on wheels—the genuine good a company like Waymo can do for people gets pushed to the wayside under the guise of prioritizing safety and competence. The reality is, there are a lot of disabled people out there for whom conventional driving is impossible. For those folks, yours truly included, the advent of autonomous vehicles is nothing short of revolutionary. As the technical bits and bobs inevitably mature, the next step in mainstreaming driverless cars is advocating for individual ownership. That will be an even more monumental task because it entwines law and regulation with society’s view of disability—in this case, the skepticism that Blind and low vision people can buy and “drive” their own driverless cars. Perhaps I’m overly pessimistic, but such a sea change would make climbing Mount Kilimanjaro look like scaling a molehill.
I’m not certain such a thing will happen in my lifetime, if ever.
Waymo’s Making Getting Around More Accessible to Blind, Low Vision Kids With New Teen Accounts
Jennifer Elias reported for CNBC this week Waymo has begun offering accounts to teenagers ages 14–17 in Phoenix. The decision reflects Waymo’s desire to “increase ridership amid a broader expansion of its ride-hailing service across US cities.”
“The Alphabet-owned company said that, beginning Tuesday, parents in Phoenix can use their Waymo accounts ‘to invite their teen into the program, pairing them together.’ Once their account is activated, teens can hail fully autonomous rides,” Elias wrote in describing Waymo’s ambitions for rider expansion. “Previously, users were required to be at least 18 years old to sign up for a Waymo account, but the age range expansion comes as the company seeks to increase ridership amid a broader expansion of its ride-hailing service across US cities. Alphabet has also been under pressure to monetize AI products amid increased competition and economic headwinds.”
Elias noted Waymo will provide so-called “specially-trained Rider Support agents” for teen riders, adding teens are able to share real-time updates of their trip status with parents. Their parents also receive the ride receipt(s). Another friend of mine, Bloomberg’s Natalie Lung, wrote on X Waymo’s teen accounts are limited to Phoenix for now because “California currently does not allow unaccompanied minors on AVs.”
Uber launched a similar service oriented towards teens in 2023, according to Elias.
Reading this story got me thinking wistfully about how Waymo—and particularly its new teen account feature—would’ve been so great during my high school years. I took a yellow school bus to and from school every day from the time I was a wee kindergartner until I graduated high school in 2000. The vehicles were the shorter, smaller buses generally used to transport disabled kids back and forth from school. I never minded it, as I made some great friends on those trips—my favorite driver, Shirley, is a longtime Facebook friend of mine now—but as I grew into driving age, had Waymo existed then, I would’ve felt of a feather amongst my flock of friends who had normal vision and thus could drive. With Waymo, I could’ve asked a friend to ride along after school to visit a Tower Records or The Wherehouse. I could’ve had the very same agency and autonomy I enjoy today two decades earlier, as I was traversing my formative years. This is why I chose to cover Elias’ report: as it stands, Blind and low vision kids—in Arizona’s capital city, anyway—have a tremendous opportunity before them to potentially “drive” a car and get around independently. Not only is the autonomous driving tech cool as hell, the heightened feelings of self-esteem and empowerment in the still-developing brains of today’s teenagers make a helluva difference in their socio-emotional growth—especially if they cope with a disability.
Waymo’s teen accounts come not long after the company announced an expansion to New York City, and as Tesla’s own robotaxi competitor seeks to add service here in the Bay Area.
White House Argues ASL Interpreters Unnecessary for Accessibility at Press Briefings, Report Says
Molly Reinmann reported for CNN last week US District Judge Amir Ali, a Biden appointee, “grappled for over an hour” over whether to force the Trump administration to provide American Sign Language (ASL) interpreters at White House press briefings.
A lawsuit was brought by the National Association of the Deaf (NAD). Reinmann writes the suit alleges the White House is violating deaf Americans’ rights under the Rehabilitation Act of 1973 by keeping them from accessing “critical information in real time.” The attorney for the NAD, Ian Hoffman, subsequently argued Deaf and hard-of-hearing Americans are “deprived of their ability to participate in the democratic process.”
Biden’s briefings were staffed with ASL interpreters. The Justice Department ended the practice upon the transfer of power for President Trump’s second term, contending assistive technologies such as captioning and transcripts are sufficient. The NAD pushed back, saying—rightly so—that ASL and English are wholly distinct languages, while emphasizing captioning oftentimes can prove “especially inaccessible to the many thousands of deaf persons fluent only in ASL,” according to Reinmann.
Relatedly, the NAD took umbrage at the first Trump administration’s lack of sign language interpretation during critical Covid-19 pressers that took place back in 2020.
Reinmann’s story, while newsworthy on merit alone, is especially appalling given the backdrop of July being Disability Pride Month and the Americans with Disabilities Act turning 35 on the 26th. The cretins representing the Justice Department argued the burden of proof is on the NAD to, as Reinmann said, “show that more thorough ASL translations were necessary”; the department’s attorney also “repeated her previous claim that the type of services provided should be at the discretion of the White House.” The Department of Justice is essentially paternally (and patronizingly) dictating accessibility—a move suggesting the able-bodied majority knows best how to accommodate people with disabilities.
If Trump’s immigration policies are racist—they are—the inaccessibility is ableist.
Moreover, what rankles me most is the part in Reinmann’s lede where she writes Judge Ali “grappled” with his decision. I don’t blame her, but what anguish is there? You have a segment of the citizenry advocating for accessibility so as to be more informed. Disabled Americans, myself included, are Americans. We tune into White House news conferences. We read and watch CNN. We vote. That Judge Ali wrestled with legal gymnastics in issuing his ruling underscores the deeply entrenched societal ableism that, in so many respects, is a bedrock of how not only this country works, but the world too. Most people treat disability like a disease.
As a CODA, I can empathize with Reinmann’s story on so many levels. My parents watched the local news every night after dinner while I was growing up and, despite the presence of captioning, they would rely on me to translate what was going on in the world and explain its meaning. It was quite the ask of a kid, himself disabled, going into his pre-pubescent years and beyond, but it’s illustrative of the notion that captions, however vital in their own right, have but limited utility. Captions can go only so far. Likewise, transcripts are good but have their problems because, again, English typically isn’t a Deaf person’s first language and thus comprehension is compromised.
Karoline Leavitt and team clearly don’t understand that—or, if they do, they don’t care.
Editors, this is why accessibility in tech so richly deserves its own beat in newsrooms.
Xbox Announces AGI Tags Availability, More
Microsoft-owned Xbox put out a blog post this week wherein it announced the Accessible Games Initiative (AGI) tags are available “across all digital Xbox experiences.” Team Xbox wrote the news is “just in time” for Disability Pride Month.
The news comes a few months after Xbox announced plans in March to join the AGI.
“[Xbox] is proud to announce that the new Accessible Games Initiative tags, designed to provide players with clear and consistent information about the accessibility features in video games, are now available across all digital Xbox experiences including console, PC, mobile, and web storefronts,” Xbox wrote in the post’s introduction.
Xbox’s contribution to the AGI, a consortium which includes fellow gaming industry heavyweights Nintendo of America, Electronic Arts, Ubisoft, and others, builds upon its own work, begun in 2021 with the release of the Xbox Game Accessibility Feature tags. Of note is Xbox’s explicit callout in the announcement that any Xbox accessibility tags without a corresponding AGI tag will remain available on the platform, with the company saying the combination “will make it even easier for players with disabilities to learn about available accessibility features and find their next great game.”
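To make that coexistence concrete, here’s a minimal sketch (purely illustrative; neither Xbox nor the AGI has published a public schema, and every identifier below is hypothetical) of how a storefront might display AGI tags where an equivalent exists while retaining legacy tags that lack one.

```typescript
// Hypothetical tag identifiers; neither Xbox nor the AGI publishes a
// public schema, so all names and mappings here are invented purely
// for illustration.
type LegacyTag = string; // e.g., "narrated-menus"
type AgiTag = string;    // e.g., "screen-narration"

// Assumed mapping from legacy feature tags to AGI equivalents, where
// an equivalent exists.
const legacyToAgi: Record<LegacyTag, AgiTag | undefined> = {
  "narrated-menus": "screen-narration",
  "input-remapping": "remappable-controls",
  "subtitle-options": "subtitles",
  "single-stick-play": undefined, // no AGI counterpart: legacy tag remains
};

// Show the AGI tag where a mapping exists; otherwise keep the legacy
// tag, mirroring Xbox's stated approach of keeping both sets available.
function displayTags(legacyTags: LegacyTag[]): string[] {
  const shown = new Set<string>();
  for (const tag of legacyTags) {
    shown.add(legacyToAgi[tag] ?? tag);
  }
  return [...shown].sort();
}

console.log(displayTags(["narrated-menus", "single-stick-play"]));
// -> ["screen-narration", "single-stick-play"]
```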
Xbox’s post finishes with a Q&A-style interview with Brannon Zahand, senior technical program manager at Xbox, and content creator and gaming accessibility advocate Steve Saylor. The conversation “[discusses] the work towards greater standardization of accessibility in games, what these tags mean for players today, and why this work is important,” according to Xbox. Additionally, Xbox published another blog post featuring an interview with Phil Crabtree from Kaizen Game Works. Microsoft Game Dev contributing editor Miguel Lopez writes his conversation with Crabtree delves into “how integrating Accessible Games Initiative tags has supported their development practices, highlights the community’s enthusiastic reception, and explores how accessibility tools and standards can further transform the gaming industry.”
I posted an interview with Entertainment Software Association senior vice president Aubrey Quinn back in early April. The executive told me all about her organization’s stewardship of the AGI and the need for tags, as well as how the group came to exist.
“Before the Accessible Games Initiative, the burden was on players to interpret existing tags in the marketplace from platform to platform, game to game. We hope to change that,” Quinn said of the driving force behind the AGI’s formation. “This new initiative is meant to help consumers identify specific accessibility features in individual video games, so that players buying games can make better informed purchasing decisions. Having a clear approach to identify accessibility features across different games, with criteria behind each accessibility tag, will provide consumers with information about the accessibility features they can find in games prior to purchasing them.”
How Amazon Is Made ‘Accessible for Everyone’
Amazon last week published a blog post on its European site in which the Seattle-based company detailed myriad ways it makes Amazon “accessible for everyone.” The post, bylined by the About Amazon Team, was shared coincident with the recent enforcement of the European Accessibility Act. Amazon said it is “well prepared” for the legislation.
“Amazon’s vision is to be Earth’s most customer-centric company, which means making our devices and services accessible to everyone. We have pioneered accessibility features across our products and services for over a decade and our commitment to accessibility is deeply rooted in our customer-obsessed culture. For us, this is more than meeting requirements: it’s about staying true to our mission of serving every customer,” Amazon wrote of its North Star in the post’s introduction. “We design with inclusivity at the forefront, making our products and services accessible to all customers. By integrating new technologies, such as AI, we are able to create solutions that enhance the experience for all customers, including those with specific accessibility needs. This shows that when accessibility is treated not as an afterthought but as a core design principle, technology can truly become a force for inclusion.”
The post, the scope of which is familiar, offers a cursory rundown of the accessibility attributes of Amazon-branded products—including Alexa, Fire TV, Kindle, Prime Video, and more. In addition, the company highlights how it has made the general shopping experience more inclusive, work that involves “more than just a well-structured webpage.” The company mentions features such as Navigation Assistant and Product Summary, both of which are characterized as helping keyboard-oriented and/or screen reader users “shop more efficiently, showing that accessibility and convenience go hand-in-hand.” Likewise, Amazon Locker locations, where people go to a physical place to retrieve their order(s), feature a Lower Locker Slot selection for wheelchair users, as well as an audio-based UI (with audio jack) for the Blind and low vision.
“More than 100 million people have disabilities in Europe,” Peter Korn, director of accessibility for Amazon devices, said in a statement. “What I love most about this work is how it embodies one of Amazon’s core principles: Customer Obsession. Building products customers love means including them in the process, not only by talking directly with customers with accessibility needs, including the elderly or with disabilities. Our culture of inclusion is also reflected in the many people with disabilities who work throughout Amazon in diverse roles—including our product teams.”
I covered the redesigned Prime Video app last summer; my report included interviews with executives Raf Soltanovich and Kam Keshmiri. And I last spoke with Korn last year about, amongst other things, using artificial intelligence to power search on Fire TV.
For more on the European Accessibility Act, check out my story earlier this week on it.
Gemini App Gets Support for Google Broadcasts
Abner Li reports today for 9to5 Google the Gemini app is gaining the ability to broadcast voice messages to Nest speakers and smart displays integrated with Google Home.
Li notes the aptly-named Broadcasts feature was previewed by Google last month and, perhaps most notably, comes ahead of Gemini supplanting Google Assistant on Android devices. The change will happen sometime “later this year,” according to Li.
Broadcasts can be sent to specific devices or rooms, or to the entire household.
While certainly a feature of convenience—nobody wants to run around their house telling the family dinner is ready, Paul Revere-style—the reality is, as ever, Broadcasts can prove beneficial for accessibility too. Especially for those with limited mobility, or someone who’s immobile altogether, trying to share a message (or get assistance) can be arduous if people are spread across other parts of the house. For example, someone might need to alert their caregiver, who may be in the kitchen, that their medications should be brought along with their food. Likewise, someone in a wheelchair may not be able to move about to every part of the house to tell everyone a meal is ready, so Broadcasts makes relaying the message a more accessible task. Google positions Broadcasts as an amenity that makes life easier and nicer when, in truth, the feature has serious applicability as a potentially lifesaving de-facto accessibility feature for many people.
Google’s Broadcasts is similar to Apple’s Intercom on HomePods. Apple’s implementation is effectively identical, tailored to HomePod and HomePod mini, but comes with an additional perk: Intercom can transcribe messages. Transcriptions show up on people’s iPhones or Apple Watches, for instance, which is a cool and useful feature for those in mixed households where, say, hearing and Deaf people cohabitate. That Intercom offers transcripts makes the feature more inclusionary when it ostensibly would be exclusionary, much like how—as I’ve proffered often lately—Music Haptics and transcripts break barriers for music and podcast listening in the Apple ecosystem.