Katarina Mogus Talks Diversity and Inclusion in Tech, Being Nerdy and Girly, More in Interview
For Katarina Mogus, advocacy for more women in tech parallels advocacy for more accessibility in tech. Intersectionality notwithstanding, the industry sorely needs both.
“[It’s important] for the younger generation,” she said to me in a recent telephone interview. “Maybe I started creating videos because there wasn’t someone for me to look at and to be inspired by. I love having a female, girly spin to all of my videos and I feel like that’s what has made my platform so successful: because I’ve opened up the doors to a whole new market of women who may like certain features or want things explained to them differently, or want things to look a certain way or match their office and things like that. It’s great for the industry to have female tech creators, especially in such a male-dominated field, because we bring such unique thoughts and unique ideas to everything. The majority of my viewership is women, so a lot of brands love that, and it’s opening up a whole new area and ideas for brands to continue to create products and features that a lot of the population really enjoy and love.”
Mogus, a Toronto native who worked pre-pandemic for her hometown Toronto Raptors and Toronto Maple Leafs of the NBA and NHL respectively, is a full-time content creator who boasts 2.5 million followers on TikTok, along with 332,000 on Instagram and another 146,000 on YouTube. When COVID put the world in a vise grip back in 2020 and professional sports shut down, Mogus knew she “had to pivot” career-wise to meet the moment. She started making videos in which she offered advice to businesses on how to best succeed in an online-only format; the videos went viral “right away,” Mogus said, which landed her work running companies’ social media—all the while, she remained committed to churning out her educational content. Naturally, people began asking Mogus what kit she used to create her videos. As a nerd herself, she was more than happy to spill the technical tea to those looking to walk a similar path.
“All my videos [on TikTok] were surrounding the Apple ecosystem… all you need is the iPhone you have at home with you to create amazing content [and] amazing product photography [and] your laptop,” Mogus said. “Since it was a pandemic, people were also wondering ‘What desk setup do we need?’ I became the hub for all people who were working from home—small businesses, creative entrepreneurs, or anyone who wanted to really thrive using their technology during the pandemic—and my videos kept going from there, continuous with all the technology.”
Her tech-focused videos quickly gained the attention of captains of industry such as Apple, as well as Google, Meta, and Samsung. Since the pandemic, Mogus has worked with all these heavyweights in covering their products; she’s even worked with luxury automakers like Ferrari and Porsche in a similar capacity, which satiates her keen interest in cars. Even L’Oreal has tapped Mogus’ expertise in showcasing what she called their “beauty tech innovation.” Her nerdiness is strong, as she explained she’s “always been the friend” people go to for tech questions and troubleshooting. She has a “passion” for being helpful and an educator, describing herself as “really handy” and someone who “knew all the tips and tricks” long before she was a professional creator.
Like accessibility in mainstream tech media, Mogus sees women in tech as still being outliers for the most part. She has humbly anointed herself a trailblazer of sorts, as she said there weren’t any educational TikTok videos when she started posting there; it was all “dancing videos and stuff,” Mogus said. In the years since, the tide has turned such that an increasing number of women are following the trail she blazed. One such person is Justine Ezarik, known as iJustine, whom Mogus “really looks up to” and now considers a close friend. Mogus called Ezarik “an amazing woman who opens up doors for other women in this industry,” adding she models herself after iJustine’s example.
“That’s what I continuously strive to do as well,” Mogus said. “I’m really inspired by women founders and entrepreneurs or other women who work in tech who are executives, and I love to highlight them and showcase them on my podcast.”
Mogus is host of Future Obsession.
That Mogus caught Apple’s eye has a bit of a six degrees of separation effect for me. As a fellow member of the technology media, I’ve known Mogus for years now, personally and professionally, and have seen her from afar when covering Apple events. Likewise with the aforementioned iJustine, whom I also know and have interviewed in the past.
Like Ezarik, Mogus is an ally of disabled people and an advocate for accessibility.
My conversation with Mogus happened shortly after WWDC. When I asked what announcement(s) excited her most, Mogus told me she “loves” Apple’s new Liquid Glass design language and how it “streamlines” design across Apple’s expanse of an ecosystem. Moreover, Mogus also is appreciative of Apple’s willingness to allow users to customize, say, iOS with widgets and more. Some of her earliest TikTok material focused on this personalization, with Mogus praising Apple for “listening to its users” by empowering people to get creative without necessarily relying on third-party tools. Elsewhere, Mogus also spoke effusively about the overhauled multitasking paradigm in iPadOS 26, telling me she “really loves [how] now we can have multiple different windows on the screen and have a clean and clear multitasking view and kind of turn your iPad into a mini MacBook.” Nonetheless, Mogus called her M4 MacBook Pro “my baby” along with her iPhone 16 Pro Max. She’s bullish on artificial intelligence too, using Google Gemini and ChatGPT daily to boost her productivity.
When asked about feedback on her work, Mogus said it’s positive from brands and followers alike. The main thing, she told me, is her visibility as a young woman in tech and showing off her technical acumen. Similarly, brands have returned to solicit her thoughtful coverage time and time again because of the way she “[makes] videos that are more fun and more accessible for more people to understand” without all the technical minutiae. In a nutshell, Mogus can play both sides of the ball with equal skill.
“A lot of creators are very spec-focused or the way they explain it might be too difficult for someone who’s just starting out or doesn’t know what to do with their device,” she said. “I think I make it really easy for people to understand, and brands really appreciate that too. At the same time, when I first started creating content, a lot of brands may have doubted me because I am a young female showing up with a pink suit on and a really cute pink camera. I feel like I had to prove myself to these brands that ‘Hey, I can create amazing content [and] educate my audience and the broader audiences on what’s important in technology: what’s new, what’s newest in innovation.’ I’ve built trust with [different brands] because I’ve built such a large community over the past five years.”
Looking towards the future of her work, Mogus said she’s focused now on continuing to push for greater diversity and inclusion in the tech media landscape. One conduit for that is her aforementioned Future Obsession podcast, which she described as a show “[talking] about all the latest innovations and tech trends and what’s happening in the industry with amazing executives from companies like Spotify, Apple, Meta, Google and female founders,” adding “I’m highlighting female voices and female executives, not only talking about what’s new in tech and all the latest innovations and trends, but also hearing from them for their inspiration and their growth.”
“[The show is] my passion right now,” Mogus said of working on Future Obsession. “I’m really leaning into it, and I’ve had a lot of incredible responses and a lot of exciting episodes coming up where I’m highlighting these incredible females.”
Google’s ‘Fast Pair’ Feature May See Scarce Adoption, but Its Accessibility Gains Are Abundant
Ben Schoon reports for 9to5 Google this week Google has confirmed its “Fast Pair” feature, introduced in 2022, is available on only four devices. Schoon writes the supported devices all are streaming boxes: the Chromecast with Google TV, Chromecast with Google TV (HD), Google TV Streamer, and Walmart Onn 4K Plus.
According to Schoon, Fast Pair, which he describes as “a feature that works across Android devices, ChromeOS, and compatible headphones to allow for easy pairing through an on-screen pop-up, as well as the ability to quickly switch headphones between devices,” is optional for developers. He notes Google told Android Authority device makers have discretion over supporting Fast Pair in their product(s), adding Google is “actively working with its partners to enable support on more devices.”
Fast Pair, conceptually speaking, is identical to Apple’s longstanding setup for things like AirPods, HomePod, and even Beats devices. In an accessibility context, I’ve long banged the drum that what’s ostensibly an implementation detail—a means to an end—is actually of great importance. To wit, rather than having to, say, dig into the Bluetooth section of Settings to find and pair a device, users can simply, for example, open the lid of their AirPods’ case close to their iPhone to initiate pairing. Better still, Apple uses iCloud (because most people are signed in) to propagate the pairing to a user’s constellation of devices. The ultimate accessibility wins are twofold: (1) a person coping with perhaps a cognitive and/or visual disability needn’t remember how and where to pair something, then be forced to skim a list to find the correct product and then, motor-wise, tap the Connect button; and (2) that, in Apple’s case, the pairing setup involves iCloud on the backend means a person needn’t tediously and repetitiously work so hard to pair their AirPods with their Apple Watch and MacBook, for instance. Put another way, hitting the Connect button once when pairing one’s AirPods is a shortcut that consolidates what’s normally a multi-step task into one. Most people, including folks at Apple, would say this is convenience—and it is, but it’s just as aptly about accessibility.
The bottom line? Apple’s accessory setup game is a de-facto accessibility feature.
For the longest time, Apple used a private API to build the works-like-magic setup flow for its accessories like AirPods, etc. In releasing iOS 18 last year, however, the company finally extended the function to App Store developers; with the then-new AccessorySetupKit framework, device makers could make the setup process of their own products as accessible as Apple’s blessed first-party kit. That isn’t at all trivial, especially for those people who rapaciously add smart home stuff to their dwelling.
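For developers curious what that looks like in practice, here’s a minimal, hypothetical Swift sketch of the AccessorySetupKit flow. The service UUID, product name, and artwork are placeholders I’ve made up for illustration, and a real integration also requires the accessory to be declared in the app’s Info.plist; treat this as a sketch of the API’s shape rather than a drop-in implementation.

```swift
import AccessorySetupKit
import CoreBluetooth
import UIKit

// A minimal sketch of the AccessorySetupKit pairing flow (iOS 18+).
// The service UUID, accessory name, and image below are hypothetical placeholders.
final class AccessoryPairingController {
    private let session = ASAccessorySession()

    func startPairing() {
        // Describe how the (hypothetical) accessory advertises itself over Bluetooth.
        let descriptor = ASDiscoveryDescriptor()
        descriptor.bluetoothServiceUUID = CBUUID(string: "FFF0") // placeholder UUID

        // What the system pairing sheet shows the user.
        let displayItem = ASPickerDisplayItem(
            name: "Example Earbuds",                             // hypothetical product
            productImage: UIImage(named: "EarbudsHero") ?? UIImage(), // placeholder asset
            descriptor: descriptor
        )

        session.activate(on: .main) { [weak self] event in
            switch event.eventType {
            case .activated:
                // Present Apple's system picker—the same one-tap style of flow
                // AirPods owners already know.
                self?.session.showPicker(for: [displayItem]) { error in
                    if let error { print("Pairing sheet failed: \(error)") }
                }
            case .accessoryAdded:
                print("Paired: \(event.accessory?.displayName ?? "accessory")")
            default:
                break
            }
        }
    }
}
```

Again, this is only a sketch of the framework’s shape—the point being that the accessible, AirPods-style one-tap setup Apple built for itself is now something third-party device makers can offer too.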
Back to Google. I got an Onn 4K Pro from Walmart a few months ago to play around with, and didn’t know until reading Schoon’s story it doesn’t support Fast Pair. The streaming box’s setup process is easy enough, especially using the Google Home app on your phone, but Fast Pair would make it that much better. It’s good to see Google, like Apple, lead by example by integrating Fast Pair with Pixel Buds. Especially for something like third-party earbuds, using Fast Pair would be a huge accessibility win for many people.
How FIFA Made The Club World Cup Accessible to All
In late May, FIFA announced this year’s FIFA Club World Cup would be “the most inclusive tournament ever” as tournament matches would be augmented with audio descriptions for Blind and low vision people. The ADC, or audio descriptive commentary, was first used by FIFA during the 2014 World Cup in Brazil, followed by the 2022 World Cup in Qatar and the 2023 Women’s World Cup held in Australia and New Zealand.
FIFA worked with disability inclusion company AccessibAll in producing the ADC.
Natalie Gross, FIFA’s accessibility coordinator, explained to me in an interview earlier this week—after the Club World Cup ended—that part of her purview is to “make the fan experience more accessible,” both digitally and in person at live events. Why her work matters, she told me, is everyone—disabled people included—uses computers to find information on things—especially sports. There are “thousands and thousands of people getting online” to inform themselves, but what happens when the information isn’t accessible to them? No accessibility means no information, and that’s a major problem. It can be “alarming,” Gross said, for those in the disability community to face barriers like no screen reader support, captioning, or whatever other accommodation(s) they need. In a nutshell, her job entails ensuring those technologies are available to all.
“When it comes to tech and accessibility, those are the things that we’re [at FIFA] looking at digital accessibility,” Gross said. “That’s part of what I do… part of what I do is go through our [FIFA] app and make sure it’s accessible for everyone and they’re able to navigate through it without asking for help. That’s where we stand as far as accessibility with technology… I’m looking at whether folks are able to access it by themselves.”
The impetus for enriching the Club World Cup with audio descriptions began with FIFA’s fan operations team ruminating over ways in which to make the fan experience better, with Gross saying FIFA as an organization is “always researching and thinking about how to make the fan experience more enjoyable for everyone.” With audio descriptions’ steady rise in popularity, Gross said FIFA thought it befitting to enhance the tournament with them. She emphasized FIFA is “leading in that area” because there aren’t many people who are doing audio descriptions for live events. By contrast, audio descriptions are usually done in “post,” referring to the post-production process during which audio descriptive tracks are inserted by film editors. “I think part of what’s so cool about FIFA is we do try to get ahead of things,” Gross said. “We do try to say, ‘Okay, if we’re thinking about every single fan, what does that look like for us to give this person an experience versus them just showing up?’ That was the brain behind why we wanted to include audio descriptions. We want people to come in and say, ‘Okay, I didn’t miss anything.’”
As to AccessibAll’s role, Gross said the company proved instrumental by bringing in commentators and training them on how to provide proper audio descriptions. First and foremost, people had to have an earnest interest in soccer (or football or fútbol). It does neither announcer nor listener any favors if the person giving the descriptions is (a) apathetic about soccer and, commensurately, (b) knows next to nothing about the game. Gross, herself a fashionista who was behind the Not Your Inspiration line, likened someone who neither understands nor likes soccer giving audio descriptions to someone who’s not at all interested in the fashion world doing the same work for a TV show like Netflix’s Emily in Paris. Again, such mismatches make no sense whatsoever.
“We had that same concept to doing audio description for live sports,” Gross said.
In technical terms, Gross told me the in-stadium audio descriptions were delivered by way of an app. Fans downloaded it prior to arriving at the venue, at which time it found their location and automatically connected to the feed. As a troubleshooting matter, users had the ability to enter a short code to connect if they had issues, with Gross telling me FIFA’s app is “fairly easy to use.” Gross acknowledged the audio descriptions weren’t available to fans watching at home on television, but said accessibility for future tourneys is something FIFA is “thinking through and having conversations around.”
Feedback on the audio descriptions has been positive, according to Gross. She told me many of her friends who, like her, are part of the Blind community raved about the experience. Gross said her friends reported being “so excited for once to know what was going on on the pitch and to be included.” Better yet, many of her friends aren’t soccer fans, but when they went to matches, they ended up having a really good time.
“Not only is FIFA making an experience more enjoyable for people who are already fans, but we have the opportunity to create fans who may not have understood what was going on,” Gross said of the impact of FIFA’s efforts. “They weren’t excited about it [before], but now that they can get involved and hear everything that’s going on, there’s an opportunity for us to build new fans. That makes me excited about the future.”
Looking towards the future, Gross demurred when I asked if FIFA was setting a standard with its audio descriptions. Instead, she said it’s more apt to say FIFA is “setting an example” and setting a tone for bringing more accessibility to live sporting events. A common misconception, she added, is people with disabilities don’t like sports; especially for the Blind and low vision community, people (like yours truly) would like sports more were they accessible. Tech is uniquely suited to address inaccessibility, as Gross told me a disabled fan is just like any other if given the right access. Attending live sporting events should be fun, Gross said, and it’s that fun “vibe” that she hopes will be caught by other institutions who see FIFA’s work. NBC Sports did work on accessibility for last year’s Olympic and Paralympic Games in Paris. I posted a feature story last September on the network’s efforts to further disability inclusion, which included interviews with NBC Sports senior producer Alexa Pritting and others. Elsewhere, ESPN’s Sam Boden covered the nexus of accessibility and live sports last year. (Full disclosure: Boden asked for my participation, but my insights didn’t make the final story.)
Gross noted working on accessibility, as ever, is an evergreen endeavor, saying “there’s always room for improvement.” She said FIFA is already holding internal discussions about said improvements ahead of next summer’s World Cup here in the United States. Technology is “constantly changing,” Gross said, so what next year will entail isn’t yet known. Nonetheless, Gross said she and her team are “excited to jump into the work and see what [accessibility is] going to look like for our tournament next year.”
Chelsea beat Paris Saint-Germain 3-0 in this year’s Club World Cup final in New York.
Uber Makes Getting Groceries More Accessible by Adding More SNAP-Friendly Stores to UberEats
San Francisco-based Uber on Wednesday announced in a press release what it describes as “the arrival of even more grocery and convenience selection for consumers shopping with SNAP benefits across the country.” Grocers such as Safeway, Smart & Final, and more are amongst a “growing list” of SNAP-friendly stores.
SNAP, or food stamps, is one social service facing cuts by the Trump administration.
“The increasing selection reinforces UberEats’ commitment to expand food accessibility and support communities in need,” Uber said. “Eligible customers can now use their SNAP EBT benefits to order groceries from participating locations through the UberEats app, with payments powered by Forage, the mission-driven payments company. In the months ahead, Uber teams will work to add even more retailers who will accept SNAP EBT payments on UberEats orders, including those that are new to UberEats.”
As Uber’s statement makes clear, today’s news is a big deal for accessibility. It’s one area where technology shines: the smartphone’s ubiquity, combined with on-demand services like Uber’s, means, for instance, those in the disability community who are perhaps relatively immobile (or homebound altogether) can still shop for the things they need for daily sustenance with just a few taps on their iPhone or other smartphone. Better yet, one’s groceries are made even more accessible by virtue of being less expensive with the aforementioned SNAP benefits. Overall, it’s an example of the quintessential win-win situation: not only does Uber get more business, but customers are happy and the whole experience can stand as a shining example of technology at its best. In this case, what’s ostensibly seen as a mere nicety for convenience’s sake transcends into something much more meaningful and, frankly, life-changing. To reiterate, services like UberEats—or Amazon or Instacart, for that matter—can be bona fide lifesavers for a not-insignificant portion of the population. Moreover, it’s proof positive assistive technologies have relevance far beyond a discrete, often esoteric suite of features included in a computer’s operating system.
“We believe that everyone deserves the convenience of delivery, and with this growing list of merchant partners we can continue to provide access to the fresh food that families and individuals need, especially those without access to reliable transportation for whom food delivery can be a lifeline,” Hashim Amin, Uber North America’s head of grocery and retail delivery, said in a statement for the press release.
Last year, I interviewed Ofek Lavian for my old Forbes column. Lavian, himself an Uber alum, is co-founder and chief executive officer of San Francisco-based payments company Forage, whose scope is focused on, apropos here, “enabling SNAP EBT for merchants and grocery savings for shoppers.” Lavian was quoted in Uber’s announcement, saying in part in a statement Forage is “proud to power the technology behind it, helping more families use their benefits with confidence and convenience.”
According to Uber, there’s a new SNAP icon in the UberEats app for easy identification. The company also says EBT cardholders are eligible for no delivery fees on their first three SNAP-eligible orders as well as a limited free trial of its Uber One subscription service.
How Tech Teaches Neurodivergent Teens to Drive
The allure of autonomous vehicles notwithstanding, I long ago made my peace with not being able to drive as someone with profoundly low vision. Nonetheless, I do get around pretty well, and I’m thankful to be living in such a technological golden age that suitable substitutions like Waymo are relatively readily available. At the very least, my checkbook is even more thankful because not having a driver’s license means I save a helluva lot of money on car payments, auto insurance, and—particularly here in the Bay Area—gas costs. Yet the acceptance of my life’s circumstances doesn’t mean I don’t sometimes wish I could drive or even learn how in the first place. I absolutely do.
It was these feelings that led me to Nicklaus Children’s Hospital’s Drive Program and to interviewing its coordinator, Dr. Christina Potter. Located in Miami, Nicklaus Children’s—renamed in 2015 from Miami Children’s Hospital after golf legend Jack Nicklaus, winner of a record 18 majors—established its Drive Program in 2023 in an effort to “prepare neurodiverse individuals for a driving exam.” The Drive Program uses virtual reality to, as Nicklaus Children’s described it, “[provide] students with an experience that simulates a driving exam.” Students receive real-time feedback, and a behavior analyst is present to help guide them through various situations whilst closely monitoring their stress and anxiety levels. There’s also classroom time, during which students are taught foundational pieces like the rules of the road, signs, and more.
Dr. Potter, an academic researcher and experimental psychologist by training who works as Nicklaus Children’s manager of IT and digital technologies, explained to me the impetus behind launching the Drive Program was “simple but powerful”: the idea, she told me, is to “help young people, especially those who face challenges like autism, anxiety, or ADHD, to gain the confidence and skills they need to become safe and independent drivers.” Conventional driving schools, Dr. Potter went on to tell me, aren’t always a good fit for many neurodivergent people; they “don’t offer the flexibility or patience or support that [neurodivergent people] really need to succeed,” she said.
“We saw an opportunity to fill that gap in a way that aligned with our mission at Nicklaus Children’s,” Dr. Potter said.
Nicklaus Children’s had been playing with virtual reality before, using the technology in its surgical areas and noticing “major benefits” for patients coping with anxiety, according to Dr. Potter. Driving in Miami, she said, can be harrowing at times with lots of traffic and the roads being a “madhouse.” Dr. Potter acknowledged there’s a lot of anxiety surrounding driving in general, but noted those feelings can be heightened in neurodiverse people. She also said while other hospitals, such as Children’s Hospital of Philadelphia, do teach teens to drive, her team found a program specifically designed for, and catered to, the neurodiverse community really doesn’t exist anywhere else.
“Research shows individuals on the spectrum tend to experience heightened levels of anxiety anyway, particularly around driving,” Dr. Potter said. “We thought we might be able to use VR as the tool to help them learn driving skills in a safe and controlled way.”
The Drive Program, Dr. Potter told me, has been designed to “provide our students with a strong foundation and safe and confident driving—both in knowledge and in real world application.” Teaching begins with the fundamentals, of course: students are taught the basic rules of the road, reading road signs, and more. The curriculum is classroom-based and built using Florida’s driving manual. She emphasized, though, it’s the way in which concepts are taught that sets Nicklaus Children’s efforts apart. Program staff are sent to become certified driving instructors, then use the VR-based simulator—which comes replete with a headset, steering wheel, and pedals—as a proxy for a real car. This combination of strategies, according to Dr. Potter, “[reinforces and supports] students with different learning styles to meet them where they’re at.”
Dr. Potter stressed the Drive Program does not issue licenses to people. Rather, the program is intentionally designed to facilitate the reduction of anxiety and “provide [students] with the framework to be successful at a traditional driving school.”
When asked how exactly the Drive Program helps a neurodivergent person, Dr. Potter said it’s a “mix” of both the technology and, more pointedly, the coaching and instruction. The tech is cool, to be sure, but Dr. Potter told me the first thing they tell new recruits is they’re safe and the staff is there to support them. The beauty of the Drive Program’s simulator, she went on to tell me, is it enables students to learn and make mistakes in a safe, emotionally enriching, and carefully controlled environment.
“Through repetition, guided instruction, and supported coaching, over time our students are building confidence and competence behind the wheel,” Dr. Potter said.
She added: “No matter where they’re at, you know, we meet them where they are emotionally and we’re very patient and empathetic and understanding… if they crash the car, which they do, or if they veer off road, nobody gets hurt. We use that [experience] as an opportunity with our students to help them learn and grow without having that fear someone can get hurt. Really, we just practice, practice, practice.”
Feedback on the Drive Program has been positive. Dr. Potter noted comments from students and parents alike are “wildly optimistic,” adding the Drive Program has grown into something that has exceeded everyone’s expectations. Many students who have successfully completed the Drive Program have gone on to attain their driver’s license from the DMV. It’s been equally rewarding for parents to watch their children learn such crucial skills. And as a data scientist, Dr. Potter said the team has copious amounts of data on the Drive Program; parents’ anxiety has been reduced by as much as 50%, while for students the reduction is almost 70% from the beginning of the program to its conclusion.
“It’s confidence not just for students, but for the parent as well,” Dr. Potter said.
Looking towards the future, Dr. Potter said she and her team at Nicklaus Children’s hope to grow the Drive Program, as well as evangelize it. She believes the Drive Program is “breaking down barriers to mobility and inclusion” for not only the neurodiverse community, but for anyone experiencing driving-related stressors. It contributes tremendously to greater independence, as well as to opportunity and social connection. The Drive Program has a multitude of benefits that go beyond cars.
“There’s a lot that we can do with the technology to mimic those scenarios to help a broader range of individuals to reduce driving-related anxiety—even for folks who already have their driver’s license,” Dr. Potter said in pondering the Drive Program’s future. “Providing those opportunities in a more broad approach is one area we want to go down. Another would be working with local high schools, as that’s something we haven’t done yet. We’ve shown we have this proof-of-concept that works really well with individuals on the spectrum, folks who have heightened levels of anxiety [and] folks who have ADHD. We know our curriculum and our process will work with a broader audience. We want to get the word out and have as big of an impact as we can.”
She continued: “The Drive Program is a program that benefits the entire community. It’s for anyone who has heightened anxiety. It gives them access to a supportive and adaptive driver training program, opening doors to greater independence and greater opportunities and connection. These are life-changing milestones. Helping individuals to fully participate in their communities is really what we’re trying to achieve here.”
Apple’s Hearing Health, Sleep Apnea Features Expand to More Places Around the World
Marcus Mendes reports today for 9to5 Mac Apple’s hearing health features for AirPods Pro, including the hearing test and hearing aid functionality, are expanding to more countries. The sleep apnea feature on Apple Watch is seeing similar availability growth.
“Both features were first introduced last year, but they require approval from local health regulators before being rolled out regionally,” Mendes wrote.
Argentina, Australia, Taiwan, and Thailand are but a few locales that received support for both the hearing health and sleep apnea features, according to Mendes. In a delightful little aside, I learned Christmas Island is a real place on Earth and it is one of 15 new places to gain access to both Apple’s hearing health and sleep apnea software.
Apple CEO Tim Cook took to X earlier today to share the news of the expansion. I highlighted his announcement in a post of my own. To reiterate my point, it’s not at all trivial or banal that Cook boasts about this ostensibly ho-hum bit of Apple news; on the contrary, it’s a huge deal because of Cook’s stature. He’s one of the highest profile chief executives on the planet—that Cook took time to talk about what’s essentially an accessibility feature is notable. While the hearing health and sleep apnea features have been around for several months here in the United States, any bit of the limelight is worth a thousand suns when it comes to accessibility and, by extension, the disability community. Cook (and others on Apple’s leadership team) post about accessibility at other times too, but still, every moment is a big deal. The resonance cannot be overstated.
The hearing health features for AirPods Pro 2 were released to much fanfare last October—alongside the much-ballyhooed (and much-maligned) Apple Intelligence, no less—with iOS 18.1. I posted a first look, as well as an interview with Apple accessibility leader Sarah Herrlinger about building the features a couple months later, in December. As to sleep apnea, I’ve covered that before too, in a story published in February 2023.
Accessible Street-Crossing App Oko Gets Acquired
It isn’t often I cover M&A news, but today happens to be one of those rare times.
Texas-based transportation safety company Synapse on Tuesday announced its plan to acquire Oko. Oko, an Apple Design Award winner and last year’s App of the Year, is an iPhone app which uses the device’s camera(s) and artificial intelligence to help Blind and low vision people accessibly (and safely!) cross streets. In its announcement, Synapse describes Oko as developed by “a team of accessibility and mobility experts,” with technical attributes that include “[harnessing] location data, audio cues, and accessible interface design to help guide users through the most dangerous aspects of crossing intersections.” Oko collaborated with the Blind community to build its app.
Once the deal is officially completed, Oko will become a free app in an effort to “make its services accessible to as many pedestrians as possible,” according to Synapse.
Synapse is owner of Polara, which specializes in building accessible pedestrian signal (APS) technology. The deal with Oko enables Synapse to integrate Oko’s technologies with the existing APS infrastructure by way of its consumer-oriented PedApp in an effort to make street-crossing more accessible to the Blind and low vision community.
“This acquisition is a natural extension of our mission to increase roadway safety, accessibility, and efficiency with state-of-the-art software,” Josh LittleSun, Synapse’s chief technology officer, said in a statement included in the company’s press release. “The fusion of Oko’s smart navigation technology with Polara’s trusted PedApp moves us closer to a future where pedestrian equity and safety are built into every crosswalk.”
Synapse’s vice president of intersection sales, Matthew Baker, agrees with LittleSun.
“This is the kind of life-changing innovation we’re proud to bring into the Polara family,” Baker said. “By eliminating subscription fees, we’re making Oko accessible to all and speeding up adoption in communities that need it the most.”
In an email to me, Oko founder Willem Van de Mierop said he believes the acquisition “can be seen as one of the most exciting deals for the accessibility space.” He also noted Oko recently hit a milestone, as it has helped people navigate 10 million streets.
Catching Up with CSD Chief Executive Chris Soukup
When I recently caught up with Chris Soukup over email amidst Disability Pride Month and ahead of the Americans with Disabilities Act’s (ADA) 35th birthday later this month, the chief executive officer of Communication Service for the Deaf (CSD) explained to me plainly and succinctly he believes disability to be “an inherent part of the human condition.” How those in the disability community experience the world around them, he went on to rightfully tell me, “is continuously evolving throughout our lives.”
“Disability Pride Month is a celebration of how these experiences vary, the beauty of our collective intersection, and our commitment to unity and inclusion,” Soukup said. “The ADA and laws similar to it provide our society with important guardrails to ensure that no one is inadvertently or intentionally left behind as progress and innovation propel us into the future.”
As I wrote in my profile of Soukup and CSD last October, CSD, established in 1975, is a self-described “Deaf-led social impact organization” which exists to “create a better world for Deaf and hard-of-hearing people.” In a nutshell, CSD is devoted to the betterment of the Deaf and hard-of-hearing community; in January, the organization announced it was supporting efforts for the Los Angeles-area Deaf community whose lives were upended by the wildfires that utterly ravaged the Southern California region.
Deaf people, Soukup said, still bear the brunt of bias by society writ large. Discrimination in the workplace, as well as unemployment altogether, remain “pervasive issues” for the community, he said. Not only are securing and sustaining a job problematic for Deaf job-seekers, Soukup went on to say, opportunities for career advancement are equally few and far between for Deaf workers. And although technology has unquestionably broken down barriers and subsequently increased access to communication and information, the reality is there still exist “significant challenges,” according to Soukup. Many Deaf people, he said, remain reliant upon intermediaries—people like sign language interpreters, for one—to facilitate proper communication. It’s oftentimes difficult, Soukup said, to ensure “qualified individuals are always available to facilitate between a signing Deaf person and a non-signer.”
Soukup believes a crucial part of gaining greater accessibility is good storytelling.
“[It’s] so important,” he said. “Amplifying and spotlighting people with disabilities that are thriving and achieving their goals is so important. There is not enough attention in the media. People who identify as being disabled make up at least 25% of the population. We need to be seen, and our successes should be celebrated as an opportunity to transform how people perceive disability.”
When asked to elaborate more on the ADA’s significance, Soukup told me the law (and any extending legislation) is of crucial import because it provides “important protection” to all disabled Americans. He reiterated a popular refrain in the disability community, which is the ADA is seen as the “floor” rather than the ceiling. The ADA, Soukup said, is a “baseline” atop which society can “[build] a world that is designed to be more inclusive from the beginning.” The disability community, he added, would “love” to see lawmakers go further with regulation by “[continuing] to close gaps and eliminate barriers that make it harder for people with disabilities to lead fulfilling lives.”
Soukup’s last sentiment was a good segue into talking about the future. In peering into his proverbial crystal ball, he said it’s his hope society reaches a point at which we “normalize disability.” In addition, he spoke of the domino effect disability has on the lives of everyone, disabled or not, saying “we recognize that when we speak about disability, we are talking about everyone: ourselves, our families, [and] our loved ones.”
“Embracing disability as an ordinary part of what it means to be human expands our thinking and challenges some of the implicit bias that we carry,” Soukup said of his hopes and dreams for disability’s societal future. “A Deaf person can be an incredible physician, airline pilot, college professor, or entrepreneur. Our work is closer to being achieved when the general public internalizes and embraces all of these possibilities.”
Waymo, Autism Society of America Partner on Giving Riders on the Spectrum Greater Accessibility
Following up on my story from last Friday on Waymo introducing teen accounts, I came across another report involving Waymo. This one sees the company working with the Autism Society of America to help people on the spectrum gain fuller independence.
The work was detailed in a story by Phillip Palmer at the ABC7 affiliate in Los Angeles.
“Waymo has partnered with the Autism Society to highlight how a driverless vehicle can actually offer a consistent, safe and predictable way to travel for young adults on the spectrum,” Palmer wrote in describing the pair’s partnership. “By working with the Autism Society of America in the very early stages of development, they can avoid any challenges that might come as the company grows.”
The allure of Waymo to autistic people is pretty much precisely what it is for the Blind and low vision: driverless cars afford those in our respective communities greater agency and autonomy with which to travel. Palmer notes only a third of people on the spectrum have driver’s licenses, according to a study by the Journal of Autism. By contrast, many autistic people do hold college degrees and jobs; having access to Waymo means their independence (and self-esteem) is increased. Indeed, the Autism Society’s chief marketing officer, Kristyn Roth, said to Palmer in part “having this autonomy [via Waymo] is something that builds confidence and it uplifts people.”
For its part, Waymo is committed to tackling these kinds of accessibility issues.
“What are the moments where the existing services and transportation office options are not working? Because we take those problems and we actually design product of future solutions around their specific problems, so that we’re not just assuming these are the problems that you faced,” said Orlee Smith, senior product manager at Waymo.
Waymo, and its underlying technologies, are not above reproach. They should both be scrutinized. Yet in a world where a lot of people are suspicious of artificial intelligence—and make no mistake, driverless cars are effectively AI-powered robots on wheels—the genuine good a company like Waymo can do for people gets pushed to the wayside under the guise of prioritizing safety and competence. The reality is, there are a lot of disabled people out there for whom conventional driving is impossible. For those folks, which includes yours truly, the advent of autonomous vehicles is nothing short of revolutionary. As the technical bits and bobs inevitably mature, the next step for mainstreaming driverless cars even further is advocating for individual ownership. That will be an even more monumental task because it entwines law and regulation with the societal view on disability—in this case, whether Blind and low vision people can buy and “drive” their own driverless cars. Perhaps I’m overly pessimistic, but such a sea change would make climbing Mount Kilimanjaro seem like scaling a molehill.
I’m not certain such a thing will happen in my lifetime, if ever.
Waymo’s Making Getting Around More Accessible to Blind, Low Vision Kids with New Teen Accounts
Jennifer Elias reported for CNBC this week Waymo has begun offering accounts to teenagers ages 14–17 in Phoenix. The decision reflects Waymo’s desire to “increase ridership amid a broader expansion of its ride-hailing service across US cities.”
“The Alphabet-owned company said that, beginning Tuesday, parents in Phoenix can use their Waymo accounts ‘to invite their teen into the program, pairing them together.’ Once their account is activated, teens can hail fully autonomous rides,” Elias wrote in describing Waymo’s ambitions for rider expansion. “Previously, users were required to be at least 18 years old to sign up for a Waymo account, but the age range expansion comes as the company seeks to increase ridership amid a broader expansion of its ride-hailing service across US cities. Alphabet has also been under pressure to monetize AI products amid increased competition and economic headwinds.”
Elias noted Waymo will provide so-called “specially-trained Rider Support agents” for teen riders, adding teens are able to share real-time updates of their trip status with parents. Their parents also receive the ride receipt(s). Another friend of mine, Bloomberg’s Natalie Lung, wrote on X Waymo’s teen accounts are limited to Phoenix for now because “California currently does not allow unaccompanied minors on AVs.”
Uber launched a similar service oriented towards teens in 2023, according to Elias.
Reading this story got me thinking wistfully about how Waymo—and particularly its new teen account feature—would’ve been so great during my high school years. I took a yellow school bus to and from school every day from the time I was a wee kindergartner until I graduated high school in 2000. The vehicles were the shorter, smaller buses generally used to transport disabled kids back and forth from school. I never minded it, as I made some great friends on those trips—my favorite driver, Shirley, is a longtime Facebook friend of mine now—but as I grew into driving age, had Waymo existed then, I would’ve felt like a bird of a feather amongst my flock of friends who had normal vision and thus could drive. With Waymo, I could’ve asked a friend to ride with me after school to visit a Tower Records or The Wherehouse. I could’ve had the very same agency and autonomy I enjoy today two decades earlier as I was traversing my formative years. This is why I chose to cover Elias’ report: as it stands, Blind and low vision kids—in Arizona’s capital city, anyway—have a tremendous opportunity before them to potentially “drive” a car and get around independently. Not only is the autonomous driving tech cool as hell, but the heightened feelings of self-esteem and empowerment make a helluva difference in the socio-emotional growth of today’s teenagers, whose brains are still developing—especially those coping with a disability.
Waymo’s teen account comes not long after the company announced an expansion to New York City, and as Tesla’s own robotaxi competitor seeks to add service here in the Bay Area.
White House Argues ASL Interpreters Unnecessary for Accessibility at Press Briefings, Report Says
Molly Reinmann reported for CNN last week US District Judge Amir Ali, a Biden appointee, “grappled for over an hour” over whether to force the Trump administration to provide American Sign Language (ASL) interpreters at White House press briefings.
A lawsuit was brought by the National Association of the Deaf (NAD). Reinmann writes the suit alleges the White House is violating deaf Americans’ rights under the Rehabilitation Act of 1973 by barring them from accessing “critical information in real time.” The attorney for the NAD, Ian Hoffman, subsequently argued Deaf and hard-of-hearing Americans are “deprived of their ability to participate in the democratic process.”
Biden’s briefings were staffed with ASL interpreters. The Justice Department ended the practice upon the transfer of power for President Trump’s second term, contending assistive technologies such as captioning and transcripts are sufficient. The NAD pushed back, saying—rightly so—that ASL and English are separate, distinct languages while emphasizing captioning oftentimes can prove “especially inaccessible to the many thousands of deaf persons fluent only in ASL,” according to Reinmann.
Relatedly, the NAD took umbrage over the first Trump administration’s lack of sign language interpretation during critical COVID-19 pressers that took place back in 2020.
Reinmann’s story, while newsworthy on merit alone, is especially appalling given the backdrop of July being Disability Pride Month and the Americans with Disabilities Act turning 35 on the 26th. The cretins representing the Justice Department argued the burden of proof is on the NAD to, as Reinmann said, “show that more thorough ASL translations were necessary and repeated her previous claim that the type of services provided should be at the discretion of the White House.” The Department of Justice is essentially paternally (and patronizingly) dictating accessibility—a move suggesting the able-bodied majority know best how to accommodate people with disabilities.
If Trump’s immigration policies are racist—they are—the inaccessibility is ableist.
Moreover, what rankles me most is the part in Reinmann’s lede when she writes Judge Ali “grappled” with his decision. I don’t blame her, but what anguish is there? You have a segment of the citizenry advocating for accessibility so as to be more informed. Disabled Americans, myself included, are Americans. We tune into White House news conferences. We read and watch CNN. We vote. That Judge Ali wrestled with such legal gymnastics in issuing his ruling underscores the deeply entrenched societal ableism that, in so many respects, is a bedrock of how not only the country works, but the world too. Most people treat disability like a disease.
As a CODA, I can empathize with Reinmann’s story on so many levels. My parents watched the local news every night after dinner while I was growing up and, despite the presence of captioning, they would rely on me to translate what was going on in the world and explain its meaning. It was quite the ask of a disabled kid himself going into his pre-pubescent years and beyond, but it’s illustrative of the notion that captions, however vital in their own right, have but limited utility. Captions can go only so far. Likewise, transcripts are good, but have their problems because, again, English typically isn’t a Deaf person’s first language and thus comprehension is compromised.
Karoline Leavitt and team clearly don’t understand that—or, if they do, they don’t care.
Editors, this is why accessibility in tech so richly deserves its own beat in newsrooms.
Xbox Announces AGI Tags Availability, More
Microsoft-owned Xbox put out a blog post this week wherein it announced the Accessible Games Initiative (AGI) tags are available “across all digital Xbox experiences.” Team Xbox wrote the news is “just in time” for Disability Pride Month.
The news comes a few months after Xbox announced plans in March to join the AGI.
“[Xbox] is proud to announce that the new Accessible Games Initiative tags, designed to provide players with clear and consistent information about the accessibility features in video games, are now available across all digital Xbox experiences including console, PC, mobile, and web storefronts,” Xbox wrote in the post’s introduction.
Xbox’s contribution to the AGI, a consortium which includes fellow gaming industry heavyweights Nintendo of America, Electronic Arts, Ubisoft, and others, builds upon its own work, begun in 2021 with the release of the Xbox Game Accessibility Feature tags. Of note is Xbox’s explicit callout in the announcement that any Xbox accessibility tags which don’t have a corresponding AGI tag will remain available on the platform, with the company saying the combination “will make it even easier for players with disabilities to learn about available accessibility features and find their next great game.”
Xbox’s post finishes with a Q&A-style interview with Brannon Zahand, senior technical program manager at Xbox, and content creator and gaming accessibility advocate Steve Saylor. The conversation “[discusses] the work towards greater standardization of accessibility in games, what these tags mean for players today, and why this work is important,” according to Xbox. Additionally, Xbox published another blog post featuring an interview with Phil Crabtree from Kaizen Game Works. Microsoft Game Dev contributing editor Miguel Lopez writes his conversation with Crabtree delves into “how integrating Accessible Games Initiative tags has supported their development practices, highlights the community’s enthusiastic reception, and explores how accessibility tools and standards can further transform the gaming industry.”
I posted an interview with Entertainment Software Association senior vice president Aubrey Quinn back in early April. The executive told me all about her organization’s stewardship of the AGI and the need for tags, as well as how the group came to exist.
“Before the Accessible Games Initiative, the burden was on players to interpret existing tags in the marketplace from platform to platform, game to game. We hope to change that,” Quinn said of the driving force behind the AGI’s formation. “This new initiative is meant to help consumers identify specific accessibility features in individual video games, so that players buying games can make better informed purchasing decisions. Having a clear approach to identify accessibility features across different games, with criteria behind each accessibility tag, will provide consumers with information about the accessibility features they can find in games prior to purchasing them.”
How Amazon Is Made ‘Accessible for Everyone’
Amazon last week published a blog post on its European site in which the Seattle-based company detailed myriad ways it makes Amazon “accessible for everyone.” The post, bylined by the About Amazon Team, was shared coincident with the recent enforcement of the European Accessibility Act. Amazon said it is “well prepared” for the legislation.
“Amazon’s vision is to be Earth’s most customer-centric company, which means making our devices and services accessible to everyone. We have pioneered accessibility features across our products and services for over a decade and our commitment to accessibility is deeply rooted in our customer-obsessed culture. For us, this is more than meeting requirements: it’s about staying true to our mission of serving every customer,” Amazon wrote of its North Star in the post’s introduction. “We design with inclusivity at the forefront, making our products and services accessible to all customers. By integrating new technologies, such as AI, we are able to create solutions that enhance the experience for all customers, including those with specific accessibility needs. This shows that when accessibility is treated not as an afterthought but as a core design principle, technology can truly become a force for inclusion.”
The post, the scope of which is familiar, offers a cursory rundown of the accessibility attributes of Amazon-branded products—including Alexa, Fire TV, Kindle, Prime Video, and more. In addition, the company highlights how it has made the general shopping experience more inclusive—something involving “more than just a well-structured webpage.” The company mentions features such as Navigation Assistant and Product Summary, both of which are characterized as helping keyboard-oriented and/or screen reader users “shop more efficiently, showing that accessibility and convenience go hand-in-hand.” Likewise, Amazon Locker locations, where people go to a physical place to retrieve their order(s), feature a Lower Locker Slot selection for wheelchair users, as well as an audio-based UI (with audio jack) for the Blind and low vision.
“More than 100 million people have disabilities in Europe,” Peter Korn, director of accessibility for Amazon devices, said in a statement. “What I love most about this work is how it embodies one of Amazon’s core principles: Customer Obsession. Building products customers love means including them in the process, not only by talking directly with customers with accessibility needs, including the elderly or with disabilities. Our culture of inclusion is also reflected in the many people with disabilities who work throughout Amazon in diverse roles—including our product teams.”
I covered the redesigned Prime Video app last summer; my report included interviews with executives Raf Soltanovich and Kam Keshmiri. And I last spoke with Korn last year about, amongst other things, using artificial intelligence to power search on Fire TV.
For more on the European Accessibility Act, check out my story earlier this week on it.
Gemini App Gets Support for Google Broadcasts
Abner Li reports today for 9to5 Google the Gemini app is gaining the ability to broadcast voice messages to Nest speakers and smart displays integrated with Google Home.
Li notes the aptly-named Broadcasts feature was previewed by Google last month and, perhaps most notably, comes ahead of Gemini supplanting Google Assistant on Android devices. The change will happen sometime “later this year,” according to Li.
Broadcasts can be sent to specific devices or rooms, or to the entire household.
While certainly a feature of convenience—nobody wants to run around their house telling the family dinner is ready, Paul Revere-style—the reality is, as ever, Broadcasts can prove beneficial for accessibility too. Especially for those with limited mobility, or someone who’s immobile altogether, trying to share a message (or get assistance) can be arduous if people are spread across other parts of the house. For example, someone might need to alert their caregiver, who may be in the kitchen, they need to bring their medications to take along with their food. Likewise, someone in a wheelchair may not be able to move about to every part of the house to tell everyone a meal is ready, so Broadcasts makes relaying the message a more accessible task. Google positions Broadcasts as an amenity that makes life easier and nicer when, in truth, the feature has serious applicability as a lifesaving de-facto accessibility feature for many people.
Google’s Broadcasts is similar to Apple’s Intercom on HomePods. Apple’s implementation is effectively identical, catered towards HomePod and HomePod mini, but comes with an additional perk: Intercom can transcribe messages. Transcriptions show up on people’s iPhone or Apple Watch, for instance—a cool and useful feature for those in mixed households where, say, hearing and Deaf people cohabitate. That Intercom offers transcripts makes the feature more inclusionary when it ostensibly would be exclusionary, much like how—as I’ve proffered often lately—Music Haptics and transcripts break barriers for music and podcast listening in the Apple ecosystem.
Apple, Columbia University Researchers Discuss Accessibility-Minded ‘SceneScout’ Project
Marcus Mendes reports for 9to5Mac on a project from Apple and Columbia University aimed at using artificial intelligence to make navigation and street-crossing more accessible to the Blind and low vision. The prototype, called SceneScout, is described on Apple’s Machine Learning Research blog as “a multimodal large language model (MLLM)-driven AI agent that enables accessible interactions with street view imagery.”
The paper was written by Gaurav Jain, Leah Findlater, and Cole Gleason.
“People who are blind or have low vision (BLV) may hesitate to travel independently in unfamiliar environments due to uncertainty about the physical landscape,” the trio wrote in the paper’s introduction. “While most tools focus on in-situ navigation, those exploring pre-travel assistance typically provide only landmarks and turn-by-turn instructions, lacking detailed visual context. Street view imagery, which contains rich visual information and has the potential to reveal numerous environmental details, remains inaccessible to BLV people.”
SceneScout relies upon Apple Maps APIs alongside the aforementioned LLM to “provide interactive, AI-generated descriptions of street view images,” Mendes wrote.
As the researchers explain, SceneScout supports two modes: Route Preview and Virtual Exploration. The former is intended to “[enable] users to familiarize themselves with visual details along a route,” while the latter is meant to “[enable] free movement within street view imagery.” Mendes notes SceneScout also grounds “a GPT-4o-based agent within real-world map data and panoramic images from Apple Maps.”
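To make the architecture a bit more concrete, here is a minimal, hypothetical Swift sketch of the Route Preview idea as the paper frames it: fetch a street view panorama at each point along a planned route, then ask a multimodal model for a pedestrian-focused description of it. The StreetViewProvider and MultimodalModel protocols and the prompt wording are placeholders of my own; none of this is drawn from Apple’s or the researchers’ actual code.

```swift
import Foundation
import CoreLocation

// Hypothetical stand-ins for the pieces the paper describes: a source of
// street view imagery and a multimodal language model. Neither protocol is
// a real API; they exist only to illustrate the SceneScout pattern.
protocol StreetViewProvider {
    func panorama(at coordinate: CLLocationCoordinate2D) async throws -> Data
}

protocol MultimodalModel {
    func describe(imageData: Data, prompt: String) async throws -> String
}

// A rough sketch of Route Preview: walk the points along a planned route and
// ask the model for a pedestrian-focused description of each panorama,
// building a pre-travel summary of the trip's visual context.
func routePreview(
    waypoints: [CLLocationCoordinate2D],
    imagery: StreetViewProvider,
    model: MultimodalModel
) async throws -> [String] {
    var descriptions: [String] = []
    for point in waypoints {
        let panorama = try await imagery.panorama(at: point)
        let description = try await model.describe(
            imageData: panorama,
            prompt: "Describe the sidewalk condition, crossings, and landmarks a blind pedestrian should know about at this location."
        )
        descriptions.append(description)
    }
    return descriptions
}
```

The real system presumably does far more (agentic exploration, imagery freshness checks, error handling), but the loop above captures the basic pattern of pairing map data with an MLLM.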
According to Mendes, the SceneScout study, comprising 10 Blind and low vision people, found both SceneScout modes highly lauded by participants. The more open-ended Virtual Exploration mode in particular was praised for providing information “they would normally have to ask others about.” The study’s participants all were well-versed in using screen readers and all worked in the tech industry, Mendes wrote.
As ever, however, there were as many shortcomings as there were advances.
“A technical evaluation [of SceneScout] shows that most descriptions are accurate (72%) and describe stable visual elements (95%) even in older imagery, though occasional subtle and plausible errors make them difficult to verify without sight,” the researchers said of the problems that cropped up. “We discuss future opportunities and challenges of using street view imagery to enhance navigation experiences.”
At 30,000 feet, the SceneScout project is encouraging because, dangerous hallucinations aside, it further proves the potential of artificial intelligence as an assistive technology. As SceneScout is iterated on and refined, it’s plausible the technology could be incorporated somewhere else so as to be available to a more “mainstream” contingent of Blind users such as myself. If SceneScout someday is able to enable fuller agency and autonomy in travel for the Blind and low vision community, then the tool will have reached self-actualization in a way that would make Maslow proud. Put another way, SceneScout theoretically could someday be as impactful to the Blind and low vision community for foot travel as Waymo’s autonomous vehicles are today for driving distances. While SceneScout and Waymo diverge in methodology, they undoubtedly converge on a common goal: greater accessibility for disabled people.
It’s also worth mentioning SceneScout’s scope is similar to that of Apple Design Award winner Oko, as well as NYU’s Commute Booster app for navigating New York City’s subway system. Both pieces of software leverage AI to varying degrees in order to make travel (and transit) more accessible to Blind and low vision people. In a nutshell, the Commute Booster app is designed to rectify the myriad issues inherent to the so-called “middle mile”—the oftentimes treacherous part of one’s journey between departure and destination, which can be really tricky for Blind people to navigate successfully.
Overcast Update Adds Double Tap on Apple Watch
Over the weekend, I came across this Mastodon post announcing that Overcast, the popular indie podcast client, received a “minor update” in version 2025.6 that includes not only the boilerplate bug fixes and performance improvements, but also one accessibility enhancement of particular note: Double Tap on Apple Watch. The gesture, which debuted with Apple Watch Series 9 and Apple Watch Ultra 2 in 2023, can now be used in the watchOS app to toggle Play/Pause. Overcast is made by Marco Arment.
The update is described by Arment as “not an exciting one, but good nonetheless!”
The update grabbed my attention for a few reasons. For one thing, Double Tap is, like the iPadOS pointer feature which was redesigned in iPadOS 26 such that the pointer is a Mac-like arrow rather than a circle, one feature whose origins trace back to AssistiveTouch. As with the pointer, it’s my understanding from sources the Double Tap accessibility feature—designed to help those with limited mobility, motor-wise—was handed off to the broader software engineering team within Apple Park to massage it into something with more mainstream sensibility. That Overcast users can now use Double Tap makes playback controls more accessible not only if one’s hands are full, but also if tapping the Watch’s screen would otherwise be burdensome or untenable altogether. For another, the advent of Double Tap in Overcast’s Watch app is yet another strong show of solidarity with the disability community by Arment. He’s been a staunch ally of disabled people dating back to his days as Instapaper’s keeper, and this latest Overcast update only strengthens his resolve in this regard. I interviewed Arment back in 2018 for a story for MacStories’ coverage of the App Store turning 10; he told me over email, in part, that he considers working on accessibility “a peer to other aspects of my design and structure, such as the colors I choose or how interfaces are laid out.” Lastly, Arment has spoken positively of Apple’s new Speech APIs on ATP, and his enthusiasm portends well for Overcast getting transcripts sometime in the not-too-distant future.
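For developers curious what adopting Double Tap looks like in practice, below is a minimal sketch of a play/pause button wired to the gesture, assuming the SwiftUI handGestureShortcut modifier Apple introduced in watchOS 11. It illustrates the general adoption pattern, not Overcast’s actual implementation.

```swift
import SwiftUI

// An illustrative watchOS play/pause control that responds to the system
// Double Tap gesture on supported Apple Watch models. This is not Overcast's
// code; it only sketches the general pattern.
struct PlaybackView: View {
    @State private var isPlaying = false

    var body: some View {
        Button {
            // A real app would start or stop its audio playback here.
            isPlaying.toggle()
        } label: {
            Image(systemName: isPlaying ? "pause.fill" : "play.fill")
                .font(.largeTitle)
        }
        // Marking the button as the primary action lets the Double Tap
        // gesture (tapping index finger and thumb together twice) trigger
        // it without touching the screen.
        .handGestureShortcut(.primaryAction)
    }
}
```

The appealing part of this design is that the system, not the app, decides when the gesture is available, so marking the primary action is, in most cases, all an audio app should need to do.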
Overcast’s 2025.6 update is available now on the App Store.
Entrust Product Design Chief Mark Opland Talks the European Accessibility Act, More in Interview
A couple of weeks ago, a landmark law spearheaded by the European Commission went into effect. It’s called the European Accessibility Act (EAA) and it mandates digital goods and services be accessible to people with disabilities. June 28, 2025 was the cutoff date for member states in the European Union to reach compliance, though the law was officially passed back in 2019. The deadline was imposed to synchronize digital accessibility requirements across the European Union. Companies whose products shipped to customers prior to June 28 have a grace period of 5 years to comply with the EAA; hardware has double that time, with 10 years’ grace.
A cogent primer on the EAA was posted in late January on the AccessibleEU website.
One company that’s been thinking a lot about the EAA is Entrust. A digital identity firm based in the United Kingdom, Entrust offers a suite of products and services which cater to financial institutions like banks. Entrust builds everything from ID verification technology to encryption tech and more. In an interview conducted last week via videoconference, Entrust’s vice president of product design, Mark Opland, encapsulated his company’s scope as “[offering] an enormous amount of products and services to financial institutions, but really centered around identity and security.”
When asked why accessibility matters to him and his team, Opland explained it has been “a huge part of the way we’ve built products for a long time,” adding accessibility has been personally pertinent for the better part of 15 years. Raising awareness for accessibility, he told me, not only aligns with his value system but also helps Entrust “deliver more successful products and services into the industry.” Accessibility, Opland went on to say, isn’t viewed as a “constraint” at Entrust; rather, the company views it as an opportunity to innovate and thus better build its business. Accessibility can be, and often proves to be, “an enabler of innovation” for Entrust, Opland said.
“If we fundamentally approach design problems and product problems thinking about the largest possible user base in mind, we ultimately build products that are more successful,” he said of Entrust’s philosophy on prioritizing accessibility in its work.
As to the EAA, Opland said the legislation is a directive aimed at “[making] a wide range of products and services more accessible to people with disabilities,” adding the European Union considered the things people use day-to-day in an effort to contribute to the betterment of society and wanted to find a way to “[encourage] greater inclusion and breaking down the barriers across the European Union for all people.” The EAA, he continued, touches myriad industries; as such, while compliance with the EAA is compulsory, its byproduct is what Opland characterized as enabling businesses to “tap into a much larger customer base.” He pointed to a large bank in the United Kingdom that reported its total addressable market increases by more than 10% when it builds products with accessibility in mind. “For the European Accessibility Act and the European Union, it’s not only about providing access, but about building their GDP and increasing the [gross domestic product] for all their member states,” Opland said.
For Entrust’s part, Opland made crystal clear his main job as it relates to the EAA is to ensure the company enters into, and then maintains, compliance with the law. Entrust must follow the law’s legal structure and, more pointedly, “we can’t be building and shipping products anymore that are not accessible.” Opland was forthright in telling me he has no interest in being the “accessibility police,” running around to people’s offices internally to enforce adherence to the EAA. Instead, he told me, the company spent lots of time leading up to last month’s deadline auditing and doing remediation. Moreover, Entrust has focused its energies on prioritizing advocacy and evangelism with the goal of what Opland described as “building a culture of continuous improvement.”
“Our goal is to make sure every team at Entrust, whether it’s Human Resources or an engineering team, is focused on making sure they’re better this quarter than they were the last quarter and better next quarter than they were this quarter,” Opland said. “That advocacy has us out of the cycle of managing accessibility from audit to audit, and seeing the job is being done when we earn our accessibility accreditation. This focus on continuous improvement means it’s top of mind for everyone in the company and has now become part of our DNA… that’s been the secret to our success over time.”
Opland acknowledged coming into compliance with any sort of law has its challenges, but in the context of the EAA, the economic and social benefits can make the headaches worth it. Especially from a social justice perspective, he said “it’s been fantastic” to work with the law at Entrust because it aligns with both his personal values and the company’s institutional ones. The main theme threading my conversation with Opland was that greater accessibility vis-a-vis the EAA is two-pronged: it obviously benefits people, but it also benefits businesses. The more people a company markets to, the bigger its bottom line can become. Understanding those principles takes education, and Opland told me it can be challenging unto itself to teach people how to make accessibility happen. The EAA, as a law, and accessibility standards like WCAG aren’t necessarily congruent with one another; Opland said they “aren’t always black and white… in some places they’re gray.” Entrust understands “there’s always a tradeoff between usability and security,” according to Opland, which isn’t always a question with a black-and-white answer.
“What we’ve discovered is the more you have those conversations, the more you dig in and the more you learn, the stronger and more resilient you become,” Opland said of the company’s learnings. “Accessibility is a unique challenge in that there is often quite a lot of subjectivity and just a huge spectrum in human ability. There isn’t just sort of a one-size-fits-all solution that’s going to allow me to wave a magic wand to make everything accessible. I think it’s just a constant cycle of learning and improving.”
Entrust has been working on accessibility for nine years now, close to a decade. That longevity gave Opland and his team a lot of runway in terms of comfort and confidence as the EAA deadline loomed. The work on compliance, he told me, had been an 18-month effort to understand EN 301 549 (which is linked to the EAA), WCAG, and the EAA itself. Companies like Entrust that already concern themselves with the aforementioned WCAG standards, Opland told me, are “in a really good position to be compliant with the law with the exception of a few slightly more specific directives.”
“If you’ve been focused on WCAG, you set yourself up really well,” Opland said. “We’ve had a pretty big head start and have been positioned pretty well to be compliant.”
Opland is optimistic the EAA will help make accessibility more top of mind and more present in products. The European Union, he told me, has set a standard because the EAA applies to anybody who wants to do business in the Union, so products and websites must meet the new regulation. In the United States, Opland pointed to the Americans with Disabilities Act as greatly improving the quality of life for the disability community, but conceded there is more work yet to be done. He hopes businesses everywhere “continue to invest” in accessibility for the people—and for their business.
As to the future, Opland and Entrust are committed to walking the righteous path.
“Our hope is we are continuing to build products and services that enable more and more people to enact with their communities, to enact with the businesses around them, [and] to have more opportunities and greater advantages in the lives they lead,” he said of Entrust’s view of its future work. “There’s something really meaningful and deep in doing so. Identity is such a great vehicle to help advance underrepresented folks in all stations of life, and accessibility is one important aspect of that. If you track back to Entrust’s mission, it perfectly aligns with our mission. It perfectly aligns with our growth as a business. We were working on [accessibility] long before the law mandated we do it. We’ll continue to invest in accessibility, whether the law continues to mandate it, so it just aligns perfectly with our mission, with our values, and with our business.”
How Type to Siri Trumps Courtesy and Convenience
MacRumors contributor Aaron Perris posted on X today that Apple has started airing a new Apple Intelligence ad which highlights Type to Siri and ChatGPT. The 15-second spot, which also features iPhone 16 Pro on T-Mobile’s network, takes place in a workplace elevator. As of this writing, the video isn’t (yet?) on Apple’s official YouTube channel.
I wouldn’t typically cover the advent of a new Apple commercial, but this particular one merits an exception. During the Apple Intelligence portion of last year’s WWDC keynote, senior vice president of software engineering Craig Federighi talked up Type to Siri as a feature of convenience: it’s a mode by which people can quietly interact with the virtual assistant so as not to be disruptive of others. The reality is Type to Siri is not an all-new feature; it’s existed as an accessibility feature on iPhone and iPad since iOS 12 in 2018.
This context matters greatly in the grand scheme. It is extremely noteworthy that Apple “graduated” what was once an ostensibly esoteric, niche assistive technology and expanded upon it so as to become more mainstream. Despite Federighi’s message to the masses that Type to Siri is about courtesy and convenience, the truth is the feature’s benefits for accessibility remain bountiful. Yes, courtesy and convenience are important factors, but Type to Siri is a great feature whereby a Deaf or hard-of-hearing person or, in my case, someone with a speech delay can interact with Siri with complete fidelity without the voice component. That isn’t at all trivial or ancillary to Apple’s core messaging. The overarching point is Type to Siri illustrates yet again that accessibility oftentimes is an incubator for innovation—it’s something Apple rarely, if ever, gets lauded for by those who comprise the mainstream technology commentariat.
As I alluded to in the previous paragraph, Type to Siri does not stand alone. The pointer feature in iPadOS began life as an AssistiveTouch feature, which, Apple’s Sarah Herrlinger told me years ago, “isn’t your typical cursor.” My understanding has long been the company’s Accessibility team handed off the AssistiveTouch feature to the broader iPadOS software group so they could massage it into something meant for more mass adoption. Likewise, the Double Tap feature on Apple Watch germinated as an AssistiveTouch feature in watchOS and was then similarly made over for broader applications. Many popularized modern technologies—audiobooks, speech-to-text, et al—were invented by disabled people for their unique needs, then adopted by the able-bodied masses for their enjoyment. As Dr. Victor Pineda told me last year, the disability community is chock-full of technologists out of sheer necessity. Technology makes the world more accessible to people like Dr. Pineda (and yours truly). Last December, Apple used its precious holiday ad space to highlight the hearing aid feature on AirPods Pro. My understanding is the ad, called “Heartstrings,” was the first time the company used an accessibility feature in the holiday campaign—and for good reason. It shows the profundity of assistive technologies truly being for everyone, with earbuds everyone uses every day. It’s a rare example of people being able to have their cake and eat it too.
So yeah, Type to Siri is highly significant—especially so, again, in a TV commercial.
Max Says ‘Sinners’ Will Stream in Black American Sign Language in ‘Groundbreaking’ First
In a press release published on Monday, popular video streamer Max announced the Ryan Coogler-helmed movie Sinners will stream in Black American Sign Language (BASL), alongside the original version, in the United States on Independence Day this Friday, July 4th. Max touts the release as “groundbreaking” while noting it “marks the first time a streaming platform will exclusively debut a film interpreted in BASL.”
The BASL interpreting of Sinners will be done by Nakia Smith. A trailer is on YouTube.
“The release of Sinners with BASL is a major step forward in accessibility, representation, and visibility in streaming. BASL is a distinct dialect of American Sign Language (ASL) with its own dynamic history and unique grammar, signing space, rhythm, facial expressions, and cultural nuances,” Max said in its announcement. “For the first time, the Black Deaf community will have streaming access to a more immersive experience in their language. Max subscribers, who sign in ASL but are unfamiliar with this dialect, will also be able to follow along with this interpretation.”
Max, like its peers Apple and Netflix, reaffirmed its support of disabled people.
“Accessibility within streaming is not a one-size-fits-all approach. Our goal at Max is to make these great stories accessible to all audiences in a way that is authentic to the content and the communities we serve,” Naomi Waibel, senior vice president of global product management at Warner Bros. Discovery, said in a statement included in Max’s press release. “Sinners with Black American Sign Language is an example of how culturally nuanced access can enrich the viewing experience for our audiences.”
Today’s news comes after Max announced a similar initiative this past March to stream The Last of Us in ASL. Elsewhere, the National Hockey League, which has a TV deal with Warner Bros. Discovery-owned TNT, has aired highly successful “NHL × ASL” broadcasts in partnership with PXP, which have proven so popular with fans that the league has earned Sports Emmy nominations for its work in furthering accessibility and inclusivity.
Max announced in May it’ll mercifully rebrand itself (back) to HBO Max “this summer.”
Accessibility Amplifies Apple Music’s First Decade
Apple on Monday issued a press release in which the company celebrates Apple Music’s 10th birthday by sharing some big announcements about the service. The headliner is a new three-story campus based in Culver City, which Apple says sprawls more than 15,000 square feet and houses two radio studios and a 4,000 square foot soundstage. The new campus is designed to “anchor a global network of creative hubs” in other cities such as Berlin, Nashville, New York, Paris, and Tokyo, according to Apple.
“Apple Music Radio has always been a home for storytelling and artistry, serving as a space for bold conversations and surprising moments,” Rachel Newman, co-head of Apple Music, said in a statement included with today’s announcement. “With this new studio, we are furthering our commitment to creating a space for artists to create, connect, and share their vision.”
Amongst the other news is the advent of a playlist Apple calls Replay All Time, which the company describes as “a special version of the annual Replay experience that allows listeners to see and stream the songs they’ve played the most since joining Apple Music.” Replay All Time can be found in the Home tab in the Music app, Apple said.
As with Apple Podcasts, Apple and music have been constants in my everyday digital life since getting my first-ever Apple product, the original iPhone, in 2007. Until Apple Music debuted in 2015, I spent a lot of money buying songs and albums in iTunes and synced my music via cable to myriad iPods and iPhones of various vintage. Those purchases remain available to me today, of course, along with the streaming content Apple Music provides. From an accessibility standpoint, the “all-you-can-eat” model of streaming Apple Music is great because I no longer need to fiddle with a physical cable to sync data, which involved somewhat tortuous tests of my lackluster hand-eye coordination. It’s easier on my wallet too, since I needn’t budget money for individual albums from my favorite artists. Likewise, that the iPhone subsumed the iPod’s functionality—the Music app in what was then known as iPhone OS 1 was literally named iPod—means I have an “all-in-one” device and needn’t carry a separate iPod along with a cell phone, a setup I contemplated prior to getting the iPhone. It’s more accessible for me to carry one small object than two (admittedly small) objects, especially when you have relatively compromised muscle tone in your hands and less strength overall to accommodate weightiness. Moreover, from a software standpoint, it’s meaningful how the Music app supports accessibility features such as VoiceOver, Dynamic Type, and most recently, Music Haptics. It makes the listening experience more accessible—and thus more enjoyable—that I can follow along to the words in a song in the app’s Lyrics View on the Now Playing screen, for example. And once again, with the nature of streaming and technologies like iCloud sync, I can move from my iPhone to my iMac and beyond and have all my music ready when I’m ready to listen.
I look forward to using Apple Music into 2035 and beyond.