Steven Aquino

Apple Marks Global Accessibility Awareness Day With Preview of Accessibility Nutrition Labels, Magnifier for Mac, More Forthcoming Features

Ahead of Global Accessibility Awareness Day (GAAD) later this week, Apple on Tuesday announced what it calls “powerful accessibility features” for its expanse of operating systems. The company says the new software is slated for release “later this year,” but it doesn’t take a Kremlinologist to surmise these looming enhancements are destined for iOS 19, visionOS 3, et al., when the updates ship in the fall. Notably, the 2025 edition of Apple’s annual GAAD announcement is special: this year marks 40 years of the company building assistive technologies for its disabled customers.

“At Apple, accessibility is part of our DNA,” Tim Cook, Apple’s chief executive officer, said in a statement. “Making technology for everyone is a priority for all of us, and we’re proud of the innovations we’re sharing this year. That includes tools to help people access crucial information, explore the world around them, and do what they love.”

There are two headliners this year: Accessibility Nutrition Labels and Magnifier for Mac. For the former, the Nutrition Labels are effectively identical in concept to the privacy labels currently on the App Store; Apple says the accessibility labels are designed to give users “a new way to learn if an app will be accessible to them before they download it, and give developers the opportunity to better inform and educate their users on features their app supports.” The Accessibility Nutrition Labels will be available on the App Store worldwide, with Apple noting “developers can access more guidance on the criteria apps should meet before displaying accessibility information on their product pages.” As to the latter, the longstanding Magnifier app for iOS and iPadOS is making its way to macOS this year. Its inspiration is clear: Apple essentially took the building blocks of Continuity Camera on iOS and macOS to make Magnifier for Mac. The company boasts the feature will be a boon to people with low vision (like yours truly), helping them understand the physical world more accessibly. It’s one thing to describe it, but it’s another thing entirely to see it; to that end, Apple has made a video showing a person with albinism using Magnifier for Mac, with their iPhone clipped to their MacBook’s display, taking notes in a college classroom during a lecture. Magnifier for Mac integrates with another new feature this year, called Accessibility Reader, which, with Magnifier, will “[transform] text from the physical world into a custom legible format.”

The advent of Accessibility Nutrition Labels is a huge step toward facilitating greater awareness of—and accountability for—accessibility and the disability community writ large. It’s a sentiment shared by American Foundation for the Blind president and chief executive officer Eric Bridges, who’s quoted in Apple’s GAAD press release.

“Accessibility Nutrition Labels are a huge step forward for accessibility,” he said in a brief but pointed statement. “Consumers deserve to know if a product or service will be accessible to them from the very start, and Apple has a long-standing history of delivering tools and technologies that allow developers to build experiences for everyone. These labels will give people with disabilities a new way to easily make more informed decisions and make purchases with a new level of confidence.”

Elsewhere, Apple announced Live Captions are coming to Apple Watch, Zoom is coming to Vision Pro, a new Name Recognition feature, and much more. Beyond the forthcoming software updates, the company is also celebrating GAAD with accessibility-oriented material across its plethora of retail and digital properties. For instance, the company has shared a behind-the-scenes look at the new Apple TV+ film Deaf President Now. The documentary, out this Friday, chronicles the 1988 student protests that compelled Gallaudet University to appoint its first-ever Deaf president despite the school having existed for more than 120 years at that point. Last week, I posted an interview with Nyle DiMarco and Davis Guggenheim, who told me all about the film and the protests’ deeply felt cultural significance to the Deaf and hard-of-hearing community.

My friend Stephen Hackett shared an especially astute, and poignant, perspective on today’s accessibility news out of Apple Park. He writes on his 512 Pixels blog that “in a timeline where a lot of folks have …complicated… feelings about Apple, seeing the company continue [improving] access to technology for everyone is still great.”

Apple is neither above criticism nor a monolith, but there are plenty of people inside the company who care deeply about this work. Sarah Herrlinger is one of them.

“Building on 40 years of accessibility innovation at Apple, we are dedicated to pushing forward with new accessibility features for all of our products,” said Herrlinger, who serves as the company’s senior director of global accessibility policy and initiatives. “Powered by the Apple ecosystem, these features work seamlessly together to bring users new ways to engage with the things they care about most.”

Somewhat fortuitously, today’s news out of Cupertino is complemented by a report from Rolfe Winkler of The Wall Street Journal that Apple is reportedly hard at work developing brain-computer interfaces, or BCIs, to assist people with motor disabilities. In his own story on Winkler’s reporting, 9to5Mac’s Chance Miller notes researchers believe BCIs have profound potential to “revolutionize” the use of computers by people coping with ALS, for example. Miller also writes Apple is expected to “add broader support for BCIs” to its longstanding Switch Control feature at some point this year.

Steven Aquino

How ‘RoboGobo’ Puts Limb Loss in the Limelight

Back in March, I wrote about the Apple TV+ children’s series Wonder Pets: In the City and interviewed its creator, Jennifer Oxley. As I wrote, the show chronicles the adventures of a group of classroom pets in a New York City school who, when school is out and night falls, morph into heroes and travel the globe in their “Jetcar” to rescue their fellow animals—all the while singing in operatic style. In a nod to disability inclusion, Oxley told me that, among other things, two of the show’s characters, a snake and an elephant, cope with a limb difference and a visual impairment, respectively.

I bring up Wonder Pets: In the City because my mind immediately went to it when I was approached earlier this month about covering RoboGobo. The Disney Jr. series, created by Chris Gilligan and streamable on Disney+, has a similar conceit to Wonder Pets: In the City insofar as it’s about a group of heroes—some of them coping with disabilities—banding together to solve problems using robotics. Disney describes RoboGobo as “superheroes who fight villains and rescue pets in peril,” led by boy genius Dax. Disney seemingly has the trademark cornered on the “rescue pets who rescue pets” slogan for RoboGobo, but it’d befit Oxley’s Wonder Pets crew just as aptly in a conceptual sense.

“We were working from this notion that aspirational heroes can come in all shapes, sizes, colors, and abilities,” Gilligan said about RoboGobo in a recent interview with me done via videoconference. “The decision to give [Dax] a limb difference was organic. It was one of these things like, ‘Why not?’ Why wait for a specific episode to introduce a character with disability? Why not have the protagonist be [disabled]? It just felt right for the story. It also felt like something that hadn’t been explored before in a way I was looking to do with this show. The possibilities are exciting as a result of that [choice].”

Dr. Nava Silton, a psychology professor at Marymount Manhattan College who worked on RoboGobo, explained she “really loves” how the show’s central figure, the aforementioned Dax, has a limb difference. That the main protagonist has a disability, she added, is “such a boon for representation” because more often than not, characters with disabilities are relegated to ancillary status in terms of storylines and general visibility. Dax being the focal point of RoboGobo, Dr. Silton said, is of crucial import precisely because it bucks convention; Disney was highly intentional in not only using sensitive language, but in its pursuit to ensure authenticity regarding the look and feel of Dax’s prosthetic arm. Even the stump and, impressively, the gait of Joe the Jaguar were considered in an episode, “Jumpin’ Jaguar,” in which he’s scared to get down from a tree.

“I thought all those things were really special with this particular series,” Dr. Silton said.

Piggybacking on Dr. Silton’s sentiments, Gilligan explained to me the creative team went so far as to source footage of actual jaguars living with limb differences in order to animate their physical traits and movements properly for the show. They found one of a cub, which matches up demographically with Joe’s character, with Gilligan saying, “we made sure we got that all right… all those specific [details]. We always checked to make sure ‘Are we doing this right? Are we representing this correctly?’”

“It was done in such a nice, comprehensive, and sensitive way,” Dr. Silton added.

Dr. Silton prepared a two-hour educational presentation on limb difference for roughly 18–20 people in advance of production. She said “they all sat there and paid attention” to the myriad ways people use prosthetics, and which situations demand which tools. Attendees, she said, “asked such insightful questions,” and she consulted with Gilligan extensively throughout the production process. It was imperative for Dr. Silton and the team to get Dax’s prosthetic arm as correct as possible; the primary goal was to make it “look as much like a prosthetic, but also something toyetic that can be used in terms of consumer products or those types of things,” she said.

Dr. Silton continued: “[We tried] to ensure we could have a powerful story while also representing the authenticity of limb difference and to model sensitivity for typically-developing individuals or others who might know a bit less about limb difference.”

For his part, Gilligan heaped praise on Dr. Silton, telling me the team feels “very lucky” to work with her. She not only is a subject matter expert, he said, she’s a “storyteller and creative person.” Moreover, Gilligan emphasized that while the representational aspect of RoboGobo is a focal point, so too is remembering the essence of the show’s existence lies in its entertainment value. He recounted being inspired by something he read in a 2010 National Geographic piece about prostheses and neural impulses. (The latest episode of 60 Minutes features a similarly fascinating story from Anderson Cooper.) According to Gilligan, he wanted to “do something interesting like that” in RoboGobo, and he cited Dax’s prosthetic having a flip-up mechanism for shooting his robo-discs as a manifestation of said desire. All told, Gilligan called these creative elements the end result of deep collaboration between innumerable people; it was work that put the team in “a great place”—and uniquely so.

"it was special… it took wonderful teamwork,” Dr. Silton said of the efforts.

When asked about corporate support, Gilligan told me Disney has been “1000% supportive” of him and the rest of the RoboGobo team. He said the company’s been “extremely excited” about the show, adding “I think [Disney] loved the initial thinking from the get-go… along the entire journey, they’ve been nothing but supportive of us.”

“They’re into it,” Gilligan said glibly of Disney’s reaction towards RoboGobo.

Of course there’s an entertainment element to it, but Gilligan and Dr. Silton told me they hope RoboGobo carries with it an undercurrent of the importance of empathy, perseverance, and open-mindedness in its messaging. They want children—and their families—to see everyone is literally built differently, with Dr. Silton wanting audiences to “not be afraid” of the challenges they face in life. Such an idea sits at the core of Dax’s outlook on life and in the messaging he tries to convey to his cohorts.

“We wanted to develop empathy,” Gilligan said.

Beyond Disney, the response to RoboGobo has been positive. Dr. Silton said the most striking piece of feedback has to do with people expressing gratitude for putting disability at the forefront instead of obscuring it. She also pointed to children role-playing Dax during pretend play and using various props for his prosthetic, which not only illustrates empathy but also is something Dr. Silton found “extremely heartwarming.” She recalled having a conversation with some children about the “Take a Leap and Try” song and what it means to them. The engaging responses from the young children, Dr. Silton told me, “was such a wonderful way to start an exciting conversation about all of us coming into the world with strengths and challenges and, no matter what we have, we all have that opportunity to try and reach our goals, even if they’re hard [and] even if they’re difficult. It was an exciting jumping-off point for a wonderful conversation.”

“Whether individuals with limb differences or typically-developing—or hopefully both—I think everyone’s getting something special [by watching RoboGobo],” Dr. Silton said.

As to the future, Gilligan and Dr. Silton expressed similar sentiments about being proud of the work each has put into making RoboGobo a reality. Both are especially proud of the representational gains, and both want to keep the show going as long as possible and keep telling more stories. The future is bright with possibility for Dax and friends—with Dr. Silton saying she would love a movie version to happen someday.

“[RoboGobo] sets the stage beautifully to show the world that it’s not only about incorporating disability into your show—it’s about making the main protagonist present with that disability,” Dr. Silton said of the show’s impact on viewers. “[The character] could really be an incredible anchor for a wonderful show—a show that really has tremendous take-home [lessons] for kids and adults alike.”

Steven Aquino

Upcoming ‘Deaf President Now’ Movie Offers Poignant ‘Snapshot’ of Deaf History, Culture

When I asked Nyle DiMarco to share his thoughts on the forthcoming Apple TV+ documentary Deaf President Now in an interview earlier this week, the film’s producer didn’t shy away from extolling its deep cultural resonance to Deaf people everywhere.

“This [event] really is one of the most important civil rights movements of our time, and we definitely don’t want to let this critical history be erased,” DiMarco said. “This is something everyone should know about, and I think the idea of the film is to inspire a younger generation—the teens of today—to feel empowered [and] to take charge, and to take their power back and be proud of their identity, as well as to push for more Deaf representation. Gallaudet University set us up well for that; not only that, but I think staying strong in protest… this is a protest that gave rise to the passage of the [Americans with Disabilities Act].” (The ADA would be signed into law in 1990.)

Deaf President Now, out a week from today, May 16, and running close to two hours, chronicles what Apple describes as “eight tumultuous days in 1988” at the aforementioned Gallaudet during which four students led a revolution that would change the course of history. The school describes the so-called “DPN protest” as a “watershed moment” that led to the appointment of the first-ever Deaf president in school history. The 🤯 emoji is apropos here: an institution devoted to the higher learning of Deaf and hard-of-hearing people, then 124 years old, had to resort to protest in order to install a leader who looks like its students—someone who’s part and parcel of them. DPN, Gallaudet writes, has since become “synonymous with self-determination and empowerment for Deaf and hard-of-hearing people everywhere.” The DPN protest began on March 6, when a hearing woman named Elizabeth Zinser was named Gallaudet’s 7th president over two Deaf candidates, Irving King Jordan and Harvey Corson. Jordan would eventually assume the presidency following the protests.

DiMarco expounded further on the motivations behind making Deaf President Now, telling me when he broke into Hollywood a decade ago, there existed no Deaf people working as directors, producers, and/or writers. For too long, DiMarco added, the stories of his community were “co-opted” by the hearing; he was hellbent on changing that narrative. It was a feeling which compelled him to start his own Deaf-led company. Its North Star, he explained to me, would be to always “[tell] those stories [by] empowering Deaf creatives to get behind the camera and share their voice.”

“Growing up, I saw Deaf characters and storylines represented in TV and media, and while it was good to have representation, it was never done authentically,” DiMarco said. “I think Hollywood didn’t understand what the Deaf experience could look like, because we weren’t invited into those rooms where those stories were being crafted. The first project I wanted to work on was Deaf President Now. This was more than just the appointment of a Deaf president to the university—this was a way to show the world that we’re a community… we have a language and what all of that [culture] looks like.”

Deaf President Now director and producer Davis Guggenheim, who’s hearing, admitted he didn’t get Deaf culture, nor was he aware of the history of the DPN protest—ironic for someone who majored in history in college. What’s more, Guggenheim worked at ABC News’ Nightline in 1986, during the show’s heyday, when Ted Koppel anchored the broadcast.

“I’m embarrassed to say I didn’t know about this [DPN protest] story,” Guggenheim said. “It’s meaningful to me… I’ll never fully understand Deaf culture, so it was a privilege to be invited [to make Deaf President Now] by Nyle to tell this story together.”

Guggenheim spoke with humility about the film’s meaning for the masses.

“We both realized the story must be understood by both audiences: a Deaf audience and a hearing audience, and that the neglect [of the history comes] from the hearing side,” he said. “If you’re Deaf, most people know this story. If you’re hearing, most people don’t know this story. For me, [working on Deaf President Now] was correcting history.”

Guggenheim went on to say that, given the present-day unrest over our country’s Constitution and values system, and the ever-hostile sociopolitical climate, the DPN protest beautifully illustrates the notion that “collective action can work big,” adding “young people can organize, work together and succeed in protest.” He also stressed his belief that the film, and protest generally, are both things “we need right now,” and said audiences hopefully feel similarly when they watch Deaf President Now.

When pressed on his aforementioned ignorance regarding the DPN history, Guggenheim acknowledged having good intentions “isn’t enough.” Before making Deaf President Now, he felt complacent in the notion that he was a good, well-intentioned human—but said his prior attitude was “wrong-headed” in retrospect. These lessons extended to the production work on the film, too, with Guggenheim saying it was DiMarco who (kindly) checked him, and offered to teach, when his good intentions ran amok. However “gracious and generous” DiMarco was, the experience was an enlightening one for Guggenheim—it helped him confront his prior ignorance.

For his part, DiMarco told me Apple TV+ always felt like “the obvious home” for Deaf President Now. He said the company “has always been on the forefront of accessibility and empowering the Deaf and disabled community to tell their own stories and to be independent.” Apple, he reiterated, was “a perfect home” for DiMarco and team.

“We’re incredibly thankful to [Apple] for championing this beautiful film,” he said.

When asked about pre-release feedback on Deaf President Now, DiMarco excitedly noted it’s been nothing but positivity all the way down. He said the most crucial response is that of his own community, telling me the people he’s spoken to are confident Deaf President Now will have “great influence.” DiMarco further noted he feels “very lucky” to be able to treat this history lesson with the utmost respect possible. Deaf President Now has a timeliness to it, he added, telling me emphatically “I think a lot of our Deaf audience is feeling very inspired [and] ready to take up the fight again.”

“I don’t think we’ve [communally] ever been as proud to be Deaf as we are in this moment,” DiMarco said. “That means everything to me.”

Guggenheim expressed gratitude for the ability to make documentaries, telling me the greatest thing about them is they intimately take everyone—filmmakers and viewers alike—into worlds unknown. He pointed to a shot at the beginning of the movie in which the camera moves from a noisy ambient environment through Gallaudet’s open gates, where everything suddenly is “more quiet.” However quiet, Guggenheim emphasized, Gallaudet is “rich and beautiful and complex.” The wider world deserves to know that.

“It’s such a great job I have [as a director and producer] to be able to open my mind and heart to something new,” Guggenheim said.

Both DiMarco and Guggenheim said they wish for everyone to watch Deaf President Now and educate themselves by getting glimpses of the Deaf and hard-of-hearing community and, pointedly, of its unique culture. That’s what DiMarco said he and his creative team are “pushing for.” He added Hollywood is rife with umpteen remakes that bear little originality, saying Deaf President Now’s approach is “totally new.” Guggenheim concurred, saying to me “we love [the movie] and want people to see it.”

As to the future, DiMarco said he aspires to build on Deaf President Now. Significant though it is, he said the DPN movement really is “a snapshot in history” and there are many more stories to tell. DiMarco dreams of inviting more hearing people to learn about the Deaf world, whether through more documentaries, scripted series, or other opportunities that come his way. “I hope we can continue to tell incredible Deaf stories and bring Deaf creatives behind the camera, making our history known [and] making this more than a moment, but a movement that can’t be stopped,” he said.

Guggenheim is down to ride with DiMarco on the journeys to follow.

“What I’m learning now is it’s not enough to make a film,” Guggenheim said. “You have to figure out how to carry [Deaf President Now] so everyone can see it. I believe it’s a beautiful film and hope people see it and be moved, but not everyone’s going to know about it. Not everyone’s going to see it. I want to do everything I can to get [people to] see this movie, because Nyle is a big champion of it. I want this movie seen far and wide.”

Steven Aquino

Gallaudet University, Coke Team Up for New ‘We Want to Teach the World to Sign’ Ad

Gallaudet University president Roberta Cordano posted on LinkedIn this week about a newly released ad campaign created in collaboration between the school and Coca-Cola. The nearly 90-second spot is available to watch on Gallaudet’s YouTube channel.

I’ve embedded the ad below. As I wrote on X, the spot warms my CODA-filled heart.

Gallaudet, established in 1864 amidst the Civil War and based in Washington, D.C., is the world’s first and only collegiate institution devoted exclusively to Deaf and hard-of-hearing people. (Gallaudet does admit a small number of hearing students—most of them CODAs—as the proverbial exceptions to the rule.) Despite its preeminence, it’s worth noting Gallaudet was founded under the horrifically ableist and dehumanizing name “National College for the Deaf and Dumb”; it took its current name in 1894.

I’ve covered Gallaudet at close range several times in the last few years. Its football team, the Bison, plays with adaptive helmets developed with 5G technology from AT&T that house a digital display through which the quarterback receives play calls from coaches on the sideline. As to Cordano, I profiled her back in 2022 for my old Forbes column, writing about her career and purview of the school. Cordano, known as “Bobbi” by those who love her, is noteworthy for being Gallaudet’s first Deaf woman, and first member of the LGBTQ community, to assume the perch in the university’s catbird seat.

Steven Aquino

The Story of How a Mass Shooting’s Tragedy Is as Much About Accessibility as Humanity

As an avowed news nerd, particularly for local news and PBS, I got a huge, if bittersweet, thrill late last year when folks associated with FRONTLINE, one of my favorite shows, contacted me to discuss their coverage of the October 2023 mass shooting in Lewiston, Maine. The shooting, which killed 18 people and wounded 13 more, is notable not merely because it was yet another mass shooting, but because it’s believed to be the largest mass shooting involving members of the Deaf community in American history.

My pals at FRONTLINE ran a documentary back in December called Breakdown in Maine about the Lewiston shooting, perpetrated by a former Army Reservist. The film examined not only the shooter’s brain injury that spurred the violence, but also how difficult it was for the Deaf and hard-of-hearing community in the small New England town to receive crucial updates on the incident. FRONTLINE joined journalistic forces with the Portland Press Herald and Maine Public to produce the 54-minute film.

Erin Texeira is the senior editor for FRONTLINE and leads its local journalism initiative. In an interview, she explained her team works with 3–5 local news outlets per year on “deep investigative projects” that otherwise wouldn’t be tackled due to a scarcity of resources. She reiterated the Lewiston shooting is regarded as the largest mass casualty event to hit the Deaf and hard-of-hearing community in American history, adding a large focus of the teams’ coverage centers on “the struggles for accessibility and emergency services and basic information and support in the aftermath of the shootings.” As everyone considered the content, Texeira said, it became clear the reporters could do justice to the victims, and the audience, by thinking about how “we could bring this work to the very community [Deaf people in Lewiston] we were reporting on.”

Raney Aronson-Rath, editor-in-chief and executive producer of FRONTLINE, told me in an interview concurrent with Texeira’s that the Lewiston massacre coverage comes in two parts: the aforementioned film and a podcast. One of the most personally striking things about the shooting for Aronson-Rath was hearing Deaf and hard-of-hearing people in the community express exasperation and frustration over the aftermath; she told me their biggest beef was with how hard it was to communicate with first responders and other emergency personnel. Aronson-Rath called this a critical part of the story; in the spirit of the press holding people accountable, she said it was imperative to find out what the “actual breakdowns” were in the shooting’s wake such that these disabled people couldn’t reliably get the information they needed and were entitled to know. In a show of empathy and earnestness for inclusivity, Aronson-Rath said her team didn’t want to exclude the Deaf community from their reporting; in other words, they didn’t want to leave them hanging communicatively as they were the night of the incident. To that end, the FRONTLINE team enlisted Donna Danielewski and her team at Boston-based GBH to help.

Danielewski, who serves as the station’s executive director of accessibility, said the Deaf community in Lewiston is relatively small, and the journalists had to approach the interview process with delicacy because a bunch of strangers were essentially asking people to relive their trauma by retelling it. GBH, according to Danielewski, “has a strong history of media accessibility,” but her team still learned “a lot” during the production process. She and her charges wanted to represent these people in a thoughtful way, which Danielewski said required overcoming some “new hurdles” along the way.

Aronson-Rath said the director of Breakdown in Maine, James Blue, spent time in Lewiston and with some of the victims’ families. She saw video of people talking, with ASL interpreters in tow, and approached Texeira with the idea that Lewiston could make for one of the local journalism projects. It represented a “powerful moment,” Aronson-Rath said, to expose the general public to a side of a mass shooting—disability and accessibility—that oftentimes isn’t told yet should be. Texeira concurred, saying Aronson-Rath “made a great case,” and noted all the legwork came together “organically” to, in the end, produce something Texeira proudly called “a beautiful episode” of FRONTLINE.

As to the local boots on the ground, Maine Public deputy news director Susan Sharon told me she and her colleague, reporter Patty Wight, both based in Lewiston, attended shooting-related press conferences, known as “pressers” to those in the news business, and noticed “early on” that Deaf people were in the audience trying to get others to interpret for them. Like the FRONTLINE crew, Sharon and Wight saw the parallels between the shooting itself and the collateral damage it did to a group of people normally overlooked. Sharon called it “a learning experience” for everyone involved in her newsroom, telling me Lewiston has a small but close-knit Deaf community. The tragedy, horrific though it was, exposed another tragedy: the lack of access Deaf people had to timely, reliable information. Sharon called it “an education for us.”

For her part, Wight said four people from the town’s Deaf community were killed, which stood out because it “seemed like a lot.” She recounted one press conference in particular during which an ASL interpreter had to stand atop a desk because officials were showing maps of the search area for the shooter; the interpreter was “frustrated” at trying so hard to get the information conveyed to the Deaf members of the audience. Wight talked to the interpreter following the presser and was told the Deaf community was “desperate” to get any and all information from authorities. They felt left out, she added, which is what piqued the interest of Wight and her team. There have long been systemic issues in effective communication between the predominantly hearing public and the Deaf and hard-of-hearing community and, according to Wight, the shooting magnified issues that, despite festering for eons, have gone ignored.

Deaf family members, already on pins and needles with anxiety, had to wait a long time to get any confirmation of a loved one’s status. Not only did the relay of information itself take time, but so did the logistics of getting an interpreter in place. On top of that, everything had to be vetted for accuracy to ensure nothing was lost in translation. Notably, English is not the primary language for the majority of Deaf people in this country, whether in Maine or Montana; American Sign Language is. Yet ASL interpreters unfortunately and frustratingly “weren’t always visible” during the slew of news conferences, Wight said.

Captions, she added, “didn’t cut it” in terms of accuracy. These people needed ASL.

Accessibility-wise, Maine has a shortage of ASL interpreters. In general, it isn’t the Deaf or hard-of-hearing person’s responsibility to supply their interpreter; the onus should fall on the institution or agency. The problem is, of course, most people don’t realize this. Wight went on to tell me many of the Deaf people she interviewed in Lewiston lamented this issue, noting, again, English is decidedly not their native language. “I think people in the community are frustrated there’s so little knowledge about their needs,” Wight said. “These issues are not new. They’ve been going on for decades.”

There’s been a learning curve for news stations and officials at press conferences to, for instance, make concerted efforts to feature ASL interpreters prominently in camera shots so Deaf people can see them. The shooting, Wight and her colleagues told me, has had the byproduct of raising awareness of the importance of accessibility—especially with more training for people staffing hospitals, law enforcement, and other agencies. Aronson-Rath told me many people at hospitals used an iPad to communicate with staff.

As I went through my own interview process, what struck me most about the conversations I had with my media industry peers is the subtext behind Breakdown in Maine. To wit, of course there will be discussion of the shooter and their motivation(s), especially in terms of mental health. Of course there’s a story about equity of access for a group of disabled people. Even greater than that, however, is the story of awareness. These are an (admittedly small) sample of my peers, living 3,000 miles away from me, who learned a helluva lot themselves about a segment of the disability community and, consequently, about accessibility. This is notable because it means awareness goes both ways; speaking truth to power is one thing, but the journalists here holding themselves to account by earnestly wanting to learn and do right by authenticity and empathy is quite another. As someone 12 years into this news racket, perpetually lamenting the lackluster treatment of the disability community by the mainstream media, to hear people of my own professional kind speak with such humility was damn refreshing. My beat may be technology, but it’s stories like Breakdown in Maine that give me hope the able-bodied masses are—finally!—starting to give people like me, who exist at the margin’s margin, our overdue credit. “Nothing about us without us,” indeed.

The “ripple effects,” as Danielewski described them, are many—and benefit everyone.

"I think conversations that are now happening at FRONTLINE are different… that always happens [here],” Rath said of the many lessons learned by working on Breakdown in Maine. “The more we do and the more we innovate, the more we realize we need to do and innovate. It’s absolutely super inspiring doing this and the learning curve was steep for everybody who worked on it, other than probably [Danielewski] and her team, for those of us at FRONTLINE. It’s great for us as a national series, with so many people who watch us and see us, to be thinking actively [about communicative equality] like this.”

“It’s been an expansive thinking moment for us in public media around our reporting,” said Mark Simpson, Maine Public’s director of news and public affairs.

The work GBH, Maine Public, et al., did for their story, Danielewski told me, “builds on itself” until “our few snowflakes turn into a pretty impressive snowman.”

My conversations for this story occurred several months ago, ahead of Breakdown in Maine airing on TV. But today comes an announcement from FRONTLINE that it has released ASL-interpreted videos of the “Breakdown: Turning Anguish into Action” podcast series, which complements the film. In its press release, FRONTLINE says it worked with captioning and sign language services company Partners Interpreting on the project, noting recording took place at GBH’s studios. At its core, the work turns what’s ostensibly an exclusionary medium for those with little-to-no hearing into something accessible and inclusive—especially poignant given the immense popularity of podcasts nowadays. That “Breakdown” has an ASL component means that, despite the emotional subject matter, everyone can experience it.

“Podcasts are a vital and growing part of the American news ecosystem, yet they remain mostly inaccessible to the millions of Americans who are deaf and hard of hearing,” Texeira said in a statement for the announcement. “We’re proud that, through innovative storytelling, we can now bring this important investigative project to all audiences. And we hope this inspires much more accessibility in journalism.”

It’s worth mentioning the reaction to the Lewiston shooting has parallels to the inaccessibility of California’s Text-to-911 service. My friend Candice Nguyen, an investigative reporter at NBC Bay Area, wrote about this issue in 2023. She wrote, in part, that the ostensibly convenient service “disproportionately” impacts those in the Deaf community, as well as victims of violence. The fundamental issue is the 911 system is architected around voice calls—which are incongruent with the needs of many Deaf people. Both the Lewiston shooting and Text-to-911 are prime examples of the exclusionary machinations of the emergency response system. If you’re someone who can’t hear or speak—or, in my case, speaks with a speech disability like a stutter—emergencies are prone to be even more stressful because getting help is inaccessible. Put more cynically, the breakdowns in access highlight how society is unbuilt for the disabled.

Audio of “Breakdown” is available on Apple Podcasts and Spotify. Video is on YouTube.

Steven Aquino

Google’s New ‘Simplify’ Tool Makes Reading Comprehension on the Web More Accessible

A report from Abner Li at 9to5Google this week brings news that Google has added a new feature to its iOS app designed to simplify dense verbiage. The tool, befittingly called “Simplify,” is available for people to use when they come across complex language.

“When viewing a Search result or Discover article in the Google app, highlighting text that ‘uses jargon or technical concepts you’re not familiar with’ will reveal a new ‘Simplify’ option in the ‘More actions’ panel (alongside Search and Translate),” Li writes in describing how Simplify works. “This opens a sheet with a ‘new, simpler version of the text, helping you quickly understand a new concept so you can keep reading.’”

Google’s Simplify functionality is built atop Gemini 1.5 models, which Google describes as “specifically designed for minimally-lossy (high-fidelity) text simplification.” Li notes the company isn’t merely summarizing or explaining; rather, the software’s job is to clearly and concisely convey ideas without errors or omissions. Moreover, Li writes Google conducted research that found simplified text proved “significantly more helpful” than the original version. Google tested Simplify across numerous domains, including aerospace, finance, law, literature, and medical research.

Li’s story gives an example of Simplify at work with biomedical text (emphasis Li’s):

  • Original: The complex pathology of this condition involves emphysematous destruction of lung parenchyma, diffuse interstitial fibrosis, changes in the composition of lung immune cells, increased production of immunomodulatory factors and the prominent remodeling of pulmonary vasculature

  • Simplified: This complex condition involves damage to the lung tissue from emphysema, a disease that damages the air sacs in the lungs, and widespread scarring of the lung tissue, called fibrosis. The immune cells in the lungs change, and the body makes more immunomodulatory factors, substances that control the immune system. The blood vessels in the lungs also change a lot.
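
To make the mechanics concrete, here is a minimal sketch of how a developer might attempt this kind of high-fidelity simplification using Google’s publicly available Gemini SDK. To be clear, this is not Google’s production Simplify pipeline; the prompt wording and the gemini-1.5-flash model choice are illustrative assumptions on my part.

```ts
// A sketch only, not Google's actual Simplify implementation. Uses the
// public @google/generative-ai SDK; the prompt and model name below are
// illustrative assumptions.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

// Ask for plain language while forbidding summarization or omission,
// mirroring the "minimally-lossy" goal described above.
export async function simplify(text: string): Promise<string> {
  const prompt =
    "Rewrite the following in plain language. Keep every fact and " +
    "qualification; do not summarize or omit anything:\n\n" + text;
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```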

In an accessibility context, Google’s new Simplify tool should be a boon for those who cope with intellectual disabilities. The stripped-down text not only makes for easier reading in terms of cognitive load, it boosts comprehension because it’s plainspoken and unpretentious. These factors ultimately go a long way toward making Google Search more accessible when looking up information on the web. Relatedly, I’ve found Apple’s “Summarize” command within Safari not only to be spot-on in terms of accuracy, but also to provide cogent, easily digestible overviews of written work—including my own stories here. For all Apple Intelligence’s struggles over the last several months, the “Summarize” tool has worked impressively well in my (admittedly anecdotal) testing.

Steven Aquino

Netflix Unveils Substantial App Redesign, Calling It an ‘Innovative New TV Experience’

Netflix on Wednesday introduced a refreshed design of its television and mobile apps that the company says offers users “a simpler, easier, and more intuitive design,” with the goal of helping everyone “easily find something great to watch.” The Bay Area-based company’s work was described in detail by two executives: chief product officer Eunice Kim and chief technology officer Elizabeth Stone. “The new Netflix TV experience is still the one you know and love—just better,” Kim said.

Netflix’s design teams settled on a look that isn’t necessarily unique. The revamped user interface is anchored by a top row of tabs, with engaging visuals taking up most of the screen. Notably, there are callouts within title artwork conveying information such as “Emmy Award Winner” or “#1 in TV Shows.” Netflix also boasts that oft-used shortcuts like My List, heretofore “somewhat hidden,” now have more prominence, adding the design gives users robust real-time recommendations as well.

Overall, Netflix proudly touts its new homepage as featuring “a clean and modern design that better reflects the elevated experience you’ve come to expect on Netflix.”

On the mobile side, Netflix says it’s testing “a vertical feed filled with clips of Netflix shows and movies to make discovery easy and fun.” Users can tap to watch something immediately, add it to their watchlist, and/or share a link with family and friends. The vertical feed is reminiscent of how Instagram Reels or TikTok have classically worked.

Netflix has a video introducing its new homepage on YouTube (embedded below).

From an accessibility perspective, the redesigned homepage feels like a win. I’m especially heartened by the shortcuts—particularly to My List—as the current design involves a good amount of visual and mental gymnastics to find things at times. Likewise in terms of cognition, that the main navigation is positioned at the top of the screen gives users a better, more concrete understanding that you go to the top to move around and change views. Netflix’s redesign reminds me of what Amazon did to Prime Video last year. Both designs are conceptually similar, especially with menu items anchoring the top of the screen in the TV apps. Accessibility-wise, Netflix and Prime Video stand to make similar gains.

One bit of news related to today’s announcement: I asked Netflix for a status update regarding the short-lived integration with Apple’s TV app, and a spokesperson told me via email it indeed was “a bug.” Nonetheless, hope springs eternal—the code exists!

Netflix’s redesign will be rolling out worldwide “in the coming weeks and months.”

Steven Aquino

Inside the ‘Donkey Hodie’ Team’s Efforts to Go Even Harder on Disability Representation

I wrote last year about the PBS Kids educational game Cousin Hodie Playdate. The game, available on the network’s website and its games app, is designed to help young children develop their emotional intelligence by paying attention to social-emotional cues such as body language and verbalization. The title takes its source material from the canonical Donkey Hodie TV series, aimed at preschoolers.

Cousin Hodie Playdate is produced by Fred Rogers Productions and Curious Media.

Last month, PBS Kids introduced a new character to the Donkey Hodie ensemble: Jeff Mouse. He was born with congenital muscular dystrophy and uses an electric wheelchair to get around. Jeff Mouse was inspired by the real-life experiences of Jeff Erlanger, who, in 1981, appeared on Mister Rogers’ Neighborhood to sing a duet of “It’s You I Like” with Rogers. The eponymous Fred Rogers Productions tapped the team at nonprofit organization Disability Belongs to serve in an advisory capacity, with Jeff Mouse voiced by actor Jay Manuel. Manuel, who stars in Jay and Pamela on TLC, copes with Osteogenesis Imperfecta Type 3 (OI) and, like Jeff Mouse, uses a power chair for mobility. OI, colloquially known as brittle bone disease, is a genetic condition whereby bones easily fracture—oftentimes with no clear cause. Symptoms range from mild to severe, with the most extreme bringing myriad complications.

“We want to keep introducing characters who reflect and represent our audience who have different points of view and experiences. Understanding someone else’s point of view is part of building empathy, and older preschoolers are learning to do that through recognizing and naming not just their own feelings, but the feelings of others,” said Donkey Hodie co-executive producer Kristin DiQuollo in a recent interview with me conducted over email. “Introducing a character with a physical disability felt like something we could do thoughtfully and successfully with puppetry.”

DiQuollo explained the addition of Jeff Mouse “brings a unique perspective” to the show’s cast, adding “he helps show how we can do the same thing different ways.” She pointed to a line from Jeff Mouse in which he says, in part, “there are some things that I can’t do, but there are a lot of things I can do.” The line, DiQuollo told me, is a reference to a quote from Erlanger, who said “it doesn’t matter what I can’t do—what matters is what I can do.”

DiQuollo and her colleagues also worked alongside Samuel Krauss, who advised the team on building Jeff Mouse’s character—including biographical details such as being born with muscular dystrophy and needing a power chair to get around. Krauss’ input included giving consideration to Jeff Mouse’s movement in the show’s Someplace Else environment. Specifically, DiQuollo said Jeff Mouse has “global limb and trunk weakness,” and noted his wheelchair features a center-turning radius and smooth movement. Moreover, Disability Belongs connected the team with mobility company Permobil; according to DiQuollo, Permobil brought a demo wheelchair to the show’s Chicago-based art department, where the team created a chair for Jeff Mouse based on the model and on Jeff Mouse’s puppet. DiQuollo added Krauss and Disability Belongs helped “capture the whimsical nature of Someplace Else while also ensuring a relatable representation of a power wheelchair user.” Additionally, Manuel, DiQuollo said, spoke with Jeff Mouse’s puppeteer, Stephanie D’Abruzzo, so the pair could “talk through Jeff’s movements, like how it would look when Jeff’s chair goes over certain surfaces, or how he would move his arms.”

“The size and weight of Jeff Mouse and his wheelchair were designed to give the puppeteers the ability to make precise and realistic movements of the character,” said David Rudman, co-creator and executive producer of Donkey Hodie and co-founder of Spiffy Pictures, in a short statement provided to me for this story. “The performers operate all of the Donkey Hodie puppets with their arms raised and since the characters do not have a surface to stand on, we needed to ensure that we were able to move the wheelchair in a true to life way as if it were actually rolling on the ground.”

The work the team put in for authenticity’s sake reflects an ethos of inclusivity.

“Like all our pals do in other stories, Jeff leads the day’s adventure, and his ideas contribute to the team and help solve the problem at hand,” DiQuollo said. “He introduces the idea that they can do the same thing different ways, and in the end, that strategy is what helps them all climb the Rainbow Tree to find the hee-hee hider moth.”

She continued: “Our show celebrates friendship, joy, and what makes us unique. Jeff is the latest friend we’ve introduced who has a unique perspective and way of experiencing the world, which is true of all our characters.”

DiQuollo also shared a playful Easter egg: in the episode where Jeff Mouse appears for the first time, the accompanying music is an arrangement paying homage to the aforementioned “It’s You I Like,” the tune Erlanger and Rogers sang four decades ago.

DiQuollo told me bringing Jeff Mouse to life on Donkey Hodie took “many people” at Fred Rogers Productions and Spiffy Pictures. She keenly credited a laundry list of contributors: writers, post-production teams, and more. She called everyone involved with the program “thoughtful, creative, fun, funny, and deeply respectful of the world we’re building and the legacy we’re building on,” adding the cumulative efforts were integral to making Jeff Mouse “a joyful new part of our neighborhood, and I hope viewers love meeting Jeff and watching our show as much as we love making it.”

Donkey Hodie episodes with Jeff Mouse are available to stream now, free of charge.

Steven Aquino

Microsoft Announces New Developer Tool Meant to Make ‘a More Accessible Web’

In a post published on the Windows blog, Microsoft’s Evelynn Kaplan and Patrick Brosset shared news on Monday of a newly released developer tool called ARIA Notify. The public API is touted as “[making] web content changes more accessible to all users.” The tool is “an ergonomic and predictable way to tell assistive technologies (ATs), such as screen readers, exactly what to announce to users and when.”

ARIA Notify is available today as an Origin Trial beginning with Microsoft Edge 136. It can be used locally by enabling the --enable-blink-features=AriaNotify feature flag.

“ARIA Notify is designed to address scenarios where a visual change that’s not tied to a DOM [Document Object Model] change and not accessible to assistive technology users, happens in the page. Examples include changing the format of text in a document, or when a person joins a video conference call,” Kaplan and Brosset write of the API’s raison d'être. “Developers have come to rely on ARIA live regions as a limited workaround. ARIA Notify is an imperative notification API that’s designed to replace the usage of ARIA live regions, and overcome its limitations, in these scenarios.”

Kaplan and Brosset explain Microsoft’s motivation for releasing ARIA Notify stems from the challenges Blind and low vision people face in identifying user interface changes they didn’t initiate of their own volition. ARIA live regions, the post goes on to say, “are the only mechanism available today that communicates dynamic content changes to users of assistive technology.” The problem, of course, is these ARIA live regions are “tightly coupled” with DOM elements, predicated on the notion that visual changes happen within the UI of a webpage. Many changes, however, happen without the DOM being modified. Kaplan and Brosset cite several examples, including a user in a text-editing field pressing the Ctrl+B keyboard shortcut to embolden text. In this case, the person should still get audible confirmation of the event despite “no UI elements [being] used by the user, and the DOM didn’t necessarily change.”
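
For the curious, the pattern looks something like the sketch below. The shape of ariaNotify() follows Microsoft’s public explainer and may well change during the Origin Trial; the live-region fallback is the very workaround the API is meant to replace.

```ts
// Announce a non-DOM change (e.g., Ctrl+B toggling bold) to screen
// readers. ariaNotify()'s shape follows Microsoft's explainer and may
// change; running it locally requires Edge 136+ with the
// --enable-blink-features=AriaNotify flag (or the Origin Trial).
function announce(editor: HTMLElement, message: string): void {
  const el = editor as HTMLElement & { ariaNotify?: (msg: string) => void };
  if (typeof el.ariaNotify === "function") {
    el.ariaNotify(message); // imperative: tell assistive tech directly
    return;
  }
  // Legacy fallback: an off-screen ARIA live region. Live regions only
  // announce on content changes, so the text is set after insertion.
  const region = document.createElement("div");
  region.setAttribute("aria-live", "polite");
  region.style.position = "absolute";
  region.style.clipPath = "inset(50%)"; // visually hidden, still read
  document.body.append(region);
  requestAnimationFrame(() => { region.textContent = message; });
}

// Usage, e.g., in a custom editor's keydown handler:
// if (event.ctrlKey && event.key === "b") announce(editor, "Bold on");
```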

Microsoft posted an ARIA Notify explainer and encourages feedback via its GitHub repo.

Steven Aquino

How Mobile Apps Are ‘Failing’ Users With Disabilities and Why Accessibility Matters

Last month, software development firm ArcTouch released the findings of a study examining the state of accessibility in mobile apps. The report, called the State of Mobile App Accessibility Report (SOMAA), is described by ArcTouch as an assessment of “the accessibility of 50 leading Android and iOS apps across five industries: Food & Delivery, Payments, Fitness, Shopping, and Streaming.” The SOMAA report was put together with disability-led accessibility platform company Fable.

At 30,000 feet, the SOMAA report paints a rather bleak picture of disability inclusion.

“Our analysis of app accessibility reveals a concerning reality: The vast majority of apps are failing users with disabilities,” ArcTouch wrote of the high-level takeaway on the SOMAA site. “72% of those who rely on any of the four assistive technologies we tested may have a poor or failing app experience in at least one step of a typical user journey.”

ArcTouch’s head of accessibility, Ben Ogilvie, explained to me in a recent interview that awareness, which is at the core of reports like SOMAA, is “certainly on the rise,” noting accessibility is increasingly part of conversations surrounding technology. What the SOMAA report lays bare, Ogilvie told me, is that the implementation of accessibility in mobile software “needs more help.” Apropos of this story being published in May, Ogilvie also said accessibility’s amplification in the tech industry’s mainstream consciousness can partly be attributed to the annual Global Accessibility Awareness Day celebration. GAAD, as the occasion is colloquially known in the community, falls on the third Thursday of each May; this year’s occasion is slated for next Thursday, May 15.

One of the two men behind GAAD’s creation is Joe Devon. A web developer by trade, Devon served as an advisor to ArcTouch as the company was putting together its SOMAA report. Devon is quoted on the SOMAA site, saying in part that investing in accessibility isn’t merely the right thing to do morally speaking; it actually is “what’s best for your business and brand.” The rationale is perfectly logical: the more accessible a company’s products are, the more its user base—and sales—will grow and, importantly, diversify. As the world’s largest marginalized and underrepresented group, disabled people comprise a lot of potential users for businesses to target and market to.

Devon expressed excitement for the SOMAA report largely because its existence means accessibility will be relatively top of mind for more than a single day of the year. Still, he shared an anecdote in which he saw someone on social media jokingly post that once GAAD comes and goes every year, the disability community can “look forward to 364 days of global accessibility oblivion.”

Ogilvie, who worked at Apple in the 2000s and whose immediate family copes with disability, said the main driver for creating the SOMAA report was to provide insight into the accessibility of mobile apps, akin to how WebAIM looks at websites. There isn’t much software that measures accessibility, he added, so the work must be done manually. “We decided we needed that data, so we decided to look at it and start doing that research as far as where things are in accessibility for mobile apps,” Ogilvie said.

Ogilvie called the findings from said research “disappointing but unsurprising.”

Both Ogilvie and Devon said there are companies who do right by accessibility, as well as those who create ad campaigns that admittedly “tug on heartstrings,” according to Devon, but are otherwise nothing more than patronizing lip service. Many corporations will produce flashy, poignant videos that go onto websites with zero captioning or audio description, Devon added. Then there are companies whose internal teams want to do right by the disability community but go unsupported by others within the organization. The report doesn’t put anyone in particular on blast, out of professional courtesy, but Ogilvie said “you could tell the apps that had teams that were consistently working towards building things accessibly and going beyond the minimums and thinking about the user experience throughout the entire user journey.”

Devon concurred, saying accessibility support is “all over the map.” To be successful, he said, it boils down to “having a champion” for disability and the disability community from within. The most ardent supporters of accessibility, Devon said, are companies where people have been personally affected by disability in one way or another. That’s a big reason why accessibility awareness has been on an upswing in recent years, but Devon acknowledged the increase in traction isn’t necessarily commensurate with an increase in desired results vis-à-vis the new SOMAA data.

If the numbers don’t improve, that’s a five-alarm fire for Ogilvie and Devon.

Ogilvie noted a perceptual problem in which third-party developers oftentimes presume a company like Apple, for instance, whose work in the accessibility arena is nigh universally renowned, makes its platforms accessible “out of the box,” so individual apps needn’t be made accessible on their own. VoiceOver, Apple’s screen reader for Blind and low vision people, does provide app makers a lot “for free” at the API level, especially with labels. Still, it’s up to the developer to dig deeper and put in the work to customize VoiceOver support such that it works fluidly with their specific software.

“There are a number of misconceptions around accessibility when it comes to mobile,” Ogilvie said. “Our report disproves that you can’t just leave it to the platform and assume it’s accessible by default. You do have to pay attention and put in the work.”
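
For the uninitiated, that “digging deeper” often amounts to only a few lines of code. Here’s a minimal SwiftUI sketch—the view and its contents are hypothetical—of the difference between what VoiceOver gives developers for free and what deliberate customization adds:

    import SwiftUI

    // A hypothetical podcast episode row. By default, VoiceOver would read
    // its three children separately — "play circle, image," the title, then
    // a bare percentage — which is technically "accessible" but clumsy.
    struct EpisodeRow: View {
        let title: String
        let progress: Double // 0.0 through 1.0

        var body: some View {
            HStack {
                Image(systemName: "play.circle")
                Text(title)
                ProgressView(value: progress)
            }
            // Collapse the row into one element so VoiceOver users swipe
            // once per episode instead of three times, then supply a label
            // and hint that describe the row as a whole.
            .accessibilityElement(children: .combine)
            .accessibilityLabel("\(title), \(Int(progress * 100)) percent played")
            .accessibilityHint("Double tap to resume playback")
            .accessibilityAddTraits(.isButton)
        }
    }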

As to feedback and the future, Ogilvie said companies who have read the SOMAA report have received it positively. They have lauded it as “a clear call to action to drive change,” which is precisely what the report was born to do. Ogilvie said it’s supposed to “spark those conversations [around accessibility] and give companies some kind of benchmark to measure against as a place to start.” The team hasn’t settled on a cadence for future versions of the SOMAA report, but Ogilvie added keenly: “my hope is we will be able to produce this with some regularity.”

For his part, Devon said the future of mobile accessibility will be fascinating to watch unfold—especially as ever-burgeoning technologies like artificial intelligence grip the industry with an even tighter hold. It’s too early to tell, he noted, whether AI will help make mobile apps more (or less) accessible. That said, Devon said if the SOMAA report can influence companies, “then they’ll put in an effort to make sure [to prioritize accessibility] because they’re the builders of apps… they’re tool builders.”

“If we make a push for [companies] to make sure the code they push out is accessible, [the prioritization of accessibility] will get better,” he said.

Steven Aquino

Example No. 5,350,000: Why Big News Outlets Must Invest in Disabled Writers

Late last week, New York Magazine published a deeply reported story by Ben Terris on Pennsylvania Democratic senator John Fetterman. The piece, headlined “All By Himself,” is an astounding feat of journalism; Terris and his editor(s) deserve the utmost kudos for their work. At the same time, the piece also is soberingly illustrative of society’s collective fear of disability and of the disability community writ large. On Twitter/X, New York Magazine described Fetterman as “an erratic senator who has become almost impossible to work for, and whose mental health situation is more serious and complicated than previously reported,” adding the senator’s staffers—and his family, particularly his wife Gisele—“now question his fitness to be a senator.”

“All By Himself” depicts people in Fetterman’s orbit being understandably concerned for his mental health and wellbeing, but it’s more. In actuality, it shows fear of disability.

It’s a notion I readily picked up on early in my reading of the story, with Terris writing about Fetterman’s need to use captioning features on his iPhone to help compensate for the auditory-processing disability stemming from the stroke he suffered in 2022. It should not be alarming in the slightest, even for a sitting public official, to pick up an iPhone (or Android phone, for that matter) and use accessibility features to make conversations more accessible. At his core, Fetterman is a man who had a stroke and had his cognition affected accordingly; if he is indeed on a “recovery plan,” as Terris says, then of course Fetterman is going to use technology as part of it. If Fetterman weren’t a notable public figure, his care routine would be normalized by his medical team and his family and friends. He very likely would be encouraged to use captioning on his iPhone in order to bolster his comprehension of conversations he holds with other people. The whole point of accessibility, whether digital or tangible, is to provide disabled people—and make no mistake, Fetterman is disabled—access to the world. To treat Fetterman’s use of an accommodation to participate in an on-the-record interview as alarming is to insinuate accessibility software is inherently bad—a canary in the coal mine.

I’m not going to argue mental fitness for those holding public office. Whether or not Fetterman is cognitively capable of serving his constituents is beyond the scope of this piece. There is a cogent argument to be made for his decreased cognition affecting his policy stances, not to mention his general demeanor. But the enduring vibe of Terris’ story is unmistakably dour: people are scared shitless of disability. That is the most important takeaway from “All By Himself” in my view; it reinforces the idea, however unstated, that disability is a fate worse than death. As a lifelong disabled person, the aforementioned vibe is bothersome because the implicit lesson is that people with disabilities somehow are “lesser” humans who ought to be pitied as the moribund, sorrowful lot we are under the guise of ostensible “concern” for one’s mental health. To reiterate, there can be spirited debate over how much cognitive disabilities can, or should, affect a sitting senator; what’s clear, however, is Fetterman is getting the same type of coverage President Biden received following his disastrous debate last June.

Put another way, the mainstream media showed yet again it does not know how to cover disability with dignity. As I noted earlier, it shows society does not like disability.

As I said at the outset, Terris’ story is an exemplar of great reporting. At the same time, though, he would have been better served by including color from neurologists and other stroke survivors—talking about what happens to the human brain during a stroke and painting a general picture of the recovery process. However great the reporting may be, Terris unwittingly (and unsurprisingly) doubled down on the harmful stereotypes which plague the disability community by casting Fetterman as a broken, feeble man incapable of caring for himself—let alone those he’s elected to represent.

In a journalistic context, “All By Himself” serves as more evidence that newsrooms sorely need to invest in hiring reporters who are disabled. Disability, whether in politics or technology or any other vertical, sees a pittance of the robust coverage afforded other facets of social justice like gender, race, and sexuality. Terris’ story is proof positive of such a sentiment, as Fetterman is positioned, once again, as a “lesser” person and politician.

However compromised his faculties, Fetterman deserves better on Capitol Hill.

Steven Aquino

Brief Followup on OLED TVs and Accessibility

Last night, I started rewatching another of my favorite shows in For All Mankind. The series, available on Apple TV+, is set within an alternate universe in which the Soviet Union won the space race and landed on the moon before the Americans. A Russian cosmonaut, Alexei Leonov, was the first human to set foot on the moon instead of the American astronaut Neil Armstrong. For All Mankind has run for four seasons thus far, with a fifth green-lit alongside a spinoff series. I’m eagerly awaiting Season 5.

If you’re into space stuff—and good drama—you should check out For All Mankind.

As much as I adore For All Mankind for its entertainment value, I admittedly had an ulterior motive for wanting to rewatch it. As I recently wrote, I’ve had two LG OLED televisions come my way since the beginning of the year as replacements for two TCL QLED sets (one being mini-LED). The new TVs represent my first foray into OLED on massive screens, as I’ve long had experience with OLED on devices like my iPhone, Apple Watch, and, in recent times, my iPad Pro. I’ve appreciated the black levels and contrast on those smaller displays, but there’s nothing like experiencing OLED on an expansive display like a television’s. With no hyperbole, it’s been revelatory for me.

I have a 77-inch LG C3 in the living room and a 48-inch LG B4 in my office. The former is a 2023 model, while the latter is from 2024. Both work with aplomb, and I’m so happy.

As I said last month, OLED is to TVs what Retina was to iPhone 4 back in 2010. Once you see them, you can’t go back—the picture quality is just too pristine, too captivating.

It turns out For All Mankind—what with all the blasting off into outer space—is the perfect type of show to help OLED flex its considerable visual muscle. Chief among its strengths are contrast and black levels; both are astounding on OLEDs, due largely to the fact that OLEDs are capable of pixel-level control. There is not one whit of blooming or a “halo effect” during scenes where the NASA astronauts are in space—it’s just pitch black. Likewise, OLEDs are capable of effectively infinite contrast because of those perfect blacks. What this means is everything on screen is set off beautifully, and in incredible fidelity, because of the rich colors and, once again, the OLED’s ability to control its output at the pixel level. All told, in an accessibility context—in my experience, anyway—the picture quality is so good that it makes watching TV shows and movies more accessible—and, arguably more importantly, more enjoyable. I’ve noticed myself feeling far less eye strain and fatigue when watching something. It isn’t often that I gush about a piece of technology—OLED isn’t without its warts, mind you—but I am lovestruck right now with my TVs. They make watching content so much fun.

In the name of OLEDs not being perfect, it’s worth mentioning my biggest beef with my TVs these last few months: they aren’t nearly as bright as my previous sets. They aren’t dim by any stretch, but the ABL, or auto-brightness limiter, on the B4 is particularly aggressive at tamping down brightness levels to guard against burn-in. The eagerness of the ABL is a prime reason why I remain deeply intrigued by a mini-LED TV like Sony’s Bravia 9. That TV is the company’s flagship—not one of its OLEDs—and is renowned by reviewers for its extreme brightness and, most notably, its OLED-like blacks. What’s more, the Bravia 9 comes in an 85-inch size; if I had one wish, it’d be to upgrade to the 83-inch version of the C3. I definitely want bigger in the future.

But yeah, OLED TVs are spectacular—as is For All Mankind. Go watch it tonight.

Steven Aquino

If Apple Eventually Raises Prices, the Biggest Losers Will Be Shoppers Needing Accessibility

I’ve long instituted a running gag of sorts for when Apple’s earnings calls happen.

I usually post this GIF to Twitter/X and cheekily say it comprises my reporting of the call.

Today’s call, however, merits more than a pithy post. Specifically, Apple CEO Tim Cook answered a question from an analyst about potential price hikes in response to President Trump’s tariffs plan. As of this writing, Apple’s prices are holding steady, but the company did concede today it expects to incur $900 million in costs this quarter as a consequence of the tariffs. To reiterate, Apple has, for now at least, chosen to eat these costs—because it can—rather than pass them on to the buying public.

“Obviously, we’re very engaged on the tariff discussions,” Cook said when asked about potential changes to Apple’s price list. “We believe in engagement and will continue to engage. On the pricing piece, we have nothing to announce today. I’ll just say that the operational team has done an incredible job around optimizing the supply chain of the inventory, and we’ll obviously continue to do those things to the degree that we can.”

The reason I’m covering today’s earnings call with more zeal is, of course, accessibility. Namely, it’s worth pointing out that (a) Apple products already are priced at a premium; and (b) even the remotest possibility Apple raises its prices on account of the tariffs would have negative effects on legions of disabled people. This matters a lot; I’ve written before about the attainability of Apple gear, as well as how most in the disability community don’t make much money, if any, and thus can’t comfortably afford said Apple products. Moreover, both factors are worth underscoring because of the collateral damage: to wit, it’s entirely plausible any price hikes from Cook and team put vital assistive technologies out of reach for a not-insignificant swath of people in the disability community. Someone wanting the least expensive iPhone, the new 16e for instance, for its robust roster of accessibility features could well have to postpone their purchase because even the ostensibly “cheapest” iPhone is beyond their wallet’s ken.

It’s true not all disabled people live in poverty; on the contrary, Apple employs innumerable people with disabilities who presumably are well-compensated for their labor. The salient point simply is that the vast majority of disabled people in this country (if not worldwide) aren’t so financially privileged—and that shouldn’t be forgotten.

Rising costs obviously hurt everyone’s pocketbooks, but the effects oftentimes are more painful for those who hail from marginalized and underrepresented communities. In other words, although accessibility seemingly has nothing to do with the proverbial bean-counters within Apple Park, it’s times like this that illustrate how accessibility, one way or another, pervades every aspect of human life if you’re disabled.

Apple reported revenue of $95.4 billion, a 5% increase year-over-year, for Q2 2025.

Steven Aquino

Google Gemini Could Be Coming to Apple Intelligence Later This Year, Report Says

Emma Roth and David Pierce of The Verge co-bylined a story published on Wednesday in which they report Google CEO Sundar Pichai said it’s his hope Google Gemini comes to Apple Intelligence as an optional model by the end of the year. The comment came during Pichai’s time on the stand giving testimony in Google’s search monopoly trial.

Pichai noted Apple chief executive Tim Cook told him during a recent meeting that Apple Intelligence would get more support for third-party AI models “later this year.”

“The integration would presumably allow Siri to call on Gemini to answer more complex questions, similar to the integration that Apple launched with OpenAI’s ChatGPT,” Roth and Pierce wrote of the ramifications of a potential Apple-Google deal over Gemini. “Apple senior vice president Craig Federighi hinted at plans to build Gemini into its Apple Intelligence feature last June, when the AI service was first announced.”

I attended the “fireside chat,” held after last year’s WWDC keynote, during which moderator iJustine—whom, incidentally, I interviewed in 2023—asked Federighi and then-AI boss John Giannandrea about Gemini and Apple Intelligence. It was during this discussion when Federighi said it’s Apple’s goal to “enable users ultimately to choose the models they want”—which could be Gemini for a not-insignificant swath of users.

I typically don’t write stories couched around what churns out of the Apple rumor mill, but do make exceptions for accessibility’s sake. This is one such occasion, as Gemini has supplanted OpenAI’s ChatGPT as my preferred generative AI tool. The Gemini app has become so integral to my digital doings, in fact, that it has earned a permanent place on my iPhone’s Home Screen and on my iMac’s Dock. In my experience, Gemini is mostly reliable at giving me good information; it does get things wrong and hallucinates, but that’s to be expected of any such tool. Design-wise, I prefer the Gemini user interface to ChatGPT’s; I find the former more humane and dynamic, whereas the latter feels staid and utilitarian. As to how Gemini does as an assistive technology, I find it has subsumed (albeit not entirely) traditional web searches. Rather than endure umpteen search results in a browser window, which requires good amounts of visual and motor energy, Gemini does the grunt work for me and collates everything into a single space. It’s a shining example of generative AI as accessibility; many disabled people can find conventional Google searches taxing in many respects, especially when doing deep research for essays or other projects. That Gemini exists means conducting said research becomes much more accessible—and expedient. Although many educators lament the proclivity of students nowadays to lean on generative AI for their schoolwork, the reality is that leaning on something like Gemini isn’t (always) an indicator of laziness or, worse, academic dishonesty. On the contrary, it’s downright shrewd, not to mention empowering, to spot generative AI’s strengths in accessibility and take advantage of them accordingly.

Roth and Pierce note Pichai’s comment lends further credence to rumblings that Gemini indeed is coming to Apple Intelligence sooner than later. Their report makes mention of news from February in which “Google” is listed in Apple Intelligence-related code found within an iOS 18.4 beta. As Apple Intelligence currently stands, Siri will ask users if they wish to use ChatGPT in answering complex questions that are beyond the virtual assistant’s ken. Presumably, Gemini could one day do that as well, assuming someone has chosen it as their desired third-party model over the default ChatGPT.

News of Pichai’s comment regarding Gemini was first reported by Bloomberg.

Steven Aquino

Pocket Casts App Adds Support for Generated Transcripts in Latest Update

Popular cross-platform podcast client Pocket Casts announced this week the app now supports generated transcripts with the 7.85 update. The company notes the feature is available on iOS and Android for Plus and Patron members, with Pocket Casts touting the “powerful” update as one that “makes engaging with your favorite podcasts easier than ever.”

Pocket Casts stresses it still supports transcripts supplied by individual podcasters, but says the generated ones are intended to “[expand] access by automatically generating them for new episodes from the most-followed podcasts.” The generated transcripts are searchable too, with Pocket Casts instructing users in its announcement to access the transcript by tapping the Message icon located in the Now Playing screen’s toolbar.

I don’t use Pocket Casts on iOS, but this is a notable development nonetheless. While it’s great to hear Pocket Casts will maintain support for manually supplied transcripts, generated versions cater to an obvious issue: not every podcast—perhaps not even the majority of the most popular shows—supports transcripts at all. In an accessibility context, this can make popular news shows like The New York Times’ The Daily inaccessible to many—most obviously to those who are Deaf or hard-of-hearing. Podcasting, like music, is a medium steeped in the presumption everyone can hear (or hear well). Thus, podcasts are inherently exclusionary—which makes sense on one hand, but the rise of technology’s presence and power can help turn that status quo on its head. To wit, that Pocket Casts is now automatically generating transcripts makes podcasts accessible to people who heretofore couldn’t enjoy them like everyone else. This is, of course, predicated on the notion the transcript—unlike many captions—isn’t crap.

Put another way, transcripts are to podcasts what haptic feedback is to music.

Despite Marco Arment being a longtime friend, I switched from his Overcast as my preferred podcast player to the stock Apple Podcasts app on my iPhone and iMac. I did so largely because of the immense accessibility transcripts provide me; Apple announced support for transcripts a little over a year ago, which is when I made the decision to change over. I still adore Overcast for myriad reasons—not the least of which because Arment is a staunch ally of the disability community and prioritizes accessibility in his app. What could sway me back to Overcast is if, come WWDC, Apple announces a “TranscriptKit” API for App Store developers (like Arment) to hook into their apps. Pocket Casts seemingly has built its own framework, but an API officially blessed by Apple would go a long way toward helping not only users, but also app makers for whom rolling their own is beyond their technical ken for whatever reason(s).
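
To be abundantly clear, no such framework exists; every type and method below is invented. But a purely imagined sketch of what a “TranscriptKit” could look like helps illustrate why an Apple-blessed API would lower the bar for app makers:

    import Foundation

    // Entirely hypothetical — Apple has announced nothing of the sort.
    // A system framework might expose timed transcript segments an app
    // could render and search without building its own speech pipeline.
    struct TranscriptSegment {
        let text: String
        let start: TimeInterval
        let end: TimeInterval
    }

    protocol TranscriptProviding {
        // Generate (or fetch) a transcript for a given episode's audio.
        func transcript(for episodeURL: URL) async throws -> [TranscriptSegment]
    }

    // With timed segments in hand, syncing the transcript to playback is
    // trivial: highlight whichever segment contains the current position.
    func currentSegment(in transcript: [TranscriptSegment],
                        at time: TimeInterval) -> TranscriptSegment? {
        transcript.first { time >= $0.start && time < $0.end }
    }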

Steven Aquino

Waymo, Toyota Announce Partnership Aimed at ‘Advancing Autonomous Driving’

Earlier this week, Waymo and Toyota jointly put out an announcement in which the two companies detail a “strategic partnership” forged in the name of “advancing autonomous vehicle deployment.” The companies described the partnership as “built on a shared vision of improving road safety and delivering increased mobility for all.”

Notably, Waymo and Toyota are focused on “personally-owned” autonomous vehicles.

“Toyota has long advanced research and development in support of a zero-traffic-accident vision, guided by a three-pillar approach that integrates people, vehicles, and traffic infrastructure,” the Japanese automaker said of its work in a statement included in the announcement. “Automated driving and advanced safety technologies play a central role, exemplified by the development and global deployment of Toyota Safety Sense (TSS)—a proprietary suite of advanced safety technologies. TSS reflects Toyota’s belief that technologies have the greatest impact when they are made widely accessible. Through this new collaboration, the companies aim to further accelerate the development and adoption of driver assistance and automated driving technologies for POVs, with a continued focus on safety and peace of mind.”

For its part, the Alphabet-backed Waymo said in part it is “building a generalizable driver that can be applied to a variety of vehicle platforms and businesses over time,” adding the joint venture with Toyota will “begin to incorporate aspects of its technology for personally owned vehicles.” Moreover, co-chief executive officer Tekedra Mawakana said in a statement for the announcement that Waymo aspires to be “the world’s most trusted driver,” and noted the decision to work with Toyota is a manifestation of shared values—particularly the ideal of “expanding accessible transportation.”

From an accessibility perspective, what compelled me to cover this Waymo × Toyota news is the concept of personally-owned autonomous vehicles. As I’ve noted many times before, I’ve covered Waymo at extremely close range over the last few years and was a Waymo One user here in San Francisco before the app became publicly available. As someone whose vision is so impaired it precludes me from obtaining a driver’s license, the advent of Waymo has been a life-changing revolution of the first order. As much as I’m a proponent of richly-funded public transit systems, Waymo’s presence here in the city means I needn’t navigate crowded buses or deal with overly chatty Uber and Lyft drivers. More pointedly, I needn’t lay myself at the mercy of family and friends to effectively be my personal chauffeurs. By contrast, Waymo allows me to move about town with agency and autonomy because a car is just a few taps away on my iPhone. What’s more, the nerd in me adores the technological advancements that make Waymo possible—down to ostensibly minor amenities such as the door unlocking automatically when someone approaches the waiting vehicle.

Technical wares aside, what really and truly endears Waymo to me is the accessibility of it. Waymo makes transport more accessible to me. It affords me opportunities to assert my independence as a person with disabilities. It’s something I’ve written about before—on numerous occasions, in fact—but it’s always worth repeating. Waymo, and its ilk, aren’t beyond reproach; there’s always room for improvement. The salient point is simply that the advent of fully autonomous vehicles has been a revelation for me and others in the Blind and low vision community. That’s neither trivial nor can it be overstated.

Of course, I don’t own the Jaguar SUVs I ride around in with Waymo. If I’m praising Waymo for its accessibility prowess, its zenith—the mountaintop—would be personally-owned vehicles. Obviously, this wouldn’t be Waymo proper; per the conceit of the Toyota partnership, it would be even more life-altering to buy a car based on Waymo’s technologies. I wouldn’t need Waymo at all because I’d have one of my very own. The economics, not to mention the legal logistics, of a Blind person buying a car surely need deep consideration. A lot of the conversations that need to happen will force legislators to confront the systemic ableism around Blind people and “driving,” because autonomous vehicles are decidedly just that: autonomous. For the purpose of this piece, however, my focus is on the practical ramifications. To wit, if Waymo today affords me agency and autonomy in transport, having my own car tomorrow sends that concept into the stratosphere. I’ll turn 44 come September, so I may well be into my senior years by the time so-called “POVs” become feasible. I’ve long since made my peace with never being able to get a license or buy a car—but I still dream of it.

Waymo and Toyota want to turn my dream, and that of others like me, into a reality.

Steven Aquino

Unity Announces Unity for Humanity Grant Winners, Including Honors for Accessibility

San Francisco-based Unity earlier this week announced the 2025 winners of its Unity for Humanity Grant. The company’s announcement was shared in a blog post published on Monday and bylined by Kevin Truong, Unity’s senior program manager for grants.

According to Truong, this year Unity recognized 10 winners and 3 honorable mentions. The recipients spanned 9 countries, with the winning projects addressing “complex global challenges” which are aligned with the UN Sustainable Development Goals.

“We received a record number of applications, demonstrating the growing global demand for support in using real-time 3D to drive positive change,” Truong wrote of this year’s Unity for Humanity Grant. “To meet this increased demand, Unity’s Social Impact Team added an additional $100,000 to the prize pool from the Unity Charitable Fund, bringing the total to $600,000 USD. Funding can be allocated towards the development of the project, building a working prototype, or marketing and distribution.”

Amongst the honorees are apps made for accessibility. One such winner, Jubilee Studios’ Small Talk ASL App, is educational software aimed at teaching American Sign Language (ASL). Unity describes Small Talk ASL as featuring “high quality animation and interactive gameplay [which opens] a door for children and adults, both hearing and Deaf, to communicate in ASL.” Moreover, Unity said Small Talk ASL is a “visually captivating project with a clear social impact,” adding Jubilee Studios nabbed Unity’s all-new Judges’ Choice Award for “receiving the highest marks from this year’s judges.”

Other winners include Benvision: Melody Meets Mobility, which uses spatial audio cues to enable Blind and low vision people to move about their world independently; it was a finalist and honorable mention last year. And Prosthetics Beyond Borders is a mixed-reality platform which Truong says “[uses] gamification, VR, and AI-driven simulations to help individuals with disabilities adapt to assistive technologies like prosthetic hands and legs.” Unity lauded the app as a “truly impressive medical innovation that helps individuals use their prosthetic limbs,” adding Prosthetics Beyond Borders offers “customizable training, interactive games, and real-world scenarios to enhance motor skills, confidence, and mental well-being.” Unity also noted the app endeared itself to judges by being “inclusive and accessible,” as it serves disabled people who live in rural and war-torn areas where support is scarce.

Unity is a widely used cross-platform game engine that debuted at WWDC 2005. The software is also used by Google DeepMind—the lab from which Gemini grew—for its research projects.

Prosthetics Beyond Borders uses Unity for its underlying infrastructure.

“We are using Unity to develop both the MyoLink solution and our VR training platform. Unity is integral to our project as it enables us to create interactive, immersive environments for prosthetic training and real-time muscle feedback,” Mohamed Dhaouafi, CEO of Cure Bionics, said about his company’s winning app in a statement for Unity’s announcement. “For MyoLink, Unity helps visualize muscle signals and provide real-time biofeedback, enhancing the training experience. For the VR solution, Unity is used to design realistic simulations that help users practice adapting to prosthetic devices in diverse environments, improving mobility, confidence, and independence.”

Steven Aquino

Netflix Adds ‘Dialogue-Only Captions’ Option for Even More Accessible Binge-Watching

My friend Emma Roth at The Verge reported last week that Netflix introduced a new captions option specifically designed to display only spoken dialogue. The Bay Area-based company describes the feature as a “new way to experience subtitles.”

Roth notes the dialogue-only captions are available exclusively in English for Netflix originals for now, but a Netflix spokesperson confirmed to The Verge the company is “actively exploring ways to expand this option to existing titles over time.”

According to Netflix, 50% of Americans watch content with captions or subtitles “most of the time”; in fact, the company said “it’s a habit we see reflected on Netflix too,” as it said “nearly half” of all viewing hours in the United States happen with either captions or subtitles. That data, Netflix said, was the driving force behind what it called “making the experience even better for members.” Netflix says its new dialogue-only caption feature is debuting with the release of the fifth and final season of the thriller series You.

David Pogue wrote about the ever-growing use of captions for CBS News last year.

The dialogue-only mode should, in theory, make watching stuff on Netflix more accessible to those who are hearing and thus don’t need the bracketed metadata with descriptions of ambient sounds. Likewise, focusing on only dialogue can make action easier to follow for those with visual and/or intellectual conditions because the option lacks the aforementioned extraneous detail that adds complexity and cognitive clutter.
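
Mechanically, the idea is simple. Netflix hasn’t described its implementation, but a toy sketch conveys what “dialogue-only” amounts to: stripping the bracketed, non-speech annotations from each caption cue. (This is purely illustrative, emphatically not Netflix’s code.)

    import Foundation

    // A toy illustration: drop bracketed sound descriptions like
    // "[tense music]" from a cue, leaving only the spoken dialogue.
    func dialogueOnly(_ cue: String) -> String {
        cue.replacingOccurrences(
            of: #"\[[^\]]*\]"#,
            with: "",
            options: .regularExpression
        )
        .trimmingCharacters(in: .whitespacesAndNewlines)
    }

    // dialogueOnly("[door slams] You shouldn't be here.")
    // returns "You shouldn't be here."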

The advent of Netflix’s dialogue-only captions comes soon after the company announced a “more multilingual” experience with expanded localization. And yes, once more with feeling, I must point out that captions and subtitles are not one and the same.

Steven Aquino

Recent Meta Ray-Bans Updates Make Smart Glasses Even More Alluring for Accessibility

Meta late last week announced several updates to its Ray-Ban smart glasses that the Menlo Park-based tech titan says give the eyewear “new styles and AI updates.” The spring-focused updates are intended to “supercharge the season,” according to Meta.

While everyone cares about fashion and style, aesthetics aren’t what piqued my interest here. What caught my eye—no pun intended—was the software side of the upgrade story. The headlining feature is that Meta’s Live Translation functionality, previously available only in “early access,” is rolling out to everyone. Live Translation is localized in English, French, Italian, and Spanish, with Meta noting “when you’re speaking to someone in one of those languages, you’ll hear what they say in your preferred language through the glasses in real time, and they can view a translated transcript of the conversation on your phone.” Live Translation works even in airplane mode.

Elsewhere, Meta reports Ray-Ban wearers soon will have the ability to send and receive DMs and more from Instagram, which complements existing capabilities through Messenger and WhatsApp. Regarding music, Meta says the Ray-Ban glasses now support expanded access to services like Apple Music, Spotify, and others. Users can ask Meta AI to play a specific song or playlist right from their glasses, with the caveat “as long as your default language is set to English,” according to Meta. What’s more, Meta says users in the United States and Canada will gain the ability to converse with Meta AI right from their sunglasses; Meta’s smart assistant can “see what you see continuously and converse with you more naturally.” Lastly, Meta says that beginning this week, users will be able to talk with Meta AI about whatever it is they’re seeing, replete with real-time responses. This feature is spiritually very similar to Google Lens.

Meta sent me a pair of the original Ray-Bans a couple years ago, which I wrote about alongside Amazon’s Echo Frames. It’s admittedly been some time since I wore either pair with regularity—especially since, in the case of the Echo Frames, I don’t wade knee-deep in the Alexa ecosystem. The Ray-Bans are more agnostic in their allegiances. Nonetheless, I treated both devices much like the inexpensive (and dumb) drugstore sunglasses I’ve bought for years: I wear them to keep the sun out of my eyes.

As someone who is deep in the Apple ecosystem, I’m heartened by this weekend’s news that Apple’s Ray-Bans competitor reportedly is “getting closer to becoming a reality.” According to Apple scoopster extraordinaire Mark Gurman of Bloomberg, the company’s glasses aren’t “close to being ready yet,” but he writes the idea behind them is to “turn the glasses into an Apple Intelligence device [by analyzing] the surrounding environment and feed information to the wearer, though it will stop well short of true augmented reality.” Apple’s smart glasses would be right up my alley as someone who’s all-in on its ecosystem and, even more crucially, its support for accessibility.

Apple’s aspirations mirror what Meta does with its Ray-Bans, and they’re fascinating to ponder from an accessibility standpoint. Take the Live Translation functionality, for example. It can be far more accessible to hear aural translations of language hands-free; this could be important for people with limited motor skills who might have trouble holding their iPhone to, say, use Apple’s Translate app to make the conversion(s). Likewise, smart glasses can make navigation much more accessible, largely due to their hands-free nature: you needn’t look down at a phone in your hand to know where you’re going. There are other applications, but suffice it to say, smart glasses flash a ton of potential as an assistive technology.

Steven Aquino

Why Choosing Prepared Garlic Is a ‘Curb Cut’ to Greater Accessibility in the Kitchen

This piece is only loosely tech-related, but does fit nicely with the “curb cuts” theme.

Last night, I came across a video on YouTube (embedded below) from one of my favorite creators in Kenji López-Alt. He demonstrates various ways to prep garlic, as well as discusses whether prepared garlic—be it pre-peeled or pre-chopped—is worth one’s investment at the supermarket. López-Alt’s video is good as usual: informative and unpretentious. His takeaway on the prepped garlic options, however, is what inspired this article. López-Alt advises that, although pre-chopped garlic will work in recipes, he doesn’t recommend using it. It lacks the punchiness inherent to freshly-chopped garlic.

As a self-described lifelong foodie, and as someone who was accepted to culinary school many moons ago, I’m of two minds about López-Alt’s video. On one side, I get the predisposition toward preferring fresh ingredients, because logic dictates fresher is better. There is no question peeling and chopping garlic will lend better flavor to dishes than any of the convenient alternatives—even the pre-peeled cloves, since, as López-Alt notes, manufacturers first blanch the garlic to make the papery skin easier to remove. On the other side, however, what López-Alt’s recommendations (predictably) miss is, of course, accessibility. In a disability context, the reality is not every disabled person who likes to cook is able to prep garlic the “proper” way you learn at the aforementioned culinary school. Maybe someone has limited dexterity in their hands. Maybe they have low muscle tone. Maybe they’re arthritic. As to the pre-peeled garlic from well-known companies such as Christopher Ranch, I’ve given serious consideration to paying a premium for it at the grocery store due to my own lackluster fine-motor skills. While I intellectually know how to break apart the cloves from the head and get the skins off, my fingers—not to mention my low vision—oftentimes won’t cooperate in getting the mise en place ready in a timely, efficient fashion. Thus, pre-peeled garlic would prove a more accessible alternative whilst still being a relatively fresh product. To choose it isn’t about laziness or taking a shortcut. It’s about accessibility.

Accessibility is crucial—and too often authorities like López-Alt just gloss over it.

Lifelong foodie that I am, it’s always struck me how ableist the food industry writ large can be in its mindset. Chefs preach the gospel of making everything homemade because it tastes better and is “easy” to do. Likewise, these same chefs write cookbooks and do television shows in which they claim “everyone” can make meals in 15 or 30 minutes. Again, the food nerd in me understands the messaging being telegraphed with both sentiments. The problem is, of course, these ideals utterly fail to acknowledge that (a) not everyone is literally able to prep and cook; and (b) more practically, not everyone has a kitchen wherein they can comfortably prep and cook their food. It’s not realistic; it’s predicated on the notion most people aren’t disabled.

And yet, disabled people are human and need to eat for sustenance like anyone else.

(To be clear, I’m not at all insinuating López-Alt is ableist. I’m saying the food world is.)

The fact of the matter is the ostensibly convenient pre-prepped garlic options that López-Alt explored are lifesavers in terms of accessibility. They may be the only option for disabled garlic lovers (like me). The tradeoff is clear—lesser flavor for usability’s sake—but those are the choices people must make because pre-prepped garlic is as inclusive as it is convenient. To put it in technological terms, using, for instance, the bags of pre-peeled garlic for accessibility is akin to using my Apple devices at maximum brightness. The obvious consequence of my choice is worse battery life, and I’m cognizant of such a Faustian bargain, but my vision requires the brightest screens in order for my iPhone and iMac to be accessible in everyday use. Period. Full stop. Fin.

Personally, I won’t ever buy jarred pre-chopped garlic—but someone else may need it!

But I’ll absolutely get the pre-peeled stuff for accessibility reasons, taste be damned.
