Steven Aquino

Unity Game Engine Gets Native Support for Mac, Windows Screen Readers In Looming Update

Ian Carlos Campbell reported for Engadget last week that Unity has updated its popular game engine such that the tool natively supports screen readers on both macOS and Windows. The feature is available in the Unity 6000.3.0a5 alpha, according to Campbell.

“Unity previously offered APIs for both Android and iOS’ built-in screen readers in its Unity 6.0 release, but hadn’t yet added support for Windows Narrator or macOS VoiceOver,” Campbell wrote. “With this new alpha and its eventual release as Unity 6.3, developers creating games with Unity will have access to a native screen reader in all of the engine’s major platforms. Considering how popular Unity is as a game engine, that could vastly improve the accessibility of future games.”

Campbell cites a report by Can I Play That? in which Marijn Rongen writes in part that Unity’s news is significant for Blind and low vision gamers because it can “make it much easier (and cheaper) to include this essential accessibility feature for Blind players in games built with Unity,” adding “too often otherwise accessible games are still unplayable without sighted support, simply because they miss narration for the menus.”

“Supporting screen readers from Unity helps developers take this important step towards better Blind accessibility in their games,” Rongen said.

Campbell notes most game makers build their own screen-reading software, which he says oftentimes is “resource-intensive for developers to implement.” Again, last week’s news is big because Unity has built support directly into its engine. In his story, Campbell shares a comment from Steve Saylor, a creator and accessibility consultant, who wrote on Bluesky that Unity’s built-in support for screen readers means “the heavy lifting is done for you, and the cost of time [and] resources now is significantly lower.”
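
For a rough sense of what that heavy lifting otherwise entails, here is a minimal Swift sketch of the sort of per-control plumbing a native macOS app writes so VoiceOver can describe its interface and speak announcements. To be clear, this is AppKit’s accessibility API rather than Unity’s, and the view and strings below are hypothetical, illustrative stand-ins; the point is that engine-level support spares Unity developers from wiring up this kind of thing (or its Windows Narrator equivalent) for every menu and control themselves.

    import AppKit

    // A hypothetical custom view backing a game-rendered "Play" button.
    // Custom-drawn UI exposes nothing to a screen reader by default,
    // so each element has to be described by hand.
    final class PlayButtonView: NSView {
        override func viewDidMoveToWindow() {
            super.viewDidMoveToWindow()
            setAccessibilityElement(true)
            setAccessibilityRole(.button)
            setAccessibilityLabel("Start new game")
        }
    }

    // Ask VoiceOver to speak a status change right away.
    func announce(_ message: String, from view: NSView) {
        NSAccessibility.post(
            element: view,
            notification: .announcementRequested,
            userInfo: [.announcement: message]
        )
    }

Multiply that by every label, slider, and focus change in a game’s UI and it’s easy to see why Saylor frames the engine-level version as a significant savings of time and resources.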

This isn’t the first time I’ve covered Unity’s accessibility efforts. In April, I wrote about the company’s 2025 Unity for Humanity Grant winners, which included those working towards greater inclusivity in gaming. Amongst the titles recognized by Unity were Jubilee Studios’ Small Talk ASL App, as well as Benvision: Melody Meets Mobility.

Steven Aquino

YouTube TV Makes ‘NFL Sunday Ticket’ More Accessible This Season With New Monthly Plan

I was watching yesterday’s Chargers-Chiefs matchup from São Paulo, Brazil when, at one point, a commercial aired in which Jason Kelce is seen crying endlessly because NFL Sunday Ticket now can be had on a month-to-month basis. The spot grabbed my attention because Sunday Ticket historically has been expensive; a season pass for YouTube TV subscribers is $276. Opt for the monthly plan, however, and the price is $85.

Obviously, the news struck me as being a more accessible way for a disabled football fan—someone who must count their pennies but also lives and breathes for Sundays—to keep up with the NFL by making Sunday Ticket more economically feasible. Better to break up the cost month-to-month than be forced to plunk down nearly $300 all at one time. What’s more, as with YouTube TV proper, Sunday Ticket can be cancelled anytime. In an accessibility context, flexible payment options—paying smaller amounts of money over a longer period—mean those in the disability community potentially can have more “fun” with what disposable income is available to them because, in Sunday Ticket’s case, NFL games can be enjoyed for mere morsels relative to the total cost.

As a devout YouTube TV subscriber—with the 4K Plus option, natch—I’m feeling mighty tempted by Sunday Ticket’s new pay-as-you-go option. I watch the NFL every week during the season, and loved watching the aforementioned Chargers-Chiefs game on my 77-inch LG C3 OLED TV. The picture quality was spectacular whilst the sheer bigness of the screen made watching the action (as well as reading the graphics) eminently more accessible. A nice touch for last night’s game was YouTube TV replacing the stock circular playhead in the scrubber with the Brazilian flag (🇧🇷) to pay homage to the event. YouTube TV does this with other programming too—e.g., The Simpsons has a donut while Law & Order: SVU uses a gavel—and Google is to be commended for incorporating whimsy into the software’s user interface design. My only gripe with YouTube TV is they don’t carry MLB Network and NHL Network to complement NFL Network and NBA TV.

YouTube TV acquired the rights to Sunday Ticket in 2023, for which Google pays $2 billion annually.

Los Angeles beat Kansas City, 27–21. The newly-engaged Travis Kelce caught a touchdown pass. He finished the game with 2 catches for 47 yards in the loss.

Steven Aquino

How AI Is Making Job Interviews Accessible to All

A cogent argument could be made that, at its core, Ribbon is all about accessibility.

On its website, the Toronto-based company bills itself as “the #1 AI recruiter” and says it’s “built for recruiters to effortlessly interview at scale.” The AI part comes in by way of Ribbon’s software providing insights about job candidates to human recruiters, while candidates reap the benefits of being able to sit through an interview on their terms.

Ribbon has a video demonstrating its software on YouTube.

Ribbon’s applicability to accessibility is right there on its website. Take a scroll down the homepage a bit and you’ll find a few callouts of statistics and testimonials, where Ribbon boasts about its software saving more than 10 hours per week per recruiter whilst seeing a 60% increase in time to hire. Indeed, a user named Elena McGuire testifies to Ribbon’s efficacy by saying in part it “has helped [their team at Thrive] make our hiring process faster and better.” The moral here is simple: Ribbon leverages artificial intelligence to make recruiters’ jobs not merely more efficient—it makes their work accessible too by taking on much of the grunt work associated with the position. If you’re someone with a disability who works in recruiting, that AI could help you, say, identify skillsets and the like means less scanning and typing—which could be burdensome in terms of visual and/or motor acuities depending upon the condition(s).

Ribbon’s co-founder and CEO is Arsham Ghahramani. His background is in AI research, back in the days when AI was described as “machine learning” and when, as he told me during a recent interview via videoconference, “not a lot of people were super interested in this space.” The impetus for Ribbon’s mission lies in the workings of another startup, Ezra, where Ghahramani previously led AI and met his Ribbon co-founder, Dave Vu. It became apparent, Ghahramani explained to me, there were problems to be solved for all involved in the hiring space. The ways in which most companies hire, he said, are (a) oftentimes inflexible about catering to people’s needs and tolerances; and (b) increasingly dependent on personal interviews, given applicants oftentimes employ chatbots to assist them as they’re seeking employment.

“The example I always give is if everyone uses the same version of ChatGPT to create the perfect résumé for one role, it becomes really hard to discern who is the right candidate for this [job],” Ghahramani said of Ribbon’s goal. “In that world, interviewing someone becomes more important than ever, but also harder on both sides. We started Ribbon with the idea that let’s make it easier for candidates [by allowing] them to interview anytime, on their own terms, and under [less] pressure as well. Then also, let’s make it easier for companies to understand who’s the right match for jobs.”

“Ribbon is an AI interviewer that can interview people for almost any role,” he added.

Besides his history working in AI before it was crazy cool, Ghahramani has another facet to his background that helps inform his work on Ribbon. Like yours truly, he’s a lifelong stutterer. He called accessibility something that’s “always been close to my heart,” adding his stutter has been at the forefront of his mind “in every interview I’ve ever taken.” Job interviews, he told me, are “high pressure environments” for the candidate, which is exacerbated for people coping with a speech delay. However unfair, the coldhearted reality is interviewers judge candidates in large part on their presentation, speech-wise. Ribbon, according to Ghahramani, gives more control back to candidates such that it becomes “a much friendlier environment” for them. By leveraging artificial intelligence, users can rest assured “you’ll get the same experience every time.”

Ghahramani noted Ribbon does its best to stay faithful to other aspects of accessibility, such as ensuring adherence to the WCAG standards for best practices on the web.

Ghahramani’s lived experience as a stutterer gives context to Ribbon’s raison d’être because, as he told me, recruiters obviously want applicants to be at their best during interviews. The problem, of course, is a high-stakes situation like a job interview often induces stress and, commensurately, often worsens someone’s fluency and intelligibility. It then becomes a lose-lose proposition: companies miss out on good workers because those candidates aren’t able to show themselves at their best. In this sense, then, Ribbon is providing an assistive technology insofar as, as Ghahramani said, it’s a “huge advantage” to be able to complete an interview on your own terms. He pointed to users of the platform who report Ribbon is “not the same as a regular interview” because the technology can accommodate a wider swath of people. To wit, Ghahramani said it isn’t merely stutterers who benefit; indeed, people who are neurodivergent also see gains in terms of “saying the wrong thing [and] wishing you could redo.” All told, he believes Ribbon is offering “a massive accessibility bump” for everyone—recruiters and applicants alike—compared to the tried-and-true rigamarole of conventional job interviews.

Ghahramani believes having a speech disability puts you at a steep disadvantage.

“I’ve always felt, in the situations where you have to perform on the spot, that’s where [stuttering has] the biggest disadvantage,” he said of stuttering. “In my job now, I have to present to a lot of investors and pitch them. I’ve always felt [my stutter] is a disadvantage there. In job interviews, I’ve always felt the same. I’m making a giant assumption here, but I think the average person who has never had a stutter, and never had an issue with that, probably watches me stutter and thinks I’m dumber than them. I’m making, like, a massive leap there. I’ve always had that feeling, and I felt you have to overcome that [perception] in other ways and demonstrate there’s not something wrong with me, or I’m not dumber than you. It’s hard to fight that perception.”

When asked about how technology can enable greater accessibility, Ghahramani conceded my question, while good, lies in an area at the “edge of my knowledge.” Speculatively, however, he answered by telling me technology has the capacity to expose things to “a much wider range of people.” Ghahramani has never seen a speech-language pathologist for therapy, saying he probably would have had his parents in the United Kingdom possessed the resources to do so. This all is Ghahramani’s roundabout way of illustrating that his company’s technology has profound potential to help people. He believes people should have accessible ways of practicing for interviews, on both sides of the desk, and that Ribbon can be a conduit for said practice. Moreover, Ghahramani told me technology can help in “our understanding of speech and like cognition and the way the brain works.”

In terms of feedback, Ghahramani reiterated Ribbon’s conceit that it exists to make interview processes smoother and more streamlined for recruiters and applicants alike. It’s a virtuous circle: if companies are able to hire people with increased efficiency and rapidity, the job market as a whole is able to become fuller and less exasperating to navigate. Ghahramani said it’s a “paradox” that AI can be so humanizing, able to offer more intimate, personalized questions that a human may not think of because they’re overburdened with work and back-to-back meetings. Users, he added, have reported feeling like Ribbon “actually cared” about them and their candidacy; simply asking about one’s last big project can be lazy whereas Ribbon can surface more thoughtful questions that can lessen stressors which contribute to disfluency for stutterers.

As to the future for Ribbon, Ghahramani of course expressed the boilerplate enthusiasm regarding the continued growth of his startup. More poignantly, though, he expressed excitement for a future he and his team are building towards in which “you can get hired in under 24 hours for almost any role.” He envisions Ribbon users completing one in-depth interview and using that as a template for a hundred others. The company, he said, believes recruiters and job-seekers will be happier and more productive in positions that are more finely tailored to their skills and interests. The job process can be improved by shortening it—thereby making it more accessible to boot—and by taking the tedium (and the stress) out of it for recruiters and applicants alike.

Steven Aquino

‘What You’ll Miss When It’s Gone’

PBS public editor David Macy wrote a piece about the Corporation for Public Broadcasting’s (CPB) decision to cease operations in the wake of the Trump administration’s move to cease funding the nonprofit organization. Trump’s push stems from a desire of his (and his cronies) to “end wokeness,” as right-wingers call it.

As Macy noted, the CPB was established by way of the Public Broadcasting Act of 1967.

“By now, most Americans are aware that PBS and NPR have been ‘defunded,’” Macy wrote in the lede to his piece published late last week. “Judging from letters to the PBS Public Editor, viewers believe it’s ‘public media’ that’s had the rug pulled out from under it; Some say ‘Sesame Street’ or ‘[fill in with your favorite show]’ were defunded. Or, as some put it more harshly, ‘wokeness’ was defunded. Whether these personal messages to PBS were delivered with sadness or glee doesn’t matter. What interests me is what people think they will be losing (or getting rid of?). They are about to find out.”

As I wrote a month ago, that the CPB is no longer—and PBS and NPR are defunded—will have reverberating effects on the disability community. As I said in early August, I’ve covered myriad children’s programming from PBS Kids and have interviewed many executives and showrunners; that the federal government has pulled funding very much threatens PBS Kids’ ability to function—and the consequences ultimately mean less visibility of people like me. Of course I’m an adult, but the salient point is disabled people are showcased in series like Carl the Collector and its brethren. More poignantly, that disability is featured so prominently means able-bodied children (and their families) are exposed to the ways in which people are different, which implicitly teaches viewers to be empathetic of such a reality. Likewise for children with disabilities, the feelings of heightened self-esteem are immeasurably important because they get to see characters who look like them on television. As Hollywood and the media reckon with their historically woeful portrayals of disability and disabled people, that PBS Kids—as well as Apple TV+ and Netflix, for that matter—allocates significant resources to furthering diversity and inclusion in this manner is absolutely non-trivial. That’s precisely why Trump’s mandate is so damning and disdainful—it puts the future of such vital productions in serious peril while also underscoring his (and his cronies’) desire to effect changes which only serve to better the lives of white, wealthy, and abled people. The fact of the matter is not having to cope with a disability, if not multitudes of them, is as much a privilege as being a well-off white man. It’s ignored, but abled privilege is, and always has been, a real thing in this world.

The disability community indeed will miss the representation when it’s gone.

Steven Aquino

Uber, Best Buy Team Up For Electronics Delivery

San Francisco-based Uber this week announced it has partnered with Best Buy on an initiative that allows shoppers to buy electronics from the retailer through Uber Eats and have the item(s) delivered to them. The service is available now and runs nationwide.

To mark the partnership’s launch, users will get $20 off the price of orders $60 or more using the promo code BESTBUY10. Additionally, Uber One subscribers (like yours truly) get $0 delivery fees and other perks. The promo ends on September 29, Uber says.

“From headphones and chargers to laptops, gaming gear, and small appliances, Best Buy customers will now enjoy the convenience of on-demand delivery or scheduled drop-offs, all within the Uber Eats app,” Uber said in its press release on Tuesday.

As someone who fairly regularly gets food and groceries delivered via Uber Eats, Uber’s collaboration with Best Buy only increases its value proposition. Especially from an accessibility standpoint, that a disabled person—who may not be able to venture out to their neighborhood Best Buy store, if one exists at all—could get a MacBook or AirPods or whatnot brought to their house on-demand is worth its weight in gold. It transcends sheer convenience into something much more impactful; it’s plausible, for instance, that someone needs new AirPods for a virtual appointment with their medical team. Instead of asking someone to run the errand of going to the Apple Store posthaste, or perhaps postponing the meeting so as to accommodate shipping of an online order, the earbuds could be brought to them prior to the meeting. It sounds trivial, but it’s really not—the reality is, as with using Uber Eats to grocery shop, not everyone can (or should) leave their home to do shopping for health and/or logistical reasons, or a combination thereof. Most people fancy Uber Eats as an amenity: it’s cool and nice to have, but perhaps not essential to everyday living. The flip side of that argument, however, is that what’s a nicety to some could very well be a lifesaver to others. And of course, the conduit for this service is the technological marvel known as the modern smartphone.

“Consumers today expect everything from groceries to gadgets to arrive at their doorsteps quickly and reliably,” Hashim Amin, Uber’s head of grocery and retail in North America, said in a statement included with the company’s announcement. “With this partnership, Uber Eats and Best Buy are making it easier than ever for customers to access the latest technology, whether it’s a necessity or something fun. We’re thrilled to help bring Best Buy’s trusted assortment into the on-demand economy.”

The Uber × Best Buy deal comes after Uber announced in July it was expanding the roster of SNAP-friendly retailers in order to make getting groceries more accessible.

Steven Aquino

The Relay for St. Jude Fundraiser Is Happening Now

If you didn’t know, September is Childhood Cancer Awareness Month. My friends at the venerable Relay podcast network, co-founded by Stephen Hackett and Myke Hurley, hold an event every year for Childhood Cancer Awareness Month; the proceeds go towards research at St. Jude Children’s Research Hospital in Memphis, Tennessee.

The 2025 fundraiser is underway. As of this writing, $45,949 has been raised thus far.

“Since 2019, the Relay Community has raised over $4 million for St. Jude,” reads Relay’s fundraiser webpage. “Treatments developed at St. Jude have helped push the U.S. childhood cancer survival rate from 20% to more than 80%. St. Jude won’t stop until no child dies from cancer—that’s why St. Jude shares its breakthroughs. Every child saved at St. Jude means doctors and scientists worldwide can use that knowledge to save even more children. Join Relay this September for Childhood Cancer Awareness Month to help give them more tomorrows.”

Since 2019, Relay has raised $4,109,514.51.

My ties with cancer, as well as with Relay, run deep. I’ve lost multiple people close to me (albeit adults) to cancer in my life, most notably my mother—she died from breast/skin cancer in 1998 when I was just 16. I’ve written about cancer before and how disabling it can be for the patient and others in their orbit. As to Relay, I’ve known Hackett and Hurley for over a decade; I’ve met both in person and consider them friends; Hackett once upon a time was a guest on an early iteration of my old Accessible podcast. Also, I was a guest on an old Relay show called Less Than Or Equal in August 2016. What’s more, Shelly Brisbin, whom I also know and who currently contributes to Six Colors, hosted an accessibility-themed show called Parallel—on which I also guested once—that was part of Relay for a while before its 93-episode run ended back in June 2024.

Anyway, please give to Relay for St. Jude if you can!

Steven Aquino

‘When Will Waymo Come to My City?’

Waymo on Friday posted an item to its Waypoint blog with a short update on its popularity and commensurate expansion plans for the future. In the new blog post, the Alphabet-owned company boasted about completing “hundreds of thousands of weekly fully autonomous trips and over 100 million miles of public road experience.”

“Today, the Waymo Driver can navigate new cities safely and faster, validated by our expansion from Phoenix to the San Francisco Bay Area, Los Angeles, Austin, Atlanta, and soon Miami, Washington, D.C., and Dallas,” Waymo wrote. “Operating in 5+ major U.S. cities and testing across the country and Tokyo has strengthened our system, creating a robust and adaptable Waymo Driver that can expand to new cities.”

As this story’s headline quotes, Waymo says the top question it gets from people is when its robotaxis will start rolling around the streets of their cities. I wonder that myself here in the Bay Area too, as there are parts of the region—namely, the East Bay—which currently aren’t served by Waymo. Uber and Lyft are entrenched there, of course, but not Waymo. In all honesty, I acknowledge the tone of the company’s blog post is self-congratulatory; it’s important to recognize the function of a company-run blog—by any company, not just Waymo—but its laudatory messaging is deserved in at least one respect. As I’ve written numerous times, I’m a huge fan of Waymo’s product for its accessibility; without any hyperbole, Waymo has utterly transformed how (and when) I get around as a lifelong member of the Blind and low vision community. I won’t rehash my praise here, but suffice it to say, the autonomous driving technology is ducking cool and speaks to my tech nerd soul, while the “taxi” essence of the service makes getting around my city of San Francisco eminently more accessible. In other words, while the journalist in me is keenly aware of skepticism surrounding safety and other issues—Waymo isn’t without its warts, after all—the more salient perspective for me is that of a user who more or less is dependent upon third parties for transportation. That Waymo gives me back such explicit agency and autonomy, which then boosts my historically low self-esteem to all-time highs, is neither trivial nor fanboyish.

Waymo’s app has earned its permanent place on my iPhone’s Home Screen.

Today’s news from Waymo is complemented by news from The Verge that Waymo co-CEO Tekedra Mawakana—whom I’ve interviewed before—talked up the company’s aforementioned expansion ambitions on the latest episode of the Hard Fork podcast. “You’re going to start seeing our cars in a lot of cities,” Mawakana told hosts Kevin Roose and Casey Newton of Waymo’s growth. “If you think about our business in terms of scale, we’re currently giving hundreds of thousands of rides every week and, in all likelihood, by the end of next year, we’ll be offering around one million rides per week.”

Anyway, if Waymo is in (or will come to) your city and you haven’t yet tried it, do it.

Steven Aquino

Walmart Offers Apple Services Deal with Purchase of Any Google TV-Powered Onn Streaming Box

I missed the news when it hit a month ago, but Ben Schoon at 9to5 Google reported at the time that Walmart is offering a promotion whereby if you buy one of their house brand Onn streaming boxes, you get three months of Apple TV+ (and other Apple services) thrown in for free. Schoon specifically calls out the Onn 4K Plus box in his story, but the promotion does extend to the Onn 4K, Onn Full HD, and my favorite, the Onn 4K Pro too.

Schoon wrote the aforementioned 4K Plus is “the most affordable and best value in Google TV hardware”; priced at just $44.73 as of this writing, he wasn’t wrong. For my money, however, I highly recommend “splurging” by instead opting for the 4K Pro. Its price is exactly the same and you get twice the storage (16GB vs. 32GB) and an extra gig of RAM (2GB vs. 3GB), plus an Ethernet port and the box serves as a smart speaker when you’re not watching anything. I wrote about the Onn 4K Pro earlier this month, saying in part Walmart’s box “pleasantly surprised” me. Moreover, the 4K Pro is an inexpensive entry point into Google TV; as I also said, although I prefer tvOS on the Apple TV 4K overall for its polish and especially its performance, Google has imbued its TV platform with lots of thoughtful touches and functionality I yearn for Apple to someday adopt for their own software. Apple TV+ indeed is available on Google TV, with all the same great content that makes subscribing well worth it despite the recent price hike to $13/month.

Besides TV+, the Onn deals include 90 days of Apple Music and Fitness+ as well.

Steven Aquino

Apple Ships Xcode Beta with Claude, GPT-5 Support

My pal Chance Miller reports for 9to5 Mac that Apple on Thursday shipped Xcode 26 Beta 7 to developers. The release is noteworthy because it includes support for AI models like Claude and GPT-5. Miller writes using artificial intelligence to assist with programming software helps developers “write code, fix bugs, access documentation, and more.”

GPT-5 is the default model in Xcode. Users can opt for GPT-4.1, according to Miller.

“ChatGPT in Xcode provides two model choices. ‘GPT-5’ is optimized for quick, high-quality results, and should work well for most coding tasks,” Apple says in describing Xcode’s newfound AI integrations. “For difficult tasks, choose ‘GPT-5 (Reasoning),’ which spends more time thinking before responding, and can provide more accurate results for complex coding tasks.”

Notably, Miller (as well as Jason Snell at Six Colors) says developers can run their own models locally on their Mac and even “use API keys from other AI providers.”

I wrote about the accessibility implications of this feature last week, so it’s good to see the rumor become reality. As I said then, that programmers have the ability to lean on, say, Anthropic’s Claude for writing code and referencing documentation has the potential to be a de-facto accessibility feature for developers with disabilities. In very much the same way chatbots like ChatGPT can make web searches more accessible by doing the grunt work, so too can they help in coding. It’s important to emphasize the distinction here; leaning heavily into AI in this way isn’t cheating or lazy. On the contrary, it’s downright savvy—it leverages what’s arguably a computer’s greatest strength: automation. More pointedly in accessibility terms, using AI in coding makes what could plausibly be an inaccessible task much more inclusive. Of course vigilance is necessary for spotting errors or hallucinations, but the genuine good AI can do as assistive technology is neither trivial nor a viewpoint seen through rose-colored glasses.

Steven Aquino

How A Robot Makes the Mundane Meaningful

My close friend and tech press compatriot Joanna Stern of The Wall Street Journal is currently on book leave, as she’s busy reporting and writing a book (due out next year) on artificial intelligence. As good friends do, I’ve been keeping tabs on Stern’s progress by reading her newsletter—which is totally worth subscribing to—and today’s edition really piqued my interest. In her piece, Stern chronicles her time having a robot, built by a startup called 7X Robotics, in her basement that (slowly) helps with folding laundry.

On its website, 7X Robotics bills itself as “[making] Windows standalone software to make your robot do autonomous home tasks like wash dishes and fold clothes,” adding the laundry product has the ability to “[fold] three t-shirts one after the other with no human aid.” The company has posted a short demonstration of the robot on YouTube.

Logistics notwithstanding—Stern writes in part 7X Robotics hopes to “package this up and start selling it soon”—the company’s laundry robot has immense applicability to accessibility, disability-wise. As a disabled person who copes with visual and motor conditions, it isn’t at all difficult for me to imagine how a mainstream consumer version of this product could make folding laundry eminently more accessible. Three t-shirts today isn’t that impressive, but that’s beside the point; the point is somebody like me, with lackluster hand-eye coordination and partial paralysis due to cerebral palsy, could have the robot assist me with a task that would otherwise be burdensome and inaccessible. Moreover, it’s a prime example of the genuine good AI can do as an assistive technology. As with chatbots like ChatGPT and Gemini helping with web searches and essay research, for example, this laundry robot—or 7X Robotics’ dishwashing one—can help with household jobs that many disabled people can’t (or shouldn’t) do. Maybe you’re someone with arthritis. Maybe you have chronic fatigue. Maybe you can’t always remember the correct way to fold the kitchen towels. The resonance runs deep, transcending sheer coolness or convenience. Although it’s true there exists a slew of on-demand wash-and-fold services that offer accessibility gains too, they don’t take away from a future—however far-flung it feels in the present—where a disabled person could get one of 7X Robotics’ robots to fold their clothes in their home.

For more on robotics and accessibility, I wrote last year about Amazon’s Astro.

Steven Aquino

Google Translate Receives Live Translation, Language Learning Features In Recent Update

Abner Li reports for 9to5 Google this week that Google has updated its Google Translate app with a live translation feature, as well as a new Duolingo-like language learning feature.

The real-time translation feature obviously uses Gemini models.

"The new ‘Live translate’ capability lets you have a ‘back-and-forth conversation in real time with audio and on-screen translations,’” Li wrote last week in describing Google Translate’s new functionality. “After launching the new interface, select the languages you want, with 70 supported: Arabic, French, Hindi, Korean, Spanish, Tamil, and more. With a thread-based interface, Google Translate will ‘smoothly’ switch between the pairing by ‘intelligently identifying conversational pauses,’ as well as accents and intonations. Meanwhile, advanced voice and speech recognition models have been trained to isolate sounds and work in noisy, real-world environments.”

The live translation feature is available on iOS and Android to users in the United States, India, and Mexico. Google has posted a video to YouTube showing off the functionality.

From an accessibility perspective, live translation is helpful insofar as people who are visual learners—essentially, the inverse of audio versions of Google Docs—can follow along with the scrolling text in the Google Translate app as someone is speaking. The bimodal sensory input could make conversations in a foreign language more accessible both ways: linguistically and in terms of disability. What’s more, there are many neurodivergent people for whom reading a textual version of someone’s words is more accessible than aurally comprehending an unfamiliar language. Put another way, that modern smartphones are powerful enough to generate real-time translations means more accessible and comprehensible conversations; a person needn’t look at a guidebook or stumble through words or use gestures, which can be socially awkward.

The same argument applies to the language learning feature in Google Translate. To wit, Google has smartly provided options for listening and speaking words, phrases, and/or sentences such that a person can choose the modality that is most accessible to them depending on their needs and learning style(s). They’re de-facto accessibility features.

Steven Aquino

Google Docs Gets New Text-to-Speech Feature

Earlier this month, Abner Li reported for 9to5 Google that Gemini has gained the ability to read the contents of Google Docs aloud with a new AI-based text-to-speech feature. The functionality arrives a few months after it was announced by Google back in early April.

“On the web, go to the Tools menu [in Google Docs] for a new ‘Audio’ option in-between Voice typing and Gemini. Tapping ‘Listen to this tab’ will open a pill-shaped player, with the duration noted. You can move this floating window anywhere on the screen,” Li wrote of how the audio feature works. “Besides play/pause and a scrubber, available controls include playback speed and changing the ‘clear, natural-sounding voices.’”

Li adds editors are able to “add an audio button anywhere in the document.”

Li hits on the accessibility angle to this recent news, quoting Google’s rationale that using the audio feature can be beneficial for “[wanting] to hear your content out loud, absorb information better while reading, or help catch errors in your writing.” Indeed, that Google Docs—which I hate but know is immensely popular—can be read aloud should be a real boon for people who may retain information better as aural learners. Likewise, it’s also true many neurodivergent people find spoken material more accessible to grasp than merely reading the textual version. Relatedly, I’ve noticed an increasing number of news outlets, namely Forbes and The Verge, including a button on a story’s page to turn it into an audio version. It effectively turns the article into an audiobook or podcast, the former of which has roots in the disability community. Still, given the mainstream popularity of both mediums, audio versions of Google Docs—even if limited to a particular section of the overall document—make perfect sense as a way to lean in strongly on audio production vis-a-vis AI.

The Google Docs audio feature is available only in English and on the web for now.

News of the Google Docs enhancement is complemented by news Microsoft Excel has gained support for Copilot AI to assist in filling out information for users’ spreadsheets.

Steven Aquino

A Bit About the Cracker Barrel Controversy

I was reading my friend John Gruber’s post about the Cracker Barrel controversy just now when it occurred to me there’s an accessibility angle to Cracker Barrel. It isn’t pertinent to the brouhaha itself—that the company’s rebrand is supposedly “woke”—but rather to the experience of eating at a Cracker Barrel restaurant as a Blind person.

Back in December 2014, when my journalistic career was nascent, I took my first-ever big trip with my partner and her mother. We flew cross-country to Florida, in part to visit the tourist traps in Disney World and Kennedy Space Center, as well as some friends of theirs who lived in the “Sponge Capital of the World” known as Tarpon Springs. The trip was also memorable for being only the second time since 2002 that I flew in an airplane and slept in a hotel. (Take a quick glance at my Flighty history and you’ll notice I’ve done much more flying in the years since. Last year alone, I flew 17 times! This year? So far, 0.)

Anyway, about Cracker Barrel. One night somewhere during the trip, we had dinner there. Upon sitting down at our table, I was especially delighted to discover Cracker Barrel offered a large print version of the menu for guests who request one. At the time, my iPhone 6 Plus didn’t have a bespoke Magnifier app as iOS does now; because of this, I was grateful for the large print menu because, obviously, I could actually see what I wanted to order. More pointedly, I was tickled to get the large print menu because no other place had one before. To this day, that Florida Cracker Barrel remains the only restaurant I’ve ever been to that had an option for a more accessible menu. I don’t know if it existed due to a display of empathy or the restaurant’s clientele of older people—maybe both?—but it was damn cool to learn as someone who copes with low vision. I left hoping to see other places offer a large print menu, but alas, I’ve never seen another one since. I imagine given the power of smartphones nowadays—cf. Magnifier on iOS—the cost incurred with printing a second menu, however great such a menu would be, isn’t the most economically prudent path for most eateries (especially small ones) to walk down.

For the record, I’m with Gruber in that Cracker Barrel’s new logo is nicely done.

Steven Aquino

Revisiting the Original Siri Remote for Apple TV

I was doing some much-needed cleaning in my living room this weekend when I stumbled across my OG Siri Remote. The much-maligned remote for Apple TV debuted with the first-generation Apple TV 4K (which I also still have) in September 2017. The Siri Remote looked perfectly good, if dead and a bit dusty. After wiping it off and charging the battery via the Lightning cable attached to my iMac, I decided to pair the old remote with my A15-powered Apple TV in said living room (attached to a 77-inch LG C3 OLED) and take a quick stroll down a technological memory lane. It was an experience I would describe as simultaneously enlightening and enraging—but undeniably fun/nerdy too.

First, the enlightening parts. For one thing, I think I prefer the trackpad on the OG Siri Remote to the trackpad on the new model. It feels easier to swipe and click because there’s more surface area; with the new remote, Apple surrounded the trackpad with a D-pad (which you can actually click, if desired in Settings) and the surface area feels smaller. I know the D-pad acts effectively as an iPod-like “click wheel” for certain actions in tvOS, but for my usage, I like the bigger, standalone trackpad on the original Siri Remote. The other controls on the Siri Remote are fine too, working just as well as their counterparts on the new remote. As a practical matter, I’d gladly trade the new version’s trackpad for that on the old one. It just feels nicer, akin to the Magic Trackpad I use at my desk with my Mac (I’ve switched from being a mouse user to a trackpad user).

Second, the enraging parts. First and foremost, it annoys me to no end that Apple omitted a Mute button on the OG Siri Remote. It was a curious design decision, especially considering most everyone wants to mute their audio at some point or another. Perhaps the company’s design team thought years ago pressing Pause was a good enough proxy, and they weren’t wrong per se, but Pause and Mute are bespoke functions for a reason: they exist to accomplish different tasks. Nonetheless, as someone who uses Mute a lot when, say, my partner wants to talk to me, playing around with the old Siri Remote reminded me how much I loathed not having a proper Mute button to quickly push. Elsewhere, I actually dislike the OG Siri Remote’s thinness. The remote, while svelte and sleek as a physical object, can be hard to hold (for me, at least) because its thin profile makes it such that there’s less to grab onto. What’s more, the remote’s aluminum-and-glass composition makes it hard to grip, friction-wise; more often than not, I’ve inadvertently dropped the remote because it’s (a) super thin; and (b) a slippery sucker. As I wrote last week about Google Pixelsnap, hardware accessibility matters—this is the main reason I’m dubious the purported iPhone 17 Air coming next month will be right for me. To wit, as with the OG Siri Remote, while I’m appreciative of cool industrial design, it’s true that thinness can be nothing more than a parlor trick for people (like me) who have lower-than-average muscle tone. The aforementioned iPhone Air, as with the old Siri Remote, could be less accessible to carry and hold because, again, there’s less material to grab onto. It’s for this reason I insist on using cases with the thicker iPhone Pro Max phones because a case, besides adding protection, also crucially adds more friction and grip for my hand(s) to cling onto. On that note, I wonder if those inside Apple have considered such a thing given recent reports the company has contemplated offering an iPhone 4-like “bumper” case for the incoming iPhone Air.

A bugaboo for both generations of Siri Remotes: no backlit buttons.

On the whole, I find the new Siri Remote better for accessibility than the old one due to its discrete Mute button and greater girth. The trackpad isn’t as nice as the old one’s, but I’ve grown accustomed to it. I hope a “Siri Remote 3” adds a larger trackpad and backlighting—wireless charging would also be helpful—but I’m happy to keep the OG remote around for emergencies. tvOS still supports it and it works great—if you like it.

Steven Aquino

As Apple TV+ Gets Pricier, Its Focus on Furthering Disability Representation Makes It Worth the Cost

Benjamin Mayo reported for 9to5 Mac earlier this week that Apple has raised the price of Apple TV+ to $13 per month in the United States and “some international markets.” The hike is up from the previous $10 a month; TV+ cost $5 per month at launch.

The yearly subscription price of $99, as well as Apple One pricing, remains the same.

Mayo notes Apple said in a statement TV+ has grown its roster of original content since the service’s launch in November 2019, adding Apple One is “the easiest way to enjoy all of Apple’s subscription services in one plan at the best value.” (As an Apple One subscriber myself, that sentiment isn’t the least bit blustery; it really is a good deal.)

Although $13/month is on the expensive side (if you don’t get Apple One, anyway), I’d argue the money is well worth it—especially if you, like yours truly, are interested in seeing the disability community earnestly and genuinely represented in film and television. As I’ve said numerous times, it’s extremely noteworthy how Apple took its product-focused ethos on accessibility and adapted it for Hollywood. That the company chose to invest in, say, Deaf President Now and its history-making director is a direct reflection of the company’s empathy for the empowerment of disabled people vis-a-vis technology. Of course Apple’s motives aren’t entirely altruistic, but as with its software, that it has imbued TV+ with similar sensibilities is important considering the historical portrayal of disabled people in Hollywood—not to mention society’s viewpoint writ large. Moreover, while it’s valid to criticize TV+ for having a lackluster back catalog of licensed content, a cogent argument could be made that Apple has assembled the deepest roster of disability-forward storytelling of anyone in the game. And that includes Netflix, what with titles like Deaf U and All the Light We Cannot See. TV+ is popular because of Ted Lasso and Severance, and rightfully so, but it nonetheless shouldn’t go unnoticed how impactful something like Deaf President Now is on its own.

I’ve covered most, if not all, of these disability-centric shows on TV+ in the past:

  • Best Foot Forward

  • CODA

  • El Deafo

  • Life By Ella

  • Little Voice

  • See

Even if you question their entertainment value, all these shows deserve acclaim for their representational gains. They shouldn’t be disregarded simply for being “bad” shows.

Apple TV+ is much more than merely The Severance Service—and I love that show.

Steven Aquino

Nyle DiMarco Makes History as First Deaf Director Nominated for an Emmy for ‘Deaf President Now’

Brande Victorian at The Hollywood Reporter has a story on the site this week which features interviews with Nyle DiMarco and Davis Guggenheim, the two men who are primarily responsible for Deaf President Now. The Apple TV+ documentary, which was released onto the streaming service in mid-May, chronicles the events of the so-called “DPN4”—Tim Rarus, Bridgetta Bourne-Firl, Greg Hlibok, and Jerry Covell—who, in March 1988, spearheaded a protest over the selection of yet another hearing person, Elizabeth Zinser, over a Deaf candidate to be president of Gallaudet University. The institution, established during the Civil War and based in Washington, D.C., is the world’s only college which caters (almost) exclusively to the Deaf and hard-of-hearing community.

(I say “almost” because Gallaudet does admit a small number of hearing students.)

Victorian’s piece is timely, as DiMarco has made history by becoming the first Deaf director to receive an Emmy nomination for Deaf President Now. Moreover, the film itself has been nominated for Outstanding Documentary. As far as talking points go, they’re pretty much exactly what I covered when I interviewed DiMarco and Guggenheim about Deaf President Now back in early May. To wit, both men told Victorian what they said to me: the film’s essence is truly about centering the Deaf experience and point of view. The DPN protest, as it’s colloquially known, is a part of Deaf history that’s well known to the community—but not to the wider, hearing world.

I’ve covered Gallaudet at close range over the last several years, including writing about its football team and, perhaps most apropos in context of DiMarco’s triumph in bringing Deaf President Now to life, profiling its current president Roberta Cordano back in 2022.

I. King Jordan became Gallaudet’s first Deaf president following the DPN protest.

Steven Aquino

‘Pixelsnap’ Makes Google’s Pixel More Accessible

“Imitation is the sincerest form of flattery” is how the old adage goes.

That’s the thought that immediately sprung to mind when I learned about Google’s announcement of Pixelsnap at this week’s “Made By Google” event in New York City, where the company announced the Pixel 10 and related devices. Pixelsnap is not at all hard to understand: it’s essentially Apple’s MagSafe technology, just optimized for Android. Pixelsnap supports Qi2, as well as the Magnetic Power Profile.

While it’s commonplace for iOS and Android fans to derisively mock one another for being late to the proverbial party, feature-wise, the reality is the “partisanship” is misplaced and, frankly, childish. As a practical matter, the advent of Pixelsnap on the new Pixel 10 phones is a huge win for accessibility. The argument is the exact same as it is for MagSafe; to wit, that both use magnets for charging means more accessible alignment and less fiddling with a USB-C cable. As I’ve written innumerable times, the genius of implementing magnets into the iPhone (and now Pixel) transcends sheer convenience because the truth is not everyone can so easily charge their phone. For disabled people with any amalgamation of cognitive/motor/visual conditions, even the ostensibly mundane act of plugging a USB-C cable into the port on the phone can be an arduous, inaccessible task. Moreover, it’s not easy for every person to successfully navigate even wireless charging, as missing the so-called “sweet spot” can make someone think their phone is charging when it isn’t. Put simply, charging is not something to take for granted—not everyone has the best hand-eye coordination.

That Google has introduced Pixelsnap as a new feature needn’t be ridiculed. Nobody really and truly cares Apple beat Google by introducing MagSafe back in 2020 with the iPhone 12 lineup. The salient point should be how Pixelsnap makes Google’s phones more accessible to Android users. Hardware accessibility matters too—an idea which becomes more prominent with the purported super-thin iPhone 17 Air as well as every foldable phone—and technologies like MagSafe and Pixelsnap exemplify that ideal. The sniggering over innovation only amplifies the noise and obscures what really matters.

The “Made By Google” event is available to watch on YouTube.

Steven Aquino

Apple Releases ‘No Frame Missed’ Short Film

Apple on Wednesday posted a new short film to YouTube called No Frame Missed. The 5-minute video (embedded below) tells the stories of three people who cope with Parkinson’s disease and how they’ve been empowered to capture precious moments using iPhone accessibility features like Action Mode. Action Mode and Voice Control provide essential functionality to disabled people like filmmaker Brett Harvey, featured in the film after being diagnosed with Parkinson’s at just 37 years old, who deal with tremors that destabilize video and make using the iPhone’s touchscreen tough.

Action Mode debuted with the advent of 2022’s iPhone 14 lineup.

No Frame Missed isn’t the first time Apple has created material around Parkinson’s. To wit, my pal Zac Hall at 9to5 Mac, writing today about the film, points out the Apple TV+ series Shrinking, which stars Harrison Ford, sees Ford’s character coping with the condition. Moreover, Hall notes Season 3 of the show is set to feature Michael J. Fox, who’s lived with the disease since the ’90s. For his part, Fox chronicles his own journey with Parkinson’s in another TV+ title called Still. The documentary, released in 2023, received seven Emmy nominations, winning four—including one for Outstanding Documentary.

Harvey’s made a documentary of his own, an almost 9-minute affair called Hand.

Elsewhere, I reported in 2023 about Apple Watch making “unmistakable progress” in helping identify Parkinson’s and offering patients a “glimmer of hope.” I also reported in 2022 about the Parkinson Foundation’s website getting a substantial new redesign.

As John Gruber writes in linking to No Frame Missed, stuff like this shows Apple at its very best. While it’s obvious the company’s motivations aren’t entirely altruistic in nature—they’re a for-profit corporation after all, so it’s part marketing exercise—the salient message is that accessibility features enable disabled people to enjoy technology like anyone else. Accessibility truly is a core value to the company, and No Frame Missed is a refreshing palate-cleanser to less savory dishes appearing on Apple’s plate of late.

Steven Aquino

Xcode Getting Claude AI Integration, Report Says

Marcus Mendes reports for 9to5 Mac this week that Apple is readying Anthropic’s Claude AI system to natively integrate with Xcode. Mendes writes there are “multiple references” to Anthropic accounts found within Xcode 26 Beta 7, released to developers on Monday.

Specifically, there are mentions of Claude Sonnet 4.0 and Claude Opus 4 in Beta 7.

“This means that while ChatGPT remains the only model with first-party Xcode integration, the underlying support for Anthropic accounts is already in place, hinting that Claude integration could arrive sooner rather than later,” Mendes wrote on Monday. “To be clear, while developers have been able to plug in Claude via API, Apple seems to be moving toward giving Claude the same level of native Xcode integration as ChatGPT got during WWDC 2025.”

As Mendes notes, the Claude (and ChatGPT) integration is a manifestation of Craig Federighi’s comments during this year’s WWDC keynote. The Apple software chief said Apple was “expanding” its vision for Swift Assist. Announced at last year’s WWDC, Swift Assist is described by Mendes as “Apple’s answer to tools like GitHub Copilot: a built-in AI coding companion that would help developers explore frameworks, and write code.”

Crucially, Swift Assist never actually shipped after its announcement last year.

I’m not an app developer, but this Xcode-meets-AI news remains nonetheless fascinating from an accessibility perspective. I’ve long maintained the opinion that, at its best, artificial intelligence plays to computers’ greatest strength: automation. Why this is resonant from an accessibility angle is because, for developers with disabilities, being able to plug into Claude—or ChatGPT, for that matter—to generate code snippets or ask about a particular API could very plausibly make building software a more accessible endeavor. Take my own experience as a code spelunker, for example. It’s an anecdote I’ve shared before, but when I was building Curb Cuts, I used Google Gemini (as a web app-turned-Mac app) to help me with generating bits and bobs of CSS for the site’s backend. While I understand the fundamental elements of writing HTML and CSS, my practical skillset is decidedly at a novice level. More pointedly, I didn’t want to have to juggle a half-dozen tabs in Safari, all with Google searches on how to do certain things correctly. Thus, the allure of AI in this context is obvious: Claude (or whatever) can assist me not only by doing the research, but also by generating the necessary code. Not only is this automation convenient, it’s accessibility too because it saves me the cognitive/motor/visual friction of finding answers, writing the code, and so on. As I said earlier, using AI in this way is more than cool or convenient; it’s a de-facto accessibility feature. The argument for AI in Xcode is essentially the same as for another Apple property, Shortcuts. Like Shortcuts, AI in Xcode takes what may well be multi-step tasks and consolidates them into a single step. Again, the big idea here is that leveraging AI plays to a computer’s greatest strength in automation. I wrote about Shortcuts and accessibility for MacStories.

(Cool postscript to that old story. It contains what’s perhaps the greatest single piece of copy I’ve ever written: “To paraphrase Kendrick Lamar, Shortcuts got accessibility in its DNA.” The sentiment is a reference to Shortcuts’ heritage; Workflow, which Apple acquired in 2017, won an Apple Design Award for accessibility two years prior at WWDC 2015. I interviewed the Workflow team about the app for TechCrunch a decade ago.)

Anyway, that Apple is reportedly preparing Xcode for Claude is yet another example of AI’s genuine good vis-a-vis accessibility. The nerds amongst us just think it’s cool because AI is the technology du jour, and it is, but accessibility matters a helluva lot too.

Steven Aquino

Ending Mail-In Voting Is a Canary in the Coal Mine

President Trump took to Truth Social on Monday to post a screed about the evils of absentee voting. He writes, in part, he plans to “lead a movement” by way of executive order to abolish mail-in voting for the 2026 midterm elections. Trump falsely claims the United States is “the only country in the world” using mail-in ballots, adding other countries gave them up due to the “massive voter fraud” they encountered. In essence, he calls mail-in voting a “scam” and a “hoax” favored by Democrats to steal elections.

Legality and logistics aside, Trump’s viewpoint is an affront to accessibility.

Trump’s viewpoint doesn’t take into account the cruciality of mail-in ballots to the democratic process. While it’s true most Election Day coverage, whether local, state, or national, is augmented by umpteen live shots of reporters at in-person polling places, the truth of the matter is those voters aren’t representative of the total electorate. The fact is, not every civically-inclined citizen has the ability to venture out of their home to the nearest neighborhood polling place to exercise their civic duty. There are myriad conditions which preclude many in the disability community from leaving their homes very regularly—if they can at all. Especially for those who are immobile and/or otherwise homebound, mail-in ballots are an assistive technology. They’re a lifeline to the world. Disabled people (yours truly included) do vote. Voting machines are being made more accessible. Never mind Trump’s obvious partisan political stance; the reality is he (and his enabling Republican cronies) will further restrict disabled people’s access to our voting rights. Boy, being even more disenfranchised sure is fun!

Along with the cuts to Medicaid and SNAP funding, the White House is showing yet another sign of aggression and disdain for the livelihoods of disabled people. They see mail-in ballots as a conduit to cheating rather than as an avenue towards accessibility. They crow about the American people voting for Trump, yet they want to take away a critical tool for how he gained his second term. It makes zero sense whatsoever, but makes one thing abundantly clear: Trump and team give no shits about the disability community. Were it feasible, I’d bet modern Republicans would push to institutionalize us as we were a century ago. To them, we’re ostensibly worthless freeloaders who contribute little, if anything, of value to the betterment of society. Why should they cater to our needs and our voices vis-a-vis voting? Put another way, it’s an overt show of otherism.

Trump’s attack on absentee voting reinforces society’s contempt for disabled people.
