Google Translate Receives Live Translation, Language Learning Features In Recent Update
Abner Li reports for 9to5 Google this week Google has updated its Google Translate app with a live translation feature, as well as a new Duolingo-like language learning feature.
The real-time translation feature obviously uses Gemini models.
"The new ‘Live translate’ capability lets you have a ‘back-and-forth conversation in real time with audio and on-screen translations,’” Li wrote last week in describing Google Translate’s new functionality. “After launching the new interface, select the languages you want, with 70 supported: Arabic, French, Hindi, Korean, Spanish, Tamil, and more. With a thread-based interface, Google Translate will ‘smoothly’ switch between the pairing by ‘intelligently identifying conversational pauses,’ as well as accents and intonations. Meanwhile, advanced voice and speech recognition models have been trained to isolate sounds and work in noisy, real-world environments.”
The live translation feature is available on iOS and Android to users in the United States, India, and Mexico. Google has posted a video to YouTube showing off the functionality.
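Google hasn’t detailed how its pause detection works under the hood, but the core idea is easy to sketch. The following Swift snippet—a naive illustration of my own, emphatically not Google’s method, which also weighs accents and intonation—flags a conversational pause once enough consecutive audio frames fall below an energy threshold:

    import Foundation

    // Naive illustration of "identifying conversational pauses": flag a
    // pause once enough consecutive audio frames fall below an energy
    // threshold. Google's models are far more sophisticated; this only
    // sketches the core idea.
    func rms(_ frame: [Float]) -> Float {
        guard !frame.isEmpty else { return 0 }
        return (frame.reduce(0) { $0 + $1 * $1 } / Float(frame.count)).squareRoot()
    }

    // Returns true once `minSilentFrames` consecutive frames are "silent."
    func containsPause(_ frames: [[Float]], threshold: Float = 0.01,
                       minSilentFrames: Int = 25) -> Bool {
        var run = 0
        for frame in frames {
            run = rms(frame) < threshold ? run + 1 : 0
            if run >= minSilentFrames { return true }
        }
        return false
    }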
From an accessibility perspective, live translation is helpful insofar as people who are visual learners—essentially the inverse of the audio version of Google Docs discussed below—can follow along with the scrolling text in the Google Translate app as someone is speaking. The bimodal sensory input could make conversations in a foreign language more accessible in two respects: linguistically and in terms of disability. What’s more, there are many neurodivergent people for whom reading a textual version of someone’s words is more accessible than aurally comprehending an unfamiliar language. Put another way, that modern smartphones are powerful enough to generate real-time translations means more accessible and comprehensible conversations; a person needn’t look at a guidebook or stumble through words or use gestures, which can be socially awkward.
The same argument applies to the language learning feature in Google Translate. To wit, Google has smartly provided options for listening and speaking words, phrases, and/or sentences such that a person can choose the modality that is most accessible to them depending on their needs and learning style(s). They’re de-facto accessibility features.
Google Docs Gets New Text-to-Speech Feature
Earlier this month, Abner Li reported for 9to5 Google Gemini has gained the ability to read the contents of Google Docs with a new AI-based text-to-speech feature. The functionality arrives a few months after it was announced by Google back in early April.
“On the web, go to the Tools menu [in Google Docs] for a new ‘Audio’ option in-between Voice typing and Gemini. Tapping ‘Listen to this tab’ will open a pill-shaped player, with the duration noted. You can move this floating window anywhere on the screen,” Li wrote of how the audio feature works. “Besides play/pause and a scrubber, available controls include playback speed and changing the ‘clear, natural-sounding voices.’”
Li adds editors are able to “add an audio button anywhere in the document.”
Li hits on the accessibility angle to this recent news, quoting Google’s rationale that using the audio feature can be beneficial for “[wanting] to hear your content out loud, absorb information better while reading, or help catch errors in your writing.” Indeed, that Google Docs—which I hate but know is immensely popular—can be read aloud should be a real boon for people who may retain information better as aural learners. Likewise, it’s also true many neurodivergent people find spoken material more accessible to grasp than merely reading the textual version. Relatedly, I’ve noticed an increasing number of news outlets, namely Forbes and The Verge, including a button on a story’s page to turn it into an audio version. It effectively turns the article into an audiobook or podcast, the former of which has roots in the disability community. Given the mainstream popularity of both mediums, audio-based versions of Google Docs—even if limited to a particular section of the overall document—make perfect sense; it’s smart for Google to lean in strongly on audio production vis-a-vis AI.
The Google Docs audio feature is available only in English and on the web for now.
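Google hasn’t said much about its text-to-speech stack, but for a sense of what programmatic speech synthesis looks like at its simplest, here’s a minimal sketch using Apple’s AVSpeechSynthesizer—chosen purely for illustration, not what Docs actually uses. Note the rate property, the programmatic analog of the player’s playback-speed control:

    import AVFoundation

    // Minimal text-to-speech sketch using Apple's AVSpeechSynthesizer—
    // purely illustrative, not Google's AI-voice stack.
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: "Listening to this tab.")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9  // slightly slower playback
    synthesizer.speak(utterance)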
News of the Google Docs enhancement is complemented by news Microsoft Excel has gained support for Copilot AI to assist in filling out information for users’ spreadsheets.
A Bit About the Cracker Barrel Controversy
I was reading my friend John Gruber’s post about the Cracker Barrel controversy just now when it occurred to me there’s an accessibility angle to Cracker Barrel. It isn’t pertinent to the brouhaha itself—that the company’s rebrand is supposedly “woke”—but rather to the experience of eating at a Cracker Barrel restaurant as a Blind person.
Back in December 2014, when my journalistic career was nascent, I took my first-ever big trip with my partner and her mother. We flew cross-country to Florida, in part to visit the tourist traps in Disney World and Kennedy Space Center, as well as some friends of theirs who lived in the “Sponge Capital of the World” known as Tarpon Springs. The trip was also memorable for being only the second time since 2002 that I flew in an airplane and slept in a hotel. (Take a quick glance at my Flighty history and you’ll notice I’ve done much more flying in the years since. Last year alone, I flew 17 times! This year? So far, 0.)
Anyway, about Cracker Barrel. One night somewhere during the trip, we had dinner there. Upon sitting down at our table, I was especially delighted to discover Cracker Barrel offered a large print version of the menu for guests who request one. At the time, my iPhone 6 Plus didn’t have a bespoke Magnifier app as iOS does now; because of this, I was grateful for the large print menu because, obviously, I could actually see what I wanted to order. More pointedly, I was tickled to get the large print menu because no other place had one before. To this day, that Florida Cracker Barrel remains the only restaurant I’ve ever been to that had an option for a more accessible menu. I don’t know if it existed due to a display of empathy or the restaurant’s clientele of older people—maybe both?—but it was damn cool to learn as someone who copes with low vision. I left hoping to see other places offer a large print menu, but alas, never saw another one then or since. I imagine given the power of smartphones nowadays—cf. Magnifier on iOS—the costs incurred with printing a second menu mean it isn’t the most economically prudent path for most eateries (especially small ones) to walk down.
For the record, I’m with Gruber in that Cracker Barrel’s new logo is nicely done.
Revisiting the Original Siri Remote for Apple TV
I was doing some much-needed cleaning in my living room this weekend when I stumbled across my OG Siri Remote. The much-maligned remote for Apple TV came with my first-generation Apple TV 4K (which I also still have), released in September 2017. The Siri Remote looked perfectly good, if dead and a bit dusty. After wiping it off and charging the battery via the Lightning cable attached to my iMac, I decided to pair the old remote with my A15-powered Apple TV in said living room (attached to a 77-inch LG C3 OLED) and take a quick stroll down a technological memory lane. It was an experience I would describe as simultaneously enlightening and enraging—but undeniably fun/nerdy too.
First, the enlightening parts. For one thing, I think I prefer the trackpad on the OG Siri Remote to the trackpad on the new model. It feels easier to swipe and click because there’s more surface area; with the new remote, Apple surrounded the trackpad with a D-pad (which you can actually click, if desired in Settings) and the surface area feels smaller. I know the D-pad acts effectively as an iPod-like “click wheel” for certain actions in tvOS, but for my usage, I like the bigger, standalone trackpad on the original Siri Remote. The other controls on the Siri Remote are fine too, working just as well as their counterparts on the new remote. As a practical matter, I’d gladly trade the new version’s trackpad for that on the old one. It just feels nicer, akin to the Magic Trackpad I use at my desk with my Mac (I’ve switched from being a mouse user to a trackpad user).
Second, the enraging parts. First and foremost, it annoys me to no end that Apple omitted a Mute button on the OG Siri Remote. It was a curious design decision, especially considering most everyone wants to mute their audio at some point or another. Perhaps the company’s design team thought years ago pressing Pause was a good enough proxy, and they weren’t wrong per se, but Pause and Mute are bespoke functions for a reason: they exist to accomplish different tasks. Nonetheless, as someone who uses Mute a lot when, say, my partner wants to talk to me, playing around with the old Siri Remote reminded me how much I loathed not having a proper Mute button to quickly push. Elsewhere, I actually dislike the OG Siri Remote’s thinness. The remote, while svelte and sleek as a physical object, can be hard to hold (for me, at least) because its thin profile makes it such there’s less to grab onto. What’s more, the remote’s aluminum-and-glass composition makes it hard to grip, friction-wise; more often than not, I’ve inadvertently dropped the remote because it’s (a) super thin; and (b) a slippery sucker. As I wrote last week about Google Pixelsnap, hardware accessibility matters—this is the main reason I’m dubious the purported iPhone 17 Air coming next month will be right for me. To wit, as with the OG Siri Remote, while I’m appreciative of cool industrial design, the truth is thinness can be nothing more than a parlor trick for people (like me) who have lower-than-average muscle tone. The aforementioned iPhone Air, as with the old Siri Remote, could be less accessible to carry and hold because, again, there’s less material to grab onto. It’s for this reason I insist on using cases with the thicker iPhone Pro Max phones: a case, besides adding protection, also crucially adds more friction and grip for my hand(s) to cling onto. On that note, I wonder if those inside Apple have considered such a thing given recent reports the company has contemplated offering an iPhone 4-like “bumper” case for the incoming iPhone Air.
A bugaboo for both generations of Siri Remotes: no backlit buttons.
On the whole, I find the new Siri Remote better for accessibility than the old one due to its discrete Mute button and more girth. The trackpad isn’t as nice as the old one’s, but I’ve grown accustomed to it. I hope a “Siri Remote 3” adds a larger trackpad and backlighting—wireless charging would also be helpful—but I’m happy to keep the OG remote around for emergencies. tvOS still supports it and it works great—if you like it.
As Apple TV+ Gets Pricier, Its Focus on Furthering Disability Representation Makes It Worth the Cost
Benjamin Mayo reported for 9to5 Mac earlier this week Apple has raised the price of Apple TV+ to $13 per month in the United States and “some international markets.” The price hike comes from the previous $10 a month. TV+ cost $5 per month at the start.
The yearly subscription price of $99, as well as Apple One pricing, remains the same.
Mayo notes Apple said in a statement TV+ has grown its roster of original content since the service’s launch in November 2019, adding Apple One is “the easiest way to enjoy all of Apple’s subscription services in one plan at the best value.” (As an Apple One subscriber myself, that sentiment isn’t the least bit blustery; it really is a good deal.)
Although $13/month is on the expensive side (if you don’t get Apple One, anyway), I’d argue the money is well worth it—especially if you, like yours truly, are interested in seeing the disability community earnestly and genuinely represented in film and television. As I’ve said numerous times, it’s extremely noteworthy how Apple took its product-focused ethos on accessibility and adapted it for Hollywood. That the company chose to invest in, say, Deaf President Now and its history-making director is a direct reflection of the company’s empathy for the empowerment of disabled people vis-a-vis technology. Of course Apple’s motives aren’t entirely altruistic, but as with its software, that it has imbued TV+ with similar sensibilities is important considering the historical portrayal of disabled people in Hollywood—not to mention society’s viewpoint writ large. Moreover, while it’s valid to criticize TV+ for having a lackluster back catalog of licensed content, a cogent argument could be made that Apple has assembled the deepest roster of disability-forward storytelling of anyone in the game. And that includes Netflix, what with titles like Deaf U and All the Light We Cannot See. TV+ is popular because of Ted Lasso and Severance, and rightfully so, but it nonetheless shouldn’t go unnoticed how impactful something like Deaf President Now is on its own.
I’ve covered most, if not all, of these disability-centric shows on TV+ in the past:
Best Foot Forward
CODA
El Deafo
Life By Ella
Little Voice
See
Even if you dislike their entertainment value, all these shows deserve acclaim for their representational gains. They shouldn’t be disregarded simply for being “bad” shows.
Apple TV+ is much more than merely The Severance Service—and I love that show.
Nyle DiMarco Makes History as First Deaf Director Nominated for an Emmy for ‘Deaf President Now’
Brande Victorian at The Hollywood Reporter has a story on the site this week which features interviews with Nyle DiMarco and Davis Guggenheim, the two men who are primarily responsible for Deaf President Now. The Apple TV+ documentary, which was released onto the streaming service in mid-May, chronicles the events of the so-called “DPN4”—Tim Rarus, Bridgetta Bourne-Firl, Greg Hlibok, and Jerry Covell—who, in March 1988, spearheaded a protest over the selection of yet another hearing person, Elizabeth Zinser, over a Deaf candidate to be president of Gallaudet University. The institution, established during the Civil War and based in Washington, D.C., is the world’s only college which caters (almost) exclusively to the Deaf and hard-of-hearing community.
(I say “almost” because Gallaudet does admit a small number of hearing students.)
Victorian’s piece is timely, as DiMarco has made history by becoming the first Deaf director to receive an Emmy nomination for Deaf President Now. Moreover, the film itself has been nominated for Outstanding Documentary. As far as talking points go, they’re pretty much exactly what I covered when I interviewed DiMarco and Guggenheim about Deaf President Now back in early May. To wit, both men told Victorian what they said to me: the film’s essence is truly about centering the Deaf experience and point of view. The DPN protest, as it’s colloquially known, is a part of Deaf history that’s well known to the community—but not to the wider, hearing world.
I’ve covered Gallaudet at close range over the last several years, including writing about its football team and, perhaps most apropos in context of DiMarco’s triumph in bringing Deaf President Now to life, profiling its current president Roberta Cordano back in 2022.
I. King Jordan became Gallaudet’s first Deaf president following the DPN protest.
‘Pixelsnap’ Makes Google’s Pixel More Accessible
“Imitation is the sincerest form of flattery” is how the old adage goes.
That’s the thought that immediately sprang to mind when I learned about Google’s announcement of Pixelsnap at this week’s “Made By Google” event in New York City, where the company announced the Pixel 10 and related devices. Pixelsnap is not at all hard to understand: it’s essentially Apple’s MagSafe technology, just optimized for Android. Pixelsnap supports Qi2, as well as the standard’s Magnetic Power Profile.
While it’s commonplace for iOS and Android fans to derisively mock one another for being late to the proverbial party, feature-wise, the reality is the “partisanship” is misplaced and, frankly, childish. As a practical matter, the advent of Pixelsnap on the new Pixel 10 phones is a huge win for accessibility. The argument is the exact same as it is for MagSafe; to wit, that both use magnets for charging means more accessible alignment and less fiddling with a USB-C cable. As I’ve written innumerable times, the genius of implementing magnets into the iPhone (and now Pixel) transcends sheer convenience because the truth is not everyone can so easily charge their phone. For disabled people with any amalgamation of cognitive/motor/visual conditions, even the ostensibly mundane act of plugging a USB-C cable into the phone’s port can be arduous and inaccessible. Moreover, it’s not easy for every person to successfully navigate even wireless charging, as missing the so-called “sweet spot” can make someone think their phone is charging when it isn’t. Put simply, charging is not something to take for granted—not everyone has the best hand-eye coordination.
That Google has introduced Pixelsnap as a new feature needn’t be ridiculed. Nobody really and truly cares Apple beat Google by introducing MagSafe back in 2020 with the iPhone 12 lineup. The salient point should be how Pixelsnap makes Google’s phones more accessible to Android users. Hardware accessibility matters too—an idea which becomes more prominent with the purported super-thin iPhone 17 Air as well as every foldable phone—and technologies like MagSafe and Pixelsnap exemplify that ideal. The sniggering over innovation only amplifies the noise and obscures what really matters.
The “Made By Google” event is available to watch on YouTube.
Apple Releases ‘No Frame Missed’ Short Film
Apple on Wednesday posted a new short film to YouTube called No Frame Missed. The 5-minute video (embedded below) tells the stories of three people who cope with Parkinson’s disease and how they’ve been empowered to capture precious moments using iPhone accessibility features like Action Mode. Features such as Action Mode and Voice Control provide essential functionality to disabled people who deal with tremors that destabilize video and make using the iPhone’s touchscreen tough—people like filmmaker Brett Harvey, featured in the film, who was diagnosed with Parkinson’s at just 37 years old.
Action Mode debuted with the advent of 2022’s iPhone 14 lineup.
No Frame Missed isn’t the first time Apple has created material around Parkinson’s. To wit, my pal Zac Hall at 9to5 Mac writes today about the film, noting the Apple TV+ series Shrinking, which stars Harrison Ford, sees Ford’s character coping with the condition. Moreover, Hall notes Season 3 of the show is set to feature Michael J. Fox, who’s lived with the disease since the ‘90s. For his part, Fox chronicles his own journey with Parkinson’s in another TV+ title called Still. The documentary, released in 2023, received seven Emmy nominations, winning four—including one for Outstanding Documentary.
Harvey’s made a documentary of his own, an almost 9-minute affair called Hand.
Elsewhere, I reported in 2023 about Apple Watch making “unmistakable progress” in helping identify Parkinson’s and offering patients a “glimmer of hope.” I also reported in 2022 about the Parkinson’s Foundation’s website getting a substantial redesign.
As John Gruber writes in linking to No Frame Missed, stuff like this shows Apple at its very best. While it’s obvious the company’s motivations aren’t entirely altruistic in nature—it’s a for-profit corporation after all, so this is part marketing exercise—the salient message is that accessibility features enable disabled people to enjoy technology like anyone else. Accessibility truly is a core value at the company, and No Frame Missed is a refreshing palate-cleanser to the less savory dishes appearing on Apple’s plate of late.
Xcode Getting Claude AI Integration, Report Says
Marcus Mendes reports for 9to5 Mac this week Apple is readying Anthropic’s Claude AI system to natively integrate with Xcode. Mendes writes there are “multiple references” to Anthropic accounts found within Xcode 26 Beta 7, released to developers on Monday.
Specifically, there are mentions of Claude Sonnet 4.0 and Claude Opus 4 in Beta 7.
“This means that while ChatGPT remains the only model with first-party Xcode integration, the underlying support for Anthropic accounts is already in place, hinting that Claude integration could arrive sooner rather than later,” Mendes wrote on Monday. “To be clear, while developers have been able to plug in Claude via API, Apple seems to be moving toward giving Claude the same level of native Xcode integration as ChatGPT got during WWDC 2025.”
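For context on what “plug in Claude via API” means in practice, here’s a rough sketch of a raw call to Anthropic’s public Messages endpoint. The model alias and prompt are illustrative assumptions on my part, and none of this reflects Apple’s actual integration:

    import Foundation

    // Rough sketch of calling Anthropic's Messages API directly—what
    // "plugging in Claude via API" means absent native Xcode support.
    // The model alias and prompt are illustrative, not Apple's wiring.
    let apiKey = ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? ""
    var request = URLRequest(url: URL(string: "https://api.anthropic.com/v1/messages")!)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "x-api-key")
    request.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "claude-sonnet-4-0",  // illustrative model alias
        "max_tokens": 512,
        "messages": [["role": "user",
                      "content": "Generate a SwiftUI view with a rounded blue button."]]
    ]
    request.httpBody = try! JSONSerialization.data(withJSONObject: body)

    let done = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, _, error in
        if let data { print(String(data: data, encoding: .utf8) ?? "") }
        else if let error { print("Request failed:", error) }
        done.signal()
    }.resume()
    done.wait()  // keep this command-line sketch alive for the response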
As Mendes notes, the Claude (and ChatGPT) integration is a manifestation of Craig Federighi’s comments during this year’s WWDC keynote. The Apple software chief said Apple was “expanding” its vision for Swift Assist. Announced at last year’s WWDC, Swift Assist is described by Mendes as “Apple’s answer to tools like GitHub Copilot: a built-in AI coding companion that would help developers explore frameworks, and write code.”
Crucially, Swift Assist never actually shipped after its announcement last year.
I’m not an app developer, but this Xcode-meets-AI news remains nonetheless fascinating from an accessibility perspective. I’ve long maintained the opinion that, at its best, artificial intelligence plays to computers’ greatest strength: automation. Why this is resonant from an accessibility angle is because, for developers with disabilities, being able to plug into Claude—or ChatGPT, for that matter—to generate code snippets or ask about a particular API could very plausibly make building software a more accessible endeavor. Take my own experience as a code spelunker, for example. It’s an anecdote I’ve shared before, but when I was building Curb Cuts, I used Google Gemini (as a web app-turned-Mac app) to help me with generating bits and bobs of CSS for the site. While I understand the fundamental elements of writing HTML and CSS, my practical skillset is decidedly at a novice level. More pointedly, I didn’t want to have to juggle a half-dozen tabs in Safari, all with Google searches on how to do certain things correctly. Thus, the allure of AI in this context is obvious: Claude (or whatever) can assist me by not only doing the research, but by generating the necessary code. Not only is this automation convenient, it’s accessibility too because it saves me the cognitive/motor/visual friction of finding answers, writing the code, and so on. As I said earlier, using AI in this way is more than cool or convenient; it’s a de-facto accessibility feature. The argument for AI in Xcode is essentially the same as for another Apple property, Shortcuts. Like Shortcuts, AI in Xcode takes what may well be multi-step tasks and consolidates them into a single step. Again, the big idea here is that leveraging AI plays to a computer’s greatest strength in automation. I wrote about Shortcuts and accessibility for MacStories.
(Cool postscript to that old story. It contains what’s perhaps the greatest single piece of copy I’ve ever written: “To paraphrase Kendrick Lamar, Shortcuts got accessibility in its DNA.” The sentiment is a reference to Shortcuts’ heritage; Workflow, which Apple acquired in 2017, won an Apple Design Award for accessibility two years prior at WWDC 2015. I interviewed the Workflow team about the app for TechCrunch a decade ago.)
Anyway, that Apple is reportedly preparing Xcode for Claude is yet another example of AI’s genuine good vis-a-vis accessibility. The nerds amongst us just think it’s cool because AI is the technology du jour, and it is, but accessibility matters a helluva lot too.
Ending Mail-In Voting Is a Canary in the Coal Mine
President Trump took to Truth Social on Monday to post a screed about the evils of absentee voting. He writes, in part, he plans to “lead a movement” by way of executive order to abolish mail-in voting for the 2026 midterm elections. Trump falsely claims the United States is “the only country in the world” using mail-in ballots, adding other countries gave them up due to the “massive voter fraud” they encountered. In essence, he calls mail-in voting a “scam” and a “hoax” favored by Democrats to steal elections.
Legality and logistics aside, Trump’s viewpoint is an affront to accessibility.
Trump’s viewpoint doesn’t take into account the cruciality of mail-in ballots to the democratic process. While it’s true most Election Day coverage, whether local, state, or national, is augmented by umpteen live shots of reporters at in-person polling places, those voters aren’t representative of the total electorate. The truth of the matter is, not every civically-inclined citizen has the ability to venture out of their home to the nearest neighborhood polling place to exercise their civic duty. There are myriad conditions which preclude many in the disability community from leaving their homes very regularly—if they can at all. Especially for those who are immobile and/or otherwise homebound, mail-in ballots are an assistive technology. They’re a lifeline to the world. Disabled people (yours truly included) do vote. Voting machines are being made more accessible. Never mind Trump’s obvious partisan political stance; the reality is he (and his enabling Republican cronies) will further restrict disabled people’s access to our voting rights. Boy, being even more disenfranchised sure is fun!
Along with the cuts to Medicaid and SNAP funding, the White House is showing yet another sign of aggression and disdain for the livelihoods of disabled people. They see mail-in ballots as a conduit to cheating rather than as an avenue towards accessibility. They crow about the American people voting for Trump, yet they want to take away a critical tool for how he gained his second term. It makes zero sense whatsoever, but makes one thing abundantly clear: Trump and team give no shits about the disability community. Were it feasible, I’d bet modern Republicans would push to institutionalize us as we were a century ago. We’re ostensibly worthless freeloaders who contribute little, if anything, of value to the betterment of society. Why should they cater to our needs and our voices vis-a-vis voting? Put another way, it’s an overt show of otherism.
Trump’s attack on absentee voting reinforces society’s contempt for disabled people.
Amazon Makes Returns More Accessible
Ryan Christoffel at 9to5 Mac wrote late last week about a notable change to Amazon’s iOS app: return codes can be added to Apple Wallet. As someone who returns things to Amazon fairly often, I feel it’s a subtle yet impactful change for greater accessibility.
“Amazon rolled out support for ‘Add to Apple Wallet’ buttons inside its iOS app over the last few weeks. As part of the return process, instead of the standard in-app return code for drop-off locations to scan, you can now opt to save that code to Apple Wallet,” Christoffel said of Amazon’s recent update. “Having the Amazon return code inside Wallet is nice because it shows all the details you’ll need, plus ensures you can pull up the code at just the right time without being reliant on a solid cellular connection.”
I haven’t noticed the “Add to Wallet” buttons, but I’m going to look out for them now.
My process for obtaining Amazon’s QR codes for returns typically involves three steps:
Search Apple Mail for the return email
Tap the link in said message
Keep my phone open to Amazon so as to keep the code handy at Whole Foods
As you probably can surmise, these steps don’t exactly comprise the most streamlined, accessible workflow. (It also doesn’t help that search in the stock Mail app is pretty bad.) By contrast, having the ability to save the Amazon codes is a de-facto accessibility feature insofar as the Wallet app is available even when my phone is locked. It removes the need for my 3-step journey to find the needed information, as well as to traverse the equally bad user interface of the Amazon app. Beyond return codes, Christoffel also notes iOS 26 gives the Wallet app the capability to track Amazon orders. A nice addition, to be sure, but I’ll be sticking with the widget from my preferred package-tracking app, Parcel.
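For the curious, the standard iOS mechanism behind an “Add to Apple Wallet” button is PassKit: the app receives a signed .pkpass bundle from the retailer’s server and presents the system add-pass sheet. A generic sketch of that pattern (not Amazon’s actual code) looks like this:

    import PassKit
    import UIKit

    // Generic "Add to Apple Wallet" flow via PassKit—a sketch of the
    // standard pattern, not Amazon's actual implementation. `passData`
    // would be a signed .pkpass bundle fetched from the retailer.
    func presentAddPass(_ passData: Data, from host: UIViewController) {
        do {
            let pass = try PKPass(data: passData)
            guard let sheet = PKAddPassesViewController(pass: pass) else { return }
            host.present(sheet, animated: true)
        } catch {
            print("Could not parse pass:", error)
        }
    }

Passes saved this way are what make the code reachable from the Lock Screen, no app launch (or cellular signal) required.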
The Tantalizing Tale of the ‘Tensor Robocar’
Andrew J. Hawkins wrote for The Verge this week about a mysterious Bay Area-based autonomous vehicle company called Tensor. The startup, which is headquartered in San Jose, is a self-described “leading agentic AI company” that this week announced what it hails as “the world’s first personally owned autonomous vehicle” in its Tensor Robocar. The company’s conceit is apt in an accessibility context, as Tensor said its overarching goal is to “[empower] individuals to truly own your autonomy.”
“When the world shifts… how will you move?” Amy Luca, Tensor’s chief marketing officer, said in a statement included in the company’s press release. “We are building a world where individuals own their personal AGI agents, enhancing freedom, privacy and autonomy. With Tensor, we’re introducing the world’s first personal Robocar, ushering in the era of AI defined vehicles. This isn’t a car as we know it. It’s an embodied personal agent that moves you. It’s time to own your autonomy.”
According to Hawkins, Tensor is affiliated with autonomous vehicle maker AutoX, which operates both in China and here in the United States. Moreover, Hawkins notes Tensor claims to have offices in Barcelona, Dubai, and Singapore. AutoX has been testing autonomous vehicles in San Jose and the surrounding area since 2016. Tensor is looking to launch in America, Europe, and the Middle East starting sometime next year.
After his story ran, Tensor spokesperson Lena Allen sent Hawkins a statement.
“Since its founding in 2016 in San Jose, AutoX has been an American company. Its new consumer brand, Tensor, is also headquartered in San Jose, California, with satellite offices in Spain, the UAE, and Singapore. Tensor focuses on the US, EU, and the GCC markets. As an independent private California startup, they’re controlled by their U.S. employees, with significant majority investment/ownership from the UK, Japan, Korea and US. In 2018, Auto X launched their autonomous delivery service in San Jose for over 1000 self-driving delivery orders, operating until Covid lockdown. In 2020, AutoX received the second-ever driverless AV testing permit in California,” Allen said in response. “In 2019, AutoX entered the China market as a foreign company in China. We managed to obtain local self-driving test permits alongside with other foreign companies, such as BMW, Tesla, and VW. During the pandemic lockdowns in the U.S., we launched a fully driverless robotaxi fleet in China. However, starting several years ago, AutoX began winding down its China operations; all operations under the AutoX brand in China have been divested, with all offices closed and operations shut down.”
She added: “The AutoX brand and its China operations have been fully discontinued. We have evolved into the Tensor brand to better reflect our renewed focus on delivering personalized, private, and autonomous technology for individual ownership.”
Hawkins’ report caught my attention because of what Tensor is seemingly trying to do: extend ownership of autonomous vehicles to Average Janes and Joes. This is tantalizing in an accessibility sense because, as I’ve argued in the past, autonomous vehicles represent both accessibility’s apex and artificial intelligence’s profound powers. More pointedly, however great services like Waymo, et al., are today, a tomorrow in which a Blind and low vision person (like yours truly) could actually purchase an autonomous vehicle from, say, Tensor, would be literally life-changing. Granted, Waymo operates 24/7, but it isn’t available everywhere just yet; to have my own self-driving car would mean my transportation options wouldn’t rest on the mercies of availability because I could just get in my car and go wherever I wanted, whenever. The reason autonomous vehicles reflect accessibility’s zenith is because the technology empowers disabled people who are precluded from driving regular cars with greater feelings of agency and autonomy. It instills grander feelings of self-esteem and self-worth by giving us the independence so many of us crave in a society where the disability community is more often than not looked down upon with patronizing, paternalistic, and infantilizing attitudes. I can’t sum it up any better than Lana Nieves, executive director of San Francisco’s Independent Living Resource Center, who told me in 2023 she’s bullish on driverless cars because, as an adult, “why shouldn’t I be able to go where I want to go?”
Of course, all is not rosy in this situation. Indeed, there will come a reckoning sooner or later involving legislation, costs, and the notion that people like Nieves and myself should be able to “drive” cars if we can’t see. Nonetheless, it’s heartening to notice fledgling companies like Tensor acknowledge the value of people actually owning the robots-on-wheels they ride in. It gives me hope for a much brighter future in this space.
Blood Oxygen Sensor Returns To U.S. Apple Watches
Apple on Thursday posted a notice to its Newsroom site providing an update on the blood oxygen sensor in Apple Watch. The company says the functionality has returned for users in the United States, adding software updates—iOS 18.6.1 and watchOS 11.6.1, released today—reenable the dormant feature. The workaround comes amidst Apple’s ongoing litigation with a company named Masimo over Apple’s supposed patent infringement regarding the aforementioned pulse oximetry sensor.
“Apple will introduce a redesigned Blood Oxygen feature for some Apple Watch Series 9, Series 10, and Apple Watch Ultra 2 users through an iPhone and Apple Watch software update coming later today,” Apple said in its short announcement shared today. “Users with these models in the U.S. who currently do not have the Blood Oxygen feature will have access to the redesigned Blood Oxygen feature by updating their paired iPhone to iOS 18.6.1 and their Apple Watch to watchOS 11.6.1. Following this update, sensor data from the Blood Oxygen app on Apple Watch will be measured and calculated on the paired iPhone, and results can be viewed in the Respiratory section of the Health app. This update was enabled by a recent U.S. Customs ruling.”
Apple emphasizes the update pertains only to American Apple Watches. It does not affect “units previously purchased that include the original Blood Oxygen feature, nor to Apple Watch units purchased outside of the [United States],” the company said.
John Gruber has posted a good piece on the situation. Notably, he reports a source at Apple said today’s fix is known as “HQ H351038” but is “not yet publicly available” on the Customs and Border Protection’s Customs Rulings Online Search System website.
From an accessibility perspective, the restoration of the Apple Watch’s blood oxygen sensor is notable, as monitoring one’s blood oxygen saturation is key for myriad respiratory issues. Indeed, lung conditions like asthma and pneumonia have the potential to lower blood oxygen levels, as do sleep conditions such as sleep apnea. Coincidentally, Apple added sleep apnea tracking to watchOS 11 last September. The Health app on iPhone received a new metric the company calls “Breathing Disturbances,” and users can track how elevated (or not) their breathing is during the night. The sleep apnea tracking is available on Apple Watch Series 9, Apple Watch Series 10 (which I have but never officially reviewed, alas), and Apple Watch Ultra 2.
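As a sketch of where that data lands programmatically—assuming, as with past Apple Watch recordings, the results surface as HealthKit’s oxygenSaturation type—a third-party app could read recent SpO2 samples like so (entitlements, usage strings, and error handling omitted; purely illustrative):

    import HealthKit

    // Sketch: reading recent blood oxygen (SpO2) samples from HealthKit.
    // Entitlements, Info.plist usage strings, and error handling omitted.
    let store = HKHealthStore()
    guard let spo2 = HKObjectType.quantityType(forIdentifier: .oxygenSaturation) else {
        fatalError("Oxygen saturation type unavailable")
    }

    store.requestAuthorization(toShare: nil, read: [spo2]) { granted, _ in
        guard granted else { return }
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierEndDate,
                                           ascending: false)
        let query = HKSampleQuery(sampleType: spo2, predicate: nil, limit: 10,
                                  sortDescriptors: [newestFirst]) { _, samples, _ in
            for case let sample as HKQuantitySample in samples ?? [] {
                // HealthKit's percent unit is 0.0–1.0, so scale to a percentage.
                let percent = sample.quantity.doubleValue(for: .percent()) * 100
                print("SpO2 \(percent)% at \(sample.endDate)")
            }
        }
        store.execute(query)
    }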
Drive Program Alum Talks Experiencing the Program, Learning to Drive, More in Interview
Last month, I posted a story featuring an interview with Dr. Christina Potter. An academic researcher and experimental psychologist by training, Dr. Potter works as coordinator of the Drive Program run by Miami-based Nicklaus Children’s Hospital. Established in 2023, the Drive Program exists to “prepare neurodiverse individuals for a driving exam” using a virtual reality headset. Dr. Potter, who also serves as Nicklaus Children’s manager of IT and digital technologies, explained to me the “simple but powerful” impetus for the Drive Program was, in part, to “help young people, especially those who face challenges like autism, anxiety, or ADHD, to gain the confidence and skills they need to become safe and independent drivers.” The core problem the Drive Program sought to solve, she added, was conventional driving schools aren’t conducive to the needs of neurodivergent people, saying the schools “don’t offer the flexibility or patience or support that [neurodivergent people] really need to succeed.”
“We saw an opportunity to fill that gap in a way that aligned with our mission at Nicklaus Children’s,” Dr. Potter said.
Fast-forward to this past week: I sat down for a brief interview with a young woman named Anna Mariani. Mariani, 24, is an alumna of the Drive Program, having gone through it herself a few years ago. When asked about her experiences in the Drive Program, she explained the thing she appreciated most about it was its slow pace; “I could do things in my own time… it was well-explained, all the things that you needed to do while driving and paying attention to all the things [on the road],” Mariani said.
“It was good for me to practice being in the car,” she added.
Mariani said she first learned of the Drive Program through CARD, or the Center for Autism and Related Disabilities, managed by the University of Miami. She described CARD as a program which offers services to people in the neurodivergent community, telling me it was they who recommended the Drive Program “to learn how to drive.”
Mariani was effusive in her praise of the Drive Program.
“I think [the Drive Program] helps because the teachers and instructors were really patient with me… I was able to be coached into what I needed to do,” she said. “Also, the virtual reality aspect was really good because it helped me feel like I was actually in the car. So when I got in the [actual] car, it felt more natural and that helped me feel more confident when I was actually driving the car.”
For Mariani, the Drive Program helped her best prepare for driving independently.
“With practice, it feels a lot more comfortable,” she said. “At first it was [a] little scary, but then I started doing it more, and now I’m more comfortable driving in a car in real life.”
Mariani went on to say she highly recommends the Drive Program to anyone who may benefit from it, adding the Program’s staffers were well-trained and invested in helping her learn. The Program overall, she added, is “really advanced.” Mariani noted she has encouraged a friend to enroll in the Drive Program and hopes they do so “soon.”
“I definitely think other people should give it a try if they’re nervous or they don’t know where to start when it comes to driving,” Mariani said in endorsing the Drive Program. “I think this is a good place to start [helping] others feel more comfortable.”
Airbnb Announces ‘Reserve Now, Pay Later’ Service
San Francisco-based Airbnb on Thursday announced a new payment program it calls “Reserve Now, Pay Later,” whereby users can defer payments for upcoming reservations. The company says Reserve Now, Pay Later affords guests “greater flexibility” by allowing them to put $0 down upfront on all domestic bookings.
I learned of Airbnb’s initiative in a post on X by my friend Natalie Lung of Bloomberg.
“Available for listings with a moderate or flexible cancellation policy, guests don’t need to pay the full amount until shortly before the end of the listing’s free cancellation period. Cancellation policies selected by hosts remain unchanged, and because the payment from guests is always due before the free cancellation period ends, hosts have time to secure another booking even if a guest cancels,” Airbnb wrote in describing Reserve Now, Pay Later in its announcement. “This feature comes as new data reveals that today’s travelers are seeking more flexibility when it comes to booking a stay, particularly a group trip that requires arranging funds with friends or family.”
Notably, Airbnb mentions results of a survey of American travelers it conducted with Focaldata. Airbnb said 55% of respondents indicated they take advantage of flexible payment options, while 10% reported always opting for such services. Similarly, 42% said they have chosen to “[delay and miss out] on their preferred accommodation option because of time spent coordinating how to pay for their trip with co-travelers.”
As with laptops, the foundational piece of this news from Airbnb is accessibility. I’ve covered the company extensively over the last five years or so, having interviewed numerous executives there, and the reality is the new Reserve Now, Pay Later service is yet another part of Airbnb’s work in accessibility. Granted, it isn’t expressly or overtly designed for the disability community’s sake. The truth is, however, as with Walmart’s discounted $600 M1 MacBook Air I wrote about yesterday, most disabled people are extremely, perpetually budget-conscious. The majority of us don’t make much money, so anything we can do to save a few bucks here and there is appreciated both for peace of mind and for our pocketbooks. In Airbnb’s case, that a disabled person can delay payment on a reservation makes travel far more accessible than aspirational. Better still, people with disabilities can utilize the accessibility features Airbnb has empowered its hosts to offer guests. Although Airbnb positions Reserve Now, Pay Later as a measure of convenience for the mainstream, the fact of the matter remains accessibility, as ever, plays a central role in shaping its relevance and appeal.
Walmart Makes M1 MacBook Air More Accessible
Joe Rossignol reports today for MacRumors Walmart has begun selling the dearly beloved M1 MacBook Air for the low price (for MacBooks) of $599. The deal is for the laptop’s base configuration of 8GB RAM and a 256GB SSD in gold, silver, or space gray.
“In case you missed it, Walmart is currently offering the older but still very capable MacBook Air with the M1 chip for just $599 in the United States,” Rossignol wrote of the deal on Wednesday. “It seems like this deal began around Amazon’s four-day Prime Day event in early July, but it flew under our radar until a reader let us know about it today.”
As Rossignol notes, Apple discontinued the M1 Air last year when it added the then-new M3 models. Walmart announced it would carry the M1 Air (at $699) back in March 2024.
My reasoning for covering this news is, as ever, accessibility—quite literally. As Rossignol also notes, although the M1 chip is getting long in the tooth by technological standards—the M5 generation of Apple silicon is said to be on its way—the chip remains more than serviceable for everyday tasks like email, web browsing, word processing, and even light photo editing. From an accessibility standpoint, the value proposition of Walmart’s $600 MacBook Air is stratospheric; budget-conscious buyers, a lot that includes most people with disabilities, get a modern, eminently capable computer that’s small and lightweight to boot. For those who can’t afford the current (and admittedly better) $999 M4 Air, the M1 variety is, again, a veritable steal for hundreds of dollars less. Eventually, Apple’s M1 chip will be outmoded and obsolete—but that day assuredly is years away. Right now, today, it could cogently be argued the “low end” M1 MacBook Air is Apple’s most accessible Mac, and in more ways than one. In other words, for those who prefer macOS to the Mac-like iPadOS 26—more on that from me soon—the inexpensive M1 MacBook Air is a revelation.
News of the $600 Air comes amid rumors Apple is preparing a “real” low-cost MacBook powered by the A18 Pro chip that’s sitting inside my iPhone 16 Pro Max. The device is purported to come out either late this year or early next, according to multiple sources.
The M1 MacBook Air is available on Walmart’s website.
Redesigned Netflix App Rolling Out to Apple TV
Ryan Christoffel reports for 9to5 Mac today Netflix has begun rolling out its redesigned app to Apple TV 4K users. The news comes months after the Bay Area-based company announced the design overhaul in May, during which chief product officer Eunice Kim described the new Netflix experience as “still the one you know and love—just better.”
“As spotted by users on Reddit, the new design seems to have launched with the latest tvOS app update,” Christoffel wrote on Wednesday. “If you’re not seeing it yet, make sure you’re running the latest version of the Netflix app.”
I got the design on the 2021 A12-powered Apple TV (running tvOS 18.6) in my office.
I covered news of the new UI when Netflix announced it, having attended a virtual briefing with the company a few days beforehand. As I wrote at the time, the design looks good—there’s a video on YouTube about it—and should prove to be more accessible than the old interface. I won’t rehash my thoughts on it here, but suffice it to say it feels like a win for accessibility; in the couple minutes I spent noodling around the new app prior to writing this story, I enjoyed it very much. At the very least, it’s a much prettier design than the one I was literally using yesterday. As I also said in the spring, Netflix’s new design is conceptually akin to what Amazon did to Prime Video a year ago.
AirPods Reportedly Getting Live Translation Gesture
Marcus Mendes reports for 9to5 Mac this week a bit of new UI spotted in iOS 26 Beta 6, which was released to developers on Monday, suggests Apple is planning to enable real-time translation of live conversations on AirPods. The finding comes after the company announced live translations for FaceTime calls and more at WWDC in June.
“In today’s iOS 26 developer beta 6, we spotted a new system asset that appears to depict a gesture triggered by pressing both AirPods stems at once,” Mendes wrote of the new finding. “The image displays text in English, Portuguese, French, and German, and it is associated with the Translate app. For now, we can confirm it’s associated specifically with the AirPods Pro (2nd generation) and AirPods (4th generation).”
Mendes (rightly) notes using AirPods for translative purposes is “right up the wearable wheelhouse” for products like AirPods and Meta’s Ray-Bans. Indeed, from an accessibility standpoint, using earbuds (or glasses) for translation can be not only more discreet in appearance, but also more accessible in terms of not having to look at, say, Apple’s built-in Translate app while holding it. Such a dance may be hard, if not outright impossible, for those with suboptimal hand-eye coordination. Likewise, it’s highly plausible things like languages are more intelligible for people who are auditory learners or perhaps are neurodivergent. Whatever the case, Mendes is, again, exactly right to posit using wearables for translation is a perfect use case for the technology. Moreover, Mendes is also reasonable in his speculation this feature may have been kept under wraps because Apple plans to make it part of the iPhone 17 software story.
On a related topic, that AirPods are purported to gain a new gesture serves as a good reminder to give a brief shoutout to another AirPods gesture: the head gestures for accepting or declining calls. Much to my chagrin, I get a ton of spam calls every day, which I normally ignore and let go to voicemail. When I’m wearing my AirPods, however, the aforementioned head gestures act as a de-facto accessibility feature; instead of reaching for my phone to tap a button, I can merely shake my head to send those spam calls away. To use natural, nigh universally understood methods of nonverbal communication in this manner is genius—and it’s accessible too. Rather than search the abyss of my pocket(s) to hurriedly find my phone and take action on an incoming call, I easily can nod or shake my head as necessary. It’s undoubtedly convenient, as well as technically cool, but it’s also accessibility. Using head gestures to act on phone calls alleviates a helluva lot of friction associated with using my phone for that.
Yet one more reason to choose AirPods over something like my Beats Studio Buds+.
Google Gives Gemini New ‘Guided Learning’ Mode
Not to be outdone by OpenAI and ChatGPT, Google has given Gemini a new “Guided Learning” mode. The news came earlier this week from Jay Peters at The Verge.
CEO Sundar Pichai detailed Guided Learning in a post for Google’s Keyword blog.
“Answers from the Guided Learning mode can include things like images, videos, and interactive quizzes,” Peters said in his story. “The company worked with students, educators, researchers, and learning experts to ensure the mode is ‘helpful for understanding new concepts and is backed by learning science,’ according to Pichai.”
Google’s conceit with Guided Learning is similar to OpenAI’s insofar as the goal is not to hand answers to students as though Gemini were a highfalutin answer key. Rather, Peters’ dek says the goal is much more pedagogical: Guided Learning aims to “[help] you work through problems” instead of unhelpfully giving you the answers. From an accessibility perspective, the conceit of Gemini’s Guided Learning and ChatGPT’s Study Mode is the same in that both can be counted on to present information in a single space. This can be helpful for people with various cognitive disabilities, for whom keeping track of myriad aids such as flashcards can, somewhat counterintuitively, become problematic. Chatbots can coalesce lots of information.
Once more I say, chatbots are more useful than merely being conduits for cheating by disengaged students. Study-oriented features can make learning more accessible.
Guided Learning comes amid OpenAI’s high-profile announcement of its newest model, called GPT-5. CEO Sam Altman described it as “the smartest model we’ve ever done.”
‘Ode to the EarPods’
Basic Apple Guy, purveyor of well-made wallpaper, likes Apple’s wired earphones.
“Don’t get me wrong, I am still very much on team AirPods, but I have increasingly found use cases and situations where having a pair of good olde wired EarPods has proven quite useful,” he wrote in a new blog post. “They don’t need charging, they work with just about anything, and they’ve quietly aged into a little slice of tech nostalgia.”
Prior to 2016, when AirPods debuted alongside the iPhone 7 and 7 Plus, I spent many, many years using various incarnations of Apple-branded earphones. From the 30-pin iPod connector era to the 3.5mm jack to Lightning, I’ve used them all across various iPods and iPhones. A major reason I found AirPods so revelatory almost a decade ago (!) lies in their cord-free nature. However long I used Apple’s cabled earphones, the biggest frustration with them, accessibility-wise, was untangling the cord. The image of a rat’s nest cable Basic Apple Guy included in his piece is scary enough for Halloween; I tried so hard to keep the cord untangled, mostly without success. My hand-eye coordination is bad enough that I’d spend what felt like eons trying to untangle the cable, a task which always involved the most colorful expletives known to humankind. Thus, the advent of AirPods freed me from such torture. From a practical perspective, I also agree with Basic Apple Guy’s fondness for the EarPods’ remote. While the gestures/stem control on AirPods is fine, I’ve never particularly enjoyed the sensation of pressing or swiping close to my ear. I tolerate it, but it’s sensory input that doesn’t at all feel good.
Between my slew of AirPods in my office, all of which span myriad generations and surnames, and my Beats Studio Buds+, I’ve surely no shortage of wireless earbuds to use; if something happens to one pair, I easily can reach for a backup. For travel purposes, however, I’ve made a point to have a set of the $19 USB-C EarPods as an emergency earphone solution in case my AirPods die or, worse, get lost or stolen.
The EarPods are an inexpensive safety blanket—a great addition to my tech travel kit.