
Drive Program Alum Talks Experiencing the Program, Learning to Drive, More in Interview

Last month, I posted a story featuring an interview with Dr. Christina Potter. An academic researcher and experimental psychologist by training, Dr. Potter works as coordinator of the Drive Program run by Miami-based Nicklaus Children’s Hospital. Established in 2023, the Drive Program exists to “prepare neurodiverse individuals for a driving exam” using a virtual reality headset. Dr. Potter, Nicklaus Children’s manager of IT and digital technologies, explained to me that the “simple but powerful” impetus for the Drive Program was, in part, to “help young people, especially those who face challenges like autism, anxiety, or ADHD, to gain the confidence and skills they need to become safe and independent drivers.” The core problem the Drive Program sought to solve, she added, was that conventional driving schools aren’t conducive to the needs of neurodivergent people, saying the schools “don’t offer the flexibility or patience or support that [neurodivergent people] really need to succeed.”

“We saw an opportunity to fill that gap in a way that aligned with our mission at Nicklaus Children’s,” Dr. Potter said.

Fast-forward to this past week, when I sat down for a brief interview with a young woman named Anna Mariani. Mariani, 24, is an alumna of the Drive Program, having gone through it herself a few years ago. When asked about her experiences in the Drive Program, she explained that the one thing she appreciated most about it was its slow pace; “I could do things in my own time… it was well-explained, all the things that you needed to do while driving and paying attention to all the things [on the road],” Mariani said.

“It was good for me to practice being in the car,” she added.

Mariani said she first learned of the Drive Program through CARD, or the Center for Autism and Related Disabilities, managed by the University of Miami. She described CARD as a program which offers services to people in the neurodivergent community, telling me it was they who recommended the Drive Program “to learn how to drive.”

Mariani doubled down on her effusive praise for the Drive Program.

“I think [the Drive Program] helps because the teachers and instructors were really patient with me… I was able to be coached into what I needed to do,” she said. “Also, the virtual reality aspect was really good because it helped me feel like I was actually in the car. So when I got in the [actual] car, it felt more natural and that helped me feel more confident when I was actually driving the car.”

For Mariani, the Drive Program helped her best prepare for driving independently.

“With practice, it feels a lot more comfortable,” she said. “At first it was a little scary, but then I started doing it more, and now I’m more comfortable driving in a car in real life.”

Mariani went on to say she highly recommends the Drive Program to anyone who may benefit from it, adding the Program’s staffers were well-trained and invested in helping her learn. The Program overall, she added, is “really advanced.” Mariani noted she has encouraged a friend to enroll in the Drive Program and hopes they do so “soon.”

“I definitely think other people should give it a try if they’re nervous or they don’t know where to start when it comes to driving,” Mariani said in endorsing the Drive Program. “I think this is a good place to start [helping] others feel more comfortable.”


Airbnb Announces ‘Reserve Now, Pay Later’ Service

San Francisco-based Airbnb on Thursday announced a new payment program it calls “Reserve Now, Pay Later,” whereby users can defer payments for upcoming reservations. The company says Reserve Now, Pay Later affords guests “greater flexibility” by allowing them to put $0 down upfront on all domestic bookings.

I learned of Airbnb’s initiative in a post on X by my friend Natalie Lung of Bloomberg.

“Available for listings with a moderate or flexible cancellation policy, guests don’t need to pay the full amount until shortly before the end of the listing’s free cancellation period. Cancellation policies selected by hosts remain unchanged, and because the payment from guests is always due before the free cancellation period ends, hosts have time to secure another booking even if a guest cancels,” Airbnb wrote in describing Reserve Now, Pay Later in its announcement. “This feature comes as new data reveals that today’s travelers are seeking more flexibility when it comes to booking a stay, particularly a group trip that requires arranging funds with friends or family.”

Notably, Airbnb mentions results of a survey of American travelers it conducted with Focaldata. Airbnb said 55% of respondents indicated they take advantage of flexible payment options, while 10% reported always opting for such services. Similarly, 42% said they have chosen to “[delay and miss out] on their preferred accommodation option because of time spent coordinating how to pay for their trip with co-travelers.”

As with laptops, the foundational piece of this news from Airbnb is accessibility. I’ve covered the company extensively over the last five years or so, having interviewed numerous executives there, and the reality is the new Reserve Now, Pay Later service is yet another part of Airbnb’s work in accessibility. Granted, it isn’t expressly or overtly designed for the disability community’s sake. The truth is, however, as with Walmart’s discounted $600 M1 MacBook Air I wrote about yesterday, most disabled people are extremely, perpetually budget-conscious. The majority of us don’t make much money, so anything we can do to save a few bucks here and there is appreciated, both for peace of mind and for our pocketbooks. In Airbnb’s case, that a disabled person could delay payment on a reservation makes it such that travel becomes far more accessible than aspirational. Better still, people with disabilities can utilize the accessibility features Airbnb has empowered its hosts to offer guests. Although Airbnb positions Reserve Now, Pay Later as a measure of convenience for the mainstream, the fact of the matter remains that accessibility, as ever, plays a central role in shaping its relevance and appeal.


Walmart Makes M1 MacBook Air More Accessible

Joe Rossignol reports today for MacRumors Walmart has begun selling the dearly beloved M1 MacBook Air for the low price (for MacBooks) of $599. The deal is for the laptop’s base configuration of 8GB RAM and a 256GB SSD in gold, silver, or space gray.

“In case you missed it, Walmart is currently offering the older but still very capable MacBook Air with the M1 chip for just $599 in the United States,” Rossignol wrote of the deal on Wednesday. “It seems like this deal began around Amazon’s four-day Prime Day event in early July, but it flew under our radar until a reader let us know about it today.”

As Rossignol notes, Apple discontinued the M1 Air last year when it added the then-new M3 models. Walmart announced it would carry the M1 Air (at $699) back in March 2024.

My reasoning for covering this news is, as ever, accessibility—quite literally. As Rossignol also notes, although the M1 chip is getting long in the tooth by technological standards—the M5 generation of Apple silicon is said to be on its way—the chip remains more than serviceable for everyday tasks like email, web browsing, word processing, and even light photo editing. From an accessibility standpoint, the value proposition of Walmart’s $600 MacBook Air is stratospheric; budget-conscious buyers, a group which includes most people with disabilities, get a modern, eminently capable computer that’s small and lightweight to boot. For those who can’t afford the current (and admittedly better) $999 M4 Air, the M1 variety is, again, a veritable steal for hundreds of dollars less. Eventually, Apple’s M1 chip will be outmoded and obsolete—but that day is assuredly years away. Right now, today, it could cogently be argued that the “low end” M1 MacBook Air is Apple’s most accessible Mac, and in more ways than one. In other words, for those who prefer macOS to the Mac-like iPadOS 26—more on that from me soon—the inexpensive M1 MacBook Air is a revelation.

News of the $600 Air comes amid rumors Apple is preparing a “real” low-cost MacBook powered by the A18 Pro chip that’s sitting inside my iPhone 16 Pro Max. The device is purported to come out either late this year or early next, according to multiple sources.

The M1 MacBook Air is available on Walmart’s website.


Redesigned Netflix App Rolling out to Apple TV

Ryan Christoffel reports for 9to5 Mac today Netflix has begun rolling out its redesigned app to Apple TV 4K users. The news comes months after the Bay Area-based company announced the design overhaul in May, during which chief product officer Eunice Kim described the new Netflix experience as “still the one you know and love—just better.”

“As spotted by users on Reddit, the new design seems to have launched with the latest tvOS app update,” Christoffel wrote on Wednesday. “If you’re not seeing it yet, make sure you’re running the latest version of the Netflix app.”

I got the design on the 2021 A12-powered Apple TV (running tvOS 18.6) in my office.

I covered news of the new UI when Netflix announced it, having attended a virtual briefing with the company a few days beforehand. As I wrote at the time, the design looks good—there’s a video on YouTube about it—and should prove to be more accessible than the old interface. I won’t rehash my thoughts on it here, but suffice to say it feels like a win for accessibility; in the couple of minutes I spent noodling around the new app prior to writing this story, I enjoyed it very much. At the very least, it’s a much prettier design than what I literally used yesterday. As I also said in the spring, Netflix’s new design is conceptually akin to what Amazon did to Prime Video a year ago.


AirPods Reportedly Getting Live Translation Gesture

Marcus Mendes reports for 9to5 Mac this week a bit of new UI spotted in iOS 26 Beta 6, which was released to developers on Monday, suggests Apple is planning to enable real-time translation of live conversations on AirPods. The finding comes after the company announced live translations for FaceTime calls and more at WWDC in June.

“In today’s iOS 26 developer beta 6, we spotted a new system asset that appears to depict a gesture triggered by pressing both AirPods stems at once,” Mendes wrote of the new finding. “The image displays text in English, Portuguese, French, and German, and it is associated with the Translate app. For now, we can confirm it’s associated specifically with the AirPods Pro (2nd generation) and AirPods (4th generation).”

Mendes (rightly) notes using AirPods for translative purposes is “right up the wearable wheelhouse” for products like AirPods and Meta’s Ray-Bans. Indeed, from an accessibility standpoint, using earbuds (or glasses) for translation can be not only more discreet in appearance, but also more accessible in terms of not having to look at, say, Apple’s built-in Translate app while holding it. Such a dance may be hard, if not outright impossible, for those with suboptimal hand-eye coordination. Likewise, it’s highly plausible things like languages are more intelligible for people who are auditory learners or perhaps are neurodivergent. Whatever the case, Mendes is, again, exactly right to posit using wearables for translation is a perfect use case for the technology. Moreover, Mendes is also reasonable in his speculation this feature may have been kept under wraps because Apple plans to make it part of the iPhone 17 software story.

On a related topic, that AirPods are purported to gain a new gesture serves as a good reminder to give a brief shoutout to another AirPods gesture: the head gestures for accepting or declining calls. Much to my chagrin, I get a ton of spam calls every day, which I normally ignore and let go to voicemail. When I’m wearing my AirPods, however, the aforementioned head gestures act as a de-facto accessibility feature; instead of reaching for my phone to tap a button, I can merely shake my head to send those spam calls away. To use natural, nigh universally understood methods of nonverbal communication in this manner is genius—and it’s accessible too. Rather than search the abyss of my pocket(s) to hurriedly find my phone and take action on an incoming call, I easily can nod or shake my head as necessary. It’s undoubtedly convenient, as well as technically cool, but it’s also accessibility. Using head gestures to decide on phone calls alleviates a helluva lot of friction associated with using my phone for that.

Yet one more reason to choose AirPods over something like my Beats Studio Buds+.


Google Gives Gemini New ‘Guided Learning’ Mode

Not to be outdone by OpenAI and ChatGPT, Google has given Gemini a new “Guided Learning” mode. The news came earlier this week from Jay Peters at The Verge.

CEO Sundar Pichai detailed Guided Learning in a post for Google’s Keyword blog.

“Answers from the Guided Learning mode can include things like images, videos, and interactive quizzes,” Peters said in his story. The company worked with students, educators, researchers, and learning experts to ensure the mode is “helpful for understanding new concepts and is backed by learning science,” according to Pichai.

Google’s conceit with Guided Learning is similar to OpenAI’s insofar as the goal is not to give answers to students as though Gemini were a highfalutin answer key. Rather, Peters’ dek says the goal is much more pedagogical: Guided Learning aims to “[help] you work through problems” instead of unhelpfully giving them the answers. From an accessibility perspective, the conceit behind Gemini’s Guided Learning and ChatGPT’s Study Mode is the same in that both can be counted on to present information in a single space. This can be helpful for people with various cognitive disabilities for whom keeping track of myriad aids such as flashcards can, somewhat counterintuitively, become problematic. Chatbots can coalesce lots of information.

Once more I say, chatbots are more useful than merely being conduits for cheating by disengaged students. Study-oriented features can make learning more accessible.

Guided Learning comes amid OpenAI’s high-profile announcement of its newest model, called GPT-5. CEO Sam Altman described it as “the smartest model we’ve ever done.”


‘Ode to the EarPods’

Basic Apple Guy, purveyor of well-made wallpaper, likes Apple’s wired earphones.

“Don’t get me wrong, I am still very much on team AirPods, but I have increasingly found use cases and situations where having a pair of good olde wired EarPods has proven quite useful,” he wrote in a new blog post. “They don’t need charging, they work with just about anything, and they’ve quietly aged into a little slice of tech nostalgia.”

Prior to 2016, when AirPods debuted alongside the iPhone 7 and 7 Plus, I spent many, many years using various incarnations of Apple-branded earphones. From the 30-pin iPod connector to the 3.5mm jack to Lightning, I’ve used them all across various iPods and iPhones. A major reason I found AirPods so revelatory almost a decade ago (!) lies in their cord-free nature. However long I used Apple’s cabled earphones, the biggest frustration with them, accessibility-wise, was untangling the cord. The image of a rat’s-nest cable Basic Apple Guy included in his piece is scary enough for Halloween; I tried so hard to keep the cord untangled, mostly without success. My hand-eye coordination is bad enough that I’d spend what felt like eons trying to untangle the cable, a task which always involved the most colorful expletives known to humankind. Thus, the advent of AirPods freed me from such torture. From a practical perspective, I also agree with Basic Apple Guy’s fondness for the EarPods’ remote. While the gestures/stem control on AirPods is fine, I’ve never particularly enjoyed the sensation of pressing or swiping close to my ear. I tolerate it, but it’s sensory input that doesn’t at all feel good.

Between my slew of AirPods in my office, all of which span myriad generations and surnames, and my Beats Studio Buds+, I’ve surely no shortage of wireless earbuds to use; if something happens to one pair, I easily can reach for a backup. For travel purposes, however, I’ve made a point to have a set of the $19 USB-C EarPods as an emergency earphone solution in case my AirPods die or, worse, get lost or stolen.

The EarPods are an inexpensive safety blanket—a great addition to my tech travel kit.


Apple Enhances AirPods Charging Case Interface to ‘More Clearly Indicate Charging Status’ to Users

Apple has yet another de-facto accessibility feature coming in iOS 26.

Ryan Christoffel reports for 9to5 Mac this week the charging case for AirPods has been enhanced so as to more clearly signify its charging status. Christoffel notes a user on X, Minimal Nerd, posted a screenshot (embedded below) of a system card explaining the color codes and their meanings. Per the UI, the light “now more clearly indicates charging status.” Green means charged, yellow means in-progress charging, and orange means the case itself needs juice, according to the new user interface.

The change is apparently new to iOS 26 Beta 5, which Apple shipped earlier this week.

“The differences between yellow and orange are especially subtle, making it unclear whether users will be able to distinguish them,” Christoffel said of the UI. “Currently, Apple’s support document only notes green and amber as the indicator colors.”

Christoffel added Juli Clover at MacRumors reported there exists code in iOS 26 which has the system notify users “when it’s time to charge.” Clover also noted how, in prior iOS 26 betas, Apple sent iPhone notifications when one’s AirPods needed charging.

This new color-code system should make understanding charging status more accessible. To wit, it can be hard to decipher which color means what—particularly with ones like amber and yellow being so similar, as Christoffel rightly noted. The new system makes it much clearer. Additionally, it’s helpful to know one’s AirPods are being charged by listening for the little tone when putting the case on a wireless charging mat. Not only do you hear the chime, but you see the colored light appear with it. That bimodal sensory input can be important insofar as it comforts someone that they placed their AirPods in the right spot for charging. Unlike modern iPhones, AirPods don’t support MagSafe; this means a disabled person who, like yours truly, has lackluster hand-eye coordination potentially could miss the “spot” when trying to set the earbuds down to charge. Without the chime and/or light, you may think your AirPods are charging when, in actuality, they’re dying because you missed the spot by a quarter-inch or whatever.

As I said, these color codes (and the chime) are de-facto accessibility features.


A Mini Review of Walmart’s Onn 4K Pro

I missed it when news broke, but Ben Schoon at 9to5 Google reported in early June Walmart’s Onn 4K Pro streaming box was updated to run Android 14. The update, inscrutably named URO1.250103.029.A1, also brought the April 2025 security patch.

“Users shouldn’t expect any major changes from this update, though,” Schoon said in describing the June software upgrade. “Android 14 for TVs was mainly focused on TV sets, but it should make everything feel a bit more snappy.”

As Schoon wrote, the Onn 4K Pro is a seriously great deal; as of this writing, its price is only $45, down from the usual $50. I bought one several months ago out of curiosity and came away very impressed. The device runs stock Google TV, offering no Walmart-branded apps or the like. The remote, while plastic, feels nice in hand, and its buttons are responsive and nice to press. And the box can act as a smart speaker when you’re not watching anything. Performance-wise, the Onn 4K Pro is serviceable and does the job. As a devout Apple TV 4K user, however, Walmart’s box can’t hold a candle to Apple’s in terms of sheer power and overall fidelity. I oftentimes joke Apple TV is laughably over-engineered for its primary purpose—streaming video—but I really appreciate how performant it is when testing the competition. Say what you will about tempering expectations between a $50 box and a $130 box, but the user experience in navigating the menus, et al, is demonstrably and undeniably better on tvOS. More pertinently for my reporting, tvOS smokes Google TV in accessibility features too.

Where the Onn 4K Pro pulls ahead is in Google TV. While I generally prefer tvOS for its niceness and the amenities pertaining to the Apple ecosystem, I do have a soft spot in my heart for how Google TV makes finding stuff to watch easier—and arguably more accessible. Beyond getting the Liquid Glass treatment, tvOS 26 brings little improvement in the mechanics of the user interface. I maintain that, on screens as big as televisions, tvOS has the potential to be so much more than a static grid of icons. That the Apple TV app is a container for things you watch is backwards; the app should be the whole UI, just as on Google TV. Likewise, tvOS should integrate a live TV guide too. Every year, I hope Apple will finally give tvOS its overdue “iOS 7 moment” and do a top-to-bottom overhaul of the platform, but am always left disappointed. I’m critical because, frankly, I greatly prefer Apple design to Google’s, functionality be damned. For all its warts, tvOS simply feels nicer to use than Google TV. But, as I said, that doesn’t take away from my admiration of all Google has implemented into the system for users.

As one prime example, the YouTube TV integration is killer if you’re a subscriber.

I heartily recommend the Onn 4K Pro over Google’s own box if you wanna wade into Google TV’s waters. Again, I was pleasantly surprised (and delighted) by Walmart’s box.

My pal Jason Snell wrote up a comparison of streaming boxes back in March.


I’m Filing This Under ‘I Learn Something Every Day’

This post’s headline says it all.

Last October, I interviewed the developers behind Croissant for iOS and macOS. I can’t describe the app’s functionality better than its website does; it says Croissant is “a buttery smooth app for cross-posting to Bluesky, Mastodon, and Threads.” The two-person team of Ben McCarthy and Aaron Vegh told me, in part, Croissant wasn’t expressly built for accessibility’s sake, but accessibility nonetheless emanates as a byproduct. The duo’s overarching goal with Croissant was to make a piece of software which was “something simple and streamlined,” according to McCarthy. As I wrote, Croissant’s appeal in an accessibility context is that a disabled person needn’t have to manually post the same thing to multiple services. Copy-and-paste is a workaround, but it still involves extra taps—actions which can be taxing to many people out there who cope with any sort of cognitive/visual/motor conditions (or some combination thereof).

Thus, Croissant’s streamlining is accessibility too.

Anyway, one of my previous gripes about Croissant was there existed no button one could push to automatically generate image descriptions, or alt-text, for images. Lo and behold, I went to use Croissant on my iPhone earlier today and noticed a small button that does just that! I asked McCarthy about it on Mastodon and they replied by saying the text-generation feature isn’t new and, in fact, has “been there for a while.” I incorrectly presumed the feature used AI, but it doesn’t; McCarthy told me it works by way of Apple’s “VNRecognizeTextRequest” API, a tool which Apple describes to developers as “an image-analysis request that finds and recognizes text in an image.”
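McCarthy didn’t share Croissant’s source code, so purely as illustration, here is a minimal Swift sketch of how VNRecognizeTextRequest can pull text out of an image; the function name and structure are hypothetical, not Croissant’s actual implementation.

```swift
import Vision
import CoreGraphics

// Hypothetical helper, not Croissant's code: run Apple's text-recognition
// request over an image and join the recognized lines into one string.
func recognizedText(in image: CGImage) throws -> String {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate  // favor accuracy over speed

    // Perform the request against the supplied image.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Each observation is a detected region of text; keep the top candidate.
    let observations = request.results ?? []
    let lines = observations.compactMap { $0.topCandidates(1).first?.string }
    return lines.joined(separator: "\n")
}
```

An app could then drop that string into the alt-text field as a starting point for the person posting to edit.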

The moral here? Croissant’s accessibility game is even stronger. Go download it.


Controlling a Computer With Your Mind Is Possible

In the run-up to Global Accessibility Awareness Day in May, I reported on Apple’s yearly preview of the new accessibility features coming to its panoply of platforms later this year. Features like Magnifier for Mac, Accessibility Nutrition Labels, Name Recognition, and more all are now confirmed to be in Apple’s “OS 26” updates, currently in beta.

At the end of my aforementioned story, I mentioned the timing of its publication was fortuitous insofar as it coincided with a report from Rolfe Winkler of The Wall Street Journal that Apple purportedly has been developing so-called BCIs, or brain-computer interfaces, to assist people coping with motor disabilities. Moreover, I noted that my pal Chance Miller wrote for 9to5 Mac researchers strongly believe BCI has potential to “revolutionize” the way(s) in which disabled people access computers. Miller also said Apple is expected to “add broader support for BCIs” to Switch Control later this year.

This bit of preamble is pertinent now because Miller’s colleague Ryan Christoffel reports this week BCI maker Synchron, with whom Apple’s said to be collaborating on the technology, posted a video (embedded below) showing a man named Mark Jackson using Synchron’s BCI to control his iPad. Jackson, who has ALS, has been an early tester of the technology and was interviewed for Winkler’s piece for the Journal. Jackson is one of only 10 people to be fitted with Synchron’s Stentrode implant for the FDA-approved trial. The Stentrode device uses electrodes to read brain signals and act upon them.

According to Christoffel, the description of Synchron’s video calls out a new API built by Apple, Human Interface Device (HID), which is designed to work with a BCI device. Synchron calls Jackson’s demonstration “groundbreaking” in the way he “navigates his iPad home screen, opens apps, and composes messages using only his mind.”

BCI tech, like time travel, feels like something out of a sci-fi novel—but it’s real! This “mind control” tech truly does hold profound potential to bring greater accessibility to people who have severely limited, if any at all, motor skills. On a related note, I’ve long been fascinated by the work done by Elon Musk’s Neuralink for accessibility’s sake, and would legit love to pick Musk’s brain on such a topic in an interview with me someday.


Tesla’s Ride-Hailing Service Launches in Bay Area

Late last week, Elon Musk posted on X that Tesla’s ride-hail service is now available in the Bay Area. The Tesla AI account posted on X that invitations for using the service are “going out now.”

Ryan Mense, writing at Bay Area-based CW affiliate KRON4, reported last week the aforementioned X post by Tesla AI includes a service map. Mense notes the service area includes “boundaries of Marin County and Berkeley in the north and San Jose to the south, [with] eastern and western boundaries cover cities near the bayshore.” Of particular import is the distinction that the Tesla ride-hail service is not driverless; there is a human in the vehicle during trips. In other words, there’s no FSD mode in these cars.

Musk’s post comes after The Verge reported Tesla sought permission to operate here.

I’ve seen a few Tesla diehards post about using Tesla’s ride-hail service in Austin. Personally, I have no interest in trying it here in San Francisco; Waymo and Uber more than satisfy my needs—and, frankly, I’m no fan of Musk. Nonetheless, between Tesla, Waymo, Uber/Lyft, and even Zoox, San Francisco—as well as the Bay Area regionally—remains a hotbed for innovation, and Tesla’s news is yet another example of the region’s standing as such. In an accessibility context, the more app-based, on-demand ride-hail services that exist, the better for people who are, say, Blind and low vision and thus precluded from driving on their own. The nerds (and venture capitalists) like to crow about the technical might of artificial intelligence, not to mention the coolness and convenience of summoning rides from one’s iPhone, but the reality is it’s much more impactful than sheer coolness or even novelty. Ride-hailing services offer accessibility and inclusion, and they imbue heightened feelings of self-esteem through greater agency and autonomy. As I always say, this stuff is 100% non-trivial.


Corporation for Public Broadcasting Announces ‘Responsible and Orderly Closeout’ Amid Cuts

The Corporation for Public Broadcasting (CPB) on Friday announced its decision to begin “an orderly wind-down of its operations” as a result of the organization’s recent loss of federal funding. The exclusion of CPB from the Senate Appropriations Committee’s FY 2026 Labor, Health and Human Services, Education, and Related Agencies (Labor-H) appropriations bill was the first in “more than five decades.”

Congress authorized the Washington DC-based CPB’s formation in 1967 to act as, as the nonprofit organization says, “the steward of the federal government’s investment in public broadcasting. It helps support the operations of more than 1,500 locally managed and operated public television and radio stations nationwide.” Moreover, the CPB notes it is “the largest single source of funding for research, technology, and program development for public radio, television, and related online services.”

“Despite the extraordinary efforts of millions of Americans who called, wrote, and petitioned Congress to preserve federal funding for CPB, we now face the difficult reality of closing our operations,” Patricia Harrison, CPB’s president and CEO, said in a statement for the announcement. “CPB remains committed to fulfilling its fiduciary responsibilities and supporting our partners through this transition with transparency and care.”

According to the CPB, its employees have been notified that “the majority of staff positions will conclude with the close of the fiscal year on September 30, 2025,” with a “small transition team” remaining in place through January of next year in an effort to “ensure a responsible and orderly closeout of operations.”

“Public media has been one of the most trusted institutions in American life, providing educational opportunity, emergency alerts, civil discourse, and cultural connection to every corner of the country,” Harrison said of the CPB’s raison d'être. “We are deeply grateful to our partners across the system for their resilience, leadership, and unwavering dedication to serving the American people.”

The loss of the CPB is gut-wrenching for public, independent media, as well as for diversity and inclusion. I’ve covered the work of PBS Kids extensively over the last few years, reporting on the programming and more that make up the confluence of disability, technology, and television. In fact, just last week I reached out to the network seeking color from its senior vice president and general manager, Sara DeWitt, about what the Trump administration’s budget cuts may mean for her team. I’ve interviewed her on numerous occasions, but PBS Kids declined comment this time. Be that as it may, one needn’t get an on-the-record interview to know which way the wind is blowing; to wit, as with cuts to SNAP and Medicaid, disability inclusion and representation are taking it especially hard on the chin lately. In the case of broadcasting, PBS Kids produces shows like Carl the Collector that not only put disabled people in the spotlight but also give every child (and their families) crucial lessons in why showing compassion and empathy is important. With the CPB shuttering operations, such educational opportunities are in serious peril—to the detriment of society writ large.

I’ll report back if and when PBS Kids makes official statement(s) on these matters.


On ‘Personal Superintelligence’ And Accessibility

Earlier this week, Mark Zuckerberg published an essay about what he calls “Personal Superintelligence.” The 600-word post comes on the heels of Meta’s spending spree of late to staff up its Superintelligence Labs group, during which Meta has poached several Apple employees working on AI. The seemingly ever-growing list notably includes the person who oversaw Apple’s foundation models, Ruoming Pang.

Pang’s defection to Meta was first reported by Bloomberg’s Mark Gurman last month.

But back to Zuckerberg and artificial intelligence.

“It seems clear that in the coming years, AI will improve all our existing systems and enable the creation and discovery of new things that aren’t imaginable today,” he wrote. “But it is an open question what we will direct superintelligence towards.”

Those who harbor more cynical inclinations towards Zuckerberg and Meta have, somewhat rightfully, labeled his latest manifesto as much ado about nothing. After all, Zuckerberg once was bullish on the so-called “metaverse” portending the future of technology—in with a bang, but out just as quickly with nary a whimper. Personally, I too thought the metaverse was nothing more than a big bunch of hooey. That said, I’m willing to acknowledge he does manage to plant a few kernels of truth in his piece.

The “nut graf” of Zuckerberg’s post concerns wearables—namely, glasses.

“The intersection of technology and how people live is Meta’s focus, and will only become more important in the future,” he said. “If trends continue, then you’d expect people to spend less time in productivity software, and more time creating and connecting. Personal superintelligence that knows us deeply, understands our goals, and can help us achieve them will be by far the most useful. Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices.”

The idea of personal devices that know our context resonates deeply with accessibility. As a devout Apple user, it’s not hard to look at something like Vision Pro and, despite how cool and cutting-edge the headset is, envision its technology shrinking so as to fit normal-sized glasses. Apple knows this too, but you gotta start somewhere, so $3,500 buys you baby steps into the future. From a disability standpoint, the allure is obvious: Vision Pro’s mixed reality makes it such that software can be layered onto the real world, literally in front of one’s eyes. Even Apple’s Liquid Glass, it could be argued, was created partly with Apple’s accelerated roadmap in mind. It’s a design language that seems (to me, anyway) ideally suited for products like Vision Pro and more. The dividends are of limited utility right now beyond sheer novelty, but think of Zuckerberg’s aforementioned glasses. Imagine, for instance, a future version of visionOS running on a pair of glasses similar to Meta’s own Ray-Bans that show you turn-by-turn directions in Apple Maps, incoming texts in iMessage, or even a person’s contact card when they approach you. For someone who’s Blind or low vision, as is yours truly, it would be extremely accessible for Siri to say, “Josie is approaching, here’s her information” if it’s hard to make out a person’s face and/or physique from afar. Maybe some of this information is relayed through AirPods, but the salient point is simply that, for certain things, a pair of “Apple Vision Glasses” would be more useful—and more accessible—than the iPhone in our pockets. Put another way, it’s why Apple Watch is such a capable satellite device today. To wit, beyond mere convenience, it can be far more accessible (and more convenient) for many disabled people to raise their wrist for notifications than reach for the phone in their pocket.

Maybe Zuckerberg is ultimately wrong in his prognosticating. Maybe the smartphone truly is the end-all, be-all form factor for mobile computing. I, for one, wouldn’t anoint him (and by extension, Meta) to lead the charge on the next technological revolution. But assuming he’s right, at least in certain respects, his blurb here about the evolution of personal computing vis-a-vis glasses may well prove remarkably prescient over the next decade or two. Whatever happens, one thing will remain crystal clear: the disability community is rife with technologists, and any advancements in technology will be embraced with unbridled enthusiasm if they help us better access the world we live in.


OpenAI Adds ‘Study Mode’ To ChatGPT

OpenAI this week announced a new pedagogical feature to ChatGPT: Study Mode.

“ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?” OpenAI wrote of Study Mode. “We’ve built Study Mode to help answer this question. When students engage with Study Mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study Mode is designed to be engaging and interactive, and to help students learn something—not just finish something.”

As to technical details, OpenAI says Study Mode was built using “custom system instructions we’ve written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning including: encouraging active participation, managing cognitive load, proactively developing metacognition and self reflection, fostering curiosity, and providing actionable and supportive feedback.” The behaviors, the company added, “are based on longstanding research in learning science and shape how Study Mode responds to students.”

Study Mode was “built with college students in mind,” according to OpenAI.

Study Mode is, as ever, pertinent to accessibility as a de-facto assistive technology. While teachers and university professors are apt to loathe software like ChatGPT and its ilk because of the ways in which they ostensibly stunt the learning process by giving students an instant—and virtually infinite—answer key, the truth is such criticism goes only so far. In a disability context, ChatGPT’s new Study Mode could plausibly be a boon to, say, neurodivergent people with unique learning styles. Having ChatGPT help with prompting, et al, and coalescing information into a single space can be worth its weight in gold; it may be far more accessible for someone to keep track of the subject matter using ChatGPT than with a bunch of flashcards strewn about in various places. Likewise, someone with cognitive/motor/visual disabilities (or some combination thereof) may find Study Mode a more accessible methodology than juggling a trillion browser tabs. This scenario reminds me of an anecdote I’ve shared before, wherein Jenny Lay-Flurrie, chief accessibility officer at Microsoft, told me in an interview last year that her neurodivergent teenage daughter found the ChatGPT-powered Bing Search a more accessible tool for doing research when writing essays for her English classes.

As I’ve said, chatbots have utility. They’re not sheer conduits for lazy people to cheat.

Study Mode is available now to users on the Free, Plus, Pro, or Team plans. Those who are ChatGPT Edu users will get the feature “in the next few weeks,” OpenAI said.


Easterseals CEO: SNAP, Medicaid Cuts in Trump’s ‘Big Beautiful Bill’ A ‘Double Whammy’ to Disabled People

Earlier this month, Liza Berger at McKnight’s Home Care Daily Pulse posted an interview with Easterseals president and chief executive Kendra Davenport in which Davenport detailed how Americans with disabilities are impacted by the Trump administration’s One Big Beautiful Bill Act. President Trump officially signed the bill into law on July 4.

Easterseals, founded in 1919, is America’s oldest disability nonprofit organization.

Among the provisions in Trump’s “Big Beautiful Bill” are substantial cuts to Medicaid and SNAP, or food stamps. The cuts, Berger said, amount to $900 billion and will, Davenport said, “deal a punishing blow to many people with disabilities, including many seniors.”

“Many people with disabilities are reliant on [food assistance], especially seniors,” Davenport said about the massive budget cuts levied by Trump and his cronies. “If that goes away, it’s a double whammy. They’re losing their healthcare. They’re losing their access to food and their assistance to be able to nourish themselves consistently. Those are big concerns, and it’s all hitting the same people, if you will.”

In addition to the SNAP cuts, Davenport sounded the alarm on the cuts to Medicaid—which, here in California, is known as Medi-Cal—as well as their impact on so-called direct service professionals, or DSPs, who visit disabled people in their homes and help them with independent living. The Medicaid cuts have a collateral-damage effect on these workers, as Berger notes they will receive less money per hour. For its part, Easterseals, according to Berger, “has 70 affiliates and touches 70 million people including older adults and veterans.” The reduction in wages for DSPs, she added, may prove too much for the organization’s affiliates to bear; the result, ultimately, is fewer of these professionals for members of the disability community to lean on for crucial support.

Berger’s story is worth a read in its entirety. The Trump administration’s cuts to such crucial services do, in my opinion, underscore society’s general disdain for disability and disabled people. We’re seen as less than human, extant primarily to serve as inspiration for overcoming adversity, the odds, and our own bodies. It’s discouraging (and grossly ableist) but also entirely predictable. More broadly, these budget cuts also show the evil callousness of Trump and his sycophants—a reminder in itself that the modern Republican Party is decidedly not the Republican Party of George H.W. Bush, who signed the Americans with Disabilities Act into law 35 years ago this month.

Technologically speaking, the SNAP cuts come amid Uber announcing an expansion of the retailers who accept SNAP payment in UberEats. The less money disabled people have to spend on UberEats, the less food they’ll have to eat. Which goes back to my point in the previous paragraph: most abled people don’t give two shits about people like me.


Gemini App Gets Remade ‘Audio Overview’ Player

A report from 9to5 Google’s Abner Li this week brings with it news Google has added what Li describes as a “nice quality-of-life update” to the Gemini app on iOS and Android: the ability to generate audio overviews, replete with native playback controls. The feature is in version 16.27 of the Google app on Android, as well as in Gemini on iOS.

“Previously, tapping on a generated Audio Overview opened the file in your browser with a long URL,” Li wrote of the interface changes for Audio Overview. “You could listen in that Chrome tab or download (and use the Files app) for an unwieldy experience. Now, the Android and iOS app, like gemini.google.com, uses a native player.”

Li’s story includes screenshots (on Android) of asking Gemini to “Generate Audio Overview” of various PDF files. In a broad scope, this functionality strikes me as conceptually similar to how Forbes, for instance, includes a button on webpages that people can click or tap to have an article read aloud to them. It effectively turns news stories into audiobooks—which, as Dr. Victor Pineda told me last year, were originally conceived by people in the Blind community. Ipso facto, the Gemini app’s new Audio Overview feature is, at its core, an accessibility feature. Beyond Blind and low vision people, the overviews may very well be a boon to, say, people who are strong auditory learners in grasping information. Likewise, it’s plausible someone with limited range of motion may find audio content more accessible than manually scrolling the aforementioned PDF document. Whatever the reason(s), it’s obvious the existence of Generate Audio Overview is as much about accessibility as it is purported convenience.
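To make the read-aloud concept concrete, here is a minimal sketch on Apple platforms using AVSpeechSynthesizer; the ArticleReader type is purely hypothetical, and this is not how Google or Forbes actually implement their players.

```swift
import AVFoundation

// Hypothetical example, not Gemini's or Forbes' implementation: speak a
// block of article text aloud using the system speech synthesizer.
final class ArticleReader {
    private let synthesizer = AVSpeechSynthesizer()

    func readAloud(_ articleText: String) {
        let utterance = AVSpeechUtterance(string: articleText)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate  // system default pace
        synthesizer.speak(utterance)
    }

    func stop() {
        synthesizer.stopSpeaking(at: .immediate)
    }
}
```

However a given app builds it, the accessibility payoff is the same: the user listens instead of reads.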

For its part, OpenAI has a Voice Mode for ChatGPT. I covered its Read Aloud feature last year, with my story including an interview with OpenAI’s Joanne Jang. She told me all about Read Aloud, as well as OpenAI’s philosophy on prioritizing accessibility for all.


‘Signing Into Streaming Accounts Is a Major Pain’

Ryan Christoffel this week wrote for 9to5 Mac about an issue he described as Apple TV’s “biggest problem”: signing into streaming accounts. Apple, he said, is cognizant of the issue and is attempting to ameliorate things by building a new API for developers.

“If you’re a linear TV user, switching to an Apple TV 4K with streaming apps can require a huge learning curve. Different accounts, different apps and credentials, and of course entering those credentials on your TV,” Christoffel wrote. “Even for the most tech-savvy, it’s a clunky experience entering streaming app credentials using a TV remote.”

Apple’s new framework for tvOS 26, called Automatic Sign-In, is characterized by the company as “[letting] people sign in to your app once—on one device—and access it across each of their Apple devices,” with Apple adding the upshot of the API is that it “eliminates the need to re-enter usernames and passwords, so people can enjoy your app seamlessly from any screen.” Obviously, the user-facing benefits are contingent upon companies such as ESPN, Hulu, and others actually adopting the API in the same way that, for example, Netflix supports the native playback controls on tvOS rather than building a custom setup.

“You’ll be able to log in to Netflix once [with Automatic Sign-In] on your iPhone, then automatically get logged in on iPad, Apple TV 4K, and so on,” Christoffel said.

Many streaming services have, to their credit, offered QR codes to help expedite the sign-in process. I use this method occasionally, and it’s been fine if not ideal. While there is a cogent argument for the accessibleness of QR codes in presenting information to disabled people, Apple’s solution vis-a-vis its new Automatic Sign-In API should prove markedly more accessible. The idea behind it is conceptually identical to how, say, AirPods pairing works: do so once and it’ll spread across the galaxy, so to speak. Moreover, the framework is yet one more example of a de-facto accessibility feature; Christoffel rightly frames the problem as tedious and annoying and inconvenient, but as ever, it’s an accessibility issue too. The act of manually signing in to each and every streaming service—even with QR codes in tow—can be an arduous journey for many in the disability community. It isn’t a trivial distinction because, as I often say, it’s the little things that end up making the biggest difference in shaping the overall user experience—good or bad. For those with disabilities, implementation details like sign-in can have an outsized effect on how someone can use a product.

In other words, there’s a thin line between convenience and accessibility.

Finally, here’s a tvOS tip that’s helped me with accessibility. Thanks to Joe Rossignol at MacRumors, I recently learned it’s possible to change the keyboard layout from a linear view to a grid view. Having letters, numbers, et al, packed together in a compact space is much easier to maneuver; I prefer the grid view to the seemingly infinite expanse of the (default) linear view. (And of course, I use my iPhone as a keyboard too.)


Forthcoming PlayStation Software Update Adds Multi-Device Pairing to DualSense Controllers

Sony on Wednesday announced its DualSense controllers will receive the ability to be paired to multiple devices simultaneously. The software update, available in beta later this week, was detailed by Sony Interactive Entertainment’s vice president of product management Shuzo Kikuchi in a post published on Sony’s PlayStation blog.

“Many PS5 peripherals, including the DualSense wireless controller, are designed to support a variety of devices beyond PS5 including PC, Mac, and mobile devices. We believe enabling compatibility of our peripherals across multiple platforms creates a more flexible and seamless gaming experience,” Kikuchi wrote in the introduction. “As part of this effort, we’re excited to announce that the latest PS5 system update beta will preview a new feature that allows DualSense wireless controllers and DualSense Edge wireless controllers to be paired across multiple devices simultaneously, making it easier to switch between them without needing to pair each time.”

Kikuchi notes users have heretofore been required to pair their controller(s) each time someone wanted to use it with other devices. With the update, that tedium will be gone; users will be able to pair up to four devices at the same time and easily switch between them from the controller itself. “For example, you can take your controller which you use with your PS5, then seamlessly switch connection to a PC to play PC games, or connect it to a smartphone to enjoy Remote Play from your PS5,” Kikuchi said. “With this enhanced flexibility, you can enjoy gaming more freely across multiple devices.”

The pairing/switching process involves a combination of presses on the controller.

For its part, Apple has supported PlayStation controllers for several years on its panoply of platforms. In addition, the company announced at WWDC last month that visionOS 26 supports Sony’s PlayStation VR2 Sense controllers, a capability which had been rumored for a while.

Today’s news from Sony is welcome, particularly in an accessibility context. The big win here is alluded to in Kikuchi’s piece, as users needn’t manually pair their PlayStation controller with their other device(s). As with setting up AirPods, pair once and it propagates to one’s other kit. For a disabled person who, for instance, has any number of cognitive/motor/visual conditions which could cause friction with bespoke pairing, this feature of ostensible convenience becomes something arguably more meaningful, breaking a barrier and enabling greater accessibility to gaming for all. It goes to show how a seemingly mundane implementation detail—pairing accessories—can in actuality have a profound role in shaping a positive user experience for people.


Electronic Arts’ New ‘FC26’ Includes ‘First-Ever’ High Contrast Mode, More Accessibility Features

Electronic Arts (EA) this week published so-called “pitch notes” for the latest edition of its forthcoming EA Sports soccer title, FC26. The game comes out on September 25.

The post’s authors are a trio of FC26 designers and producers: Keegan Sabatino, Gigo Navarro, and Thomas Caleffi. They write that the team’s overarching goal with the new version of the game is “[making] gameplay better,” which has been achieved, they said, by focusing on three key areas: (1) competitive and authentic gameplay; (2) gameplay fundamentals; and (3) features and updates inspired by listening to user feedback.

“FC26 is powered by your feedback, and we aim to make it feel more responsive, rewarding, and enjoyable,” the team said of the game’s muse during development.

Most pertinent to me, of course, is accessibility. The team writes it has “worked closely with our Accessibility Design Council to introduce a range of new accessibility features” to FC26, with the marquee feature being a high-contrast mode. EA touts FC26 as “one of the first-ever sports titles” to include such functionality. The all-new high contrast mode is described as “[increasing] the visual separation between footballers, the pitch, and other key gameplay elements, making action clearer and more visible to those who need it,” with EA adding the mode should prove especially helpful to Blind and low vision gamers who “find it challenging to track fast-paced gameplay.” Moreover, high contrast mode is highly customizable; according to EA, users can choose whether to apply the increased contrast to a player’s entire body or just their kit, for example. Additionally, there’s a Pitch Saturation Slider that people can fiddle with to “adjust the intensity of the pitch colors, helping you focus on the most important parts of the game.” The feature’s raison d’être is characterized by EA as “[reducing] visual clutter and [making] gameplay more comfortable for everyone.”

Beyond high contrast mode, EA notes other accessibility features for FC26 include an “accessibility boot flow screen” that, similar to how Apple devices work, gives people the chance to toggle on whatever accommodation(s) they need to accessibly play the game, as well as improved, more configurable captions (or subtitles, as EA says).

FC26 will be compatible with PlayStations 4 and 5, Xbox Series X and S, Nintendo Switch and Switch 2, and PC, as well as cloud gaming services such as Amazon’s Luna.
