
The Disability Angle in ESPN’s New Stuart Scott Film

As I write this, I’m three-quarters into ESPN’s latest 30 for 30 film, which premiered last week. The nearly 90-minute documentary, titled Boo-Yah: A Portrait of Stuart Scott, chronicles Scott’s life, both personal and professional, as a Black broadcast journalist. Scott, who died of cancer at age 49 in 2015, joined ESPN in 1993 and eventually rose to prominence as the network’s most popular SportsCenter anchor.

ESPN described the film last month in a press release as “[tracing] Stuart’s journey from local television in North Carolina to becoming one of ESPN’s most influential voices. At a time when hip-hop and popular culture was often marginalized in mainstream media and few Black anchors held national prominence, Stuart brought both unapologetically to SportsCenter—blending sharp analysis, pop culture and swagger in a way that spoke directly to a new generation of fans.”

The network continued in its announcement: “As the film recounts, Stuart’s impact extended far beyond the newsroom. He bridged sports and culture, made SportsCenter must-watch television and became a symbol of courage through his public battle with cancer—culminating in his unforgettable ESPYS speech that reminded viewers, ‘You beat cancer by how you live, why you live, and the manner in which you live.’”

I’m covering the documentary for several reasons, not the least of which because I learned by watching Boo-Yah that Scott had a disability. He coped with a rare visual condition called keratoconus, the effects of which were compounded by an eye injury sustained when a football hit him in the face during a New York Jets mini-camp in 2002. Upon recovering, he wore glasses and, according to the documentary, held his stat sheets super close to his face—I can relate—and struggled to read the teleprompter.

Scott was a mainstay of my sports-watching life; he indeed was my favorite SportsCenter personality. Beyond the disability angle, to which I’m obviously drawn, I feel like there are a lot of professional parallels to Scott’s tenaciousness in getting work (and thus respect) as a journalist from a marginalized community. I of course didn’t know Scott, but I definitely can empathize with his belief that he had to prove himself worthy in an industry where 99.9% of people don’t look like you. Even as I approach my own 13-year anniversary this coming May, with all that I’ve accomplished in tech media over the past decade-and-a-half, I continually feel the pressure to prove my worth over and over again—despite what friends and peers tell me about my extensive, impressive résumé. Like Scott, I’m a minority in journalism—arguably the minority’s minority group—and constantly feel like, as Scott’s daughters recount at one point in the film, I must “work twice as hard to get half as much.” We’ve seen lots of success, but only after we’ve kicked down doors at every turn to procure our plaudits.

Scott made it to ESPN. Will I ever make it to ABC News or NBC News or The Gray Lady?

As a related aside, the ESPN app on tvOS is delightful—so much so, it’s in my Top Shelf.

Anyway, I highly suggest sitting down to watch Boo-Yah. It’s well worth your time.


Inside the Rochester Institute of Technology’s Latest Mission to Center the Deaf Viewpoint

Early last month, Susan Murad wrote for the Rochester Institute of Technology’s website about how researchers at RIT, as the New York-based institution is colloquially known, soon will “use eye-tracking to show how deafness impacts vocabulary knowledge and reading as well as how deaf and hard-of-hearing children, who have historically shown lower than average reading outcomes, develop into highly skilled readers.” The research project is made possible in large part by a $500,000 grant from the venerable National Institutes of Health, or NIH.

According to Murad’s story, RIT’s research is led by Dr. Frances Cooley, an assistant professor at the National Technical Institute for the Deaf’s Department of Liberal Studies. Dr. Cooley, who leads the school’s Reading and Deafness Lab, and her team, Murad reported, are examining “how vocabulary knowledge in American Sign Language supports English reading development” [as well as] “how first-language knowledge shapes second-language reading comprehension and eye-movement control.” The team’s findings will “have important implications for theories of reading development and for educational practices that support bilingual learners,” according to Murad.

Fast-forward to mid-December and I had the opportunity to sit down virtually with Dr. Cooley to discuss her and her team’s work. She explained the root of her interest in deafness and reading comprehension traces back to an article she came across during graduate work claiming the average Deaf person reads at a fourth-grade level. The sobering statistic bothered Dr. Cooley, she told me, largely because “[it] said to me we’re not doing something in our educational practices to allow deaf students to thrive.” That knowledge motivated her to begin looking into why reading levels amongst Deaf people are so low; she wanted to better understand how exactly Deaf people read, and to take a deep dive into different groups of Deaf readers. In particular, Dr. Cooley was keenly interested in those who had early access to ASL versus those who didn’t.

“When we look at those who had early access to American Sign Language, we actually see these incredible differences that are beneficial for Deaf readers,” Dr. Cooley said. “They are actually more efficient. They read faster. They skip more words, and this doesn’t actually negatively impact their comprehension. This is particularly interesting because they’re technically second language users of English, and most second language users are going to be less efficient in their second language, but these Deaf readers are even more efficient than a typically hearing monolingual reader.”

She continued: “I really got excited about this strengths-based approach to understanding what a successful Deaf reader does, and I wanted to be able to translate that into educational practices so that all Deaf readers can thrive. I really think moving away from a focus on what people can’t do and transitioning that to what they can do is really beneficial in a bunch of different ways. Eye-tracking—I love to say your eyes are your best way to point your brain at different things—we don’t really have any other way to point our brains at things, so if we’re looking at the eye movements, we can get really fine-grained information about what people are doing when they’re actually reading. I think that’s much more interesting than having someone read a sentence or read a paragraph and answer questions about it, because that involves a whole bunch of other processes like memory, and to me, that’s less interesting to me. It’s still important, but what people are actually doing as their eyes move across a sentence can tell us so much about the underlying processes of what their brains are actually interested in [when they] successfully extract language from text.”

In a sentence, Dr. Cooley said all this highfalutin eye-tracking tech and subsequent research is meant to “establish how a Deaf child uses their first language ASL skills.”

Asked to expound on her goals, she replied thusly: “I’m looking primarily at Deaf children who had early access to sign language: either they have Deaf, signing parents or they have hearing parents who made an effort to learn sign language early. Then these kids go to bimodal, bilingual schools, so they’re really depending on their ASL skills to learn to read English. I really want to know how, from a bilingualism perspective, how that first language access and having a strong first language can benefit the ability for these children to learn a second language, which is English or any other ambient language in a community, by exploiting their first language skills. We see this in hearing populations. We see this all the time. Bilingualism is the norm in most countries around the world, bilingual or multilingualism. If we understand a Deaf child signer as a developing bilingual child, and we think about the aspects of their first language and how that can help them learn their second language more successfully, we’re getting a more appropriate and equitable snapshot of this minority population.”

When asked about the technical component involved with eye-tracking, Dr. Cooley said the device she uses is mounted atop a desk with a laptop behind it such that a child can sit normally and read what’s on screen. The tracker then shines a painless, undetectable infrared light at the subject’s eyes; the reflection travels back to the computer carrying data on where the child’s eyes are positioned while reading—all of it in real time. “Based on what we already know about how readers use information to read, we can then look at Deaf readers in this paradigm,” Dr. Cooley said.

She further noted there exists “a really big body of research” centered on eye movements and reading, adding it’s only been recently, in the last 20–30 years, that Deaf people, especially Deaf signers, have been included in these kinds of studies. The richer inclusion meant, Dr. Cooley said, researchers have been able to learn a lot more about how everybody, Deaf or not, “[uses] their eyes to extract language from text.”
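To make “fine-grained” concrete: below is a minimal sketch, in Swift, of dispersion-based fixation detection (I-DT), a textbook algorithm in eye-movement research for turning raw gaze samples into the fixations and skips Dr. Cooley describes. To be clear, this is not her lab’s actual pipeline; every name and threshold here is a hypothetical illustration.

```swift
// A single gaze sample from the tracker: screen position (in pixels)
// plus a timestamp (in seconds). All names and units are hypothetical.
struct GazeSample {
    let x: Double
    let y: Double
    let time: Double
}

// A detected fixation: where the eyes paused, when, and for how long.
struct Fixation {
    let centerX: Double
    let centerY: Double
    let startTime: Double
    let duration: Double
}

// Dispersion of a window of samples: (max x - min x) + (max y - min y).
func dispersion(of window: ArraySlice<GazeSample>) -> Double {
    guard let first = window.first else { return 0 }
    var minX = first.x, maxX = first.x, minY = first.y, maxY = first.y
    for s in window {
        minX = min(minX, s.x); maxX = max(maxX, s.x)
        minY = min(minY, s.y); maxY = max(maxY, s.y)
    }
    return (maxX - minX) + (maxY - minY)
}

// Classic I-DT: grow a window while the samples stay tightly clustered;
// if the window lasts long enough, record its centroid as a fixation.
func detectFixations(in samples: [GazeSample],
                     maxDispersion: Double = 25.0,  // pixels (hypothetical)
                     minDuration: Double = 0.08) -> [Fixation] {  // 80 ms
    var fixations: [Fixation] = []
    var start = samples.startIndex
    while start < samples.endIndex {
        var end = start + 1
        // Expand the window while it stays within the dispersion threshold.
        while end < samples.endIndex,
              dispersion(of: samples[start...end]) <= maxDispersion {
            end += 1
        }
        let window = samples[start..<end]
        let duration = window.last!.time - window.first!.time
        if duration >= minDuration {
            let cx = window.map(\.x).reduce(0, +) / Double(window.count)
            let cy = window.map(\.y).reduce(0, +) / Double(window.count)
            fixations.append(Fixation(centerX: cx, centerY: cy,
                                      startTime: window.first!.time,
                                      duration: duration))
            start = end
        } else {
            start += 1  // Too brief to be a fixation; slide past it.
        }
    }
    return fixations
}
```

From output like this, researchers can derive the measures Dr. Cooley cites, such as fixation durations and how many words a reader skips, and compare them across groups of readers.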

As someone with low vision who, incidentally, has struggled with eye-tracking on things like Face ID and Apple Vision Pro, I asked Dr. Cooley how nimble her tracker device is. Her answer? Not very. The technology she currently uses assumes what she described as “your most typical eye differences,” emphasizing the tracker works “just fine” with aids like contact lenses and glasses. Beyond that, however, she said the team is “unfortunately” excluding people who have ocular motor conditions (like yours truly) not out of maliciousness, but out of a desire to “be certain that what these kids are doing with their eyes is reflective of what their brains are trying to do.” Dr. Cooley went on to tell me people with strabismus, colloquially called lazy eye, are excluded because their eyes can’t always point to where their brain wants to focus. This weakness, technologically anyway, is crucial because Dr. Cooley’s tracker relies upon an algorithm to function. She hopes to improve the algorithm over time so as to accommodate more types of readers, but that, she said with humility, is beyond her ken. Nonetheless, seeing it addressed over time is something very important to her.

“If we’re not capturing the cognition of every single population of people, I don’t think we’re really capturing cognition—and that includes people with differences in their eye shapes and people with differences in how they use their vision,” Dr. Cooley said. “But at this point, it’s easier to start with the most traditional eye move [and] eye shape because it’s just easier to draw the conclusions we need. But [accommodating visual disabilities] is an important thing to think about. It’s just currently not one of my goals.”

At a more personal level, Dr. Cooley’s ties to deafness and the community are tight. She’s married to a Deaf person and has been a self-described “second language signer” for close to 16 years, telling me she likes to think of herself as being “pretty involved” with the Deaf community. Her horn-tooting aside, though, Dr. Cooley readily acknowledges her “positionality” as a hearing person in a hearing-dominated world. On the eye-tracking project, she explained there are consultants who help the researchers with not only data collection, but also with best practices for working with Deaf children so as to not be “triggering.” This is a key point, Dr. Cooley said, because a lot of Deaf people cope with what she termed “educational trauma,” so RIT’s goal is to avoid said triggers and instead be as “Deaf-friendly” as possible. Still, a significant number of people have reached out to Dr. Cooley and team to express their appreciation for going after the insights they’re trying to glean from their research.

“There’s a great need for this type of information. I think practitioners need it. There’s a lot of information out there about what is most important for a deaf child,” Dr. Cooley said. “One of the biggest arguments that can be made for an oral approach—avoiding sign language and instead making sure a Deaf child is able to speak and read lips and use hearing devices—one of the biggest arguments for that is they won’t be able to learn to read, or will be far less successful in learning to read if they can’t associate sounds with letters. I think that isn’t actually representative of what most Deaf people can do. If you look at Deaf signers, they have this incredibly rich and robust language; most Deaf people will talk about how they use their signing to help read to their children… they sign along with the book, and so their children are exposed to both print and sign. If we can take advantage of these things, I think we can not only make a Deaf child reader more successful, but also feel a little bit better about themselves and not feel like who they are and how they happen to be born is going to make them unable to do something. I think anybody should be able to do anything, and if our educational practices are not well-researched or not founded in research, we can’t know for sure they’re the best practices. It’s pretty clear, given the wide variability in reading outcomes for a lot of Deaf and hard-of-hearing people, that there’s something we don’t know, or there’s something that some people are doing better than others. We just have to test it and see what’s going on to actually be able to make a difference.”

She added: “All of the conversations I’ve had with people, they’ve all been extremely positive. I think education experts, the people who are actually teaching children in the schools, policy makers, early intervention specialists, everybody wants some type of research that can really be used to show ‘Hey, ASL is not detrimental to your Deaf child, it’s actually going to be beneficial. Here is one of the ways that it’s beneficial.’ I have a lot of people reaching out to me asking for these resources and asking for papers that show American Sign Language is only beneficial for Deaf children learning to read.”

At its core, RIT’s work is ultimately about centering the Deaf point of view.

“I always say, if we actually listened to Deaf adults, a lot of this research might not be necessary,” Dr. Cooley said. “They’ve been telling us for years and years and years that ASL is so incredibly important for so many different reasons, but we need the research. Someone has to do it, and I’m so privileged I get to do it. And I love, love [doing] this work… it makes me excited! It feels like a privilege to be doing what I’m doing.”

Dr. Cooley spoke effusively about being based in Rochester and the city’s sizable Deaf presence. (In fact, this very piece is not my first rodeo with the National Technical Institute for the Deaf—I covered the Sign Speak app in September 2024.) She said it’s typical for those in cognitive science to choose the path of least resistance when it comes to recruiting people to participate in studies like hers. Naturally, the Deaf community is a smaller populace, even in Rochester, so it’s “going to take a little bit more effort” to get folks into the lab. But the payoff is worth it; Dr. Cooley told me her troops have fostered a tight relationship with the Rochester School for the Deaf, a K–12, bimodal and bilingual institution for Deaf and hard-of-hearing students. Because of proximity, both geographic and logistical, Dr. Cooley said her staff actually finds it “not too difficult” to connect with interested parents and others. And Rochester isn’t the end-all, be-all either; Dr. Cooley said her team has similar positive relationships spanning the country, from Texas to Indiana and beyond.

“Because of those relationships, we aren’t nearly as concerned with the data collection as somebody else without those relationships would be,” she said. “It’ll definitely take longer to run this type of research than it would take to run this type of study with hearing children because there are fewer concentrated pockets of these readers.”

Looking towards the future, Dr. Cooley hopes to forge “stronger partnerships” with experts across various disciplines, people who oftentimes exist “in their own little silos.” Without this cross-collaboration, there’s too much navel-gazing and not nearly enough work advancing our understanding of the world and the people who inhabit it.

“I really hope in the future, we’re able to get to a point where we can directly meet the needs of all children, not just Deaf and hard-of-hearing children—all children who have varied needs in terms of their ability to read and write,” Dr. Cooley said in looking into the proverbial crystal ball. “In the current day and age, if you can’t read and write, your ability in an academic or professional field is going to be pretty limited. I think being able to meet the needs of all of our children so they can be fully functional and fully capable adults is the goal. I really hope my research can start bringing us towards that.”


White House Claims ASL Interpreters Would ‘Intrude’ on the President’s Public Image

Meg Kinnard reported last week for The AP that the White House argues that using ASL interpreters during press briefings “would severely intrude on the President’s prerogative to control the image he presents to the public.” The Trump administration made said claim in response to a lawsuit seeking to compel it to provide interpreters. Attorneys for the Justice Department added President Trump has “the prerogative to shape his Administration’s image and messaging as he sees fit.”

“Department of Justice attorneys haven’t elaborated on how doing so might hamper the portrayal President Donald Trump seeks to present to the public,” Kinnard wrote on Friday. “But overturning policies encompassing diversity, equity and inclusion have become a hallmark of his second administration, starting with his very first week back in the White House.”

Kinnard continued: “Government attorneys also argued that it provides the hard of hearing or Deaf community with other ways to access the president’s statements, like online transcripts of events, or closed captioning. The administration has also argued that it would be difficult to wrangle such services in the event that Trump spontaneously took questions from the press, rather than at a formal briefing.”

I first covered this story back in July, the editorializing from which bears repeating here. Like the State Department’s decision to go back to Times New Roman from Calibri in correspondence, the White House’s proclivity to pooh-pooh the need for sign language interpretation—a defense made that much more laughable because Gallaudet University is virtually down the street—is yet another example of the Trump administration’s extinguishing of any and all diversity and inclusion initiatives. It’s being made abundantly clear the powers-that-be, starting with Trump himself, want America to be White, wealthy, male, and able-bodied. But such rationale is par for the course—not just at 1600 Pennsylvania Avenue, but for society as a whole. The disability community, yours truly included, is always cast away to the margin’s margin, even amongst DEI supporters, because society has internalized that having disabilities is bad and a sign of a “broken” human condition. Down to brass tacks, that’s why accessibility exists: to accommodate traversing a world unbuilt for people like me. Likewise, it’s why disability inclusion is so miserably behind other areas of social justice reporting in journalism; it’s oftentimes seen as too esoteric or niche to devote meaningful resources towards. All things considered, that’s why I always say doing this work and amplifying awareness is a task of Sisyphean proportions most days. We use technology as much as anyone else. We read the news like anyone else. We’re Americans like anyone else in this country… but somehow are thought of as something less than the human beings we obviously are.


Apple Says ‘Pluribus’ Is ‘Most-Watched Ever’

Marcus Mendes reported for 9to5Mac this week that Apple TV’s new hit show, Pluribus, has officially become the streaming service’s “most-watched ever.” The news comes shortly after Apple announced Pluribus became its “biggest drama launch ever.”

“Last month, Apple said that Pluribus had overtaken Severance Season 2 as Apple TV’s most successful drama series debut ever, a landmark that wasn’t completely surprising, given the overall anticipation and expectation over a new Vince Gilligan (Breaking Bad, Better Call Saul) project,” Mendes wrote on Friday. “Now, on the same day that F1: The Movie debuted at the top of Apple TV’s movie rankings, the company confirmed that Pluribus has reached another, even more impressive milestone: it is the most watched show in the service’s history. Busy day.”

As Mendes notes, Apple keeps its viewership cards—and its subscriber numbers—close to the proverbial chest, so it’s difficult to quantify exactly what “most-watched ever” actually means. At any rate, I can personally attest that Apple TV is unquestionably my favorite streaming service—and not solely because of its embrace of earnest disability representation. Like anyone else, I like to be entertained, and Apple TV does it for me with shows like Pluribus and Severance and The Morning Show and For All Mankind. I’m not quite up to speed with Pluribus as of this writing, but can heartily say it and Severance are two of the best damn shows I’ve ever seen in my 44 years of life. What makes them even more enjoyable, technologically speaking, is my 77” LG C3 OLED, which came out in 2023 but which I got in early January 2025. The panel is so bright and sharp, with its infinite contrast, that it makes not only for spectacular picture quality, but for spectacular, accessible picture quality in terms of sheer size and fidelity. Between my various Apple devices, I’ve grown accustomed to OLED displays for some time now; that said, there’s nothing like experiencing OLED on a screen as large as a television’s. Like Steve Jobs said of the iPhone 4’s Retina display 15 years ago, once you go OLED, it’s hard to go back to a “lesser” (and, yes, less expensive) technology.

Anyway, go watch Pluribus posthaste if you haven’t already. It’s so damn good.

According to Mendes, the show’s first season will run through December 26. Season 2 is currently in development following Apple’s original commitment to do two seasons.


Google Translate Gets Live Translation Enhancements in Latest Update

Abner Li reports for 9to5Google today that Google Translate has been updated such that live translation leverages Gemini—including while using headphones. The feature is available in the iOS and Android apps, as well as on the Google Translate website and in Google Search. Live translation is launching first in the United States and India with the ability to translate from English into over 20 languages, such as Chinese and German.

“Google Translate is now leveraging ‘advanced Gemini capabilities’ to ‘improve translations on phrases with more nuanced meanings,’” Li wrote on Friday. “This includes idioms, local expressions, and slang. For example, translating ‘stealing my thunder’ from English to another language will no longer result in a ‘literal word-for-word translation.’ Instead, you get a ‘more natural, accurate translation.’”

(File this under “I Learn Something Every Day”: Google Translate has a web presence.)

As to the real-time translation component, Li says the feature is underpinned by Gemini 2.5 Flash Native Audio and works by pointing one’s phone in the direction of the speaker. He also notes Google says Translate will “preserve the tone, emphasis and cadence of each speaker to create more natural translations and make it easier to follow along with who said what.” Importantly, Li writes the live translation function is launching in beta on Android for now; it’s available in the United States, India, and Mexico in more than 70 languages, with Google further noting the software works with “any pair of headphones.” iOS support and more localization are planned for next year.

“Use cases include conversing in a different language, listening to a speech or lecture when abroad, or watching a TV show/movie in another language,” Li said in describing live translation’s elevator pitch. “In the Google Translate app, make sure headphones are paired and then tap ‘Live translate’ at the bottom. You can specify a language or set the app to ‘Detect’ and then ‘Start.’ The fullscreen interface offers a transcription.”
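For the technically curious, here is a toy sketch, in Swift, of the general shape such a feature takes: transcribed segments attributed to speakers get translated one by one, with the attribution preserved so it’s “easier to follow along with who said what.” The types and the translate stub are hypothetical; Google hasn’t published Gemini’s actual pipeline.

```swift
// A toy model of the flow Li describes: transcribed segments attributed to
// speakers are translated one by one, preserving who said what. Every type
// below and the translate stub are hypothetical, not Google's pipeline.

struct Segment {
    let speaker: String  // e.g., "Speaker 1"
    let text: String     // transcribed source-language text
}

// Stub translator; a real implementation would call a translation model.
func translate(_ text: String, to language: String) -> String {
    "[\(language)] \(text)"  // placeholder output
}

// Translate a stream of segments while keeping speaker attribution intact,
// which is what makes a two-way conversation easy to follow.
func liveTranslate(_ segments: [Segment], to language: String) -> [String] {
    segments.map { "\($0.speaker): \(translate($0.text, to: language))" }
}

let demo = [
    Segment(speaker: "Speaker 1", text: "Bonjour, comment ça va ?"),
    Segment(speaker: "Speaker 2", text: "Très bien, merci."),
]
for line in liveTranslate(demo, to: "en") { print(line) }
```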

It doesn’t take an astrophysicist to surmise this makes communication accessible.

At Thanksgiving dinner a couple weeks ago, one of my family members regaled everyone with stories about his recent trip to Paris. He of course knows I’m a technology reporter, and he excitedly told me he bought a pair of AirPods Pro 3 at the Apple Store before his trip so he could try Apple’s own Live Translation feature, powered by Apple Intelligence. He told me it worked “wonderfully,” letting him hear French translated into English piped right into his earbuds. It seems to me Google’s spin on live translation works similarly, with the unique part (aside from Gemini) being that it isn’t limited to Pixel Buds. At any rate, language translation is a genuinely good use case for AI—and, more pointedly, a good example of accessibility truly being for everyone, regardless of ability, because it breaks through communicative barriers.

Apple announced Live Translation on AirPods at its fall event in September.


Report: Refreshed Studio Display Found in Code

Earlier this week, Filipe Esposito reported for Macworld that an internal build of iOS 26 contains references to a looming update to the Studio Display. The references, tied to the codename “J527,” corroborate previous reporting by Mark Gurman at Bloomberg.

“References in the code clearly show that this new Studio Display has a variable refresh rate that can go up to 120Hz, just like the ProMotion display on the latest MacBook Pros. The current Studio Display is limited to 60Hz,” Esposito wrote on Wednesday. “Furthermore, the code references a ‘J527’ monitor that also supports both SDR and HDR modes, an upgrade from the current SDR-only model. This is a strong indication that Apple will replace the LCD panel with better technology, such as Mini-LED that can achieve higher brightness levels.”

According to Esposito, other features of the still-in-development second-generation Studio Display include an A19 processor, ProMotion, and much better HDR support.

I’ve written previously about my sore need for a new Mac to replace my outmoded (yet still chugging along) 2019 Retina 4K iMac, a task I’ve put off for a variety of reasons. I really do feel lots of FOMO not running macOS 26 Tahoe, however, and feel bad for life “dictating” to me that the lowest common denominator—my job not requiring tons of compute power—makes my trusty yet tired iMac “good enough.” As I’ve said before, it sucks to miss out on Apple Silicon amenities like iPhone Mirroring—a feature which I haven’t written about much, if at all, but which has serious benefits from an accessibility perspective. All of this to say, I’m very excited at the prospects of a new external monitor that I can plug one of my MacBooks into; a laptop’s screen is serviceable to me while I’m out of the house—narrator: his severe anxiety and depression scoffs at the notion—but if I’m working primarily at my desk, I’d much rather have a bigger screen to accommodate my low vision. So while the Pro Display XDR is forever my white whale monitor, this rumored Studio Display upgrade sounds damn good too—and is arguably the eminently more practical device for my spartan needs.

One way or another, I’m hellbent on making 2026 the Year of Steven’s Desk Makeover.

Apple released the Studio Display in 2022 to complement the all-new Mac Studio.


‘Fire TV makes entertainment more accessible’

Late last week, Amazon published a piece on its website touting a few of the accessibility benefits of its Fire TV operating system for people with disabilities. The platform’s assistive technologies, the company said, “represent more than just technology: they’re about creating moments where everyone can enjoy entertainment their way,” adding Fire TV “adapts to your needs rather than the other way around.”

“Picture this: It’s movie night, and everyone’s gathered around the TV. One person is trying to solve the mystery before the detective, another is straining to catch every word of dialogue, and someone else needs their hearing aids to enjoy the show. We’ve all been there—wanting to share entertainment moments together but having different needs to experience these moments best,” Amazon wrote in the introduction. “During a time of year when friends and family are gathering more often, Amazon Fire TV is highlighting how Fire TV is built for how you watch. This initiative celebrates the unique ways we all enjoy entertainment and highlights innovative features that make watching your favorite TV shows and movies more accessible and enjoyable for everyone.”

The meat on the bones of Amazon’s post highlights three features in particular: Dialogue Boost, Dual Audio, and Text Banner. I’ve covered all of these technologies in one way or another several times over the years, and have interviewed Amazon executives such as Peter Korn many times as well. In fact, one of my earliest stories for my old Forbes column was an ode to Fire TV hardware in the Fire TV Cube. My praise holds up today; whatever one thinks of Fire TV’s ad-littered user interface and general design, it’s entirely credible for a disabled person who has, for example, motor and visual disabilities to choose a Fire TV Cube as their set-top box precisely for Fire TV’s accessibility attributes—especially the Cube’s ability to control one’s home theater. To wit, it isn’t trivial that the Cube can switch between HDMI inputs on a TV and even switch on a game console or Blu-ray player. Given the smorgasbord of remotes and whatnot, that someone can ask Alexa to, say, “Turn on my PlayStation 5” is worth its weight in gold in terms of accessibility for its hands-free operation. Again, to choose Fire TV (and the Cube) as one’s preferred TV platform because of accessibility is perfectly valid; it’s plausible that accessibility is of greater importance than the subjective “messiness” of Fire TV’s UI and its barrage of advertisements.

You can learn more about Fire TV accessibility (and more) on Amazon’s website.


Times New Rubio

This week, The New York Times ran a story, under a shared byline of Michael Crowley and Hamed Aleaziz, which reported on Secretary of State Marco Rubio’s memo to State Department personnel saying the agency’s official typeface would go back to 14-point Times New Roman from Calibri. The Times didn’t include Rubio’s full statement, but John Gruber obtained a copy from a source and helpfully posted a plain text version.

“Secretary of State Marco Rubio waded into the surprisingly fraught politics of typefaces on Tuesday with an order halting the State Department’s official use of Calibri, reversing a 2023 Biden-era directive that Mr. Rubio called a ‘wasteful’ sop to diversity,” Crowley and Aleaziz wrote on Wednesday. “While mostly framed as a matter of clarity and formality in presentation, Mr. Rubio’s directive to all diplomatic posts around the world blamed ‘radical’ diversity, equity, inclusion and accessibility programs for what he said was a misguided and ineffective switch from the serif typeface Times New Roman to sans serif Calibri in official department paperwork.”

The reason I’m covering ostensibly arcane typographical choices is right there in the NYT’s copy: accessibility. The Biden administration’s choice to use Calibri, decreed in 2023 under then-Secretary Antony Blinken, was driven in part by accessibility—Calibri was said to be more readable than Times New Roman. In his piece, Gruber calls bullshit on that notion, saying the motivation was “bogus” and nothing more than a performative, “empty gesture.” He goes on to address Secretary Blinken’s claim, according to a report in The Washington Post, that the Times New Roman-to-Calibri shift was made because serif fonts like Times New Roman “can introduce accessibility issues for individuals with disabilities who use Optical Character Recognition technology or screen readers [and] can also cause visual recognition issues for individuals with learning disabilities.” Gruber rightly rails against the OCR and screen-reader rationale as more bullshit while also questioning the visual recognition part.

I’m here to tell you the visual recognition part is true, insofar as certain fonts can render text inaccessible to people with certain visual (and cognitive) disabilities. This is because the design of letters, numerals, symbols, and the like can look “weird” rather than “normal” depending on how a person’s brain processes visual information. This matters because bad typography can, for a person with low vision like yours truly, adversely affect the reading experience—both in comprehension and physically. Depending on your needs and tolerances, slogging through a weird font can actually lead to physical discomfort like eye strain and headaches. It’s why, to name just one example, the short-lived ultra-thin variant of Helvetica Neue was so derided in the first few iOS 7 betas back in 2013. It was too thin to be useful in terms of legibility, prioritizing aesthetics over functionality. (A cogent argument could be made the tweaks Apple has made to Liquid Glass, including adding appearance toggles, are giant, flashing neon signs of correction from similarly prioritizing aesthetics over function at the outset.)

As somewhat of a font nerd myself—I agonized over what to use at Curb Cuts when designing the site before settling on Big Shoulders and Coda—I personally find Times New Roman ugly as all hell and not all that legible, but I can see the argument that it’s more buttoned-up than Calibri for official correspondence within the State Department. Typographical nerdery notwithstanding, however, what I take away from Rubio’s directive is simple: he cares not one iota for people with disabilities, just like his boss.


Google Gives Pixel Watch 4 Pinch, Flick Gestures

Abner Li reports for 9to5Google today that Google has released what he describes as a “sizable” update for the Pixel Watch 4 that adds one-handed gestures. The newfound functionality is part of Wear OS 6.1, which began rolling out to users late last week.

“Based on Android 16, BP4A.251205.005.W7 is rolling out to the Pixel Watch 2, 3, and 4, including both the Bluetooth/Wi-Fi and LTE models,” Li wrote. “This is officially ‘WearOS 6.1.’ (There are no updates to the original model, which will remain on Wear OS 5.1.)”

(Leave it to Google to lean into the nerdy and inscrutable version numbering.)

According to Li, the Pixel Watch 4 gains two new gestures, enabled by default: Double Pinch and Wrist Turn. The ability to answer and end calls with Double Pinch is coming “soon,” Google says. For now, Double Pinch already has robust capabilities, letting users “[scroll] through alerts, instantly send the first Smart Reply, manage your timer and stopwatch, snooze an alarm, play/pause music, or even snap a photo.”

From an accessibility standpoint, it’s reasonable to presume these new gestures would make Pixel Watch more accessible to users with disabilities. The obvious analogue is, of course, Apple Watch Series 11. In watchOS, users are able to use gestures like Double Tap and Wrist Flick to do essentially the same things on Apple Watch that Google touts Pixel Watch now can do. The win for accessibility is simple: for users with certain motor disabilities, that one can use a one-handed gesture to control their watch—whether Apple Watch or Pixel Watch—can make specific actions more accessible. For example, someone needn’t strain their eyes (or their finger) to find and tap the on-screen Answer/End buttons to accept or end calls, respectively. A quick tap or pinch does the trick, which increases efficiency in addition to improving accessibility.
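For a sense of why a single gesture can stand in for so many on-screen targets, here’s a toy dispatcher, in Swift, illustrating the pattern: the watch resolves one recognized gesture into whatever action the current context calls for. Every type here is hypothetical, not a real watchOS or Wear OS API.

```swift
// A toy dispatcher illustrating the pattern: the watch, not the user,
// resolves a single recognized gesture into the right contextual action.
// Every type and case below is hypothetical, not a real watchOS or
// Wear OS API.

enum Gesture {
    case doublePinch  // primary action, a la Apple Watch's Double Tap
    case wristFlick   // dismissal, a la Apple Watch's Wrist Flick
}

enum WatchContext {
    case incomingCall
    case timerRunning
    case mediaPlaying
    case notification
}

enum Action: String {
    case answerCall = "Answer call"
    case pauseTimer = "Pause timer"
    case playPause = "Play/pause media"
    case sendSmartReply = "Send first smart reply"
    case dismiss = "Dismiss"
}

// One gesture, many meanings: the current context decides the action,
// so the user never has to locate and hit a small on-screen button.
func resolve(_ gesture: Gesture, in context: WatchContext) -> Action {
    switch (gesture, context) {
    case (.wristFlick, _):              return .dismiss
    case (.doublePinch, .incomingCall): return .answerCall
    case (.doublePinch, .timerRunning): return .pauseTimer
    case (.doublePinch, .mediaPlaying): return .playPause
    case (.doublePinch, .notification): return .sendSmartReply
    }
}

print(resolve(.doublePinch, in: .incomingCall).rawValue)  // "Answer call"
```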

Today’s Pixel Watch software update news comes only a few days after Google announced accessibility enhancements to Android as part of its December Pixel Drop.


Onn 4K Pro Gets Gemini Support in Update

Luke Bouma at Cord Cutters News reported last week that Walmart’s Google TV-powered Onn 4K Pro streaming box recently received a software update adding support for Gemini. The news fulfills a promise by Google that Gemini would be rolling out to more devices by the end of 2025. Google’s own Google TV Streamer got Gemini last month.

“The core of the upgrade centers on an evolved version of Google’s Gemini AI, which has been fine-tuned for more intuitive voice interactions and contextual understanding. Users will notice immediate improvements in voice search, where the AI now processes natural language queries with greater accuracy and speed,” Bouma wrote. “For instance, a simple request like ‘find me comedies from the ‘90s with strong female leads’ yields personalized recommendations drawn from vast libraries across Netflix, Hulu, and YouTube, factoring in viewing history and even real-time mood detection via on-device microphones. This represents a significant leap from the previous Google Assistant integration, which often required more precise phrasing to avoid misfires.”

TCL’s mini-LED QM9K TV was amongst the first devices to get Gemini on Google TV.

Back in early August, I posted a brief review of the aforementioned Onn 4K Pro. Bouma is correct in his assertion that the addition of Gemini further buoys the value proposition of the $50 box; indeed, I wrote over the summer there’s a lot to like about Google TV’s content-centric design, especially its YouTube TV integration. Apple could learn a lot from its peer in adapting ideas to improve tvOS and the corresponding Apple TV 4K. Nonetheless, I also wrote tvOS is infinitely more performant than Google TV on the Onn 4K Pro, as the A15 chip in the “current” model smokes whatever off-the-shelf processor runs the Onn product. That, and the Apple ecosystem amenities, are what ultimately keep me from switching my home theater allegiances. I dusted off my Onn 4K Pro over the weekend to install the Gemini update—and an accompanying update to the remote!—but was disappointed with my inability to summon Gemini anywhere; all I could use was the stock Google Assistant. At any rate, my brief time revisiting the Onn box reminded me how technically inferior it is compared to my Apple TV. Say what you will about the apples-to-oranges comparison between a $50 box and a $130 box, and I still do chuckle at the Apple TV arguably being laughably over-engineered for its raison d’être, but it’s that performance prowess that, in the end, makes the Apple TV the crème de la crème of streaming devices. Walmart and Apple are decidedly not the same, as the kids say now.

I disagree with Bouma’s contention in his story that the now-with-Gemini Onn 4K Pro makes Walmart a disruptor when it comes to technological innovation—if anything, the retailer is opportunistic by leaning into the openness of Google TV/Android—but that nonetheless doesn’t take away from the fact the Onn 4K Pro is a damn nice product for the price. If tvOS were to go away tomorrow, I’d switch to the Onn box without a second thought over Roku or Fire TV. As it stands, the Onn 4K Pro is a nice, good-enough option—made even better with Gemini’s capabilities—for those who may want more than their TV’s built-in operating system yet can’t afford the premium price of the Apple TV 4K.


Microsoft Shares ‘Year in Recap’ for Accessibility

Last week, Microsoft marked the International Day of Persons with Disabilities by publishing a blog post in which the company detailed its “year in recap” for Windows accessibility. The piece was written by Akheel Firoz, a product manager at Microsoft.

“The Windows Accessibility team adheres to the disability community’s guiding principle, ‘nothing about us without us,’” Firoz wrote in the introduction to the blog post. “In the spirit of putting people at the center of the design process guided by Microsoft Inclusive Design, working with and getting insights from our advisory boards for the blind, mobility and hard of hearing communities is critical to creating meaningful and effective features that empower every single user.”

Firoz gives Windows’ Fluid Dictation feature top billing in the post, writing it is “designed to make voice-based text authoring seamless and intuitive for everyone” and “intelligently corrects grammar, punctuation and spelling in real time as you speak… [T]his means your spoken words are instantly transformed into polished, accurate text, reducing the need for tedious manual corrections or complex voice commands.” He goes on to say users are able to leverage Copilot (on supported machines) to ensure custom vocabulary is recognized—all without the need for a network connection. At the heart of the enhancements made to Fluid Dictation is, as Firoz wrote, Microsoft’s desire to enable users to “focus on your ideas, and not the mechanics of text entry by minimizing errors and streamlining corrections when typing with your voice.”

Elsewhere, Firoz details improvements to Voice Access, “more natural and expressive voices” for Magnifier and Narrator, as well as efficient document creation with Narrator.

Although Microsoft, led by chief accessibility officer Jenny Lay-Flurrie, is institutionally committed to advancing accessibility for the disability community, it’s nonetheless worth pointing out the company’s blog post came out just one day before Tom Warren at The Verge reported Microsoft is “quietly walking back its diversity efforts.” Square those how you will, but I personally found the timing interesting, if probably coincidental. As someone who has interviewed Lay-Flurrie several times, my strong suspicion is she’d riot were Microsoft to walk back the accessibility efforts it has made.


Apple Execs Kate Adams, Lisa Jackson to Depart

The times, they keep a-changin’ for Apple.

Following the news earlier this week that John Giannandrea and Alan Dye would be moving on, Apple on Thursday announced two more members of its leadership group would be leaving in the not-too-distant future. Kate Adams, Apple’s top lawyer, and Lisa Jackson, who leads the company’s environmental and social initiative programs, both will be retiring in 2026. In Adams’ case, she’ll be replaced by Jennifer Newstead; Newstead previously worked as Meta’s chief legal officer and joins Apple next month.

Curb Cuts typically isn’t the place to read hot executive turnover news and analysis, but this week’s moves by Apple warrant exceptions for accessibility’s sake. Indeed, the exception certainly applies in Jackson’s case, as her purview of social initiatives obviously includes accessibility. In journalistic terms, covering accessibility as I have for close to 13 years is decidedly unglamorous and non-conducive to scoops or “sources said” reporting—although I’ve had my moments in my career. That said, I can dutifully report that my understanding, gleaned from various sources over Jackson’s tenure in Cupertino, is that she has long been an ardent supporter of Apple’s accessibility efforts, both in engineering and in inclusivity. Moreover, I’ve interacted with Jackson on more than one occasion, before and after media events, to exchange pleasantries and the like. During those times, Jackson has herself been emphatic about empowering the disability community and about what technical marvels so many of the accessibility features truly are. While Sarah Herrlinger is Apple’s public “face” when it comes to accessibility, akin to Craig Federighi with software writ large and, externally, to Jenny Lay-Flurrie at Microsoft, Jackson, from everything I’ve been told, is very much an internal champion of the cause as the proverbial sausage is being made.

Apple can be rightly criticized for lots of things—including, yes, in the accessibility space (see: Liquid Glass). But Apple’s work in accessibility is the furthest thing from performative or an empty bromide. Top to bottom, Apple truly does care about this shit.

“Apple is a remarkable company and it has been a true honor to lead such important work here,” Jackson said in a statement for Apple’s press release. “I have been lucky to work with leaders who understand that reducing our environmental impact is not just good for the environment, but good for business, and that we can do well by doing good. And I am incredibly grateful to the teams I’ve had the privilege to lead at Apple, for the innovations they’ve helped create and inspire, and for the advocacy they’ve led on behalf of our users with governments around the world. I have every confidence that Apple will continue to have a profoundly positive impact on the planet and its people.”

COO Sabih Khan will oversee Jackson’s charges following her departure, Apple said.


Google Announces Latest Android Accessibility Enhancements in New Blog Post

Google commemorated this year’s International Day of Persons with Disabilities earlier this week by publishing a blog post in which the company detailed “7 ways we’re making Android more accessible.” The post was written by Julie Cattiau, Google’s product manager for Android accessibility.

“In celebration of International Day of Persons with Disabilities tomorrow, we’re excited to share several new accessibility features on Android that make it easier to see your screen, communicate with others and interact with the world,” Cattiau wrote.

The marquee feature mentioned in Google’s post is what Cattiau calls “an expanded dark mode.” The system will use dark mode even in apps that don’t support it natively, with Cattiau touting the expansive effect “creates a more consistent and comfortable viewing experience, especially for people with low vision or light sensitivity.”

(This expanded dark mode is something I wish Apple would add to iOS sooner rather than later.)

Elsewhere, Google’s post walks through improvements to Expressive Captions, which can now “detect and display the emotional tone of speech,” notably including videos uploaded to YouTube after October. There’s also better voice dictation for the TalkBack screen reader, as well as more accessible pairing and setup of hearing aids, and a much more robust, Gemini-powered Guided Frame in the Pixel camera app.

The enhanced accessibility features coincide with Google’s December Pixel Drop.


Apple Design Chief Alan Dye Leaves for Meta

Mark Gurman at Bloomberg shared a blockbuster scoop today: Alan Dye has left Apple.

"Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices,” Gurman reported earlier on Wednesday. “The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.”

When reached for comment, Apple gave Bloomberg a statement from CEO Tim Cook.

“Steve Lemay has played a key role in the design of every major Apple interface since 1999,” Cook said in the statement. “He has always set an extraordinarily high bar for excellence and embodies Apple’s culture of collaboration and creativity.”

Dye’s departure—he starts his gig as Meta’s chief design officer on New Year’s Eve—marks the second time this week that a senior Apple executive has headed for the exits. On Monday, the company announced AI boss John Giannandrea would be leaving his post while naming Amar Subramanya, who left Microsoft for Apple, as his replacement.

I have nothing substantive to add to the analysis of Dye’s tenure in Cupertino. I can say, however, Jason Snell’s story on today’s news is well worth a read. What I will share, though, is that I had a chance to speak with Dye once in the recent past, albeit off the record. Back in September 2022, at the iPhone 14 launch event at Apple Park, I got to spend maybe 5–10 minutes in the hands-on area conversing with Dye all about the then-new Dynamic Island. I remember feeling excited about the accessibility prospects of the new feature during its reveal, and I told Dye exactly that. His eyes were locked onto mine as I explained to him how and why I thought the Dynamic Island would be beneficial to me as a person with disabilities, and he seemed genuinely moved and fascinated by my first-take thoughts. Like every other Apple executive I’ve spoken to, on or off the record—Cook included—Dye responded by telling me accessibility is a company-wide value, adding his team works closely with its comrades in Accessibility.

From an accessibility point of view, I can’t help but wonder if the aforementioned Lemay will step into Dye’s role and similarly embrace accessibility’s vital part in good product design—much in the same vein as my wondering how Cook’s eventual successor will sustain, and possibly evolve, Apple’s pledge of support to the disability community.

Maybe Dye’s exit (and Lemay’s ascension) will be a “coup” for disabled people?


‘MLB: The Show’ Coming to Phones

In other video game news, my pal Zac Hall reports today for 9to5Mac that the long-running, popular baseball simulation title MLB: The Show is soon coming to mobile devices.

“With the 20th anniversary of the game’s first release approaching, MLB: The Show is coming to iPhone for the first time… the team calls it ‘a new standalone experience built from the ground up to deliver realistic baseball gameplay on mobile devices,’” Hall said.

Hall also notes the game’s developer, San Diego Studios, posted on X with the big news and announced MLB: The Show is launching first in the Philippines, adding “this is step one [and] we’re testing, learning, and building toward broader availability.”

MLB: The Show on mobile requires iOS 26, with San Diego Studios noting gamers get to enjoy “enhanced graphics, increased frame rates, and higher resolutions” on iPhone 16 and later, according to Hall. Neither an iPad version nor cross-platform play is planned.

San Diego Studios has posted a trailer video to YouTube, which I’ve embedded below.

As a diehard baseball fan—the sport is my first love—I have a couple recent versions of MLB: The Show for my PlayStation 5. It’s incredible how realistic the gameplay is, particularly the unique batting stances and pitching motions for each player. Likewise, the ballparks are incredibly detailed. As someone who grew up playing myriad sports titles on the Sega Genesis—now the Mega Sg—I’m continually awestruck by how far graphics-rendering technology has progressed over the last 30–35 years. These details make MLB: The Show a great franchise and a ton of fun to play. No word on accessibility features for the phone version of MLB: The Show, but it’d be cool for them to be there.


Electronic Arts Announces More Accessibility Patents Join Patent Pledge Commitment

Redwood City-based Electronic Arts (EA) announced on Wednesday the addition of eight new patents to its ongoing Patent Pledge for Accessibility. The additions bring the total number of patents to 46 and coincide with the fifth anniversary of the Pledge.

“Through the Pledge, we share our accessibility-centered technology with the wider industry so that together we can meet the needs of our diverse gaming community,” EA says of the Patent Pledge. “It covers some of our most innovative technologies designed to break down barriers for players with disabilities. This includes those with vision, hearing, speaking or cognitive conditions. Better yet, all this IP has been shared royalty-free, which means you won’t need to pay royalties or license fees to use it.”

According to EA, there are four foundational technologies covered in the aforementioned eight new patents. The technologies are: (1) intent-based models for select actions; (2) expressive speech audio generation; (3) robust speech audio generation; and (4) speech prosody audio generation. In particular, the intent-based model technology is the underpinning for the Grapple Assist feature in EA Sports UFC.

Elsewhere, the company also noted enhancements to its open-source Fonttik accessibility tool. There are “new colorblindness simulation filters to the existing text size and contrast analysis technology,” according to Electronic Arts.

EA said today marks “another advancement in [the company’s] mission to inspire the world to play through a commitment to making video games accessible to everyone.”

“Our aim over the past five years has been to create more accessible gameplay experiences for everyone, no matter how or where they play, and open up video games to as wide an audience as possible,” Kerry Hopkins, EA’s head of global affairs, said in a statement. “The accessibility patent pledge is a valuable, whole-industry resource with royalty-free solutions for various use cases, including speech recognition and generation, photosensitivity analysis, and color blindness adjustments. We are proud to enable developers to reach more players with these technologies.”


Apple Releases ‘I’m Not Remarkable’ Short Film

Apple on Tuesday released a new film, called I’m Not Remarkable (embedded below).

The short film, brought to life by the same creative team behind another Apple accessibility film, 2022’s The Greatest, is a music video of sorts for the song “I’m Not Remarkable” by Kittyy & The Class. As the film is from Apple, and thus partly a vehicle for product marketing, it shows off things like iPhone and Apple Watch running accessibility features such as VoiceOver on iOS and AssistiveTouch on watchOS, respectively. Additionally, Apple has a new webpage which complements the video.

Messaging-wise, I’m Not Remarkable is, in fact, rather remarkable as it pushes back on long-held societal stereotypes about people with disabilities. It puts forth the idea that those in the disability community—yours truly included—are first and foremost human beings like anyone else who happen to use (Apple’s) technology to access a world unbuilt for us. We’re just people trying to live our lives like everyone else on this planet.

Moreover, a cogent argument could be made, again, that Apple’s accessibility software is technically remarkable unto itself. Armchair analysts and Wall Street types love to flog the company for a perceived lack of innovation, especially lately regarding artificial intelligence, but the reality is a big driver of Apple innovation lies in accessibility. It’s truly an incubator of innovation, what with features like the iPadOS pointer and Apple Intelligence’s Type to Siri tracing their origins back to ostensibly esoteric, niche assistive technologies. In both cases, they were “handed off” internally by the Accessibility group to the wider OS team so as to be further massaged for more mainstream use cases.


The Action Button Makes ChatGPT More Accessible

Tim Hardwick reported last week for MacRumors that the ChatGPT app on iOS can be opened with the Action Button on iPhone. This means, Hardwick wrote, users can “use it to jump straight into a spoken conversation, giving you quick, hands-free access to a far more capable assistant.” Apple added the Action Button with iPhone 15 Pro in 2023.

“A long press of the Action button will now open ChatGPT’s voice mode. The first time you activate it, the app may request microphone access. Tap Allow to proceed. After that, you can begin speaking immediately,” Hardwick said in his piece last Friday. “A recent update means voice conversations now take place inside the same chat window as text-based prompts, instead of switching to a separate voice-only interface. Responses appear in real time, combining spoken output with on-screen text and any visuals the model generates. This keeps your conversation’s context intact and makes switching between typing and speaking smoother.”

I decided to cover this piece because of accessibility, of course. Although I’ve sung the praises of Google’s Gemini numerous times in the past, I recently put the aforementioned ChatGPT app on my iPhone Air’s Home Screen and have been really liking it. It feels “smarter” than Gemini, and I like how there’s a certain symmetry to using ChatGPT for my AI wants since OpenAI is, for now, the sole third-party model provider for Apple Intelligence. From an accessibility perspective specifically, being able to map ChatGPT to the Action Button makes for an eminently more accessible way to launch the app—even more than my current setup of the Home Screen app and widget. (I prefer to use the Action Button to launch Magnifier.) If you’re someone with a disability who relies on ChatGPT for general knowledge or performing tasks such as note-taking, tying it to the Action Button makes sense for accessibility and expediency.
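For developers curious how an app becomes launchable from the Action Button in the first place: the Action Button can run anything the Shortcuts app can, and apps expose actions to Shortcuts through Apple’s App Intents framework. Below is a minimal sketch; the VoiceSession class and the intent itself are hypothetical stand-ins, not OpenAI’s actual code.

```swift
import AppIntents

// Hypothetical stand-in for an app's internal voice-chat controller.
@MainActor
final class VoiceSession {
    static let shared = VoiceSession()
    func begin() { /* start listening, present the chat UI, etc. */ }
}

// An App Intent like this surfaces as an action in the Shortcuts app,
// which is exactly what the Action Button can be configured to run.
struct StartVoiceChatIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Voice Conversation"
    static var description = IntentDescription(
        "Opens the app and begins a hands-free voice conversation.")

    // Bring the app to the foreground when the intent runs.
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        VoiceSession.shared.begin()
        return .result()
    }
}
```

Once an app ships an intent like this, assigning it to the Action Button is a one-time trip to Settings; from then on, a single long press starts the conversation, no hunting across the Home Screen required.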

Relatedly: Allen Pike’s blog post on the greatness of the ChatGPT app for Mac.


OLED iPad mini Announcement Could Come in ‘Third or Fourth Quarter’ of 2026, Report Says

Ryan Christoffel reported for 9to5Mac this week that Apple’s likely to launch an OLED-equipped iPad mini during the third or fourth quarter of 2026 “at the earliest.” The speculative release timing comes from a series of posts on Weibo by a leaker in China.

Mark Gurman has said the OLED iPad mini could come “as early as next year.”

“The last couple of iPad mini models have been released in October and September, respectively,” Christoffel wrote on Wednesday. “So a fall launch for the new OLED model would be consistent with [the purported release timeframe].”

As someone who (a) is used to OLED on nearly every screen sans my Retina 4K iMac and (b) is itching to downsize from my current M4 13” iPad Pro, news of an OLED iPad mini is damn exciting. I still have the A17 Pro iPad mini that I reviewed last year, which I’ve been dallying with again recently as my “couch computer,” and I’ve enjoyed the rekindled experience very much. To reiterate a sentiment I shared a little over a year ago in my aforementioned review, the mini, in my mind, “represents the purest expression of Jobs’ original conceit for the tablet,” adding “the reality is iPad mini truly is a nigh-perfect device for doing the things Jobs said tablets excelled at.” Since an OLED iPad mini doesn’t exist right now, the next best thing would be an 11” M5 iPad Pro, which is tempting too—but the fact remains, as I also said last year, an iPad mini with an OLED screen is the Platonic ideal for me when it comes to my day-to-day tablet usage.


Walmart Now Selling M1 MacBook Air for $549

Joe Rossignol reported this week for MacRumors that Walmart is selling the M1 MacBook Air, brand-new, for $549 as part of its Black Friday sales. The laptop was amongst the first Macs fitted with Apple’s homemade silicon, announced in November 2020.

Walmart’s M1 MacBook Air is the base configuration, featuring 8GB RAM and 256GB solid-state storage, in silver and space gray. Gold is sold out as of this writing.

“Apple discontinued the MacBook Air with the M1 chip last year, after it launched models with the M3 chip, and it has since updated the MacBook Air with the M4 chip,” Rossignol wrote on Tuesday. “Prior to being discontinued, the model with the M1 chip was being sold for a starting price of $999 brand new, but Amazon sometimes offered it on sale for $899 or less.”

As Rossignol rightly notes, the M1 Air is extraordinarily competent despite being 5 years old now. I covered a similar price drop of the computer back in August, and the points I made in that story are worth reiterating here. In terms of literal accessibility, buying power-wise, the $549 price tag makes the M1 Air a stratospherically high value proposition. For a disabled person who must pinch their pennies, the M1 Air may well be the best, least expensive option to upgrade their laptop. While it admittedly isn’t as svelte and “modern” as the redesign ushered in with the M2 generation—the version I have, by the way—the industrial design still smokes any PC laptop, and importantly retains its hallmark thinness and lightness. Moreover, from a software perspective, the M1 chip is performant and affords amenities like iPhone Mirroring in addition to the typical cavalcade of macOS accessibility features. And although Rossignol also rightly caveats that next year’s macOS 27 release theoretically could drop support for the M1 chip, I’d say chances are pretty good it’ll stay supported for some time. This means a person buying this discounted M1 Air at Walmart right now could take comfort in the notion their (relatively) minimal investment won’t reach end-of-life for several years yet.
