A Look Inside This Year’s Imagine RIT Festival
The Rochester Institute of Technology (RIT) held its annual Imagine RIT: Creativity and Innovation Festival this past Saturday. The event, held every spring since 2008, showcased more than 450 exhibits for attendees. The goal of the Festival, RIT writes on its website, is to help people “get a glimpse of the creativity and innovation that students, faculty, and staff experience every day.”
“Our goal is to inspire the next generation of problem solvers and spark excitement about science, technology, engineering, and math,” Lisa Stein, RIT’s executive director for events and conferences, said in a statement on the institution’s website.
Of the hundreds of exhibits featured during this year’s Festival, a few married technology with disability for accessibility’s sake. One of them involves prostheses and helping those with limb differences—a topic I wrote about earlier this month. Ahead of this year’s Festival, I connected with third-year biomedical engineering student Nataly Rosas Franco who, alongside her compatriots Alex McMahon, Emanuel Mongkuier, Kaitlyn So, Marguerite Wascovich, Max Sushynski, and William Brent, embarked two years ago on conceiving and developing an adjustable prosthetic arm for young children. Franco explained in a brief interview conducted over email that the septet decided on designing pediatric prosthetics because “we were interested in working on arm prosthetics in general, specifically one that went up to the elbow.” Although there exist “many” arm prosthetics on the market today, there are comparatively few expressly designed for young children—“at least not many that were long-lasting with sophisticated mechanisms,” Franco said.
“We quickly noticed that pediatric prosthetics were usually stiff and bulky since children rapidly outgrow them,” she added. “Or, if they were adjustable they were very rudimentary in movement. We wanted to focus on having adjustable components that could ‘grow’ with the child to provide a cost-effective solution for these patients. While also providing the same refined mechanisms found in adult prosthetics.”
The economics of prostheses are particularly sensitive, considering, as Franco told me, pediatric prosthetics cost anywhere between $5,000 and $50,000, with the top end of that wide range typically reserved for athletic pediatric prosthetics. What’s more, the cost isn’t a one-time expense; indeed, new prosthetics have to be built and bought as children age until they reach adulthood. “By including adjustable parts that can ‘grow’ with the child it can help mitigate these exorbitant costs,” Franco said.
When asked how the group’s prosthetic arm functions, Franco explained it works “by having many adjustable features that can be extended by a caregiver as the child grows,” with areas such as the socket, forearm, and fingers able to expand “through time” in length and width. The expandability matters economically, as most prosthetics are pricier precisely because they’re one-off, custom designs. That Franco and team’s prototype is adjustable means costs can be lower because the prosthetic device needn’t be so customized. (The only caveat to this, she said, is if a person required a full arm replacement.) Moreover, Franco said the group’s prosthetic stands out from conventional ones in part because there are “definitely more mechanisms involved.” To wit, her device has capabilities such as individual finger movement, wrist movement, and forearm expansion. And unlike prosthetics from companies such as ExpHand—which similarly touts making a prosthetic that “grows” but only opens and closes the hand—Franco boasted their prototype does more, telling me the team implemented EMG, or electromyography, sensors such that wearers can enjoy more nimbleness and a more natural experience—as though they had their natural limb(s).
“We seek to give children with [limb difference] as normal a life as possible,” she said.
As to the project’s bill of materials, Franco said she and the team spent “not more than $1,200” building the prosthetic arm, adding “much of it” is 3D-printed. The internal electronics are themselves inexpensive, with Franco saying the most expensive component is the $33 servomotor. “At most, [the prosthetic] would end up being around the low end of what prosthetics usually cost, but even then it is more cost-effective since it lasts approximately 9 years from ages 7–16 years old,” Franco said.
Elsewhere, Dhaval Mahajan is a human-computer interaction graduate student from India. Mahajan, alongside Sidney Grabosky and Ziming Li, developed smart glasses not because the threesome was excited about connected wearables; rather, they were interested in the category because members of the research team had been working with autistic adults in a vocational training program for a few years. Specifically, they wanted to use virtual reality technology to “simulate workplace scenarios in a controlled setting,” Mahajan said. Job coaches, he explained, have “consistently” asked whether, given AI’s rise in prominence and the popularity of smart glasses, it would be possible to implement the technologies in training and/or on-the-job settings. The need, not the nerdery, sparked Mahajan and team’s work on bringing their idea to life.
“The appeal of a wearable display in this context lies in its ability to address a significant challenge in vocational support. As trainees become more independent, the coaching that helps them succeed during training is often gradually reduced,” Mahajan said to me in a recent interview over email. “The tools that provide this coaching (checklists, printed recipes, and prompts from coaches) can be either socially noticeable or difficult to manage in real time when interacting with customers. A wearable display can offer a more discreet and user-friendly alternative. It provides guidance in a trainee’s line of sight without using their hands, diverting their gaze from the customer, or requiring a coach to be present at the moment of need.”
The glasses’ software is a custom web app built to support the aforementioned needs of the job coaches. Mahajan said the software has three main functions: (1) it takes in speech input in order to process the command/query; (2) it uses an LLM (large language model) to understand context and directions; and (3) it updates the task interface. Furthermore, the software includes a real-time order panel displaying customer requests, with tooltip-like bubbles surfacing suggestions for cues like asking for clarification, as well as a step-by-step checklist replete with photos. Broadly, Mahajan described the team’s project as “more of a wearable display than a standalone computer,” based on the XReal Air 2, and noted the software runs on a connected machine, with the glasses projecting images onto the user’s field of view. (It’s worth noting this method is similar to how the original Apple Watch worked; the paired iPhone did the heavy compute. Apple is purportedly using the same strategy for its still-in-development competitor to Meta’s Ray-Bans.) The team deliberately chose to walk this technical path at this stage of their research and development, Mahajan said.
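For readers curious how those three functions might hang together, here is a minimal, hypothetical Python sketch of the loop Mahajan describes: speech comes in, an LLM-style interpreter decides what it means, and the task interface updates. All names and the stand-in `interpret` function are illustrative assumptions, not the team’s actual code; in the real system the interpretation step would be a call to an LLM.

```python
# Hypothetical sketch of the three-step loop: (1) speech input,
# (2) LLM-style interpretation, (3) task-interface update.
# Illustrative only -- not the RIT team's actual implementation.
from dataclasses import dataclass, field


@dataclass
class TaskInterface:
    """The checklist the wearer sees in their line of sight."""
    steps: list
    done: set = field(default_factory=set)

    def mark_done(self, step):
        if step in self.steps:
            self.done.add(step)

    def current_step(self):
        # First step not yet checked off, or None when finished.
        for step in self.steps:
            if step not in self.done:
                return step
        return None


def interpret(utterance: str) -> dict:
    """Stand-in for the LLM call: map an utterance to an intent."""
    text = utterance.lower()
    if "done" in text or "finished" in text:
        return {"intent": "complete_step"}
    return {"intent": "add_request", "item": utterance}


def handle_speech(utterance, interface, orders):
    """One pass of the loop: speech in, interpretation, UI update."""
    result = interpret(utterance)
    if result["intent"] == "complete_step":
        interface.mark_done(interface.current_step())
    else:
        # Populate the real-time order panel with the customer request.
        orders.append(result["item"])


ui = TaskInterface(steps=["greet customer", "take order", "ring up sale"])
orders = []
handle_speech("I'd like a medium coffee", ui, orders)
handle_speech("All done with the greeting", ui, orders)
```

The design mirrors what Mahajan said the interface gets right: the checklist ticks itself off and the request list fills in as the customer speaks, rather than dumping paragraphs of advice on the trainee.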
“It let us iterate on the interface quickly and respond to feedback from autistic adults and their job coaches,” he said. “The same interface could move onto more integrated hardware as that category matures.”
The team’s glasses aim to solve two problems, according to Mahajan. First and foremost, they lessen cognitive load. He explained customer service roles require simultaneously juggling multiple skills: product knowledge, customer care, a point-of-sale system, and live conversation. While the individual tasks are eminently learnable, managing them in totality can prove daunting. Thus, the glasses help because, as Mahajan described, “putting the current step and a short cue into the line of sight takes that memory work off the trainee and lets them stay present with the customer.” Secondly, the glasses compensate for coach availability. Job coaches, Mahajan told me, are a finite resource; there’s a wide delta between intensive hands-on training with a client and eventually—hopefully—going hands-off and letting them function autonomously. The glasses, then, can serve as a proxy for the human coaches’ direction during the in-between period. But the LLM does have its limitations, as ever; it isn’t coaching well if it serves up umpteen paragraphs of instructions. Indeed, Mahajan said that revelation “surprised” him at first, adding the workers and coaches found “paragraphs of advice harder to process mid-conversation.” Better, Mahajan said, to build a simple, well-designed interface which, as he told me, has a checklist that automatically ticks off tasks and a request list populating as a customer speaks to employees. “The AI is still doing real work, but with context-aware reminders and tips,” Mahajan said.
Mahajan is bullish on smart glasses as a product category, telling me “we’re glad” they’ve entered the mainstream consciousness. He noted any wearable’s biggest obstacle is “whether the wearer is willing to put it on and keep it on” and lauded companies for “[making] these devices lighter, less conspicuous, and more acceptable in public, and accessibility research directly benefits from that work.” People are going to be more inclined to wear a pair of smart glasses which resemble regular glasses—frames that are sleek, svelte, and comfortable, especially at work.
“We’d push for more on the design side,” Mahajan said. “Most mainstream devices are built around general-consumer use cases, capture, translation, and a smart assistant. In contrast, accessibility tends to arrive later, as a feature or partnership. The question I’m more interested in is what it looks like to design a wearable interface with disabled users from day one, for a specific task they’re trying to learn or do. That’s a different kind of product… it’s where some of the most meaningful work in this space still lies.”
Lastly, there’s Alex Baker and the Neurotechnology Exploration Team. The NXT, as it’s colloquially known at RIT, was described by Baker as a club which “allows students from many different majors to collaborate and test the potential of the connection between the human body and technology.” The NXT team, he added, is committed to “advancing neurotechnology through creating accessible and assistive technology.” For his part, Baker found the space “personally interesting” after seeing the tech in 2024, saying he was “amazed” by “the application of neuroscience, the capabilities of the technology as a whole, and the potential of the electrical signals our bodies emit.”
“Technology is constantly evolving, and the applications are becoming increasingly surreal. Our brains, muscles, and entire nervous system are fascinating and prove how complex we truly are,” Baker said. “The combination of these felt like a fantasy or a dream, so being able to work on making it a reality.”
Baker and the NXT team, which, similarly to Franco’s cohort, also works on prostheses for people with limb difference, built a wheelchair controlled by brainwaves. Baker said the team’s goal is to “[create] disability and rehabilitation services as a solution for people who can’t easily operate a wheelchair on their own or struggle with the use of their prosthetics.” The technology is open-source, with the goal there being to provide “easily affordable aids so people can live more independently and comfortably.”
NXT’s wheelchair is powered by EEG, or electroencephalogram, data, which is collected non-invasively. The team utilizes OpenBCI hardware to capture brain signals, with the system built upon a Raspberry Pi and Nvidia’s Jetson Nano. The former is responsible for general system control and communication, while the latter shoulders the burden of “executing our AI model and conducting real-time signal processing,” Baker said. Software-wise, the team uses Python to process the EEG data and communicate with the aforementioned hardware.
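To make that pipeline concrete, here is a simplified, illustrative Python sketch of the kind of processing Baker describes: a window of EEG samples is reduced to a band-power feature, and a decision rule maps it to a wheelchair command. The band choice, threshold, and decision rule are all stand-ins I’ve assumed for illustration; NXT’s actual system uses an AI model running on the Jetson Nano, not a hand-tuned threshold.

```python
# Illustrative sketch of EEG-to-command processing -- not NXT's actual code.
# A direct DFT estimates power in a frequency band; a toy rule maps the
# feature to a wheelchair command.
import math


def band_power(samples, fs, lo, hi):
    """Crude power estimate in the [lo, hi] Hz band via a direct DFT."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power


def classify(samples, fs=250):
    """Toy decision rule: strong alpha-band (8-12 Hz) activity -> 'forward'.

    A real system would feed features like this into a trained model.
    """
    alpha = band_power(samples, fs, 8, 12)
    return "forward" if alpha > 1.0 else "stop"


# Synthetic one-second window: a 10 Hz sine wave mimicking alpha activity,
# sampled at 250 Hz (a common EEG sampling rate).
fs = 250
window = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
command = classify(window, fs)
```

In the division of labor Baker outlines, the heavier version of this signal processing and the model inference would live on the Jetson Nano, with the Raspberry Pi translating the resulting commands into motor control and system communication.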
* * *
Overall, the three projects I’ve highlighted here share common threads. First, technology can absolutely make people’s lives richer and more accessible—literally so in the case of Franco and team’s cost-efficient prosthetic arm. Unlike so much of assistive technology, the societal views on which are rife with patronization and “gee whiz” platitudes, the students I spoke with for this feature story had clearly identified, matter-of-factly, how technology can empower. Second, AI can be used for genuine good. That so much of the talk over artificial intelligence centers on dystopian use cases and Skynet-like dangers, not to mention “software brain”—valid concerns, the lot of them—means stuff like what’s coming out of RIT gets a relative pittance of media attention. My fellow newshounds have decided the pitfalls, et al., of AI are worth examining and re-examining—but that myopic focus comes at the cost of (predictably) undervaluing what the technology can do to actually help humankind, especially those in the disability community. The salient point is twofold: (1) good on these RIT students for recognizing accessibility’s importance and acting upon it; and (2) AI coverage needs more stories about said students’ efforts to honest-to-goodness improve lives.
It’s worth noting Rochester is doing big things in technology. Not only is there RIT, there’s the National Technical Institute for the Deaf, where students are producing stuff like Sign Speak—and leveraging AI all the while. What I’m saying is, while the Bay Area obviously has Silicon Valley, and Oregon has its Silicon Forest, Upstate New York assuredly has technological might all its own.
Next year’s edition of RIT’s Creativity and Innovation Festival is set for April 24, 2027.