Building Voice Technology That Works: CDLI ASR Devices Makerthon


Over four days, twenty makers and four users with speech impairments worked together as co-designers, not as separate groups of builders and testers. The result was assistive technology shaped by the people who actually need it.

Kenya
17 Feb 26
A group photo of the Makerthon team at Senses Hub Nairobi.

The CDLI ASR Devices Makerthon had a clear primary mission: co-design edge-based speech transcription devices with people with speech impairments, prove that small AI models can run offline or in low-connectivity environments, and produce open-source hardware and software designs that can be locally assembled anywhere in the world using commonly available parts. Every decision flowed from these goals: this wasn't a showcase event, it was a proof of concept with real-world stakes.

Teams moved from understanding user challenges to building functional prototypes powered by Raspberry Pi and custom ASR models, capable of transcribing speech, generating spoken responses, and running entirely on the edge, with no cloud connectivity required.
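
The article doesn't name the teams' software stack, but as a rough illustration of the pattern, a minimal offline transcribe-and-respond loop on a Raspberry Pi could be sketched with the open-source Vosk recogniser and the pyttsx3 text-to-speech engine, both of which run without a network connection (the model directory and audio file below are placeholders):

```python
import json
import wave

import pyttsx3                           # offline text-to-speech
from vosk import Model, KaldiRecognizer  # offline speech recognition

# Placeholder paths: a downloaded Vosk model directory and a recorded utterance.
model = Model("vosk-model-small-en-us-0.15")
audio = wave.open("utterance.wav", "rb")  # 16-bit mono PCM, as Vosk expects

recognizer = KaldiRecognizer(model, audio.getframerate())

# Feed the recording to the recogniser in small chunks,
# the way a live microphone stream would arrive.
while True:
    chunk = audio.readframes(4000)
    if not chunk:
        break
    recognizer.AcceptWaveform(chunk)

transcript = json.loads(recognizer.FinalResult())["text"]
print("Heard:", transcript)

# Speak a response entirely on-device.
tts = pyttsx3.init()
tts.say(f"You said: {transcript}")
tts.runAndWait()
```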

Elite Group built a home automation system that lets users control lights by voice while also detecting smoke or fire and alerting emergency responders: one device serving both accessibility and safety.
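
The write-up doesn't detail Elite Group's implementation, but the core idea, routing a transcript to GPIO-controlled lights while watching a smoke sensor, might look something like this on a Raspberry Pi with the gpiozero library (the pin numbers and the alerting stub are hypothetical):

```python
from gpiozero import LED, DigitalInputDevice

# Hypothetical wiring: a relay driving the lights on GPIO 17,
# and a smoke sensor's digital output on GPIO 27.
lights = LED(17)
smoke_sensor = DigitalInputDevice(27)

def handle_command(transcript: str) -> None:
    """Map a transcribed utterance to a light action."""
    text = transcript.lower()
    if "light on" in text or "lights on" in text:
        lights.on()
    elif "light off" in text or "lights off" in text:
        lights.off()

def alert_responders() -> None:
    # Placeholder: the article doesn't describe the alerting channel;
    # an SMS gateway or a local siren could be triggered here.
    print("Smoke detected! Alerting emergency contacts...")

# Fire the alert whenever the sensor output goes active.
smoke_sensor.when_activated = alert_responders
```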

Voicenote Group created an offline speech therapy tool. Users speak target words; the system transcribes them, checks the result against the intended phrase, then uses confidence scoring to give immediate feedback: "great," "okay," or "try again." No therapist or internet connection required.
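
As a sketch of that scoring step (the thresholds here are illustrative assumptions, not the values the team used):

```python
def feedback(transcript: str, target: str, confidence: float) -> str:
    """Turn a transcribed attempt plus the recogniser's confidence
    score into one of the three feedback levels described above.

    The 0.85 and 0.6 cut-offs are assumed for illustration.
    """
    if transcript.strip().lower() != target.strip().lower():
        return "try again"          # wrong word, regardless of confidence
    if confidence >= 0.85:
        return "great"
    if confidence >= 0.6:
        return "okay"
    return "try again"              # right word, but too uncertain


# Example: a correct, confident attempt at the target word "water".
print(feedback("water", "water", 0.92))  # -> "great"
```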

Team Elezi focused on stuttering, building a transcription tool designed to bridge communication between two people when one has a stutter, framing the challenge as a conversational one, not just an individual one.

Team Vocalis developed an AI-powered speech assistant with multiple modes: stammer detection, active listening support, smart word assistance, and both a "stammer button mode" and "conversation bridge mode" for different situations.

Beyond the prototypes themselves, the makerthon was designed to build something longer-lasting. By bringing together makers around assistive technology, it laid the groundwork for an ongoing community that future teams can extend. The makerthon demonstrated that small models can handle real speech challenges offline, that open hardware designs are achievable within tight constraints, and that when users help design the technology built for them, the solutions are more likely to actually work.

This initiative is led by UCL’s Global Disability Innovation Hub, supported by Google.org and the UK International Development-funded AT2030 programme. The Centre for Digital Language Inclusion works in collaboration with local and international partners. Technical support has been provided by Modal, whose GPU sponsorship is powering the development of the ASR models. Other collaborators include the Research Center Trustworthy Data Science and Security, Talking Tipps Africa Foundation, Senses Hub, University of Ghana, Strathmore University, and Hogan Lovells.

Team members discussing during one of the sessions
A team showcasing their product during the makerthon
An example of a product that was created using Raspberry Pi
Katrin, CDLI's AI lead, giving a certificate