Speech Sounds You Can See

When a child struggles to hear clearly, speak confidently, or read fluently, we often treat the symptoms separately—speech therapy for articulation, reading intervention for decoding, and hearing aids for audibility. But there’s a quiet gap running underneath all three: a gap in phonological access. For many children with auditory processing disorder (APD), auditory neuropathy spectrum disorder (ANSD), peripheral hearing loss, or even normal hearing with dyslexia, the problem isn’t just hearing sounds—it’s recognizing, sequencing, and storing them accurately. That’s where cueing becomes transformative.

Cueing is a visual system that represents the individual phonemes of spoken language using a combination of handshapes and placements near the face, in sync with natural mouth movements. There are just eight handshapes for consonants and four placements for vowels, but when combined with lipreading, they give a child full access to every spoken sound in the language. It’s not about teaching a child to speak—it’s about giving them access to the structure of language itself. The smallest units. The ones with meaning.

One of the most overlooked strengths of cueing is how incredibly low-cost and easy to learn it is. Most parents and professionals can learn the full system in just two days of instruction, and much of it can be practiced independently at home. Once learned, cueing can be woven into everyday conversation, storytime, and structured lessons—with no need for expensive equipment or ongoing therapy costs.

Compared to years of tutoring, reading intervention, or speech therapy, cueing is a simple, early investment that can significantly reduce the need for costly services later. It gives children the phonological foundation they need upfront, which makes future literacy and language instruction far more effective—and often shorter.

Cueing is not in common use. Unlike signing, which is widely known and deeply embedded in Deaf culture, cueing is still unfamiliar to many families, educators, and even audiologists. And it’s important to say clearly: cueing is nothing like ASL. It doesn’t have its own grammar, spatial structure, or classifiers. It’s not a natural language. It’s a system designed to make the sounds of spoken language visible.

Cueing has at times been met with skepticism, in part due to its historical proximity to oralist approaches that emphasized speech over sign language. But cueing is not oralism. Lipreading alone is only 25–50% accurate under the best conditions. Cueing, by contrast, is 100% visually accurate. It gives full access to the phonemes of spoken language—without relying on hearing at all. It was never meant to replace sign language. It was designed to address a very real problem: the literacy gap.

Dr. R. Orin Cornett wasn’t just any hearing academic—he chose to work at Gallaudet University, the world’s leading institution for Deaf education. He came in with deep respect for the intelligence and ASL fluency his students demonstrated. What struck him was the gap between that fluency and their low English literacy. Cueing wasn’t invented to erase ASL. It was created as a visual tool to give access to English phonology—because ASL, being a different language, doesn’t represent spoken English sound structures.

And to be clear, I am in no way suggesting that Deaf children should not have access to ASL. I am a huge supporter of ASL. It is essential for identity, culture, community, and expressive depth.

Cueing is not a replacement for ASL—it’s a support. Its primary benefit is improved learning access, particularly in reading and phonological awareness. It enhances lipreading accuracy and can run parallel with ASL, giving children access to both language systems. One of the greatest advantages of cueing is that it shortens the learning curve for parents.

Most hearing parents are not fluent in ASL, and many end up modeling limited or incorrect signs. Cueing helps these families communicate fluently and clearly, right away, without giving their children a watered-down version of language during critical developmental years.

That said, I recognize and respect that cueing has been critiqued by some as audist or as a hearing-centric tool imposed on the Deaf community. But this critique often conflates intent with impact. Dr. Cornett’s goal was not to replace Deaf identity but to address a specific gap in access to English phonology. Tools are not inherently oppressive—they can be used to empower or to limit, depending on the context.

Cueing isn’t oralism. Oralism says, “Try harder to hear.” Cueing says, “Here—let me show you the sound.” It’s a tool. And when used appropriately, ethically, and in partnership with ASL, it can be life-changing.

And what’s even more compelling is that neuroimaging studies have shown that cueing is processed in the same auditory regions of the brain as spoken language. In a 2017 fMRI study, deaf native users of Cued Speech showed activation in the superior and middle temporal gyri and left inferior frontal gyrus—the same regions typically used for processing auditory speech. This suggests that cueing is not just a workaround, but a valid and effective pathway to building phonological structure through visual input.

This is critical for children who do not have clear access to sound—whether due to peripheral hearing loss, auditory neuropathy, or auditory processing issues. These children often struggle to organize the sound structure of language; their internal filing system is missing drawers, mislabeled, or overloaded. Cueing helps establish a clean, predictable framework for organizing phonemes.

Later, when access improves—through hearing aids, cochlear implants, or auditory training—those sounds already have a place to go. The brain isn’t scrambling to decode them. It just files them where they belong.

Think of cueing like using cue cards—little visual prompts that show you exactly what sound is coming next. You don’t have to hear it. You don’t even have to understand what it means. You just see the cue and connect it to a sound. It’s like giving your brain a set of visual flashcards that match every syllable of what’s being said.

While cueing was historically called Cued Speech, many of us now prefer the terms cueing or Cued Language. The focus on “speech” has created confusion, implying that verbal production or auditory hearing is the goal. But cueing isn’t about talking—it’s about language organization. And for many children who struggle with auditory decoding, memory, or rapid speech perception, this visual clarity is game-changing.

Most of the original research on cueing was done with children who had severe to profound peripheral hearing loss. And yet many of them developed strong spoken language, reading, and writing. Some used cochlear implants. Some didn’t. But they all had something in common: consistent, complete access to phonology through visual input.

And those findings matter even more for children with peripheral hearing loss, APD, ANSD, fluctuating hearing loss, or dyslexia. These kids often miss subtle differences in speech—especially in noise or fast conversation. Cueing gives them a way to see what they’re missing. It also helps them understand how real-world speech behaves. When started early, cueing can even help a child build such strong phonological awareness that they may never need formal reading intervention.

Many children with APD, ANSD, or hearing loss also struggle to learn foreign languages, especially when it comes to pronunciation. Cueing can support this by offering visual access to unfamiliar sound systems. Cueing systems exist for dozens of languages and dialects, making language learning more accessible.

Cueing isn’t designed to teach speech, but it gives children insight into how speech works. Take the word train. Phonemically, it starts with a /t/ and an /r/. But in running speech, the /t/ is affricated and rounded by the following /r/, turning into something closer to “chrain.” Cueing lets us talk about that. It shows the underlying structure, and also highlights how that structure shifts in fluent conversation.

Cueing isn’t just a tool for decoding. It’s like Orton-Gillingham in motion—phonological instruction delivered in real time, during live, running speech. It’s structured, consistent, and multisensory, but instead of being limited to paper or tutoring sessions, it travels with the child into everyday conversation.

And it works at every level. Think of a word like supercalifragilisticexpialidocious. No one expects a child to know what that means—but with cueing, they can still break it into syllables, track the rhythm, and practice the pronunciation.

In fact, one of the most exciting studies to date was conducted by Jennifer Montgomery, who worked with a small group of profoundly Deaf students with dyslexia. She combined Orton-Gillingham instruction with cueing—and within one year, these students made a full year of growth in reading. It suggests that when phonology becomes accessible, even the most challenging reading profiles can shift.

I’ve seen it happen myself. I worked with a family whose son had severe apraxia and auditory processing issues. They had tried everything—speech therapy, apps, picture cards—and were told progress would be slow. Then they added cueing. Not all day. Just 20 minutes at the dinner table. I also fit him with low-gain hearing aids. The combination of cueing and better access through hearing aids was what helped most. The changes weren’t instant, but they were clear. His understanding improved. His speech became more organized. He began using full sentences. Cueing didn’t cure his apraxia. It gave his brain a place to put language.

And cueing doesn’t just improve language—it changes lives. When I was in school, some of the Deaf students who used cueing were more likely to show up in mainstream and honors classes. Many later attended Ivy League or competitive state universities. Their access made a difference.

That’s part of what makes cueing so powerful. Most hearing parents will never become fluent in ASL within the few short years when their child needs language most. But they can learn to cue. And they don’t have to choose. They can sign. They can cue. They can build a home where both languages are respected—and neither is lost.

References

Aparicio, M., Peigneux, P., Charlier, B., Balériaux, D., Kavec, M., & Leybaert, J. (2017). The neural basis of speech perception through lipreading and manual cues: Evidence from deaf native users of Cued Speech. Frontiers in Psychology, 8, 426.

Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163.

Montgomery, J. L. (2013). A Case Study of the “Preventing Academic Failure” Orton-Gillingham Approach with Five Students Who Are Deaf or Hard of Hearing: Using the Mediating Tool of Cued Speech (Doctoral dissertation, Teachers College, Columbia University). https://eric.ed.gov/?id=ED553611

National Association of the Deaf. (n.d.). Implications of Language Deprivation for Young Deaf, DeafBlind, DeafDisabled, and Hard of Hearing Children. https://www.nad.org/implications-of-language-deprivation.../

Trezek, B. J. (2017). Cued Speech and the development of reading in English: Examining the evidence. Journal of Deaf Studies and Deaf Education, 22(4), 349–360.
