More Than a Metaphor: Why AI Use Feels Like an ASL Interpreter

TL;DR: I’m responding to a comment on a previous post where someone questioned my comparison between AI and interpreters. I don’t think they’re the same, but both are access tools. Interpreters bridge gaps in hearing, and AI helps me bridge gaps in memory, sequencing, and expression. This essay explores that comparison more deeply, including how interpretation differs from transliteration, the roles of ASL and Cued Speech, and how each contributes to language access.

I know my views aren’t always politically correct.

I ramble sometimes, because I’m human, not AI. So instead of juggling every point someone raised, I’m focusing on one.

I’m walking the track using speech-to-text right now. This is all me. No AI. Or at least, it was until I ran into the 8000-character limit. That’s when I asked AI to help me shorten it. Not to rewrite it, just to fit it into the box.

Eventually, I gave up and turned it into a full-fledged post.

You may not agree with me, and that’s okay. I agreed with parts of what was said too. But I want to talk about ASL and interpreting, which I care deeply about.

Since we’re already in uncomfortable territory, I’ll bring up another uncomfortable topic: Cued Speech.

I was a certified ASL interpreter. I grew up in the Deaf community and attended Gallaudet for two years. I know how human interpretation works, and how often it fails. Many interpreters shouldn’t be interpreting. Some lack the linguistic or cognitive skills, and most DHH students don’t know what they’re missing, so they can’t speak up.

It’s like asking someone to raise their hand if they didn’t hear a beep. How would they know?

They only realize there’s a problem when something important gets misunderstood. But even then, how do they know whether the error came from the interpreter or their own processing?

I’ve worked alongside interpreters who, for years, had unacknowledged hearing loss or masked auditory processing issues.

I eventually decided, as both an audiologist and a certified interpreter, that it was unethical for me to continue interpreting. Background noise made it impossible for me to reliably access speech, especially during my time at California State University, Northridge. Now I understand that memory issues likely played a role too. I couldn’t risk missing parts of a message and trying to patch it together with context. That’s one of the reasons I’m beginning cognitive training.

I believe interpreters can use accommodations if those supports allow them to do the job well. But if someone has an auditory processing issue or hearing loss that prevents accurate comprehension in noise, they have no business interpreting in that setting. There may be other places they can function, but not there.

I don’t expect perfection. Machines miss things too. Just look at speech-to-text. It constantly turns “misophonia” into “Miss Sonja.”

That’s one reason I prefer Cued Speech in highly technical settings. Kids who grow up cueing tend to have broader English vocabularies and better outcomes, especially when their parents aren’t fluent ASL users. Research backs this. Cuers with cochlear implants often outperform peers in reading and auditory tasks.

Most also become fluent in ASL and spoken languages because they have access to phonemes and a strong foundation in their first language. That isn’t true for many Deaf kids born to hearing parents, which is about 95 percent of Deaf children. And those parents often struggle to learn ASL early or well enough. I wrote about this in my blog post When ASL Isn’t Enough.

I grew up around the Deaf community. I’ve seen firsthand how many children never become fluent in any language. I’ve done the research. And I know that if a child doesn’t have a solid first language, an interpreter alone won’t be enough. Language access requires immersion, not just translation.

But when parents aren’t fluent in ASL, what then? We should give children transliteration of the language actually spoken in the home. We should give them visual access to the native language around them.

It would be like me trying to raise my child in Spanish without being fluent or creating immersion. If I didn’t bring native speakers into their life, my child would be language-deprived. And then I’d expect them to learn English fluently without ever having a real first language.

That’s what happens to many Deaf children. They arrive at school without a solid language base. That’s how we end up with college students reading at a third or fourth grade level. Language deprivation doesn’t happen when children get full, consistent access. But too often, they don’t.

Transliteration, unlike interpretation, assumes the client is competent. The transliterator doesn’t simplify. They provide a visual match to sound, and it’s the client’s job to make sense of it.

That matters. What if the interpreter doesn’t fully understand the material or can’t express nuance in ASL? Transliteration lightens the interpreter’s cognitive load and shifts the responsibility for understanding to the student.

The student sees every word the teacher says, not a simplified version. They get metaphors, layered language, synonyms. They learn to self-advocate. They can ask for repetition, but not for the content to be watered down.

That builds stronger vocabulary, literacy, and awareness. It treats the student as a partner, not a passive recipient.

I’m not against ASL. I prefer it in many situations. I chose Gallaudet for my postdoc. I value summarized input. I struggle with attention and don’t always want to see every phoneme. I want clarity.

But I’m already fluent in English and ASL. I had access early. I watched interpreters daily in high school. I have normal hearing, autism, ADHD, and some auditory processing issues.

So interpreters work for me. I know when I’ve missed something, most of the time. I can fill in the gaps because I’ve got language closure and experience.

And really, all this about interpretation versus transliteration in child development is a separate thread. What we’re talking about here is someone with native-level English needing a tool to access their own thoughts. That’s me.

I’ve said I prefer interpreters, and I do. But I also had full language input growing up. If I hadn’t, if my family hadn’t given me native access, I wouldn’t have had the internal structure to benefit from interpretation at all. You can’t simplify what was never fully there.

And for a child whose language is just forming, whose family isn’t fluent, whose input is shaky, ASL might be necessary, but it’s not enough. They need full language input, with all its mess and richness.

So why do I compare AI to an interpreter?

Because it helps filter and organize my thoughts. It reads aloud. It holds memory. It keeps me focused. It lets me think clearly.

For me, transliteration isn’t enough anymore. I’ve spent a lifetime expressing myself that way. But now, I need support that filters, just like my patients need speech enhancement.

Today, walking the track, I asked ChatGPT a complex question. I know it’s not always accurate. But it let me explore the topic fully and summarize it. What would’ve taken me weeks, I did in 30 minutes. And now I have a record.

I could have tried to shorten all of this by hand, but let’s be honest. My kids were waiting in the car. I was sitting in a McDonald’s parking lot wasting time, money, energy, and probably water too.

So I did what made sense.

I used AI to help me condense what I’d already written. Not to cheat. Not to hide. Just to be efficient.

Because sometimes Facebook gives me a box.

And I think in essays.

