Pursuing APD Testing for Young Children
Before diving into testing for auditory processing disorder (APD), it’s important to take a critical look at what current assessments actually measure—and where they fall short. For all the structured protocols and scientific language, it’s no surprise that some professionals still question whether APD is real.
I understand their skepticism. I’ve seen the gaps firsthand. I recently conducted testing with a five-year-old child, and the experience underscored just how nuanced and fragile this process can be—especially with young children, and above all those with delayed language or neurodevelopmental differences like autism or ADHD.
At age five or under, our options for formal testing are limited, and even the ones we do have depend heavily on the child’s verbal ability, regulation, and attention. One example is the Staggered Spondaic Word (SSW) Test, where compound words like “mailbox” or “sheepdog” are presented in a staggered, overlapping format across the ears. The child might hear “sheepdog” in the right ear and “mailbox” in the left ear. The second syllable of “sheepdog” (“dog”) overlaps with the first syllable of “mailbox” (“mail”), delivered simultaneously but to different ears. The child must then repeat all four syllables—in the correct order—starting with what they heard in the right ear, then what they heard in the left.
Another difficult example is the Competing Words subtest, where the child hears two different words at once—one to each ear. For instance, they might hear “sheep” in the right ear and “mail” in the left. They are asked to repeat both words, starting with the one heard in the right ear, then the left. After about 20 word pairs, the order reverses, and now they have to repeat the word in the left ear first, then the right. Many five-year-olds don’t even have a firm grasp of left versus right. Add in the need for working memory, sequencing, and verbal fluency, and you’re asking a lot from the very children most likely to struggle with these skills.
Some children can complete these tasks—but many can’t. That doesn’t mean they’re fine. It means the tests may not be developmentally appropriate. The tools we use may not be able to access the very problem we’re trying to identify.
There are also concerns about how much information we’re truly gathering with certain tools. Take Acoustic Pioneer’s Feather Squadron, for instance. It includes age norms starting at five, which sounds promising. But in practice, the version used with five-year-olds often includes only a handful of tests—sometimes just one or two very short subtests. Despite this abbreviated format, audiologists are charged the full price for the test. While the test isn’t outrageously expensive, the mismatch between limited output and full cost adds to the frustration of trying to deliver meaningful results with limited tools.
Even more concerning are the broader policy decisions that may influence how tests are scored. I was told—by a source I trust—that in at least one country outside the U.S., a government-administered auditory test may have been altered to reduce the number of children who qualified for support services. I want to be clear: I can’t independently verify this, and to avoid any liability I’m not naming the test or the country. But it does raise an important point.
Imagine a scenario where a large number of children were being identified as needing accommodations—FM systems, dedicated teaching personnel, long-term follow-up. And imagine that the government, faced with the cost of supporting all those students, decided the budget couldn’t absorb it. Instead of expanding services, they adjusted the test—effectively narrowing the gate. Not because fewer kids were struggling, but because fewer could be supported.
If this kind of decision is being made, it should alarm all of us. When testing is shaped by what a system can afford, rather than what children actually need, we risk turning support into a privilege rather than a right.
This is where virtual testing can offer surprising advantages. In my experience, when a child is tested from home—on their own couch, in their own pajamas, with familiar sounds and rhythms around them—we often see better cooperation and more representative results. We can split testing into shorter, more manageable sessions that align with the child’s real attention window, rather than forcing them to push through a full battery in a clinic setting that feels foreign and demanding.
Even when a child can get through a test, there’s a deeper issue we have to confront: how were the test norms established? What kind of child was used to define what’s “normal”? Were they monolingual? Privately educated? Free from early ear infections? Did they grow up in language-rich homes? Were they neurodivergent? Did they receive early intervention?
The normative samples behind many APD tests are often limited and unrepresentative. So when a child “fails,” are we seeing a disorder—or a mismatch between who the test was built for and who the child actually is?
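To make concrete how much hangs on the normative sample and the chosen cutoff, here is a minimal Python sketch. Every number in it is hypothetical—these are not real test norms or real criteria—but the arithmetic shows how the same child can "fail" or "pass" depending entirely on whose performance defined "normal" and where the line was drawn:

```python
# Illustrative sketch only: hypothetical scores, norms, and cutoffs,
# not drawn from any real APD test.

def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw score against a normative sample."""
    return (raw - norm_mean) / norm_sd

def qualifies(raw, norm_mean, norm_sd, cutoff_sd=-2.0):
    """A child 'fails' (i.e., qualifies for support) if their
    standardized score falls at or below the cutoff."""
    return z_score(raw, norm_mean, norm_sd) <= cutoff_sd

# Hypothetical child: 62% correct on a dichotic listening task.
raw = 62.0

# Against one hypothetical normative sample, with a -2.0 SD cutoff,
# this child qualifies for support (z = -2.25):
print(qualifies(raw, norm_mean=80.0, norm_sd=8.0, cutoff_sd=-2.0))   # True

# Same child, same score, same norms—but the cutoff tightened
# from -2.0 SD to -2.5 SD, and the child no longer qualifies:
print(qualifies(raw, norm_mean=80.0, norm_sd=8.0, cutoff_sd=-2.5))   # False
```

Nothing about the child changed between those two lines—only the criterion did. That is exactly how a gate can be narrowed without anyone’s listening improving.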
That’s why we must go beyond the test score. We need to pair behavioral results with case history, classroom observations, academic performance, and input from other professionals. Teachers may notice listening fatigue or inconsistent comprehension. Speech-language pathologists may flag expressive-receptive gaps. Occupational therapists may observe sensory modulation issues that impact listening. These perspectives are not secondary—they’re essential.
And while we desperately need more objective tools, our current physiological measures are still stuck in research labs. Techniques like pupillometry (measuring dilation in response to listening effort) and superior auricular muscle tracking (monitoring ear reflexes tied to attention) are promising but not yet available in clinical settings. Functional MRI, PET scans, and even P300 can show that the brain heard something—but they can’t tell us if the person understood it.
Let me be very clear: I am not saying that behavioral testing has no place in APD evaluation. It absolutely does—especially for younger children. As blunt as these tools can be, they still offer valuable insight. If we dismiss them entirely, we risk removing audiologists—the very professionals best equipped to understand auditory function—from the equation altogether.
And here’s why that matters: Audiologists are not trying to be territorial. We’re not trying to “own” this diagnosis. But we are uniquely trained to evaluate the auditory system as a sensory system—something that speech-language pathologists, psychologists, and occupational therapists are not specifically trained to do. We understand how sound is received, transmitted, filtered, and processed before it ever becomes language or meaning. That perspective is essential.
So when we talk about audiology leading these conversations, it’s not about authority—it’s about alignment with the underlying mechanisms of the problem. But no one discipline can understand the full child in isolation. That’s why we must protect our scope while collaborating deeply with others.
In complex or hard-to-test populations, we need to bring in these other disciplines. Not as afterthoughts, but as co-investigators in the shared goal of understanding the whole child. And sometimes, that means being willing to look past rigid scoring and embrace qualitative indicators like attention patterns, fatigue, real-world breakdowns, and caregiver insight.
Just like I believe in treatment trials with low-gain hearing aids—letting families try a support tool and only pay if there’s benefit—I believe testing should be structured the same way. If everything comes back totally normal, families should have the option to stop there, receive a data sheet, and walk away without paying for a full report they don’t need.
Similarly, if a family doesn’t need a formal diagnosis—especially in homeschool settings or enrichment-based environments—they should be able to pay just for the audiologist’s time and insight, not hours of report writing. We should be offering affordable, flexible options for families who simply need direction, not documentation.
This is the kind of model we should be moving toward—one that’s flexible, transparent, and centered on meeting families where they are, not where the billing code says they should be.
If you’re a parent considering APD testing, please—don’t settle for someone who only looks at the standard deviation and draws a conclusion based solely on test scores. Don’t work with a clinician who mails you a report without ever calling to explain what it means, what to do next, or how it fits with the rest of your child’s experience.
Instead, look for an audiologist who sees the bigger picture. One who works holistically, who is connected to a network of other professionals—speech-language pathologists, occupational therapists, educational specialists—and who values the perspectives of teachers, family members, and even the child themselves.
Find someone who understands that the test is a tool, not the truth. Someone who is willing to test virtually when needed, and who recognizes that a child may show more accurate behavior in the comfort of their own home, with shorter sessions, fewer transitions, and less sensory overload. That flexibility isn’t a shortcut—it’s a reflection of respect.
Because what we’re really testing isn’t just the ear. It’s the child’s ability to access language, to make sense of their world, and to feel competent and connected in it.
That requires more than a number. It requires listening.
References
Cameron, S., & Dillon, H. (2007). Development of the Listening in Spatialized Noise–Sentences Test (LiSN-S). Ear and Hearing, 28(2), 196–211.
Dillon, H., Cameron, S., Glyde, H., Wilson, W., & Tomlin, D. (2012). An opinion on the assessment of people who may have an auditory processing disorder. Journal of the American Academy of Audiology, 23(2), 97–105.
Keith, R. W. (2009). SCAN-3 Tests for Auditory Processing Disorders. Pearson Assessments.
Moore, D. R. (2006). Auditory processing disorder (APD): Definition, diagnosis, neural basis, and intervention. Audiological Medicine, 4(1), 4–11.
Vermiglio, A. J. (2021). On the clinical entity in audiology: (Central) auditory processing disorder. Journal of the American Academy of Audiology, 32(9), 639–645.
Wilson, W. J., & Arnott, W. (2013). Using different criteria to diagnose (central) auditory processing disorder: How big a difference does it make? Journal of Speech, Language, and Hearing Research, 56(1), 63–70.
Join Our Subscription Group
Want early access to new blog posts, printable handouts, and classroom-friendly tools?
Subscribe to our email list and be the first to:
Download visuals, checklists, and APD-friendly guides
Get subscriber-only discounts on upcoming digital materials
Join a growing network of professionals and families supporting kids and adults with auditory, language, and learning challenges
To join:
Email us at info@drraestout.com or visit our website:
We never send spam, and we never share your information.