Let’s Be Honest About the Problem With APD Testing
The criticism of APD diagnostics isn’t wrong. There is a real problem.
And it’s not that auditory processing disorder isn’t real. It’s that the way we test and diagnose it is deeply flawed.
Here’s what the research actually says:
“Using different diagnostic criteria, the percentage of children identified as having APD ranged from 7.3 percent to 96 percent.”
— Wilson and Arnott, 2013
That’s from a study where the same group of referred children showed diagnosis rates that varied dramatically, depending only on which diagnostic criteria were applied to their results. That’s not scientific precision. That’s chaos.
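To make the mechanism concrete, here is a minimal, purely illustrative sketch. The scores, cutoffs, and criteria below are invented; they are not Wilson and Arnott’s data or methods. The point is simply that the same children can be identified at wildly different rates depending only on the criterion applied.

```python
# Purely illustrative: invented z-scores, not data from Wilson & Arnott (2013).
# Each row is one child's scores on four hypothetical auditory tests.
children = [
    [-2.5, -1.2, -0.8, -2.1],
    [-1.1, -0.9, -2.3, -0.4],
    [-0.5, -0.2, -1.0, -0.7],
    [-2.2, -2.4, -1.9, -2.0],
    [-1.6, -0.3, -2.1, -1.1],
]

def meets_criterion(scores, cutoff, tests_required):
    """Flag a child if at least `tests_required` scores fall below `cutoff`."""
    return sum(s < cutoff for s in scores) >= tests_required

# Three made-up criteria, loosely spanning lenient to strict.
criteria = {
    "lenient (any one test below -1 SD)": (-1.0, 1),
    "moderate (two tests below -2 SD)": (-2.0, 2),
    "strict (all four tests below -2 SD)": (-2.0, 4),
}

for name, (cutoff, required) in criteria.items():
    rate = sum(meets_criterion(c, cutoff, required) for c in children) / len(children)
    print(f"{name}: {rate:.0%} identified")
# lenient: 80%, moderate: 40%, strict: 0% -- same children, three very different answers.
```

Same five children, three very different answers. That is the instability the 7.3-to-96-percent range reflects.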
Other researchers have noted that some of the most commonly used tests may miss large numbers of struggling children. For example:
“The sensitivity of the SCAN and SCAN-3 in identifying children with APD was only 50 percent and 33 percent, respectively.”
— Elsisy, 2013
In other words, children who are clearly struggling to process language in real life may still pass the most commonly used auditory tests. These false negatives can delay support, deny access to services, and undermine confidence in the diagnosis itself.
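To see what those sensitivity figures mean in practice, it helps to write the arithmetic out. Sensitivity is the proportion of truly affected children a test correctly flags. The counts below are hypothetical, chosen only to illustrate the 33 percent figure:

\[
\text{sensitivity} = \frac{TP}{TP + FN} = \frac{33}{33 + 67} = 0.33
\]

In that scenario, out of 100 children with genuine auditory processing difficulties, the test would flag 33 and pass the other 67.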
Dr. David DeBonis directly addressed this concern when he wrote:
“It is possible that the use of these protocols underestimates the presence of the disorder in children who do not meet the diagnostic criteria but are, in fact, experiencing functional difficulties.”
— DeBonis, 2015
He emphasized the need for a shift away from rigid protocols toward functional, evidence-based measures that reflect the child’s real-world listening ability.
This sparked further debate. In a published response, Iliadou, Sirimanna, and Bamiou (2016) defended the use of medical coding and structured evaluation for CAPD, stating that the condition is already classified in ICD-10 under H93.25 and should be formally diagnosed—not dismissed as too complex to measure. In reply, DeBonis (2016) clarified that his critique was not about the reality of the disorder, but about the tools being used to identify it. He reiterated his concern that we may be underestimating the disorder’s impact due to limitations in how we define and detect it.
Even insurers like Aetna have weighed in. In Clinical Policy Bulletin #0668, Aetna states:
“APD is a controversial diagnosis... There is considerable disagreement among professionals as to the definition of the disorder, its assessment, and its treatment.”
When the diagnostic field is this divided, the result is that people often fall through the cracks—caught between “not impaired enough to qualify” and “struggling too much to succeed.”
We also have to recognize that the tests we currently use are the tools we have. They’re not perfect, and they weren’t designed for every profile we now see in clinical practice. But that doesn’t mean we throw the baby out with the bath water. We can build on what exists by adding subjective listening effort ratings, fatigue-state testing, and functional observation over time. These additions can dramatically improve our sensitivity to the kinds of real-world struggles that standardized tests often miss. No single test will ever capture the full picture—but when we layer data sources and center the person’s lived experience, we get a lot closer to the truth.
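For readers who like to see the structure spelled out, here is one purely hypothetical way to represent that layering. The field names, the effort-rating threshold, and the two-source convergence rule are my illustration, not a validated clinical protocol.

```python
from dataclasses import dataclass

# Hypothetical evidence profile; the fields and the convergence rule
# below are illustrative only, not a validated clinical protocol.
@dataclass
class ListeningProfile:
    standardized_scores_flagged: bool   # any test below the clinic's cutoff
    self_reported_effort: int           # 1-10 rating of everyday listening effort
    fatigue_state_decline: bool         # performance drops when retested fatigued
    functional_concerns_observed: bool  # parent/teacher/self observation over time

def converging_evidence(profile: ListeningProfile) -> bool:
    """Suggest a closer look when two or more independent sources agree."""
    sources = [
        profile.standardized_scores_flagged,
        profile.self_reported_effort >= 7,
        profile.fatigue_state_decline,
        profile.functional_concerns_observed,
    ]
    return sum(sources) >= 2

# A person can pass the standardized battery and still show convergence:
example = ListeningProfile(False, 8, True, True)
print(converging_evidence(example))  # True: real-world struggle despite passing scores
```

The design choice matters more than the code: no single source gets to veto the others, and agreement across independent sources is what prompts a closer look.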
That doesn’t mean we should be diagnosing 96 percent of referred children with APD either. Overidentification is just as unhelpful as underidentification. The solution isn’t to label everyone; it’s to build a more flexible, layered system that can recognize when someone is struggling, even if they don’t meet rigid cutoffs. This is where treatment trials come in.
We can trial low-gain hearing aids (LGHA) to gently enhance clarity in everyday listening without overwhelming the user. We can add an FM system to give a child or adult direct access to the speaker’s voice, especially in noisy settings. And we can incorporate auditory training—not necessarily as a standalone program, but as part of a broader, individualized support plan that includes both structured tasks and naturalistic, interest-based listening experiences.
These supports are not meant to replace existing speech, language, or literacy interventions. They can work alongside them. For some individuals, structured auditory activities offer measurable benefit. For others, generalization may come more easily when skills are practiced in the context of music, storytelling, conversation, or personally meaningful tasks.
The key is not to pick one method over another—it’s to create a layered support system that reduces effort, builds confidence, and helps the brain make more efficient use of the auditory signal. When we treat the auditory load itself, we often see greater availability for language learning, academic tasks, and emotional regulation. Treatment trials aren’t about proving a diagnosis. They’re about discovering what helps. And if something helps—even in the absence of a perfect test score—that deserves to be documented and supported.
I’ve also heard concerns from some speech-language pathologists that giving a child the APD label can delay access to their services. That should never be the case. The presence of an auditory processing difference should not cancel out the need for language intervention—it should clarify and strengthen the rationale for it. Most children who benefit from auditory support also need speech, literacy, or executive functioning intervention. These are not competing approaches. They are partners in the same process.
When we reduce listening effort and improve access to the signal, people often become more available for the language and academic work we’re asking them to do. This is not about replacing speech or reading therapy—it’s about giving those interventions a better chance of success. It’s also not about using diagnosis as a gatekeeper. It’s about universal design, flexibility, and function. Because the goal isn’t to prove who qualifies. The goal is to make learning or working easier for the people who need it most.
Another concern I often hear—from both families and professionals—is the cost of APD testing. And it’s a valid one. Full evaluations can be expensive, especially when they include multiple components, written documentation, or are tailored for school or workplace accommodations.
That’s exactly why we offer a modular testing model. Whether the evaluation is for a child or an adult, people can start with a core set of functional tests. If the results come back clearly within normal limits and no full report is needed, the cost remains minimal. But if testing does raise concerns—or if someone needs formal documentation for an IEP, 504 plan, or workplace support—we can expand from there as needed.
It’s not about running every test on every person. It’s about being thoughtful and responsive, keeping costs low when we can, and making sure that if someone does need support, they get the right kind—not just a label. No one should have to spend over a thousand dollars to be told everything is normal. And if there’s something going on that testing alone doesn’t fully capture, we can offer treatment trials or recommendations without making the process financially out of reach.
The most meaningful progress often happens when people are given consistent access to clearer input—and then asked to use that input in context. That could mean structured listening therapy. But it can also mean storytelling, music, collaborative games, or interest-based learning activities that require real listening effort in real time.
No program, on its own, is a cure. What matters is that the input is accessible, the task is functional, and the person is engaged. That’s how generalization happens. That’s when listening becomes easier—not just in the clinic, but in the cafeteria, the classroom, the boardroom, and at home.
Because when our tools don’t reflect the lived experience of the person in front of us, we don’t give up—we adapt.
References
Aetna. (2023). Clinical Policy Bulletin: Auditory Processing Evaluation (#0668).
DeBonis, D. A. (2015). It is time to rethink central auditory processing disorder protocols for school-aged children. American Journal of Audiology, 24(2), 124–136.
DeBonis, D. A. (2016). Response to the letter to the editor from Iliadou, Sirimanna, and Bamiou regarding DeBonis (2015). American Journal of Audiology, 25(2), 179–180.
Elsisy, H. (2013). Evaluation of SCAN and SCAN-3 for screening of auditory processing disorder. International Journal of Pediatric Otorhinolaryngology, 77, 1253–1258.
Iliadou, V. V., Sirimanna, T., & Bamiou, D.-E. (2016). CAPD is classified in ICD-10 as H93.25 and hearing evaluation—not screening—should be implemented in children with verified communication and/or listening deficits. American Journal of Audiology, 25(2), 176–178.
Wilson, W. J., & Arnott, W. (2013). Using different criteria to diagnose auditory processing disorder: How big a difference does it make? Journal of Speech, Language, and Hearing Research, 56(1), 63–70.