The Collapse After the Sprint: Why Alert vs. Fatigued State Testing Changes Everything

"Meredith" is a bright, verbal 10-year-old with a long history of being told she doesn’t try hard enough, doesn’t pay attention, or just doesn’t learn well. I conducted her APD evaluation about a week ago and am now writing her report. With permission of her mother, I've decided to share this anonymized case study.

According to her mother, Meredith has spent years being labeled low average or inattentive. Her evaluations didn’t show anything “severe.” And when her school used screening tools to rule out auditory issues, the data seemed to confirm their assumptions: she wasn’t struggling enough to qualify for more support.

But what if they tested her at the wrong moment?

For the past year, I’ve been evaluating kids in both alert and fatigued states, something that’s surprisingly rare in clinical practice. We know children—especially neurodivergent children—often run out of auditory stamina as the day goes on. But we almost never test for it. With Meredith, we had a unique opportunity to compare her performance using Acoustic Pioneer, a computerized auditory processing platform that can be administered at home in a natural environment, both when she was fresh and again when she was tired.

The difference was dramatic.

When Meredith was well-regulated and alert, her scores were within normal limits on most tasks. Her tonal pattern memory was 100%—1.1 standard deviations above the mean. Her non-linguistic dichotic double sounds were 85% (0.7 SD above the mean). Her speech-in-noise comprehension showed solid use of spatial cues with a 5 dB benefit when localization cues were added, landing her 1.0 SD above the mean.

But when we repeated the exact same testing later—after a full day, when she was mentally and emotionally fatigued—her performance dropped sharply.

Her tonal pattern recognition fell to 87.5%—now just -0.2 SD from the mean. Her tonal memory for 3-tone sequences plummeted to -2.0 SD. The same child, unable to remember more than a few tones in a row. Her word memory also collapsed, from 7 in a row (2.0 SD above the mean) to just 3 in a row (-2.0 SD). That’s a 4 standard deviation difference in core auditory working memory.

The biggest surprise came from her rapid tones task. When fresh, she detected 50 ms gaps—scoring at -0.5 SD. But when tired, her threshold slipped to 100 ms, dropping to -1.5 SD. Her temporal resolution was literally slower when her system was fatigued.

Even her speech-in-noise performance, which is typically stable, was affected. When alert, she gained a 9 dB improvement from spatial cues, ending at -2 dB (2.3 SD above the mean). But when fatigued, her localization benefit dropped to just 5 dB, and her overall performance fell to -6 dB (1.0 SD). Still helpful, but less efficient, and far more cognitively taxing.
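The size of the collapse is easiest to see as the change in z-score (standard deviations from the mean) between the two states. A minimal sketch using the scores reported above — task names are my shorthand for the subtests, and the alert-state z for the 3-tone memory task is omitted because it wasn't reported as a z-score:

```python
# Alert vs. fatigued z-scores (SDs from the mean) as reported in the case study.
scores = {
    "tonal pattern":      {"alert": 1.1,  "fatigued": -0.2},
    "tonal memory":       {"alert": None, "fatigued": -2.0},  # no alert z reported
    "word memory":        {"alert": 2.0,  "fatigued": -2.0},
    "rapid tones (gaps)": {"alert": -0.5, "fatigued": -1.5},
    "speech in noise":    {"alert": 2.3,  "fatigued": 1.0},
}

for task, z in scores.items():
    if z["alert"] is None:
        continue  # skip tasks without an alert-state baseline
    drop = z["alert"] - z["fatigued"]  # positive = worse when fatigued
    print(f"{task}: {z['alert']:+.1f} -> {z['fatigued']:+.1f} (drop of {drop:.1f} SD)")
```

The word-memory row is the "4 standard deviation difference" described above: the same child, same task, moving from well above the mean to well below it within a single day.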

If we had only tested her in the morning—as schools and most clinics do—we would have said she was doing fine. If I had only looked at her SCAN scores, I might’ve agreed with the school that she just wasn’t trying hard enough. That maybe she was just “slow.” But she wasn’t slow. She was tired.

And what really stood out? The SSW (Staggered Spondaic Word) test, which we happened to do first while she was still regulated, revealed early signs of struggle. It's a more difficult test than the SCAN, and in my experience it's often more sensitive to kids like Meredith—the ones who can start strong but don't hold up under load. In fact, her fatigued-state Acoustic Pioneer results mirrored her SSW performance far more than they matched the SCAN.

This raises a deeper question: What are we really measuring? If we’re only looking at one time of day—usually the child’s best moment—are we even seeing the truth?

And it’s not just about Meredith. This happens every day. We rely on tools like Acoustic Pioneer or the SCAN as if they’re full evaluations. They’re not. Acoustic Pioneer, even on its own documentation, is a screening—a good one, but still just a screening. And the SCAN, while useful, is often the only thing schools or providers use to make decisions about eligibility. Kids like Meredith get ruled out of services not because they don’t need them—but because we tested them when they were still sprinting.

But what about when the sprint ends?

This is why virtual testing matters. Families can’t always come in for two sessions. But platforms like Acoustic Pioneer allow me to test kids in their own environment, more than once. We can capture both the best and the hardest parts of their day. And the contrast tells us everything.

Meredith doesn’t need more labels. She needs shorter classes. Breaks between transitions. A reduced day. A recognition that she can succeed—but not endlessly, and not without support. If we had tested her only in the morning, we would have missed the crash entirely.

We owe it to kids like her to test both. Because the collapse matters as much as the sprint.


References

Barker, M., & Purdy, S. C. (2015). An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron). International Journal of Audiology, 54(9), 1–11.

Gustafson, S. J., & Hornsby, B. W. (2022). Children’s real-world listening behavior and fatigue: Ecological momentary assessment of listening effort. In K. L. Phillips (Ed.), Advances in Child Hearing Research (pp. 211–230). Springer.

Hornsby, B. W., & Kipp, A. M. (2016). Subjective ratings of fatigue and vigor in adults with hearing loss are driven by perceived hearing difficulty rather than degree of loss. Ear and Hearing, 37(1), e1–e10.

Hornsby, B. W. Y., Werfel, K. L., Camarata, S. M., & Bess, F. H. (2014). Subjective fatigue in children with hearing loss: Some preliminary findings. American Journal of Audiology, 23(1), 129–134.

Isemn, K., & Emanuel, D. C. (2023). Auditory processing disorder: Protocols and controversy. American Journal of Audiology, 32(3), 614–639.

Keith, R. W. (1983). Interpretation of the Staggered Spondaic Word (SSW) test. Ear and Hearing, 4(6), 287–292.

McGarrigle, R., Dawes, P., Stewart, A. J., Kuchinsky, S. E., Munro, K. J., & Heinrich, A. (2017). Measuring listening effort and fatigue: A pilot study comparing subjective, behavioral, and pupillometric markers. Ear and Hearing, 38(Suppl. 1), 52S–59S.

McGarrigle, R., Munro, K. J., Dawes, P., Stewart, A. J., Moore, D. R., Barry, J. G., & Amitay, S. (2014). Listening effort and fatigue: What exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group white paper. International Journal of Audiology, 53(7), 433–445.

Riccio, C. A., Hynd, G. W., Cohen, M. J., & Molt, L. (1996). The Staggered Spondaic Word Test: Performance of children with Attention Deficit Hyperactivity Disorder. American Journal of Audiology, 5(2), 55–62.

Stern, C. A. S. (2016). The reliability and validity of the SCAN and SCAN-C for use with children with auditory processing disorders: A systematic review (Doctoral dissertation, City University of New York). CUNY Academic Works.
