Beyond the Booth: Why APD Assessment and Intervention Need a Complete Overhaul (Pre-Publication Draft)

TL;DR for Parents: If your child struggles to listen in noisy places like classrooms or restaurants, current auditory processing tests might miss their real difficulties. These tests are like taking a single photo instead of watching a movie: they don’t show how listening gets harder when your child is tired, stressed, or dealing with background noise all day. The tests often use boring stories and artificial sounds that don’t match real life.

Many kids who really struggle end up “passing” these tests and don’t get the help they need. We need better testing that measures listening effort (how hard it feels), fatigue (how performance changes throughout the day), and considers your child’s personality and interests. The good news? Some clinicians are working on new approaches that actually measure what matters for real-world success.

TL;DR for Clinicians: Current APD testing suffers from poor ecological validity, arbitrary diagnostic criteria (identification rates from 7.3% to 96% depending on which tests you choose), and systematic bias against certain personality types. We’re testing static performance in artificial conditions while missing dynamic factors that predict real-world function: listening effort, fatigue curves, personality matching, spatial complexity, and contextual demands. The solution involves adapting Acceptable Noise Level (ANL) testing with effort scaling, implementing functional assessment in natural environments, and developing comprehensive batteries that capture resilience rather than just accuracy. Case history remains our most important “test”: if someone reports functional difficulties, that’s valid clinical data regardless of traditional test results.

When most people hear “auditory processing testing,” they picture a child or adult in a booth, wearing headphones, repeating words or numbers. The results are tidy: right or wrong, percent correct, maybe plotted against a norm. But life isn’t tidy. Real-world listening is dynamic, messy, and costly.

And that picture tells us almost nothing. We test a child or adult in a perfectly controlled sound room over a 2-3 hour period — you get the appointment time you get, and that’s all you get. It’s a snapshot, not a video, not a reel. It simply isn’t real. But how do we choose which period of time? Why test at 10 AM instead of 2 PM? Why not consider whether this person is a morning lark or night owl? Are they introverted and already drained from the novel clinical environment, or extroverted and energized by the interaction?

We act as if auditory processing exists in a personality vacuum, as if the brain that thrives on stimulation processes sound the same way as the brain that needs quiet to function. But what about those who fall somewhere in between — the ambiverts who might handle some noise well but struggle in specific contexts?

That’s why in my practice, I don’t stop at a one-time score. I measure listening effort using a 0-10 scale of difficulty that patients can easily understand and apply, asking them to rate how hard it feels during the SCAN and other tests as we do them. I use the same scale across all tests, whether it’s the SCAN, speech-in-noise tasks, or any other assessment. I measure fatigue by repeating tasks in short windows and later in the day — because what looks fine in the morning often collapses after school. These are things the booth alone will never show.

The next missing piece is a tool called the Acceptable Noise Level (ANL) test. It was originally developed to predict hearing aid benefit, but I’m trying to reinvent and repurpose it for APD assessment. It’s not typically used and it’s very hard to find; I finally tracked down the manufacturer after extensive searching.

____

What ANL Measures

The original Acceptable Noise Level test (ANL) is simple but powerful. It measures how much background noise a person is willing to accept while listening to speech at their most comfortable level. Here’s how it works:

1. A person sets an ongoing story to a comfortable loudness.

2. Multi-talker babble is added.

3. The person themselves adjusts the background noise up and down until they figure out how much they can tolerate without annoyance.

The difference between the speech level and the noise level is the ANL score. A low ANL (less than 7 dB of signal-to-noise ratio) means you only need speech to be slightly above the noise. It’s sort of like swimming: you can tolerate the water close to your chin or your nose without getting annoyed, and you don’t mind a little water in your face. A high ANL means you can’t stand the risk of that water; you’re practically riding a floaty above the surface because the noise bothers you so easily.
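
For the technically inclined, here’s a minimal sketch in Python of how the score falls out of the two levels the listener sets. The function name and example levels are mine, not part of the published test:

```python
def anl_score(mcl_db: float, bnl_db: float) -> float:
    """ANL = most comfortable listening level for speech (MCL)
    minus the highest background noise level accepted (BNL), in dB."""
    return mcl_db - bnl_db

# Example: speech comfortable at 55 dB HL, babble tolerated up to 50 dB HL.
score = anl_score(55.0, 50.0)  # ANL = 5 dB
# Lower ANLs mean the listener accepts noise closer to the speech level.
label = "low (noise-tolerant)" if score <= 7 else "high (noise-averse)"
print(f"ANL = {score} dB -> {label}")
```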

But the typical ANL isn’t enough on its own, because people tend not to interpret the instructions consistently.

One person says they’re annoyed at a particular level while another interprets annoyance entirely differently. So instead of simply telling people to let me know when they’re annoyed, I retrain them on the concept of listening effort, anchored to a 0-10 scale.

Think of it like walking. When you’re strolling along, having an easy conversation and enjoying yourself, that’s a 1. When you pick up to a brisk pace, that’s more like a 3: you’re not actually struggling yet, but it’s taking some effort. At a 7, you’re at a fairly quick jog; you couldn’t sing, but you could still hold a conversation, though it might be a little harder to remember what you’re talking about because you’re having to focus on your form. And at a 10, you’re about ready to collapse because you’re in an all-out sprint. You probably can’t talk at all. It’s failure.
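
To keep the instructions consistent from patient to patient, the anchor points can be written down explicitly. A minimal sketch, with descriptors paraphrased from the analogy above (the helper function is illustrative, not any standardized protocol):

```python
# 0-10 listening-effort anchors, paraphrased from the walking analogy.
EFFORT_ANCHORS = {
    1: "easy walk: relaxed conversation, enjoying yourself",
    3: "brisk walk: not struggling yet, but it takes some effort",
    7: "quick jog: can still converse, but hard to hold the thread",
    10: "all-out sprint: can't talk at all; failure",
}

def nearest_anchor(rating: float) -> str:
    """Map any 0-10 rating to the closest described anchor point."""
    key = min(EFFORT_ANCHORS, key=lambda k: abs(k - rating))
    return EFFORT_ANCHORS[key]

print(nearest_anchor(6))  # -> "quick jog: can still converse, ..."
```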

And here’s what Dr. Anna Nabelek’s research showed: ANL can actually predict hearing aid success with about 85% accuracy. People with high ANLs were much more likely to abandon their aids, even when their audiograms looked “fine” and their word scores were high. In other words, ANL is a test of resilience — not just accuracy, but willingness to keep going. It taps into the cost of listening and how much adversity someone can tolerate before they give up.

I’m working on ways to modify the ANL using the listening effort scale when people are alert versus tired, to capture how noise tolerance changes with fatigue. The faster somebody moves from 3 (easy) to 7 (moderate effort) to 10 (maximum effort), the more quickly they’re fatiguing, which gives us insight into their listening endurance and resilience.
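
As a rough illustration of the kind of metric this could yield, here is a sketch that fits a line to effort ratings collected across a day. The session times, ratings, and the idea of an “effort points per hour” slope are all assumptions for the example, not a validated procedure:

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical 0-10 effort ratings on the same listening task,
# repeated at three points in the day (hours on a 24h clock).
session_times = [9.0, 12.0, 15.5]   # morning, midday, after school
effort = [3.0, 5.0, 8.0]            # reported effort at each run

# Slope of effort over time = a crude "fatigue rate".
slope, intercept = linear_regression(session_times, effort)
print(f"Fatigue rate: {slope:.2f} effort points per hour")
# A steeper slope means the listener burns through the
# 3 -> 7 -> 10 range faster: less listening endurance.
```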

____

The Problem with Current APD Batteries

Let’s be honest about what the average APD battery actually includes. There’s the Staggered Spondaic Word (SSW) test with competing words in both ears. Speech-in-noise testing usually involves the BKB-SIN or QuickSIN, which use sentences in four-talker babble, but the babble isn’t spatialized; it’s just layered on top.

The SCAN-3:C (children) or SCAN-3:A (adults) includes filtered words, auditory figure ground, competing words, and competing sentences. Maybe the Feather Squadron test, which does have some strong components including two-word phrases in white noise with and without spatial cues, non-speech testing, pitch matching, and memory tests. But all in all, is this enough? These tests are only done once, not repeated to measure fatigue.

Here’s the uncomfortable truth: most people pass current APD testing. I’ve never seen anyone fail rapid speech. Almost everyone gets competing sentences when they listen to a sentence in one ear and ignore the other. Gap detection on the SCAN? Virtually no failures. And the standard speech-in-noise tests? They’re like video games that get progressively harder, but not harder in ways that mimic real life.

Think about it: listening to words in white noise ... does that really show auditory processing? What about the child who can ace that artificial task but melts down trying to follow a teacher’s instructions while classmates whisper, chairs scrape, and the air conditioning hums?

Wilson and Arnott demonstrated this beautifully in 2013: depending on which tests and criteria you pick from the common nine, the proportion of children identified with APD ranges from 7.3% to 96%.
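
You can see why criterion choice dominates with a toy simulation. The scores below are random z-scores, not real normative data, and the cutoffs are illustrative only:

```python
import random
random.seed(1)

# 1,000 simulated children, each with z-scores on nine hypothetical tests.
children = [[random.gauss(0, 1) for _ in range(9)] for _ in range(1000)]

def diagnosed(scores, n_tests, n_fails, cutoff=-1.0):
    """'Fail' = below cutoff; diagnose if >= n_fails of the first n_tests fail."""
    return sum(s < cutoff for s in scores[:n_tests]) >= n_fails

for n_tests, n_fails in [(2, 2), (2, 1), (9, 1)]:
    rate = sum(diagnosed(c, n_tests, n_fails) for c in children) / len(children)
    print(f"fail {n_fails} of {n_tests} tests: {rate:.1%} diagnosed")
# Same children, same scores: the "APD" rate swings from a few percent
# to most of the sample purely as a function of the criterion.
```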

How do we choose which tests to administer? Often, it’s just based on what equipment we own in our clinic. Just because the SCAN has standard deviations doesn’t mean it’s accurate when you consider these other factors: age, cognition, fatigue, personality, deprivation history, hidden hearing loss.

Perhaps we should just stick to the tests we have, even though they were based on lesions or age-related issues in adults and then retrofitted to children, because “it’s what we have.” But kids are falling through the cracks, and so are adults. The most important test of all is entirely qualitative: the case history, how hearing is affecting function. When someone tells you they struggle in restaurants, can’t follow conversations at family gatherings, or their child melts down after school because listening all day is exhausting, that’s real data. That’s what matters.

Why do we trust beeps anyway? How are pure tones even related to speech or environmental sound awareness? Beeps are simple, predictable, and nothing like the complex, rapidly changing acoustic patterns of real communication. Speech contains formant transitions, coarticulation effects, prosodic cues, and temporal fine structure that pure tones simply can’t capture. Environmental sounds involve multiple overlapping frequencies, amplitude modulations, and spatial characteristics that have no relationship to threshold detection of isolated tones in quiet.

So why do families pay thousands of dollars for testing that’s irrelevant and inaccurate? Too often, it’s to get a diagnostic passcode for accommodations. But here’s the paradox: children and adults who genuinely struggle in complex listening environments are passing these tests because the tests aren’t measuring what actually matters. We’re creating a system where the people who most need support are least likely to qualify for it.

____

The Hidden Bias in Current Testing

We are well aware that the current auditory processing battery is not very strong. Plenty of people who struggle in complex listening environments can still pass when they’re awake, alert, and particularly if they’re extroverted and thrive on external stimulation.

This mirrors what we see with hearing aids: extroverts tend to do better than introverts, but is that because they actually succeed more, or because the way we fit hearing aids drowns introverts in sound? Traditional hearing aid fittings often prioritize maximum audibility without allowing for rest and escape as needed. What looks like “success” for an extrovert might be overwhelming sensory bombardment for an introvert.

The same bias likely affects APD testing. An extroverted child might breeze through a 20-minute battery in a novel clinical setting, energized by the interaction and challenge. An introverted child with identical underlying processing abilities might perform worse simply because the testing environment itself is draining. And what about the child who’s somewhere in between, the child who may be initially comfortable with moderate stimulation but overwhelmed by chaos or understimulated by too much quiet?

This is especially critical for neurodivergent children. While a clinic setting might be sufficient for some, many require a familiar environment to perform in a way that makes sense for their real life. Testing an autistic child in an unfamiliar sound booth may tell us more about their response to novelty and sensory overwhelm than about their actual auditory processing abilities.

____

The Hearing Loss Paradox

And why aren’t people with traditional hearing losses eligible for these types of tests? If their audiogram is outside of normal, they’re even more likely to have auditory processing issues, yet we can’t test them. How can two people with the exact same audiogram, identical on paper, have totally different capabilities? And we fit them to the same REM prescriptive target even though their needs might be totally different.

Consider age and cognitive differences too. You could have the same audiogram in a 40-year-old versus an 8-year-old versus an 80-year-old and see huge differences in temporal processing, detecting gaps in noise, and spectral resolution. That can severely affect comprehension, particularly of consonants.

The 8-year-old’s developing auditory system might struggle with rapid acoustic changes that the adult processes effortlessly. The 80-year-old might have significant age-related temporal processing decline, reduced working memory, and slower cognitive processing speed that doesn’t show up on pure-tone testing but devastates speech understanding in complex environments, especially when fatigue sets in or when listening demands exceed their cognitive resources.

What if there’s a history of auditory deprivation, language deprivation, and information deprivation due to hearing losses or other factors?

Someone who received hearing aids late in childhood, or experienced prolonged untreated hearing loss, or grew up with excessive headphone use might have the same current audiogram as someone with typical auditory development, but their neural pathways for processing complex sound never developed properly. Their brain might struggle with auditory scene analysis, streaming, or pattern recognition in ways that no amount of amplification can fix, yet current fitting protocols treat them identically.

What if there’s hidden hearing loss where the audiogram shows up as normal because they still have 20% of their outer hair cells working, but they can’t hear in noise? These individuals have lost most of their cochlear amplification and fine-tuning ability, so speech sounds muddy and unclear in complex environments, but pure-tone testing misses it entirely. They get told their hearing is “fine” and are dismissed from audiology clinics, when they actually need sophisticated signal processing and noise management strategies that current practice never considers.

This is where the Acceptable Noise Level (ANL) test becomes even more critical. Two people with identical 40 dB hearing losses might have completely different noise tolerance profiles. One might thrive with aggressive noise reduction and directional microphones, while the other finds those same features isolating and prefers a more open, natural sound environment. Current fitting protocols can’t capture these differences; we’re essentially prescribing the same glasses for two people who both “can’t see the blackboard” without understanding that one has nearsightedness and the other has astigmatism.

____

What Auditory Processing Actually Is, And How We’re Missing It

Auditory processing isn’t just “can you decode speech sounds?” It’s the dynamic interplay between signal detection in complex acoustic environments, cognitive load management when listening gets effortful, attention allocation across competing sound sources, energy conservation over time as fatigue builds, motivation and engagement with preferred versus non-preferred content, and personality-based processing preferences like introvert versus extrovert patterns.

Current testing misses most of this because it measures performance at one point in time, in artificial conditions, with decontextualized material.

But what if we tested special interests versus neutral content? Does the dinosaur-obsessed child show better ANL scores when listening to paleontology stories versus random sentences? What about time-of-day effects — how does that same child perform at 8 AM versus 2 PM versus after a full school day? Do introverted children show better processing with calm, focused audio while extroverted children thrive with multi-layered complexity?

ANL could bridge this gap, but only if we expand it beyond its current limitations. The original test is tied to hard-to-find manufacturers. There’s no standardized children’s version. And most critically, it doesn’t account for the factors that actually drive real-world listening success: interest, energy, personality, and spatial complexity.

There’s a real question about who these “APD” tests were actually normed on. I’ve looked at some of the studies that established the normative values for the SCAN, and who counted as normal versus not normal was decided based on how the children performed on the SCAN itself. I have to recheck it, but how did they know the kids they used as controls were actually typical?

When I look at something like the ANL, it’s not going to be normed in the same way; it will be a qualitative kind of test until we get better norms. And the test instructions will have to be very consistent from person to person, as will the presentation of the test itself. For example, what story is the person listening to: is it high-interest or low-interest?

What is the background noise? It’s multi-talker babble, but will it be spatialized, as if the talkers were sitting in different places around the listener, or will it come from a single loudspeaker? There are all sorts of questions you need to ask before you call a test standardized. And even if it is standardized, how do you know that what you think you’re measuring is what you’re actually measuring?

That’s one of the reasons I’m hiring biostatisticians to help me create new tests based on previous ones. I probably won’t use the actual ANL test material itself, or even call it ANL, but the core question is: how much can you put up with? Maybe I’ll call it the Drowning Test, I don’t know. Maybe it’s the Walk-Jog-Run Test; who knows what it’ll be. But what it measures is resilience, tolerance, how quickly the curve changes as the noise increases, and what happens when you’re tired versus alert.

But that raises questions: How tired is "fatigued"? How do we know how tired the person is when we say they’re fatigued? Should we just do multiple runs of the same kind of test through the day? Will they get more familiar with the task? Will material change their own perception of what they can tolerate? How do we choose the subjects?

In the original ANL test, they had stories about picking grapes or wine or going to a restaurant or something like that, which I find incredibly boring, and I didn’t really like the man’s voice. What if it were “The Itsy-Bitsy Spider”? What if it were “Where the Wild Things Are”? What if it were one of my favorite novels? They even say that the speech material doesn’t really matter... so if it doesn’t matter, let’s find out if that’s true.

____

How Could We Make the ANL More Child-Friendly?

Imagine an ANL designed for kids where they set a story to a “just right” volume, we add cafeteria chatter in the background, and they play a game: raise their hand when the “noise monster” gets too close, or hit “stop” when the rocket slider goes too high.

Now add four conditions:

1. Context-rich stories: coherent narratives where each sentence builds on the previous one.

2. Context-free content: random, unconnected sentences with no narrative thread.

3. Interest-based content: dinosaurs, cooking, Minecraft, soccer.

4. Personality-matched content: quiet stories for introverts, group discussions for extroverts, and moderate complexity for those somewhere in between.
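
Here is a sketch of how those conditions could be organized into a test grid. Every name here, and the idea of crossing content with noise type and spatial layout, is my own assumption rather than an existing protocol:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class ANLCondition:
    """One cell of a hypothetical child-friendly ANL protocol."""
    content: str        # what the child listens to
    noise: str          # the competing background signal
    spatialized: bool   # babble around the child vs. one loudspeaker

CONTENT = ["context-rich story", "context-free sentences",
           "interest-based (e.g., dinosaurs)", "personality-matched"]
NOISE = ["four-talker babble", "cafeteria chatter"]

# Cross content x noise x spatial layout into the full grid of runs.
grid = [ANLCondition(c, n, s) for c, n, s in product(CONTENT, NOISE, [False, True])]
print(f"{len(grid)} conditions to schedule across visits")  # 4 x 2 x 2 = 16
```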

One of the big criticisms of auditory processing testing is that it overlaps with language, making it hard to determine what is auditory versus what is linguistic. But that’s not really the issue. Language comes from auditory processing, and context makes a big difference. We should have stories that provide contextual support and ones that don’t, because this mirrors real life.

Think about it: following a teacher’s coherent lesson about the Revolutionary War (context-rich) requires different listening skills than processing random announcements over the intercom (context-free). A child might excel when context helps fill in missed words but struggle when every word must be caught precisely.

____

Why Spatial Listening Needs to Be Added, and Why Current Tests Fall Short

The current ANL is a flat test: one voice, one wall of babble. But real life is spatial. The brain uses direction (voices behind, clinking glasses to the side, a teacher in front) to unmask speech. Tests like the LiSN-S show how spatial separation helps kids with APD², but even the LiSN-S has a major limitation: it only tests speech coming from the front.

How realistic is that? If you’re sitting in the backseat of a car, the conversation you need to follow might be coming from either side. If you’re in the front seat, you might need to tune into kids talking behind you. In classrooms, peer discussions happen in all directions. At dinner tables, family conversations swirl around you in a full 360-degree soundscape.

Current spatial tests miss the reality that we need to process important information coming from any direction while filtering out distracters from everywhere else.

Picture a future ANL where background noise isn’t just layered on top, but delivered from different angles — front, back, sides, even diagonally. Then we’d know not only how much noise a person tolerates, but how they use spatial cues to survive it, regardless of where the important information is coming from.
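
A minimal sketch of what that could look like in software, assuming a ring of eight loudspeakers at 45-degree steps around the listener (the layout and the function are illustrative, not an existing test platform):

```python
import math

# Eight azimuths around the listener: 0 = front, 90 = right, 180 = behind.
AZIMUTHS_DEG = list(range(0, 360, 45))

def speaker_xy(azimuth_deg: float, radius_m: float = 1.0) -> tuple[float, float]:
    """Azimuth to (x, y) on a ring around the listener, so target speech
    and babble can each be routed to any direction."""
    theta = math.radians(azimuth_deg)
    return (radius_m * math.sin(theta), radius_m * math.cos(theta))

# Example run: target speech from the right-rear, babble from everywhere else.
target_az = 135
babble = [az for az in AZIMUTHS_DEG if az != target_az]
print(f"target at {speaker_xy(target_az)}, babble from {len(babble)} loudspeakers")
```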

____

The Need for Functional Testing in Real Environments

We need functional listening tests performed in schools and workplaces. I recently saw a child who had been using headphones since age 4 and tested as having auditory processing problems — but this was actually auditory deprivation, language deprivation, and information deprivation caused by years of headphone use. The functional hearing test made this immediately apparent. While it showed up in clinic test scores because the disorder was severe, it was even more obvious in the classroom setting where the real-world impact was undeniable.

This case also shows why early intervention with speech enrichment, sound enrichment, and information enrichment can make all the difference, whether through listening devices, Cued Speech, ASL, or other approaches. Accurate diagnosis in functional settings drives appropriate intervention, while clinic-only testing might miss the underlying cause entirely.

____

A Comprehensive Future for APD Testing

Right now, auditory processing testing too often reduces to one score on one test. But listening is never one-dimensional. It’s accuracy, effort, tolerance, and fatigue braided together.

A truly comprehensive APD battery should include traditional measures like gap detection, dichotic digits, and temporal processing alongside speech versus non-speech dichotic tasks to understand how the brain handles linguistic versus acoustic information differently.

We need accuracy-in-noise measures to see what words get through, listening effort ratings to understand how hard it feels, fatigue curves to track how performance collapses over time, and ANL with personality matching to determine when someone will stop trying and what type of content sustains them. Add 360-degree spatial processing to see how direction helps or hinders from all angles, plus personality-informed testing to understand whether someone’s listening style matches introverted, extraverted, or somewhere-in-between processing preferences.
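
Putting those dimensions side by side in one record makes the point: a patient can look fine on accuracy alone while every other dimension is collapsing. A sketch, with every field name and value invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ListeningProfile:
    """One patient's multi-dimensional battery results."""
    accuracy_in_noise_pct: float     # traditional measure: what gets through
    effort_ratings: list             # 0-10 ratings across repeated runs
    anl_db: float                    # noise tolerance
    fatigue_slope: float             # effort points per hour across the day
    spatial_advantage_db: float      # benefit when maskers are separated
    personality: str = "ambivert"    # introvert / extrovert / in between

profile = ListeningProfile(
    accuracy_in_noise_pct=92.0,   # "passes" on accuracy alone...
    effort_ratings=[3, 6, 9],     # ...while effort climbs steeply
    anl_db=12.0, fatigue_slope=0.9, spatial_advantage_db=2.0)
```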

Together, these give us a truer picture of everyday listening. And yet, ANL has been left behind. There is no child version, no spatial upgrade, no integration into APD batteries.

We need functional testing that works regardless of hearing thresholds, age, or cognitive status. ANL, listening effort ratings, fatigue curves, and personality-informed assessments could revolutionize hearing aid fitting by actually matching technology to individual processing styles rather than just compensating for decibel losses.

____

Rethinking Auditory Processing Intervention

Similarly, we need auditory training methods that emphasize the building blocks of auditory processing: focus, split attention, visual and auditory memory, processing speed, and cognitive flexibility. Well-researched cognitive training programs like LearningRx already target these foundational skills for general learning differences. We need auditory-specific versions that build these capacities systematically rather than just drilling speech-in-noise tasks.

Traditional auditory training acts like a scalpel, specifically targeting specific auditory skills. But the world doesn’t work like that, so it doesn’t always generalize. Real listening demands broad cognitive flexibility, sustained attention, memory integration, and the ability to shift between focused and ambient awareness. Training should mirror this complexity rather than isolating individual components that may never connect to functional performance.

And using a test-retest method to determine the effectiveness of auditory training doesn’t make sense if the original test isn’t accurate.

How can we measure improvement on tests that miss the very skills that matter for real-world function? If your baseline measure doesn’t capture listening effort, fatigue effects, spatial processing, or personality factors, then showing improvement on that same flawed measure tells us nothing about whether the person can actually function better in their daily life.

____

A Call for Systemic Change

As parents, we worry about our children and we want to make sure they get all the accommodations they might need for disabilities, so we go for auditory processing testing. Then they’re tested and diagnosed (or they aren’t) by a cookie-cutter method based on what tests the clinic owns or doesn’t own. And we never stop to wonder if the cookie-cutter pattern actually measures what we think it is measuring. There are all sorts of questions about it, like Wilson and Arnott’s 2013 finding that identification rates range from 7.3% to 96%. There’s a reason people don’t believe in auditory processing: the testing method is like a cheap suit from a bad department store.

Allied professionals (speech pathology, psychology, education, and others) have been criticizing auditory processing disorder (APD) as a diagnosis, along with our testing methods, and they’re completely right. Current APD assessment is like an off-the-rack suit that doesn’t quite fit.

What we need to do is turn it into something couture, custom fitted for each person. One way we can do that is with listening effort and fatigue. Another is looking at the person’s own perception, through an extensive case history, functional hearing tests, and a test that came out a long time ago, reinvented: the Acceptable Noise Level.

The difference between a photo and a reel captures everything wrong with current testing.

As I wrote in the song, “In Still Motion”:

“He runs through the tide, wind in his hair

Laughter drifts light through salt-heavy air

Still frames catch him mid-spin, wide-eyed

A moment of joy, no shadows to hide

But somewhere inside, the rhythm sways

One foot lags as the shoreline plays

No siren blares, no rule’s been crossed

Just something faint that feels like loss”

These lyrics capture how current testing works: we’re taking beautiful “still frames” that miss the subtle struggles underneath. We see the moment of success but miss the drift, the lag, the something that “feels like loss.”

“The tests were smooth, the charts were clean

Measured in stillness, fed routine

But no one listens in pure control

The world is cluttered, full of tolls

He fades in noise, holds back in halls

Misses the thread when the teacher calls

But photos bloom with light and peace

As if his struggle had found release”

The controlled testing environment creates an illusion. The child might pass every test, but in the real world — in noisy halls, when teachers call across busy classrooms — they fade and struggle. The “photos bloom with light and peace” while the actual experience remains hidden.

“You won’t see the drift in a single frame

Or hear the hush that won’t take a name

You’ll miss the slip, the pause, the sink

Unless you follow how the child thinks

Not quiet booths or frozen smiles

But the scattered hum of winding miles

Play the reel, not just the scene

There’s more to truth than what’s been seen”

This is exactly what we need to change. We need to follow how the child actually thinks and functions over time. We need to capture “the scattered hum of winding miles”: the cumulative effect of listening effort across a whole day, a week, a month, even years.

“I’m learning slow to watch the sway

The echo caught between what they say

It’s not the word, but how it lands

That maps the space between the strands”

This bridge captures the art of truly understanding auditory processing. It’s not just about whether words are heard correctly, but “how it lands,” the space between understanding and effort, the subtle patterns that emerge over time.

“You won’t catch the thread in a frozen light

Or fix the story by naming it right

You have to move with what feels unclear

Stand in the stillness and lend an ear

So play the reel, walk through the blur

Meet the child in who they were

In the motion, truth is grown

Where silence cracks and seeds are sown”

That needs to change. Because classrooms aren’t quiet booths. They’re dynamic, noisy, unpredictable. And if we only measure what a child can do once, instead of what it costs them to keep doing it, we’re missing the real diagnosis entirely. We need to “play the reel” in order to see the full picture of how listening effort and resilience change over time, across contexts, when tired versus alert.

In the motion, truth is grown. Where silence cracks and seeds are sown.

Because life doesn’t occur in a sound booth. Reel life, I mean, REAL LIFE, is dynamic, noisy, unpredictable.

The current system is broken: families spend thousands on testing that fails to identify those who struggle most, while creating barriers for the very people who need support. We need assessment that captures the reality of listening effort, fatigue, personality factors, and contextual demands.

I’m actively working on developing this comprehensive approach to APD testing. I’ll admit I haven’t yet managed to pull in the ANL, but I’m planning to very soon. If you’re interested in being involved with this project, whether you’re an audiologist, researcher, or audio engineer, please reach out.

We need collaborators who understand both the clinical need and the technical challenges of creating spatial, adaptive, child-friendly testing platforms.

But we also need to rethink intervention. What if hearing aids and classroom amplification systems had layering controls, allowing users to selectively tune out when things become overwhelming, like taking an auditory cat nap? Instead of forcing constant maximum audibility, we could build in escape valves for overstimulated nervous systems.
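
As a sketch of that “escape valve” idea, imagine per-layer gain controls that can be pulled down on demand. The layer names, gain values, and function here are invented for illustration, not a real hearing aid API:

```python
# Per-layer gains in dB; 0 = unchanged, negative = attenuated.
layers_db = {
    "primary_talker": 0,        # teacher / conversation partner stays up
    "side_conversations": -6,   # peers partially ducked
    "ambient": -12,             # HVAC hum, chair scrapes pushed down
}

def auditory_cat_nap(layers, rest_db=-18):
    """Temporarily pull every non-primary layer down to a restful level."""
    return {name: gain if name == "primary_talker" else min(gain, rest_db)
            for name, gain in layers.items()}

print(auditory_cat_nap(layers_db))
# {'primary_talker': 0, 'side_conversations': -18, 'ambient': -18}
```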

This is the future direction of better APD testing: not replacing what we do, but expanding it to capture the full reality of how people listen in the world — and how their personality, energy levels, and need for auditory rest shape that reality.

____

Listen to “Play the Reel” - a song about the limitations of snapshot testing:

Click here to Listen:

https://suno.com/song/9dbdc9be-0b49-4559-9511-a5604c9cec8d

____

References

  1. Billings, C. J., Olsen, T. M., Charney, L., Madsen, B. M., & Holmes, C. E. (2024). Speech-in-noise testing: An introduction for audiologists. Seminars in Hearing, 45(1), 55–82.

  2. Cameron, S., & Dillon, H. (2007). Development of the Listening in Spatialized Noise–Sentences test (LiSN-S). Ear and Hearing, 28(2), 196–211.

  3. Elleseff, T. (2025, June 29). The APD Diagnosis Trap: How a controversial label harms kids in schools. Tatyana Elleseff. Retrieved August 22, 2025, from https://tatyanaelleseff.com/apd-label-harms-kids/

  4. Etymotic Research. (2005). BKB-SIN Speech-in-Noise Test (Version 1.03) [CD]. Etymotic Research.

  5. Keith, R. W. (2000). Development and standardization of SCAN-C Test for Auditory Processing Disorders in children. Journal of the American Academy of Audiology, 11(9), 438–445.

  6. Nabelek, A. K., Freyaldenhoven, M. C., Tampas, J. W., Burchfield, S. B., & Muenchen, R. A. (2006). Acceptable noise level as a predictor of hearing aid use. Journal of the American Academy of Audiology, 17(9), 626–639.

  7. Nabelek, A. K., Tampas, J. W., & Burchfield, S. B. (2004). Comparison of speech perception in background noise with acceptance of background noise in aided and unaided conditions. Journal of Speech, Language, and Hearing Research, 47(5), 1001–1011.

  8. Stern, C. S. (2016). The reliability and validity of the SCAN and SCAN-C for use with children with auditory processing disorders: A systematic review (Doctoral dissertation, City University of New York). CUNY Academic Works.

  9. The Informed SLP. (2016, May 1). APD: What exactly are we measuring here? Retrieved July 1, 2025, from https://www.theinformedslp.com/review/apd-what-exactly-are-we-measuring-here

  10. Vermiglio, A. J. (2014). On the clinical entity in audiology: (Central) auditory processing and speech recognition in noise disorders. Journal of the American Academy of Audiology, 25(9), 904–917.

  11. Vermiglio, A. J. (2016). On diagnostic accuracy in audiology: Central site of lesion and central auditory processing disorder studies. Journal of the American Academy of Audiology, 27(2), 83–95.

  12. Vermiglio, A. J., Soli, S. D., & Fang, X. (2018). An argument for self-report as a reference standard in audiology. Journal of the American Academy of Audiology, 29(3), 206–222.

  13. Wilson, W. J., & Arnott, W. (2013). Using different criteria to diagnose (central) auditory processing disorder: How big a difference does it make? Journal of Speech, Language, and Hearing Research, 56(1), 63–70.
