It is possible to create a screening protocol in the DPOAE module that provides a test outcome labelled as PASS, REFER or INCOMPLETE.
The application of DPOAEs in screening is very different from their use for diagnostic purposes. It is therefore extremely important to select the correct test settings or protocol for the patient and the expected use of the test results.
As a screening protocol is typically used to differentiate patients with likely normal hearing from those with a peripheral dysfunction, it is important to create a protocol that has only a small probability of passing a patient with a moderate or greater hearing impairment.
In newborn hearing screening, test protocols typically require that 3 out of 4 bands (in the speech frequency range) are detected for a PASS result. When a newborn passes the hearing screening test, there is a low probability that they have a significant auditory disorder requiring amplification (e.g., a hearing aid or cochlear implant).
A REFER result only indicates that the detection criteria of the test protocol were not met. This could be due to test conditions (e.g., a noisy environment), a hearing loss (conductive or sensorineural) or poor test technique.
Infants that REFER on a screening test may be rescreened before being referred to a diagnostic audiology department for more thorough testing to rule out a permanent hearing loss.
Ensure that the pass-refer protocol that you create is valid for the test population and for the expected clinical outcome. For more comprehensive information about outer hair cell function and for clinical based treatment and rehabilitation decisions, a diagnostic OAE protocol should be used in conjunction with other diagnostic audiological tests.
The following test parameters should be carefully defined to create a highly sensitive pass-refer protocol.
Select or create an appropriate test protocol that includes the cochlear region of interest. For DPOAEs, the test frequency is denoted by f2 and usually includes frequencies in the range of 1000 – 6000 Hz.
The number of DP test frequencies included in the protocol should also be considered; usually 4 or 5 frequencies are tested. In newborn screening, 1000 Hz is typically omitted because ambient and physiological noise affect test outcomes at this frequency.
Two different stimulus levels, labeled L1 and L2, need to be defined when performing DPOAE assessments. L1 is the stimulus level of the first primary tone (f1), while L2 is the stimulus level of the second primary tone (f2). Selecting the most appropriate stimulus level for each primary is critical to recording a valid DPOAE.
The possible intensity relationships between L1 and L2 are: L1 < L2, L1 = L2 and L1 > L2. The L1 > L2 relationship consistently produces robust DPOAEs and is widely used for diagnostic and screening applications. A typically recommended screening protocol uses a 65/55 dB SPL level combination.
The ratio of f2/f1 sets the ratio relationship between the two stimulus frequencies. The frequency relationship is critical to evoking a DPOAE response. If the two tones are spaced too far apart or too close together, a DPOAE will not be recorded.
Evidence suggests that the most clinically effective f2/f1 ratio for testing patients across all age groups is between 1.20 and 1.23. The DPOAE is largest when the ratio of f2 to f1 is 1.22.
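As a rough sketch (not any manufacturer's implementation), the f1 primary can be derived from each f2 test frequency using the 1.22 ratio. The frequency set below is an illustrative assumption based on the newborn example above:

```python
# Illustrative only: derive f1 primaries from f2 test frequencies using
# the commonly recommended f2/f1 ratio of 1.22.
F2_F1_RATIO = 1.22

def f1_for(f2_hz: float, ratio: float = F2_F1_RATIO) -> float:
    """Return the f1 primary frequency paired with a given f2."""
    return f2_hz / ratio

# A typical screening set; 1000 Hz omitted for newborn testing.
f2_frequencies = [2000, 3000, 4000, 5000]
f1_frequencies = [round(f1_for(f2)) for f2 in f2_frequencies]
# e.g., f2 = 2000 Hz pairs with f1 ≈ 1639 Hz
```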
Accurate presentation of the stimulus to the ear is important to produce valid OAEs. A stimulus tolerance value (dB) defines the allowed difference of the stimulus presentation level in both a positive and negative direction. Setting an appropriate stimulus tolerance level will ensure that a warning is provided to the tester when unwanted stimulus level changes occur that could affect test outcomes.
Changes in stimulus levels during testing usually occur due to probe movement in the ear canal. Probe movement can be reduced by ensuring a secure probe fit before testing and instructing the patient to remain still during testing.
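A stimulus-tolerance check can be sketched as a simple comparison against the target presentation level. The ±3 dB tolerance below is an illustrative assumption, not a device default:

```python
# Hypothetical stimulus-tolerance check; tolerance_db is an assumed value.
def stimulus_in_tolerance(target_db: float, measured_db: float,
                          tolerance_db: float = 3.0) -> bool:
    """True if the measured in-ear stimulus level is within tolerance."""
    return abs(measured_db - target_db) <= tolerance_db

# A probe shift that drops L1 from 65 to 61 dB SPL would trigger a warning.
stimulus_in_tolerance(65.0, 61.0)  # False
```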
As ambient and physiologic (patient) noise levels have a substantial effect on OAE recordings, assessing the noise levels in the test environment before and during testing is extremely important, especially in non-ideal test environments such as a newborn nursery. In DPOAE testing, a typical noise rejection setting is 30 dB SPL.
This ensures that data recorded when background noise is high are rejected and excluded from the averaging of the response. Although noise rejection lengthens test time, it is not recommended to turn the acceptable noise level off: data recorded with high levels of noise reduce the integrity of the analysis.
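Noise rejection amounts to filtering individual sweeps before they enter the average. The sketch below is hypothetical (the data structure and field names are assumptions); only the 30 dB SPL rejection level comes from the text:

```python
# Hypothetical sweep-rejection sketch: keep only sweeps whose measured
# noise is below the rejection level before averaging.
def accepted_sweeps(sweeps: list[dict], reject_level_db: float = 30.0) -> list[dict]:
    return [s for s in sweeps if s["noise_db"] < reject_level_db]

recording = [{"id": 1, "noise_db": 24.0},
             {"id": 2, "noise_db": 35.0},   # rejected: e.g., infant movement
             {"id": 3, "noise_db": 28.0}]
kept = accepted_sweeps(recording)  # sweeps 1 and 3 enter the average
```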
A suitable test time should be selected to ensure that enough time can be spent to extract the DPOAE response from the surrounding noise in the recording. This can usually be set as either an absolute test time (e.g., 1 – 6 minutes) or as a maximum test time per DP point (e.g., 2 or 4 seconds).
Testing normally continues until the maximum test time is reached or the required number of DP points is detected, whichever happens sooner.
Residual noise is the noise remaining in the recording after continued averaging (data collection). Using a residual noise value as a stopping criterion instead of an absolute test time can save unnecessary test time in cases where OAEs are absent.
For example, each specific OAE device has a hardware noise floor that averaged noise will never fall below. Let’s say this value was –20 dB SPL. The protocol could be set to stop testing at each frequency if the noise floor dropped below this value (as an OAE would have already been detected if it was present). The device’s hardware noise floor can be determined by placing the probe in a test cavity and running a protocol lasting a few minutes.
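The stopping logic described above can be sketched as a per-frequency rule. The −20 dB SPL hardware noise floor and the 4-second cap are the illustrative values from the text, not universal defaults:

```python
# Hypothetical per-frequency stopping rule combining the criteria above.
def should_stop(dp_detected: bool, residual_noise_db: float, elapsed_s: float,
                hardware_floor_db: float = -20.0, max_time_s: float = 4.0) -> bool:
    if dp_detected:
        return True                      # response found
    if residual_noise_db <= hardware_floor_db:
        return True                      # noise floor reached; OAE absent
    return elapsed_s >= max_time_s       # timed out

should_stop(False, -21.0, 1.5)  # True: stop early, saving test time
```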
Screening protocols typically use criteria of 3 out of 4 or 3 out of 5 DP points for a PASS result. Setting the required number of bands too loosely (e.g., 1 out of 4 for a PASS) would increase the chance of passing an ear with a hearing loss, whereas setting it too stringently (e.g., 4 out of 4 for a PASS) would increase the number of REFER outcomes.
A mandatory DP frequency can be defined for inclusion in screening protocols. For example, if a 3 out of 4 bands for a pass protocol was used, the 2000 Hz band could be set as a mandatory DP point for inclusion. This means that if during testing, 3000 and 4000 Hz were detected, testing would continue until 2000 Hz was detected or the test timed out.
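The pass-refer decision with a mandatory band can be sketched as below. The 3-of-4 criterion and the 2000 Hz mandatory point follow the examples in the text; handling of a timed-out (INCOMPLETE) test is omitted for brevity:

```python
# Hypothetical pass-refer decision with a mandatory DP frequency.
def screening_outcome(detected_hz: set,
                      required_count: int = 3,
                      mandatory_hz: int = 2000) -> str:
    if len(detected_hz) >= required_count and mandatory_hz in detected_hz:
        return "PASS"
    return "REFER"

screening_outcome({2000, 3000, 4000})   # "PASS"
screening_outcome({3000, 4000, 5000})   # "REFER": mandatory 2000 Hz missing
```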
OAE amplitudes are generally in the range of -10 dB SPL to +30 dB SPL in healthy functioning ears. Therefore, setting a minimum OAE level ensures that low-level artifact responses are not accepted as a true OAE response, even when the required SNR is met. Minimum OAE levels should not be set lower than device specific system distortion levels.
A DP tolerance level may be set to define how stable the OAE level must remain over time in order for it to meet the detection criteria of a true response. A strict DP tolerance value of ±2 dB will increase test times for each test frequency, but will provide further certainty that it is a valid OAE response.
DP reliability is another criterion that can be used in conjunction with the SNR and DP tolerance setting to ensure certainty of the DP frequency response.
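Taken together, the detection criteria above might be combined as in the following sketch. The 6 dB SNR, −10 dB SPL minimum level and ±2 dB stability values come from the text; the 98% reliability threshold and the representation of reliability as a simple percentage are assumptions:

```python
# Hypothetical combined detection rule for a single DP point.
def dp_point_detected(oae_db: float, noise_db: float,
                      level_spread_db: float, reliability_pct: float,
                      min_snr_db: float = 6.0, min_oae_db: float = -10.0,
                      dp_tolerance_db: float = 2.0,
                      min_reliability_pct: float = 98.0) -> bool:
    snr_ok = (oae_db - noise_db) >= min_snr_db       # stands above noise
    level_ok = oae_db >= min_oae_db                  # not a low-level artifact
    stable_ok = level_spread_db <= dp_tolerance_db   # level stable over time
    reliable_ok = reliability_pct >= min_reliability_pct
    return snr_ok and level_ok and stable_ok and reliable_ok
```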
DPOAEs are traditionally interpreted as present when the difference between the OAE and the noise level is significant, i.e., an SNR of 6 dB is obtained. This can be problematic because the noise level used to calculate the SNR is an average: while the OAE level is relatively stable, the noise fluctuates.
If fluctuations around the measured frequency are large, then an SNR of 6 dB does not imply that the OAE stands significantly above the noise spectrum; the apparent OAE may actually be an accidental noise peak. Using the reliability instead of (or in addition to) the SNR gives more confidence in the OAE result.
The signal-to-noise ratio (SNR) refers to the difference, in dB, between the level of the OAE response and the background noise. The SNR can be thought of as an estimate of the reliability with which the OAE response level has been estimated. When SNRs are high, the contribution of the noise in the recording is low and there is more certainty that the displayed OAE response is true.
The SNR is calculated from two variables: the OAE response (or signal) generated by the cochlea, and the noise, a random variable unrelated to cochlear status. The signal (OAE response) should remain constant during averaging, while the noise level should decrease as test time increases. Most literature recommends a minimum SNR of 6 dB, in addition to other criteria, to determine the presence of a valid OAE response.
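The reason the noise level falls during averaging can be shown with a short worked example. Uncorrelated noise power decreases in proportion to the number of averaged sweeps (10·log10(N) in dB), while the coherent OAE signal is unchanged:

```python
import math

# Averaging N sweeps reduces uncorrelated noise by ~10*log10(N) dB,
# while the coherent OAE level stays constant, so SNR improves with time.
def residual_noise_db(initial_noise_db: float, n_sweeps: int) -> float:
    return initial_noise_db - 10 * math.log10(n_sweeps)

# Quadrupling the sweep count lowers the noise floor by about 6 dB.
residual_noise_db(0.0, 4)  # ≈ -6.0 dB
```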
To avoid undertaking unfeasibly large clinical trials involving, say, 10,000 infants, some countries that administer UNHS programs stipulate test criteria for a sensitivity assessment. The following is an extract from the United Kingdom Newborn Hearing Screening (NHS) program tender document:
"Evidence of cavity trials at volumes close to 0.05; 0.1; and 0.2ml; showing a maximum of 1 pass on 120 repetitions. To be conducted with normal stimulus levels present in (a) quiet conditions and (b) with wide band noise applied externally to such a level that the reject system is activated between 30% and 70% of the time. Such noise may be generated by the wide band masking of a clinical audiometer. For condition (b) if a method of data rejection is employed that does not enable this test to be performed, provide evidence of an equivalent test in noisy conditions."
Based on the above information, a user-defined screening protocol's sensitivity could be assessed by the following method:
The final sensitivity estimate is calculated by expressing the number of ‘true refers’ as a percentage of the total number of tests. When tests are performed in a standard test cavity, we would hope that all test outcomes were a ‘refer’ and therefore the test sensitivity would be measured at 100%.
However, statistically it is not safe to use this value unless the number of tests is very large. Instead, the sensitivity is assessed assuming a worst case scenario such that if ‘n’ tests are performed, if there was an ‘n+1’ test it would produce a ‘false pass’.
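The worst-case calculation described above can be expressed directly: if all n cavity tests refer, assume a hypothetical (n+1)-th test would have been a false pass, so the sensitivity estimate becomes n/(n+1):

```python
# Worst-case sensitivity estimate per the method described in the text:
# all n cavity tests refer, and an assumed (n+1)-th test is a false pass.
def worst_case_sensitivity(n_true_refers: int) -> float:
    """Return the sensitivity estimate as a percentage."""
    return 100.0 * n_true_refers / (n_true_refers + 1)

# 120 refers on 120 cavity tests (per the UK NHS criterion quoted above):
worst_case_sensitivity(120)  # ≈ 99.2 %
```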