This detailed research article describes a study of electrophysiological hearing thresholds using the Interacoustics Eclipse EP system in 102 infants and toddlers (58 female, 44 male), 59% of whom were 5 months or younger at the time of assessment. Sininger states that the first objective of the study was to compare the predicted audiometric thresholds obtained by ASSR and ABR in this sample when both techniques use optimal stimuli and detection algorithms, addressing the discrepancies that past studies found between ABR and first-generation ASSR measures. The second objective of Sininger's team was to determine and compare the test times required by the two techniques to predict thresholds for both ears at the four audiometric frequencies of 0.5, 1.0, 2.0 and 4.0 kHz. The threshold data were evaluated by Sininger and correction factors applied. The findings demonstrated that thresholds were significantly lower for ASSR than ABR, with the greatest difference at 0.5 and 1.0 kHz: thresholds up to 14 dB nHL lower were detected using 'next-generation' ASSR detection compared with ABR using automated response detection (Fmp). The improvement in ASSR threshold detection was attributed to the advances implemented in the Eclipse EP system for response detection, which utilises information at multiple harmonics of the modulation frequency. The stimulation paradigm, which used the NB CE-Chirp, also contributed to the lower absolute levels in nHL. The research clinicians obtained all 8 thresholds in one appointment in 83% of ABR and 87% of ASSR assessments. Among the participants, 49% had thresholds determined as normal by both ASSR and ABR. The mean time to obtain 8 frequency-specific thresholds was 24.02 minutes for ABR and 15.31 minutes for ASSR.
The features of the Eclipse EP system that the authors call 'next-generation' detection, compared with first-generation ASSR measures, are summarised as the assessment of 12 harmonics rather than 1, the use of both phase and amplitude information rather than one or the other, and careful calculation of an appropriate test criterion. This study demonstrates that ASSR using the Eclipse is now a suitable alternative to ABR for measuring hearing thresholds in infants, as well as a quicker test to perform.
Provisional stimulus level corrections for low frequency bone-conduction ABR in babies under three months corrected age.
Frequency-specific ABR testing in infants and newborns via air conduction (AC) is commonplace in the clinic. In cases where AC thresholds are elevated, it may be necessary to obtain additional results via bone conduction (BC) to determine whether the hearing loss is conductive or sensorineural. However, calibration of stimuli for these tests is typically referenced to adults. Newborns present challenges to BC calibration due to their smaller and unfused cranial plates. Additionally, calibration of BC transducers is referenced to artificial mastoids that simulate adult mastoids. This study estimates the necessary BC stimulus corrections (relative to adults) at 500 and 1000 Hz from 27 normal-hearing newborns via B71 and TDH-39 transducers. Median age-related BC stimulus corrections for babies under 3 months of age are 30 dB at 500 Hz and 20 dB at 1000 Hz. These results emphasize the importance of correcting BC stimulus level for newborns when performing BC ABR testing.
The auditory brainstem response (ABR) is an integral clinical metric for the estimation of hearing threshold, assessment of the neurological integrity of the auditory system and, most commonly, screening for hearing loss in newborns and babies. However, ABR recordings can be constrained by low signal-to-noise ratios (SNR), precluding accurate and reliable responses. Artefact rejection (AR) is one technique used to improve the SNR: a sweep contributes to the signal average only if its peak amplitude is below a defined limit. The current study investigates the effect of Bayesian weighting and AR level on the efficiency of noise reduction across 26 babies referred from the English Newborn Hearing Screening Programme. ABR recordings made with an Interacoustics Eclipse were evaluated for 5 AR levels, and for 2 AR levels with Bayesian averaging. Strict AR levels are optimal when noise is low, whereas more lenient AR levels are more efficient when noise is high. Bayesian averaging can provide increased efficiency as noise levels rise. This suggests that the Bayesian weighting available in the Eclipse offers additional efficiency in reducing the effects of noise on ABR recordings.
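The rejection rule described here (average a sweep only if its peak amplitude is below a set limit) can be sketched as follows. This is an illustrative toy, not the Eclipse implementation; the function name, the microvolt limit and the simulated data are all assumptions.

```python
import numpy as np

def average_with_artifact_rejection(sweeps, reject_uv):
    """Average ABR sweeps, discarding any sweep whose peak amplitude
    exceeds the artefact-rejection limit (illustrative units: microvolts).
    `sweeps` is an (n_sweeps, n_samples) array."""
    peaks = np.max(np.abs(sweeps), axis=1)
    kept = sweeps[peaks < reject_uv]
    if kept.size == 0:
        raise ValueError("all sweeps rejected; consider a more lenient limit")
    return kept.mean(axis=0), len(kept)

# Example: 100 sweeps of 256 samples, the first 5 contaminated by large artefacts
rng = np.random.default_rng(0)
sweeps = rng.normal(0.0, 1.0, (100, 256))
sweeps[:5] *= 50.0                      # simulate movement artefacts
avg, n_kept = average_with_artifact_rejection(sweeps, reject_uv=10.0)
```

A strict limit like this discards noisy sweeps entirely, which is why the study contrasts it with Bayesian weighting, where noisy sweeps are down-weighted rather than thrown away.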
The auditory steady state response (ASSR) is useful for estimating hearing threshold and relies on the ability of the auditory system to phase-lock to and mimic the frequency and amplitude modulation of an external stimulus in the response. Although there is general agreement between thresholds obtained via auditory brainstem responses (ABR) and ASSR, discrepancies still exist. CE-Chirps have been successful at generating robust responses relative to tone-burst ABRs, and have been used in conjunction with ASSR in neonates and adults, although such applications of ASSR have not been extensively compared to thresholds obtained via ABR. The present study compares the narrow-band CE-Chirp evoked ASSR with click-evoked ABR and behavioral methods of threshold estimation across 32 infants and toddlers. Results show that thresholds estimated via CE-Chirp evoked ASSR are highly correlated with those obtained via ABR and behavioral response audiometry for the frequencies of 0.5, 1, 2, and 4 kHz, including average responses across 2 and 4 kHz, and a combined average across all frequencies. This study suggests that narrow-band CE-Chirp ASSR, as used in the Eclipse, accurately estimates behavioral response audiometry thresholds in infants and toddlers, even at 0.5 kHz. In addition, the use of narrow-band CE-Chirps may identify steeply sloping audiometric configurations that would be missed via click-evoked ABR.
This study evaluates the performance of the narrow-band CE-Chirp stimuli centred at 4 kHz and 1 kHz in a real-world clinical setting. The study was designed such that infants referred by the UK newborn hearing screen for ABR testing were first assessed using conventional tone-burst stimuli at 4 kHz and 1 kHz, before the procedure was repeated with the CE-Chirp stimuli. Key aspects of performance were then compared, namely response amplitude, Fmp (an objective indication of the likelihood that a response is present) and residual noise. The results from 42 infant ears showed that the mean ABR amplitudes to both 4 kHz and 1 kHz CE-Chirp stimuli, when compared to those from equivalent tone-burst stimuli at the same level and comparable residual noise, were 64% greater. Fmp values for the CE-Chirp data were over twice as large as the corresponding tone-burst data. Taken together, these results indicate that CE-Chirp derived ABRs will offer significant time savings when testing infants, while the greater Fmp values provide increased confidence in the presence of a response, which should translate into fewer "inconclusive" findings. Since the larger-amplitude CE-Chirp responses will lead to clear responses at lower levels than tone bursts, a revised estimation of the behavioural threshold is also proposed (i.e. an nHL-to-eHL correction factor that is 5 dB smaller for CE-Chirps than for tone bursts at 4 kHz and 1 kHz).
Auditory brainstem responses (ABR) are useful for evaluating the hearing status of infants who fail newborn hearing screenings or develop a postnatal hearing pathology. The quality of ABR recordings is largely dependent upon individual electroencephalogram (EEG) amplitude and state of arousal, which motivates obtaining ABR recordings under natural sleep, sedation or general anaesthesia. One way to reduce the contribution of high EEG levels to the quality of ABR recordings is to obtain a more robust evoked response, for which the CE-Chirp has shown promise. The present study retrospectively analyzes the amplitude and amplitude growth function of CE-Chirp evoked ABRs from 46 infants for comparison against comparable literature data for adults and against click-evoked ABRs for infants. In addition, the effect of maturation on the CE-Chirp evoked ABR between 1 and 48 months of age is evaluated. Results show that CE-Chirp evoked ABR amplitudes for two groups of infants, separated according to a criterion of 18 months of age, are larger than responses reported in the literature for click-evoked ABRs from young infants. The CE-Chirp evoked ABRs are not substantially smaller for the older infants than those reported for adults; however, the amplitudes are smaller for younger infants relative to their older infant and adult counterparts. The results suggest that use of the CE-Chirp evoked ABR improves the chance of overcoming the adverse effects of high EEG noise in ABR recordings and hence stands to reduce recording time in young infants.
Rodrigues, G.R., Ramos, N., & Lewis, D.R. (2013). International Journal of Pediatric Otorhinolaryngology, 77(9), 1555–1560.
A relatively recent innovation in auditory evoked potentials is the narrowband CE-chirp, a sound stimulus designed to improve synchrony of evoked neural activity. This paper describes the key characteristics of ABR amplitude and latency for CE-chirp stimuli over a range of frequencies and levels, in comparison to conventional tonepip stimuli in the same group of individuals.
Automated electrophysiological response detection is a key component of hearing screening programmes, and relies on balancing the time needed to complete the test, with appropriate statistical robustness in response detection. This article details a test strategy that may improve performance in ASSR detection by decreasing test time and increasing response detection rates.
This longitudinal study compares the accuracy of estimated hearing thresholds in hearing impaired infants using the ABR in the neonatal period, with the behavioural thresholds gathered at a later date when such behavioural testing became feasible. Agreement was around 10 dB for a 4 kHz tone-pip ABR and the corresponding behavioural threshold, and around 17 dB for the 1 kHz comparison.
This paper describes a research project which compares the ABRs in 11 normal-hearing young adults (N = 22 ears) in response to the Click, the CE-Chirp and the LS-Chirp (ref. 17) at a broad range of stimulus levels. The stimuli are delivered by two different insert earphones, the ER-3A and the ER-2. The ER-3A has an amplitude response that rolls off at about 4 kHz, whereas the ER-2 has an amplitude response which is flat all the way up to and beyond 10 kHz. The recordings are obtained with the Eclipse using a 30 nV background-noise stop criterion and weighted averaging. For both chirps it is found that the ABRs at lower levels (i.e. below 60 dB nHL) are significantly larger with the ER-2 than with the ER-3A earphone; further, it is demonstrated that this finding is most likely due to the large differences between the amplitude-frequency responses of the two earphones.
This paper describes a reference study in which the behavioural thresholds to chirp stimuli are measured in a large group of normal-hearing individuals. The test group consists of 25 young adults (N = 50 ears) and the measurements are in compliance with the recommendations given in the ISO 389-9 standard. The test signals are the CE-Chirp and the four octave-band chirps, which are presented at two repetition rates, 20 and 90 stimuli/s, and using the ER-3A insert earphone. The calibration values are reported in dB p.e. SPL in the occluded-ear simulator. The results are similar to those from another investigation (PTB-study) and the values from the two, independent studies are therefore relevant for a future extension of the existing ISO 389-6 standard, which presently provides reference calibration values for standardized click and tone-burst stimuli delivered from various earphones.
This paper describes an experimental verification of the proposed level-specific model (ref. 17). The study compares the CE-Chirp with the LS-Chirp (and the standard Click) by recording the ABR from both ears in 10 normal-hearing young adults (N = 20 ears). Both chirps have an electrical amplitude spectrum which is flat from 350 to 11,300 Hz (the IA CE-Chirp). The ER-3A earphone is used and the recordings are obtained with the Eclipse using a 30 nV background-noise stop criterion and weighted averaging. The ABR amplitude, latency, and waveform are evaluated. The results clearly demonstrate the advantage of the LS-Chirp over the CE-Chirp at levels above 60 dB nHL.
This paper describes the development of a quantitative auditory model based on a 'humanized' nonlinear auditory-nerve model of Zilany and Bruce (2007). The model is able to account for the change in tone-burst evoked ABR latency with frequency, but underestimates changes in both click and tone-burst latency values with stimulus level. However, the model correctly predicts the non-linear ABR amplitude behaviour in response to different chirp stimuli (ref. 15, 16 & 17) and thus supports the hypothesis that the ABR generation is strongly influenced by the non-linear and dispersive processes in the cochlea.
This paper describes how the ABR amplitude is dependent on chirp duration (sweeping rate) and stimulus level. A standard Click and five Chirps of different durations are presented at three levels of stimulation (20, 40 and 60 dB nHL) in 20 normal-hearing adult ears. It is found that all the Chirps (except the longest one at 60 dB nHL) always produce larger ABR amplitudes than the Click. It is also found that the shorter Chirps are most efficient at higher levels whereas the longer Chirps are most efficient at lower levels. The paper concludes that two mechanisms appear to be involved: (1) upward spread of excitation at higher levels, and (2) an increased change of the cochlear-neural delay with frequency at lower levels. The observed changes in ABR amplitude and latency from the different chirp stimuli are consistent with this conclusion.
This paper describes a similar experiment to the one above. However, relative to Elberling et al (2010), recordings are obtained from 50 normal-hearing adults, the five Chirps have slightly different durations, the stimulus levels are 30 and 50 dB nHL, the frequency bandwidth of the stimuli is limited to 8 kHz, and some of the recording characteristics (e.g. HP-filter cut-off) have other values. Despite these differences the main experimental findings are the same as in ref. 15, although the effect of chirp duration on ABR amplitude is not as prominent. The main reason for this is probably the limited range of stimulus levels used in this study.
This paper describes a novel approach to finding the delay for each frequency component in order to design a family of chirps that optimally synchronizes all response components from across the cochlea (or brainstem) at all levels of stimulation. ABR latencies in response to octave-band chirp stimuli are collected from 48 normal-hearing adults and are used to formulate a latency-frequency model as a function of stimulus level. The delay compensations of the proposed model are similar to those found in the experimental studies described by Elberling et al (2010a) and Cebulla et al (2010).
Masking the non-test ear is necessary for certain audiometric configurations to ensure that the ABR recording accurately reflects the response (or lack of one) of the test ear. However, data are limited regarding the level of masking noise required in ABR tests. This study attempts to quantify the relative masking level (RLM) in 22 normal-hearing adults for clicks and tone pips common to ABR tests, via TDH-39 headphones and insert earphones. Results show that the RLM is 4.5 dB greater when the noise level is increased from below the stimulus than when the noise is decreased from above the stimulus. Overall, RLMs are as much as 30 dB SPL at 500 Hz and 25 dB SPL across the frequencies of 1000, 2000 and 4000 Hz. RLMs approach 27 dB SPL for clicks. Therefore, a value of 30 dB above the stimulus is recommended for ensuring effective masking of the ABR stimulus in the same ear. The authors' value is recommended when calculating the level of noise necessary to prevent cross-hearing during ABR testing, and it is used in the NHSP masking calculator.
This paper describes how the Stacked ABR - at the output of the cochlea - attempts to compensate for the temporal dispersion of neural activation caused by the cochlear traveling wave in response to click stimulation. Compensation can also be made - at the input of the cochlea - by using a chirp stimulus. Previously it has been demonstrated that the Stacked ABR is sensitive to small tumors that are often missed by standard ABR latency measures. Because a chirp stimulus requires only a single data acquisition run, whereas the Stacked ABR requires six, the evidence justifying the use of a chirp for small tumor detection is evaluated. The sensitivities and specificities of different Stacked ABRs are compared, with the Stacked ABRs formed by aligning the derived-band ABRs according to (1) the individual's peak latencies, (2) the group mean latencies, and (3) the modelled latencies used to develop the chirp. Results suggest that for tumor detection with a chosen sensitivity of 95%, a relatively high specificity of 85% may be achieved with a chirp. Thus, it appears worthwhile to explore the actual use of a chirp because significantly shorter test and analysis times might be possible.
Simultaneous multiple stimulation of the ASSR.
Elberling, C., Cebulla, M., & Stürzebecher, E. (2008). In T. Dau, J. M. Buchholz, J. M. Harte, & T. U. Christensen (Eds.), Auditory signal processing in hearing-impaired listeners: 1st International Symposium on Auditory and Audiological Research (ISAAR 2007) (pp. 201-209). Denmark: Centertryk A/S.
This paper describes some characteristics of the ASSR related to the use of multiple, simultaneous, band-limited chirp stimuli. In a diagnostic study, four one-octave-band chirp stimuli (500, 1000, 2000 and 4000 Hz) were used to measure the ASSR threshold in normal-hearing adults (N = 20 ears). The four stimuli were presented simultaneously to both ears (eight stimuli) at rates around 90/s. The ASSRs were detected automatically (error rate 5%), and the thresholds evaluated with a resolution of 5 dB. The ASSR thresholds were compared to the audiometric pure-tone thresholds and the deviations evaluated by the group means and standard deviations. These data compare favorably with similar data reported by others. In a screening study, a low-frequency chirp (Lo: 180-1500 Hz) and a high-frequency chirp (Hi: 1500-8000 Hz) are used to record the ASSR in newborns (N = 72). The two stimuli are presented both sequentially and simultaneously using a rate of about 90/s and a level of 35 dB nHL. The ASSRs are detected automatically (error rate 0.1%), and stimulus efficiency is evaluated by the detection time. The results from both studies demonstrate that simultaneous application of multiple, frequency-specific stimuli can effectively be applied without sacrificing response detection accuracy. However, in the screening study stimulus interactions are observed.
This paper describes how the temporal dispersion in the human cochlea can be compensated for by using a chirp designed from estimates of the cochlear delay based on derived-band auditory brainstem response (ABR) latencies. To evaluate inter-subject variability and level effects of such delay estimates, a large dataset from 81 normal-hearing adults is analyzed at a fixed click level, together with a subset thereof at different click levels. At a fixed click level, the latency difference between 5700 and 710 Hz ranges from about 2.0 to 5.0 ms, but over a range of 60 dB, the mean relative delay is almost constant. Modelling experiments demonstrate that the derived-band latencies depend on the cochlear filter build-up time and on the unit response waveform. Because these quantities are partly unknown, the relationship between the derived-band latencies and the basilar membrane group delay cannot be specified. A chirp based on the above delay estimates is used to record ABRs in 10 normal-hearing adults (20 ears). For levels below 60 dB nHL, the gain in amplitude of chirp-ABRs relative to click-ABRs approaches two, and the effectiveness of chirp-ABRs compares favourably to Stacked-ABRs obtained under similar conditions.
This paper describes how a click stimulus sets up a traveling wave along the basilar membrane, which excites each of the frequency bands in the cochlea, one after another. Due to the lack of synchronization of the excitations, the compound response amplitude is low. A repetitive click-like stimulus can be set up in the frequency domain by adding a high number of cosines, the frequency intervals of which comply with the desired stimulus repetition rate. Straightforward compensation of the cochlear traveling wave delay is possible with a stimulus of this type. As a result, better synchronization of the neural excitation can be obtained so that higher response amplitudes can be expected. The additional introduction of a frequency offset enables the use of a q-sample test for response detection. The results of investigations carried out on a large group of normal-hearing test subjects (N = 70) have confirmed the higher efficiency of this stimulus design. The new stimuli lead to significantly higher response SNRs and thus higher detection rates and shorter detection times. Using band-limited stimuli designed in the same manner, a "frequency-specific" hearing screening seems to be possible.
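The frequency-domain construction described above can be illustrated with a short sketch: cosines spaced at the repetition rate are summed, and each component is advanced by a modelled cochlear delay so that all bands excite the nerve in synchrony. The latency-frequency model, bandwidth and sample rate below are illustrative assumptions, not the values from the paper.

```python
import numpy as np

FS = 48_000                    # sample rate in Hz (illustrative)
RATE = 90.0                    # stimulus repetition rate per second
F_LO, F_HI = 350.0, 8_000.0    # stimulus bandwidth (illustrative)

def cochlear_delay(f):
    """Toy latency-frequency model: longer delay at low frequencies.
    Real designs fit this to derived-band ABR latencies."""
    return 0.005 * (f / 1000.0) ** -0.5   # seconds

def build_chirp_train(duration=1.0):
    """Sum cosines whose spacing equals the repetition rate; advance each
    component by its modelled cochlear delay so the neural excitations
    from all frequency bands arrive together."""
    t = np.arange(int(FS * duration)) / FS
    freqs = np.arange(np.ceil(F_LO / RATE), np.floor(F_HI / RATE) + 1) * RATE
    x = np.zeros_like(t)
    for f in freqs:
        x += np.cos(2 * np.pi * f * (t + cochlear_delay(f)))
    return x / len(freqs)

train = build_chirp_train()
```

Because the component spacing equals the repetition rate, the waveform repeats at that rate; the per-component phase advance is what turns the click-like pulse into a low-to-high frequency sweep.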
This paper describes how chirp stimuli can be used to compensate for the cochlear traveling wave delay in recordings of the ASSR (rate: ~90/s). The temporal dispersion in the cochlea is given by the traveling time, which in this study is estimated from latency-frequency functions obtained from (1) a cochlear model, (2) tone-burst auditory brain stem response ABR-latencies, and (3) derived-band ABR-latencies. These latency-frequency functions are assumed to reflect the group delay of a linear system that modifies the phase spectrum of the applied stimulus. On the basis of this assumption, three chirps are constructed and evaluated in normal-hearing subjects (N = 49). The ASSR to these chirps and to a click stimulus are compared at two levels of stimulation viz. 30 and 50 dB nHL and at a rate of 90/s. The chirps give shorter detection time and higher signal-to-noise ratio than the click. The shorter detection time obtained by the chirps is equivalent to an increase in stimulus level of 20 dB or more. The chirp based on the derived-band ABR-latencies appears to be the most efficient of the three chirps tested here. Overall, the results indicate that a chirp is a more efficient stimulus than a click for the recording of the ASSR in normal-hearing adults using transient sounds at a high rate of stimulation.
This paper describes how the ASSR is expected to be useful for the objective, frequency-specific assessment of hearing thresholds in small children. To detect ASSR close to the hearing threshold, a powerful statistical test in the frequency domain has to be applied. Hitherto so-called one-sample tests are used, which only evaluate the phase, or the phase and amplitude, of the first harmonic frequency (the fundamental). It is shown that higher harmonics with significant amplitudes are also contained in the ASSR spectrum. For this reason, statistical tests that only consider the first harmonic ignore a significant portion of the available information. The use of a q-sample test, which, in addition to the fundamental frequency, also includes higher harmonics in the detection algorithm leads to a better detection performance in normal-hearing and hearing impaired subjects (N = 57). The evaluation of test performance uses both detection rate and detection time.
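The gain from including higher harmonics can be illustrated with a simplified detector that compares summed spectral power at the first harmonics of the modulation rate against neighbouring noise bins. This is a stand-in for intuition only, not the paper's q-sample statistic; the function name, bin counts and simulated data are assumptions.

```python
import numpy as np

def harmonic_detection_stat(sweep, fs, f_mod, n_harmonics=12, n_noise_bins=40):
    """Simplified multi-harmonic detector: ratio of the mean spectral power
    at the first `n_harmonics` multiples of the modulation frequency to the
    mean power of nearby non-harmonic (noise) bins."""
    spec = np.abs(np.fft.rfft(sweep)) ** 2
    df = fs / len(sweep)
    harm_bins = [round(k * f_mod / df) for k in range(1, n_harmonics + 1)]
    signal_power = sum(spec[b] for b in harm_bins)
    noise_bins = [harm_bins[0] + off for off in range(2, 2 + n_noise_bins)
                  if (harm_bins[0] + off) not in harm_bins]
    noise_power = np.mean([spec[b] for b in noise_bins])
    return signal_power / (n_harmonics * noise_power)

# A response with energy at 90 Hz and its harmonics scores far above noise alone.
fs, f_mod = 8000, 90.0
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
noise = rng.normal(0, 1, fs)
response = noise + sum(0.5 * np.sin(2 * np.pi * k * f_mod * t) for k in (1, 2, 3))
```

A detector that looked only at the 90 Hz bin would ignore the energy at 180 and 270 Hz; summing across harmonics, as the q-sample test does in a statistically rigorous way, recovers that information.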
This paper describes the use of the ASSR as a promising tool for the objective frequency-specific assessment of hearing thresholds in children. The stimulus generally used for ASSR recording (single amplitude-modulated carrier) only activates a small area on the basilar membrane. Therefore, the response amplitude is low. A stimulus with a broader frequency spectrum can be composed by adding several cosines whose frequency intervals comply with the desired stimulus repetition rate. Compensation for the traveling wave delay is also possible with a stimulus of this type, leading to a better synchronization of the neural response and consequently higher response amplitudes especially for low-frequency stimuli. The additional introduction of frequency offset, which minimizes the risks of detecting stimulus artefacts, enables the use of a q-sample test for the response detection, which is important particularly at the lowest frequencies. The results of investigations carried out on a large group of normal-hearing test subjects (N = 70) confirm the efficiency of this stimulus design. The new stimuli lead to significantly improved ASSRs with higher SNRs and thus higher detection rates and shorter detection times.
This paper describes how sequential statistical testing, as usually applied in automated response detection algorithms, is time-efficient but unfortunately also increases the probability of a false rejection of the null hypothesis. Therefore, in such test situations the test criterion is normally modified by means of the Bonferroni correction. However, when dealing with dependent or partly dependent data the Bonferroni correction leads to an over-correction and is therefore not optimal. A new method to find the optimal test criterion is devised and tested by means of Monte Carlo simulations using real background noise data acquired from clinical ASSR recordings.
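The two effects described here, inflation of the false-positive rate under repeated looks and Bonferroni's over-correction when those looks are correlated, can be reproduced in a small Monte Carlo sketch. The look counts, criteria and noise model are illustrative assumptions, unrelated to the paper's clinical noise data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_looks, n_per_look, n_runs, alpha = 10, 50, 5_000, 0.05

# A z-statistic of the running mean is checked after each look; successive
# looks share data, so the statistics are strongly correlated.
z_alpha = 1.645   # one-sided 5% criterion, applied naively at every look
z_bonf = 2.576    # one-sided 0.5% criterion (= alpha / n_looks, Bonferroni)
hits_uncorrected = 0
hits_bonferroni = 0
for _ in range(n_runs):
    x = rng.normal(0, 1, n_looks * n_per_look)          # pure noise: H0 true
    n = np.arange(1, n_looks + 1) * n_per_look
    z = np.add.reduceat(x, np.arange(0, len(x), n_per_look)).cumsum() / np.sqrt(n)
    hits_uncorrected += z.max() > z_alpha
    hits_bonferroni += z.max() > z_bonf
```

The uncorrected family-wise error rate lands well above 5% (repeated looking inflates it), while the Bonferroni rate lands well below 5% because the correlated looks make the union bound loose, which is exactly the slack a simulation-calibrated criterion can reclaim.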
This paper describes an objective quantitative approach to the decision of when to stop averaging in the recording of ABRs. This decision is based on (1) the knowledge of the amplitude distributions of wave V in the ABRs of normal-hearing individuals for varying stimulus levels, (2) calculated estimates of the residual background noise in the average, and (3) the use of a quantitative statistical response detector. Several reasons for terminating an averaging process are presented along with a specific protocol for each of the reasons. These protocols provide a general but consistent framework to address the issue of when to stop averaging and will thus improve the efficiency of clinical ABR testing. Furthermore, it is quite possible to automate the procedure and the decision process.
This paper describes the nature of the residual background noise in ABR averages in normal-hearing subjects. The residual noise is estimated with the Fsp technique. Low-level click stimuli are presented in 2-dB steps in the range from 30 to 48 dB p.e. SPL (approximately -2 to +16 dB nHL) and for each stimulus level, 10 000 sweeps are acquired and stored for subsequent analysis. The shortcomings of artifact rejection and traditional averaging are demonstrated. It is further demonstrated how weighted averaging can help minimize these shortcomings. Finally, it is analyzed how the number of sweeps per block influences the ability of weighted averaging to control the destructive effects of non-stationary background noise. It turns out that reducing block size from 256 to 32 sweeps per block improves the weighted averaging significantly, though only by a small amount. Minimizing the destructive effects increases the value of statistical techniques used for objective ABR detection or to control the quality of ABR recordings. It is concluded that these techniques in combination improve not only the accuracy of test interpretation but also the efficiency of clinical test time, which is becoming important for the control of medical costs.
This paper describes and analyzes ABRs recorded from ten normal-hearing subjects in response to 100 μs clicks from a TDH-49 earphone at a rate of 48 pps and at levels randomly varied in 2-dB steps between 34 and 52 dB p.e. SPL (approximately 0-20 dB nHL). At each level, 10 000 sweeps are averaged using weighted averaging. A running estimate of the signal-to-noise ratio (SNR), Fsp, is used to detect the presence of the ABR. The median threshold is found at 38 dB p.e. SPL (approximately 5 dB nHL). The mean averaged background noise level is 11.3 nVrms, and the "true" ABR amplitude function crosses this value at 35.5 dB p.e. SPL (2-3 dB nHL), which indicates the level where the SNR = 1. By extrapolation it is found that the ABR amplitude becomes zero at 32 dB p.e. SPL. The perceptual thresholds of the click are estimated by means of a modified block up-down procedure and the median value is found at 33 dB p.e. SPL. The slope of the amplitude function and the magnitude of the averaged background noise are the two factors responsible for the ABR threshold sensitivity, which thus depends on both physiological and technical parameters. Therefore, these have to be considered together with the method of detection when the ABR is used as an indicator of hearing sensitivity.
This paper describes a method to recover the ABR by weighted averaging. The method is an effective technique to deal with the destructive effect of fluctuating, non-stationary background noise, and is based on a statistical approach called 'Bayesian inference'. The contribution of each individual sweep (or block of sweeps) is weighted in inverse proportion to the level of background noise during its acquisition. The weighted averaging method is evaluated on 50 sets of clinical recordings. Weighted averaging is always as good as or better than traditional averaging, and in about 30% of the cases the weighted averaging improves the recovered ABR significantly over what is obtained by traditional averaging. In these cases traditional averaging would require 50% more sweeps to be averaged in order to obtain the same precision of the ABR, and the variance of the wave V latency is improved by a factor of approximately two.
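The weighting rule described above, each block contributing in inverse proportion to its noise level, can be sketched in a few lines. The per-block variance used as the noise estimate and the simulated data are illustrative assumptions; the paper's estimator operates on clinical recordings.

```python
import numpy as np

def weighted_average(blocks):
    """Bayesian-style weighted average: each block of sweeps contributes
    inversely proportionally to its estimated noise variance, so quiet
    blocks dominate and noisy blocks are attenuated rather than discarded."""
    blocks = np.asarray(blocks)            # shape: (n_blocks, n_samples)
    noise_var = blocks.var(axis=1)         # crude per-block noise estimate
    w = 1.0 / noise_var
    return (w[:, None] * blocks).sum(axis=0) / w.sum()

# One very noisy block barely disturbs the weighted result,
# whereas it dominates a plain mean.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.5, (9, 256))
noisy = rng.normal(0, 20.0, (1, 256))
blocks = np.vstack([quiet, noisy])
wavg = weighted_average(blocks)
plain = blocks.mean(axis=0)
```

With these data the residual noise in `wavg` is far smaller than in `plain`, mirroring the paper's finding that weighted averaging is never worse, and often substantially better, than traditional averaging.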
This paper describes our early attempt to estimate the signal-to-noise ratio of the averaged ABR. In the clinic, ABRs are recovered from the ongoing background noise by averaging a number of sweeps. Normally, a test protocol will prescribe a fixed number of sweeps to be averaged and will recommend that replications be obtained. However, since both the ABR and the background noise differ across individual subjects both in magnitude and in other characteristics, such a test protocol can never ensure a given minimum 'quality', or signal-to-noise (response-to-noise) ratio (SNR), of the final recovered ABR. Therefore a statistical method is developed to estimate the SNR of the recorded ABR during the ongoing averaging process. The method calculates the Fsp, which is the squared ratio of the estimated magnitude of the ABR to that of the averaged background noise. The method can be employed on-line as an adaptive strategy (1) to estimate the number of sweeps necessary to obtain a given minimum SNR (quality) of the ABR recorded at supra-threshold levels, or (2) to automatically detect the presence of an ABR near threshold.
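The Fsp estimator described above can be sketched as follows: the numerator estimates the variance of the averaged waveform (response plus residual noise) across time, and the denominator estimates the residual noise in the average from the variance of a single fixed sample point across sweeps. The sample index, data sizes and simulated response are illustrative assumptions.

```python
import numpy as np

def fsp(sweeps, single_point=100):
    """Fsp-style SNR estimate for an average of `sweeps`
    (shape: n_sweeps x n_samples). Signal variance is taken across time
    in the average; noise variance from one sample point across sweeps,
    scaled by 1/n for the averaging gain."""
    n = len(sweeps)
    avg = sweeps.mean(axis=0)
    var_signal = avg.var()                          # variance of the average over time
    var_noise = sweeps[:, single_point].var() / n   # residual noise in the average
    return var_signal / var_noise

# An embedded repeating response pushes Fsp well above the noise-only value (~1).
rng = np.random.default_rng(3)
t = np.arange(256)
response = 0.5 * np.sin(2 * np.pi * t / 64)
noise_sweeps = rng.normal(0, 1, (1000, 256))
with_resp = noise_sweeps + response
```

Because the statistic rises as averaging proceeds when a response is present, it can be monitored online, either to stop once a target quality is reached or to declare a response near threshold.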
Otoacoustic Emission Assessment
Otoacoustic emissions (OAEs) are responses of the outer hair cells within the cochlea that indicate normal peripheral function of the auditory system. OAE detection is dependent upon the integrity of the ear canal, middle-ear, ossicular chain and cochlea. Any disruption along this path may preclude the detection of OAEs. One such disruption includes the presence of deviant middle-ear pressure. This article reviews some of the literature indicating that compensation for deviant middle-ear pressure improves OAE detection. The authors also present a case that shows pressurizing the ear canal to the pressure of the peak compliance of the middle ear (-167 daPa) via a Titan instrument increases the DPOAE response by ~5 to 10 dB below 2 kHz relative to non-compensated DPOAEs. This suggests that accounting for deviant middle-ear pressure via the Titan OAE module improves detection of OAEs when negative middle ear pressure is present.
Distortion product otoacoustic emissions (DPOAEs) are an objective method for evaluating the integrity of the cochlea; however, middle-ear dysfunction can attenuate the DPOAE response, including cases where peak middle-ear pressure deviates from ambient pressure. Deviant middle-ear pressure mostly affects the DPOAE response at frequencies below 1 to 2 kHz. In 12 otologically normal young adults, the objective of this study was twofold: 1) to quantify the change in DPOAE level across a range (-200 to +200 daPa) of static pressures applied in the ear canal, and 2) to determine the slope of level change across this range in 50 daPa steps. Generally, DPOAEs were largest at 0 daPa for all frequencies of 1, 2, 3 and 4 kHz, and the overall mean DPOAE level reduced by 2.3 dB for each 50 daPa deviation away from ambient pressure. This suggests that accounting for positive or negative pressure in the middle ear may facilitate the evaluation of cochlear integrity via OAEs in cases that would otherwise preclude OAE detection. The Titan has the ability to measure DPOAEs while also accounting for the presence of deviant middle-ear pressure.
Otoacoustic emissions (OAEs) describe soft sounds measured in the ear canal that originate in the cochlea via transmission through the middle ear. The presence of OAEs indicates normally functioning outer hair cells within the inner ear. However, OAE detection also depends upon the integrity of the conductive path via the middle ear. Therefore, a middle ear pathology may preclude OAE detection even though outer hair cell function is normal. Such middle ear involvement may include middle-ear pressure that deviates from the ambient pressure in the ear canal, e.g., negative middle ear pressure, which has been shown to attenuate OAEs in the low frequencies. Matching the pressure in the sealed ear canal to the deviant pressure in the middle ear cavity may improve OAE detection, assuming that outer hair cell function is normal. The current study describes the effect of compensating for deviant middle-ear pressure on the amplitude and phase of transient-evoked OAEs (TEOAEs) in 59 children with near-normal and more severely deviant tympanic peak pressure. Subsequent to normalizing abnormal middle ear pressure in pathologic ears, results indicate that the OAEs increase in amplitude and phase lag, and hence improve in detectability. Specifically, ears with mild negative middle ear pressure between -120 and -40 daPa show an average increase of 8 dB near 1 kHz, no level change above 1.5 kHz, and a phase increase of 0.4π near 1.5 kHz; ears with moderate negative pressure between -200 and -120 daPa show an increase of up to 11 dB near 1 kHz and extending up to 2 kHz, and a phase increase of 0.5π up to 5 kHz; and finally, ears with the largest negative pressure, between -280 and -200 daPa, do not show an increase in amplitude but do show a slight increase in phase above 2 kHz relative to the moderate group.
Comparisons made to a Zwislocki middle ear model suggest that the effect of compensating for negative middle ear pressure is a function of a decrease in the stiffness of the middle ear structures. This result supports the notion that compensating for negative middle ear pressure in the measurement of OAEs, as is possible in the Titan TEOAE module, increases the robustness of OAE detection.
Distortion product otoacoustic emissions (DPOAEs) and transient-evoked OAEs (TEOAEs) assist in the evaluation of cochlear function. Negative middle-ear pressure is the most common dysfunction of the middle ear, which can result in hearing loss by diminishing the efficiency of energy transmission via the middle ear cavity. Because OAEs rely on transmission via the middle ear, OAE detection is diminished in the presence of deviant pressure in the ear canal or middle ear. The present study measured DPOAE levels in normal human ears upon voluntarily produced negative middle-ear pressure, and negative and positive ear-canal pressures. The authors' goal is to compare how negative middle-ear pressure, positive ear-canal pressure, and negative ear-canal pressure, all of the same magnitude, each affect the DPOAE response. Results indicate that positive ear-canal pressure and negative middle-ear pressure within each of the seven categorical ranges of pressure (e.g., -70 to -95 daPa) produce very similar effects on DPOAE amplitude, which are distinct from the effects of negative ear-canal pressure on DPOAE amplitude. Specifically, positive ear-canal pressure and negative middle-ear pressure reduce DPOAE level at f2 frequencies from 600 to 1500 Hz and at 3000 Hz, and increase DPOAE amplitude at 8000 Hz. Applying negative ear-canal pressure has a relatively smaller effect. This suggests that compensating for the presence of middle-ear pressure has the benefit of increasing the ability to detect a DPOAE for frequencies below 2000 Hz and at 3000 Hz.
Evoked otoacoustic emissions (OAE) refer to the acoustic energy recorded in the ear canal that is generated by the cochlea in response to an evoking stimulus. The evoking stimulus is typically a transient signal or a pair of tones, which produce transient OAEs (TEOAEs) or distortion-product OAEs (DPOAEs), respectively. OAE generation indicates function of the outer hair cells within the cochlea. In addition to the health of cochlear hair cells, the measurement of OAEs depends upon the forward transmission of the evoking acoustic energy from the ear canal to the cochlea, and the reverse transmission of the cochlear response from the cochlea back to the ear canal. Because the middle ear is positioned between the ear canal and the cochlea, any dysfunction within the middle ear cavity can alter the OAE. One common example is negative middle-ear pressure caused by Eustachian tube dysfunction. The objective of this study is to examine the effect of middle-ear pressure on DPOAEs and to validate the effect of compensating for negative middle-ear pressure on DPOAEs. Within a sample of 36 adults with no hearing loss or otologic disease, negative middle-ear pressure reduces DPOAEs for an f2 of 1000 Hz and below. Specifically, negative middle-ear pressure between -40 and -65 daPa reduces DPOAEs by 4-6 dB, and the reduction grows to 12 dB as middle-ear pressure decreases to -420 daPa. Further, the f2 frequency of 3000 Hz shows a similar reduction in DPOAEs as the magnitude of negative pressure is increased. DPOAEs do not show a significant change for f2 frequencies of 2000, 4000, and 6000 Hz. However, DPOAEs tend to increase for an f2 frequency of 8000 Hz with negative middle ear pressure below -160 daPa, albeit the change is not significant. When the pressure applied to the ear canal matches the pressure at peak compliance of the middle ear, the DPOAE response is corrected and resembles the DPOAE response with no negative middle-ear pressure.
Finally, the peak and notch of the DPOAE response increase as negative middle-ear pressure decreases, which suggests a change in the resonant attributes of the middle-ear cavity. This study suggests that compensation for deviant middle-ear pressure improves the level of the DPOAE.
Otoacoustic emission (OAE) testing is standard in the diagnostic test battery for infants and toddlers. OAEs reflect the response of the outer hair cells and are good indicators of the health of the cochlea. OAEs evoked by a transient signal are referred to as transient-evoked OAEs (TEOAEs). The frequent occurrence of negative middle-ear pressure in infants and toddlers may decrease the TEOAE amplitude. The current study compares TEOAEs within the same child, across 11 children, between occasions when middle-ear pressure is normal and when it is negative. The purpose of this study is threefold: 1) to determine if TEOAE levels in toddlers and infants decrease in the presence of negative middle-ear pressure as identified via tympanometric peak pressure (TPP), 2) to explore the existence of a linear relationship between negative TPP and TEOAE level, and 3) to observe the effect of negative TPP on TEOAE pass rates. Results indicate that TEOAE levels are reduced when TPP is negative and that this reduction is observed similarly across the measured frequency range between 1 and 4 kHz. Also, the TEOAE level reduction is not linearly related to negative TPP. Finally, using a TEOAE emission-to-noise pass criterion of 3 dB, the pass rate was affected by negative TPP in 5 to 6% of cases. These results suggest that compensating for negative middle-ear pressure stands the best chance of maximizing the TEOAE level in infants and toddlers.
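The emission-to-noise pass criterion mentioned above can be illustrated with the common A/B sub-average technique, in which the emission is estimated from the mean of two interleaved averaging buffers and the residual noise from half their difference. This is a hedged sketch under those assumptions; the function names are ours and the exact noise estimator used in the study's equipment is not specified in the summary.

```python
import numpy as np

def teoae_snr_db(buffer_a, buffer_b):
    # Emission estimated from the mean of the two interleaved sub-averages;
    # residual noise estimated from half their difference.
    emission = (buffer_a + buffer_b) / 2.0
    noise = (buffer_a - buffer_b) / 2.0
    return 10.0 * np.log10(np.sum(emission ** 2) / np.sum(noise ** 2))

def passes(buffer_a, buffer_b, criterion_db=3.0):
    # Pass if the emission-to-noise ratio meets the criterion (3 dB here).
    return teoae_snr_db(buffer_a, buffer_b) >= criterion_db
```

With a genuine emission present in both buffers the difference term cancels the repeatable part and leaves only noise, so the ratio rises well above the criterion; with noise alone the two terms are comparable and the ratio hovers near 0 dB.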
The measurement of otoacoustic emissions is dependent on both the forward path from the ear canal to the cochlea and the reverse path from the cochlea to the ear canal. Negative middle-ear pressure caused by Eustachian tube dysfunction attenuates the OAE response, particularly for low frequencies. This study explored the effect of compensating for deviant middle-ear pressure on TEOAE amplitude in 59 children with middle-ear pathology. TEOAE amplitudes were compared between measurements made at ambient ear canal pressure and at peak middle-ear pressure for the mean broadband response as well as the 1, 2, 3 and 4 kHz bands. Peak pressure ranged from -263 to +25 daPa (mean = -85 daPa). TEOAE amplitudes measured at peak pressure were significantly greater than those made at ambient pressure for the overall TEOAE amplitude (2 dB) and the 1 kHz (4.7 dB) and 2 kHz (2.6 dB) bands. This suggests that compensating for deviations in middle-ear pressure during TEOAE measurements may improve OAE detection, thus minimizing false-positive indications of sensorineural hearing loss attributed to middle-ear dysfunction. The Titan TEOAE module is capable of measuring OAEs at both ambient and tympanic peak pressure.
Effect of negative middle-ear pressure on transient-evoked otoacoustic emissions.
Marshall, L., Heller, L. M., & Westhusin, L. J. (1997). Ear and Hearing, 18(3), 218-226.
Otoacoustic emissions (OAEs) are good indicators of inner-ear changes over time. OAEs measured in response to a transient signal are referred to as transient-evoked OAEs (TEOAEs). Because the inner ear, or cochlea, is positioned beyond the middle ear, changes in middle-ear pressure may influence the measured level of OAEs. The purpose of the present study is threefold: 1) to demonstrate the effect of negative middle-ear pressure on the TEOAE stimulus and response, 2) to demonstrate the effect on the TEOAE subsequent to compensating for negative middle-ear pressure, and 3) to examine the effect on TEOAEs of applying positive ear-canal pressure to simulate negative middle-ear pressure. This study presents a case study, over the course of 6 months, of a single participant with frequent negative middle-ear pressure. Small amounts of negative middle-ear pressure affect both the stimulus amplitude and the TEOAE response. TEOAE amplitude progressively decreases for frequencies below 3150 Hz as the magnitude of negative middle-ear pressure increases. TEOAE amplitude changes little with changes in middle-ear pressure for frequencies above 3150 Hz. Compensating for negative middle-ear pressure restores the spectra of the stimulus and TEOAE relative to when middle-ear pressure is near ambient for frequencies above approximately 2 kHz, but increases TEOAE amplitude for frequencies below approximately 2 kHz. Simulating negative middle-ear pressure by applying positive ear-canal pressure produces effects on the TEOAE amplitude similar to those of actual negative middle-ear pressure. These results suggest that compensating for negative middle-ear pressure bypasses the effects of negative middle-ear pressure on the TEOAE.
Otoacoustic emissions (OAEs) are soft signals generated by vibrations in the cochlea, which are transmitted via the middle ear and recorded as acoustic energy in the ear canal. Since OAE transmission occurs via the middle ear, middle ear dysfunction, as in the case of negative middle-ear pressure, can impede the detection of the OAE response. This is not surprising given that negative middle-ear pressure can also increase audiometric hearing thresholds, particularly for frequencies below 2000 Hz. The goal of this paper is to observe the effect of negative middle ear pressures under -100 daPa in ears with hearing thresholds within 30 dB HL. Compared to measurements obtained at ambient pressure, the overall level of transient-evoked OAEs (TEOAEs) is 1.15 to 6.8 dB greater when the ear canal is pressurized to compensate for negative middle-ear pressure. Likewise, TEOAE reproducibility increases by 2 to 41% with compensation for negative middle-ear pressure. Failing to compensate for negative middle-ear pressure reduces the TEOAE response by approximately 15 dB from 500 to 750 Hz and 5 dB from 1000 to 2000 Hz, with minimal or no change in the TEOAE response from 3000 to 4000 Hz. Finally, compensation for negative middle-ear pressure reduces stimulus ringing in the ear canal and produces a smoother stimulus spectrum. These findings suggest that compensation for negative middle-ear pressure improves TEOAE amplitude, reproducibility and stimulus spectral characteristics. Importantly, this has the effect of reducing false-positive TEOAE responses.
Pressure in the middle ear changes along with changes in atmospheric pressure. The tympanic membrane will push inward if atmospheric pressure shifts in the positive direction, or outward if atmospheric pressure shifts in the negative direction. The resulting tympanic membrane distention is a result of a pressure differential across the plane of the tympanic membrane. The pressure differential directly impacts how energy conducts through the middle ear. Since otoacoustic emissions (OAEs) depend on efficient conduction via the middle ear, such pressure differentials influence the measurement of all types of OAEs, including spontaneous (SOAE), transient-evoked (TEOAE), and distortion-product (DPOAE) OAEs. This paper discusses the effect of changes in atmospheric pressure via a pressure chamber on the frequency and amplitude of SOAEs, TEOAEs and DPOAEs within normal-hearing adults. In general, changes in atmospheric pressure reduce the amplitude for all OAE types more in the low frequencies relative to high frequencies, and small changes in atmospheric pressure result in large amplitude reductions below 4 kHz. SOAEs appear to be the most sensitive to pressure changes in the 4 to 5 kHz region. Observed changes in the TEOAE spectrum to pressure variations suggest an increase in stiffness in middle-ear transduction upon the presence of a pressure differential across the tympanic membrane (TM). These results indicate that OAEs are sensitive to ambient pressure changes and as a result, lend credence to the practice of obtaining clinically useful OAE measurements when the pressure differential across the TM is near zero, i.e., at tympanic peak pressure (TPP). Currently, the Titan is the only commercially available OAE device capable of obtaining OAEs at TPP.
Otoacoustic emissions (OAEs) describe cochlear vibrations that are generated in the cochlea and transmitted to the ear canal via the middle ear. Transient-evoked OAEs (TEOAEs) describe these cochlear vibrations in response to an evoking transient stimulus, whose delivery is likewise dependent on transmission through the middle ear to the cochlea. Changes in ear canal pressure alter the efficiency of energy transfer through the middle ear and, as a result, alter the measured level and spectrum of TEOAEs. The purpose of this study is to describe how changes in ear canal pressure alter the intensity and spectrum of TEOAEs in nine normal hearing adults. TEOAE levels attenuate similarly as ear canal pressure increasingly deviates from ambient pressure in both the positive and negative directions up to ±200 daPa. Additionally, TEOAEs under ear canal pressurization grow more slowly relative to ambient pressure as click intensity is increased. Reproducibility is generally higher near ambient ear canal pressure and for higher click intensities. Finally, ear-canal pressurization acts like a high-pass filter with a cut-off of 2600 Hz and a slope of 4 dB per octave. This suggests that pressurized TEOAE measurements via the Titan are able to optimize the robustness of the response.
Wide Band Tympanometry / Impedance Measurement / Middle Ear Assessment
Middle ear disease is common in children with Down syndrome. This study examined absorbance measured at peak tympanic pressure by wideband tympanometry in children with Down syndrome and in typically developing children. The authors were interested in two key areas: firstly, the ability of wideband tympanometry to differentiate between conductive hearing loss and normal hearing in these two groups; and secondly, the effect of patent pressure equalisation tubes on absorbance measurements. The results showed that normal-hearing children with Down syndrome had absorbance characteristics similar to those of normal-hearing children without Down syndrome. Children with Down syndrome and conductive hearing loss likewise had absorbance characteristics similar to those of their typically developing peers with conductive hearing loss. Lastly, children with patent pressure-equalizing tubes (PETs) had significantly higher absorbance in the low-frequency region in both the typically developing and Down syndrome groups. The authors conclude that wideband tympanometry is able to distinguish ears with conductive hearing loss and intact eardrums from ears with patent PETs on the basis of wideband patterns in the low frequencies.
Absorbance is a tool which can be used to identify middle ear pathology in infants. It can be measured at tympanic peak pressure or at ambient pressure. This study measured the absorbance characteristics of the middle ear at both ambient and tympanic peak pressure in a group of healthy newborns followed over a 1-year period. In line with previous studies of absorbance in newborns, the results showed large age effects during the first 6 months of life. As a consequence, the authors recommend that age-specific normative data be available up until this age. Between 6 and 15 months these changes are much smaller, and thus only a single set of normative data is required. Interestingly, measurements recorded at tympanic peak pressure and at ambient pressure differed in all age groups. These differences were particularly noticeable in newborns. The authors suspect this is a result of the pressurized condition opening the ear canal when the pressure sweep is run from positive to negative pressure, or closing the ear canal walls if run from negative to positive pressure. The authors note that access to age-appropriate normative data based on the method used to measure absorbance is essential.
Since the introduction of universal newborn hearing screening (UNHS), research has shown that a large number of babies (90%) who fail screening at birth subsequently pass a more detailed hearing assessment at follow-up. This article proposes the use of wideband acoustic measurements such as power reflectance to detect transient middle ear conditions or wax/debris that can be a source of failure to pass a newborn hearing screening. In this study, a total of 30 babies who had a unilateral fail in a UNHS had additional acoustic measurements conducted at 1 day and repeated at 1 month, when the newborns attended for a detailed diagnostic hearing assessment. The study concluded that power reflectance measures are significantly different for ears that pass newborn hearing screening and ears that refer with transient middle ear conditions. Several issues can affect middle ear measurements, such as probes resting against the ear canal wall, a poor acoustic seal, and other mechanical issues. Preliminary data selection criteria (DSC) are proposed to validate these measures in newborns. Further research may help develop criteria that can be employed to decide when to offer a further screen, once the transient middle ear condition has resolved, before requiring a more detailed and time-consuming diagnostic hearing assessment.
The absorbance of the middle ear can be measured by pressurizing the system using wideband tympanometry or through a non-pressurized absorbance test. The purpose of the present study was to evaluate sources of variability in absorbance measured at ambient pressure. The authors measured audiometry, tympanometry, and absorbance in a group of 112 subjects annually for a period of up to 5 years. The authors also compared the baseline results from this group to a group of 24 adults with middle ear pathology, as determined by reduced-admittance 226 Hz tympanometry. The results showed small but statistically significant mean differences in absorbance as a function of age, sex and ear. The test-retest variance for absorbance measurements was found to be about 0.1 at 1, 2 and 4 kHz, which is similar to previous studies. A particularly useful finding from this study is that ears with negative middle ear pressure showed absorbance findings similar to those with abnormal 226 Hz admittance tympanograms. The authors therefore suggest that it may be necessary to measure wideband absorbance in a pressurized condition, such as wideband tympanometry, in order to obtain an effective differential diagnosis in adults.
Understanding the Developmental Course of the Acoustic Properties of the Human Outer and Middle Ear over the First 6 Months of Life by Using a Longitudinal Analysis of Power Reflectance at Ambient Pressure.
Absorbance can be used to assess the function of the middle ear across a wide range of frequencies. This measurement has been shown to have many clinical applications in particular in the paediatric population. This study aimed to measure the rate at which functional maturation of the middle ear occurs in the first 6 months of life. A secondary aim was to use this information to establish a normative data set. The results identified that absorbance between 600 – 1600 Hz remained relatively constant across the first 6 months of life. However, at frequencies below 400 Hz, the absorbance reduced with maturation; whereas in the high frequencies (> 2000 Hz), the absorbance increased during this time period. These changes are consistent with the changes to the mass and stiffness of the middle ear as described in previous studies. Because there are significant developmental changes during this time, the authors recommend that age specific normative data is used between 0 – 6 months. Therefore when testing absorbance in the Titan it is important to ensure the correct age of the patient is selected in order to see the relevant normative data.
Measures of wideband tympanometry (WBT) improve outcomes in newborn screening programs and diagnostic predictions for middle-ear pathologies in children and infants. This article reviews the literature on WBT and draws comparisons to standard audiologic tests, i.e., otoacoustic emissions (OAE), conventional tympanometry, auditory brainstem responses (ABR), and otoscopy, as predictors of middle ear status. In general, WBT, which encompasses reflectance, absorbance, wideband tympanometry, and wideband acoustic reflexes, performs at least as well as and often better than conventional tympanometry as a predictor of conductive hearing loss or middle-ear pathology. In addition, WBT provides valuable information regarding the type of hearing loss when combined with ABR and OAE tests, and holds potential as a valuable tool for newborn hearing screening programs and pediatric diagnostics. The Titan provides a comprehensive battery of tests that includes conventional and wideband tympanometry, OAEs, and screening ABRs, making it a powerful diagnostic tool that can provide value in a variety of clinical settings.
Wideband tympanometry (WBT) provides important information that can assist clinicians in making correct clinical decisions. Despite there being fairly consistent normative WBT data for newborns across studies, discrepancies still exist, which highlights the need for further normative data to increase the diagnostic power of WBT. Such discrepancies likely result from study sample, methodology, or instrumentation. In addition, the presence of middle-ear fluid or mesenchyme may confound the establishment of further normative data. Finally, normative data across ear, gender, race and ethnicity are lacking. This article reviews the infant and newborn WBT studies pertinent to childhood maturation from 0 to 12 months of age and makes a call for further investigations to establish refined normative data for WBT with the intention to ultimately improve the diagnostic power of WBT for this population. The Titan is capable of acquiring such data via its well-researched wideband tympanometry module. Further, the Titan offers a research module to assist researchers in managing their data.
This article summarizes the major historical efforts used in the evaluation of the adult human middle-ear system that fall under the heading of “acoustic immittance”. The authors’ goal is to provide a historical overview of middle-ear measurement and introduce wideband tympanometry (referred to as wideband acoustic immittance in this paper). Historically, middle-ear measurement has progressed from the use of ambient-pressure systems in the absence of an acoustic reflex, to systems designed to measure immittance changes over pressure variations for a single-frequency probe, and finally, to an extension of the latter implementation using discrete multi-frequency probes, thereby advancing the ability to diagnose diseases of the middle ear. Wideband tympanometry is the latest in this series of developments in middle-ear assessment and expands upon the multi-frequency approach by offering a greater range and finer resolution across the frequency domain. This enhancement thus furthers the characterization of middle-ear function and the diagnosis of middle-ear pathology. The Titan is an advanced, commercially available platform capable of measuring wideband tympanometry and absorbance.
This article is a literature review that describes how the power absorbance measurement in adults is affected by a series of pathologies including those affecting the tympanic membrane (TM), ossicles, middle-ear cavity, inner ear and intra-cranial pressure. Absorbance consistently demonstrates a sensitivity to changes in the TM, ossicles, middle-ear cavity, inner ear, and intra-cranial pressure including such pathologies as TM perforation, hypermobile TM, tympanosclerosis, middle ear pressure, middle ear effusion, ossicular discontinuity, otosclerosis, superior semicircular canal dehiscence (SCD), and changes in intracranial pressure. Additionally, there is evidence that a combined measure of absorbance and positive air-bone gap can aid in the differential diagnosis between stapes fixation, SCD and ossicular discontinuity. This sensitivity of wideband absorbance to predict the aforementioned pathologies highlights the diagnostic utility of such measures, and suggests that the Titan wideband absorbance module offers comprehensive diagnostic utility for the diagnostic audiology clinic.
This article is a review of eight studies that examined the effectiveness of wideband absorbance (WA) and wideband tympanometry (WBT) in predicting conductive hearing loss (CHL). Combined, the studies included infants, children, older children, and adults. Overall, WBT was a good predictor of CHL. Specifically, univariate WBT and 1000 Hz tympanometry were similar in predicting CHL in infants and children. Although 226 Hz tympanometry is a better predictor of CHL than an ambient wideband measurement isolated to 250 Hz, multivariate measures of WBT that combine information across frequency are far better than single-frequency (univariate) WBT and tympanometry at predicting air-bone gaps of 15 to 30 dB in children. Additionally, in adults, WBT offers high sensitivity and specificity for predicting CHL due to otosclerosis, superior semicircular canal dehiscence, and ossicular discontinuity. Overall, WBT is a noninvasive tool that can measure many frequencies in little time while offering superior performance relative to single-frequency tympanometry for predicting CHL. This suggests that using the Titan wideband absorbance module as a diagnostic platform can improve prediction of the presence of conductive hearing loss over conventional tympanometry.
This article reviews the relationship between acoustic immittance, and pressure and power absorbance. While pressure absorbance is dependent upon the immittance at the tympanic membrane (TM), the cross-sectional area of the ear canal, and the distance between the probe location and the TM, power absorbance depends only upon the former two attributes and is independent of probe location, which is an advantage of power absorbance. However, because power absorbance is not sensitive to probe location and is thus devoid of the resultant phase information, it lacks the ability to identify sources of error, e.g., acoustic leaks, and to fully describe the mechanics and acoustics at the TM. The authors state that the middle ear is best described by combining absorbance with impedance magnitude and phase. The Titan offers a complete measurement platform capable of calculating absorbance and the impedance magnitude and phase, and thus stands as an advanced, comprehensive diagnostic middle-ear device.
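For readers unfamiliar with the quantities involved, the standard relations between impedance, pressure reflectance, power reflectance, and power absorbance can be sketched as follows; the function name and the simple purely-resistive example values are illustrative only, and real measurements evaluate these quantities across a wide band of frequencies.

```python
import numpy as np

def absorbance_from_impedance(z_ear, z0):
    # Pressure reflection coefficient at the probe, relative to the
    # characteristic impedance z0 of the ear canal.
    r = (z_ear - z0) / (z_ear + z0)
    power_reflectance = np.abs(r) ** 2      # fraction of power reflected
    return 1.0 - power_reflectance          # absorbance A = 1 - |R|^2
```

A perfectly matched ear (z_ear = z0) absorbs all incident power (A = 1), while a very stiff or blocked ear reflects nearly everything (A ≈ 0); note that the absorbance discards the phase of R, which is why the authors argue for reporting impedance magnitude and phase alongside it.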
Measurement of the middle ear muscle reflex (MEMR) is an integral part of standard clinical audiology practice and provides important information regarding the differential diagnosis of cochlear and retrocochlear pathologies. Measurement of the MEMR depends on an immittance platform and typically utilizes the change in admittance for a probe stimulus at a single stimulus frequency. This article summarizes the limitations of measuring MEMRs with a single-frequency probe and discusses the use of a wideband tympanometry procedure as an alternative. The use of wideband reflexes is advantageous over single-frequency reflex tests for several reasons: namely, 1) the MEMR can be identified across a wider range of frequencies and is thus less sensitive to maturational effects, and 2) the MEMR can be measured at lower thresholds, which allows MEMR decay tests to be run at levels that would otherwise be precluded in single-frequency tests. Detection of wideband tympanometry MEMRs can be automated and objective, and is thus well suited for use in screening programs operated by non-audiologists. The Titan offers wideband tympanometry and, although not yet available, is well positioned to apply this technology to MEMR measurement in the future.
This article discusses how ethnicity, gender, aging, and instrumentation affect the variability of normative wideband tympanometry (WBT) data. Understanding the mechanisms of this variability stands to improve diagnostic accuracy in the detection of middle ear pathologies. The review of the research suggests that instrument-specific norms do not improve test performance, although the measurement of WBT at ambient versus peak pressure may have an impact on normative data. Small differences observed across ethnic groups in school-aged children do not warrant ethnicity-specific norms for that population. However, differences across adult ethnic groups may warrant ethnicity-specific norms, at least for the detection of otosclerosis. The authors speculate that this latter observation in adults may be attributed to body size. Finally, maturational changes, as evidenced by differences between school-age children and adults, do warrant the implementation of age-specific norms. It seems that further study is needed to establish normative data for WBT, particularly for body size and age. Since the Titan offers WBT, it is well positioned as a leading platform to acquire the necessary data to establish appropriate norms, particularly with the addition of the Research module that facilitates the management of large datasets.
Wideband tympanometry is useful for investigating how acoustic energy flows through the ear as well as for diagnosing conductive hearing loss, and understanding individual variability is essential for the interpretation of individual measurements. This article discusses the contributors to intrasubject variability in ear-canal-absorbance-based measurements. These contributors include ear-canal static pressure, middle-ear fluid, and probe orientation. Other, smaller factors may include additional loss along the ear-canal wall depending on probe placement (e.g., more loss may coincide with more lateral placement) and non-uniformity of the cross-sectional area of the ear canal. However, one of the larger sources of variability relates to the acoustic seal between the surface of the earphone tip and the walls of the ear canal. If small leaks allow acoustic energy to escape the ear canal, large errors in absorbance and correspondingly large intrasubject variability may occur; high absorbance below 500 Hz is a strong indicator of such a leak. Test-retest variability in adults is lower than in infants but can also add to intrasubject variability. This suggests that wideband absorbance measures with the Titan probe are stable so long as the user controls for acoustic leaks.
This paper systematically characterises how absorbance is altered by five specific middle ear disorders: (1) positive and negative static pressure, (2) middle ear fluid, (3) fixation of the stapes footplate, (4) disarticulation of the incudostapedial joint, and (5) tympanic membrane perforations. Absorbance measures were performed on 8 cadaver ears that were manipulated to be representative of conditions 1-4; retrospective absorbance calculations were made on 11 separate cadaver ears for condition 5. These measurements were then compared with a middle ear model modified to represent the five disorders outlined above. The results showed that the general trends for a given condition were similar between the cadaver measurements and the model middle ear. A detailed discussion of how the measured absorbance for each modified condition differs from the normal state is provided, with examples (adapted versions of these examples can be found within the Titan software). From this paper it becomes clear that different middle ear conditions affect absorbance at different frequencies, and the degree to which these frequencies are affected is determined by the extent of the disorder.
The authors of this study propose the use of a non-invasive, ear-canal-based acoustical measurement to monitor changes in intracranial pressure (ICP) in neurological patients. The goal of the article is to present measurements of DPOAE magnitude and angle, together with power reflectance, in two extreme postural positions. Previous research has measured changes in auditory function relative to posture, and this is proposed as a reflection of changes in ICP, since the cochlear aqueduct connects the cerebrospinal fluid to the cochlea; an increase in ICP is therefore thought to be linked to an increase in intracochlear pressure. Twelve healthy subjects had measurements taken supine and at a negative 45° tilt from horizontal on 5 separate occasions. DPOAE and power reflectance showed low-frequency changes relative to posture, which the authors considered consistent with an increase in ICP. A noted complication for these measurements is middle ear static pressure, as this has a direct effect on middle ear transmission. However, as a method of long-term, non-invasive monitoring of ICP in patients, this approach has merit. Further work is needed to combine these measures into a metric that can be used to detect and monitor ICP changes over time. The scope to use similar methods to monitor other intra-labyrinthine pressure changes, such as in Menière's disease, is also proposed.
Absorbance data for early school-aged children were measured to determine whether absorbance differs significantly between Caucasian and Chinese children and between male and female children. The results showed that gender and ear did not influence absorbance; however, Chinese children showed higher absorbance over the mid-frequency range than their Caucasian counterparts. The authors then compared the normative paediatric absorbance data with measurements taken from children with abnormal middle ear conditions: 144 normal ears were compared with 30 ears with mild negative middle ear pressure, 24 ears with severe negative middle ear pressure, and 42 ears with middle ear effusion. Receiver operating characteristic (ROC) curve analyses showed that absorbance measurements above 800 Hz were better at identifying middle ear effusion than measurements below 800 Hz. The 1250 Hz region showed the highest sensitivity and specificity for diagnosing middle ear effusion, at 96% and 95% respectively.
Universal newborn hearing screening (UNHS) outcomes are essential for early hearing detection and intervention programs. However, UNHS outcomes relating to sound conduction via the middle ear may mask underlying sensorineural pathologies and obscure the appropriate intervention plan. This study compares the test performance of 1 kHz tympanometry against wideband tympanometry (WBT) and absorbance for the ability to predict the conduction path via the middle ear for 455 ears that passed or referred within a distortion-product otoacoustic emission (DPOAE) UNHS program. Results indicate that ambient wideband absorbance is the best predictor of the sound conduction pathway and that ears that pass a DPOAE UNHS typically have higher absorbance (i.e., a more efficient conductive path) relative to ears that do not pass. Relative to 1 kHz tympanometry, wideband tympanometry and absorbance are better at classifying UNHS outcomes and predicting the conductive path via the middle ear. In sum, wideband metrics provide additional information over 1 kHz tympanometry in terms of changes in sound conduction over the first two days of life. This study emphasizes how wideband measurements in conjunction with DPOAEs can be particularly useful within UNHS programs.
Prior to this study, recording wideband tympanometry was both time consuming and clinically unfriendly, as measurement systems were unable to control the pump pressure automatically during the wideband measurement; wideband tympanometry therefore had to be performed at fixed static pressure points. This study introduces the methodology that has made wideband tympanometry clinically viable. The aim of the study was to investigate the concept of automatically sweeping the air pressure while measuring wideband tympanometry, with the hope that this would reduce test time; it also aimed to assess how pressure sweeps affect the interpretation of test results. Ambient and pressurized wideband absorbance was obtained from 92 adult ears with normal hearing. The results showed that test time reduced from around 40 seconds using traditional methods to 1.5-7 seconds using the modified test procedure. The ambient-pressure results showed parity with normative data from previous studies. The pressurized-sweep results showed that absorbance at peak tympanic pressure is not significantly affected by sweep speed but, when compared with absorbance measured at ambient pressure, is higher at frequencies below 2000 Hz. This suggests that two sets of normative data should be used, depending on the testing method.
This study examined how absorbance measurements can be affected by three variables: 1) measurement location in the ear canal, 2) cross-sectional area of the ear canal, and 3) middle ear cavity volume. Nine human cadaver ears with no history of middle ear pathology were chosen for the experiment. The results showed that variations within an individual ear with respect to either measurement location or ear-canal cross-sectional area had relatively small effects on absorbance. However, increasing the middle ear cavity volume was found to increase absorbance below 2000 Hz, while frequencies above 2000 Hz showed more variable results. The authors conclude that middle ear cavity volume is likely to be the cause of inter-subject variations in absorbance measurements.
Wideband absorbance and acoustic admittance are useful tools in the study of middle-ear function in neonates, infants, older children and adults. Ambient-pressure wideband measures such as reflectance (or absorbance) are better indicators of conductive hearing loss and middle-ear dysfunction than standard 226-Hz tympanometry; the use of a single frequency, as in conventional tympanometry, is not optimal for studying middle-ear function at all the frequencies that are important for human auditory communication. Further, wideband measures of middle-ear function are sensitive to otitis media, otosclerosis, ossicular discontinuity, and tympanic membrane perforation, and wideband tympanometry (absorbance over a range of pressures) contains information that is absent from ambient-pressure absorbance. This study compares the performance of 226-Hz tympanometry, wideband tympanometry (WBT), and ambient-pressure absorbance (WBA) in the prediction of conductive hearing loss in 42 normal-hearing ears and 18 ears with a conductive hearing loss across a group of adults and older children above the age of 10 years. At a fixed specificity of 90%, WBT is most sensitive (94%), followed by WBA (72%) and finally peak-compensated 226-Hz tympanometry (28%). For the area under the receiver operating characteristic (ROC) curve (where 0.5 represents chance and 1.0 is perfect performance), WBT achieves 0.95 and WBA 0.9, indicating that wideband measures of absorbance, both pressurized and ambient, are better predictors of conductive hearing loss than conventional tympanometry. These results suggest that wideband measures of middle-ear function, as are possible via the Titan, improve clinical diagnostics of conductive hearing loss over conventional tympanometry.
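The ROC-curve interpretation used in this study (0.5 represents chance, 1.0 perfect performance) can be illustrated with a short sketch. The index values below are hypothetical examples, not data from the study; the computation uses the standard rank-based (Mann-Whitney) identity for the area under the ROC curve.

```python
# Illustrative sketch only: AUC as the probability that a randomly chosen
# ear with conductive loss scores higher on some diagnostic index than a
# randomly chosen normal ear (ties count as half a win).
def roc_auc(abnormal_scores, normal_scores):
    wins = 0.0
    for a in abnormal_scores:
        for n in normal_scores:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(abnormal_scores) * len(normal_scores))

# Hypothetical index values for abnormal vs. normal ears:
print(roc_auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.65]))  # 0.9375
```

An AUC near 1.0, as reported for WBT (0.95) and WBA (0.9), means the measure separates the two groups almost perfectly; an AUC near 0.5 means it performs no better than guessing.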
This study evaluated the accuracy of acoustic response tests in predicting conductive hearing loss (CHL) in 161 ears of subjects aged 2 to 10 years. Clinical decision theory was applied to the prediction of CHL within this population. The acoustic tests included 1) tympanometric peak-compensated static admittance magnitude (SA), 2) tympanometric gradient at 226 Hz, and 3) admittance-reflectance measurement (YR). Multivariate statistical techniques were discussed and used to analyse the data. The multivariate analysis identified that the admittance-reflectance responses predicted the presence of CHL with a sensitivity of 90% and a specificity of 94%, making them well suited to the prediction of CHL in clinical populations; the inclusion of a 226-Hz tympanogram may slightly improve overall test performance.
This article discusses the role of the ear canal and its properties when performing impedance and reflectance measurements. Acoustic impedance measurements were conducted in 10 healthy young adults (18-24 years old), and the average reflectance and its standard deviation were calculated. Significant inter-subject variability in the magnitude of reflectance across the 10 ear canals was measured. The authors attributed this variability to cochlear and middle ear impedance differences, although the variable length of the ear canal between the measurement point and the eardrum could also have an influence. The authors acknowledge that the ear's static pressure was not known at the time of the measurements, and this may also have contributed to some variability. The article discusses the important properties of reflectance and transmittance phase and the use of these properties in further modelling of ear canal impedance and reflectance.
Energy reflectance is the ratio of reflected energy to incident energy. Absorbance represents the amount of energy absorbed by the ear canal and middle ear and equals 1 - reflectance. These metrics are useful for describing the response of the ear canal (and middle ear) to acoustic inputs such as speech. Although impedance at a single frequency is useful for this purpose, a wideband measurement provides detail not captured at discrete frequencies. In addition, development of the ear canal and middle ear is shown to have an impact on input impedance and the reflection coefficient response. The current study therefore measures ear-canal impedance and energy reflectance from 125 to 10700 Hz in adults and in infants 1, 3, 6, 12, and 24 months of age. Results indicate that middle-ear compliance is lower and middle-ear resistance is higher in infants compared with adults. As such, power transfer via the middle ear is reduced in infants relative to adults and may partially account for differences in measured behavioral threshold sensitivity. For all age groups, energy absorbance is low at frequencies below 1 kHz and above 6 to 8 kHz; absorbance is highest for 1-month-olds up to 2 kHz. In general, energy transmission into the middle ear is most efficient, as indicated by absorbance, in the frequency region most important for speech (i.e., 1 to 4 kHz). With increasing age, absorbance decreases in the 4 to 8 kHz range and the frequency boundary where absorbance dips below 0.5 decreases. In sum, neonatal tympanograms are difficult to interpret, in part because of the vibration of the ear-canal walls; impedance and absorbance measurements between 2 and 4 kHz may help alleviate such difficulties.
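The complementary relation between energy reflectance and absorbance stated above (absorbance = 1 - reflectance) can be written out as a one-line check. The value used below is purely illustrative, not a measurement from the study.

```python
# Absorbance is the complement of energy reflectance: A = 1 - ER, where
# ER is the fraction of incident acoustic energy reflected back from the
# ear canal and middle ear. Illustrative values only.
def absorbance(energy_reflectance):
    if not 0.0 <= energy_reflectance <= 1.0:
        raise ValueError("energy reflectance must lie in [0, 1]")
    return 1.0 - energy_reflectance

# If 30% of the incident energy is reflected, 70% is absorbed:
print(absorbance(0.30))
```

Because both quantities are energy fractions, each is bounded by 0 and 1, which is why an absorbance dipping below 0.5 (more energy reflected than absorbed) serves as a convenient boundary in the age comparisons above.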
Tympanic membrane motion and impedance measurements become more complex as the stimulating acoustic energy increases above 2 kHz, which complicates evaluation of the underlying middle ear structures relative to lower frequencies. Energy reflectance derived from the standing wave ratio is relatively insensitive to such complications. As such, this paper discusses the fraction of incident energy reflected from the eardrum and ear canal via the standing wave ratio (SWR) for comparison against alternative middle ear models of the time. The study analyzes 13 normal ears and reports SWR values between 5 and 10 kHz. In general, the SWR (and associated energy reflection coefficient) is higher in this study than in other studies of the time for frequencies in the region of 7 to 8 kHz, corresponding to an energy reflectance of 60-78% as opposed to previously reported values of approximately 40%. Importantly, the paper shows the general insensitivity of energy reflectance to probe position in the ear canal, which is one of the limitations of admittance tympanograms. This paper provides a historical glimpse of the early work on energy reflectance, which eventually helped pave the path for future commercial development of wideband tympanometry as a metric of middle ear assessment.
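The link between the standing wave ratio and energy reflectance follows the standard acoustics relations: the pressure reflection coefficient magnitude is |R| = (SWR - 1) / (SWR + 1), and the energy reflectance is |R|². The sketch below applies these textbook formulas; the example SWR value is illustrative and not taken from the paper's data tables.

```python
# Sketch of the standard standing-wave relations (illustrative values):
# SWR = (1 + |R|) / (1 - |R|)  =>  |R| = (SWR - 1) / (SWR + 1),
# and the fraction of incident energy reflected is |R|^2.
def energy_reflectance_from_swr(swr):
    if swr < 1.0:
        raise ValueError("SWR is >= 1 by definition")
    r = (swr - 1.0) / (swr + 1.0)  # pressure reflection coefficient magnitude
    return r * r                   # energy reflectance

# An SWR near 8 corresponds to roughly 60% of the energy being reflected:
print(round(energy_reflectance_from_swr(7.9), 2))
```

This mapping shows why the reported SWR values translate into the 60-78% energy reflectance figures quoted above, compared with roughly 40% (an SWR near 4.4) in the earlier literature.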
This document provides guidance and considerations for the assessment of positioning-provoked symptoms of dizziness. Indications for testing, contraindications and patient safety issues are highlighted and discussed relative to appropriate test selection. The techniques for performing the Dix-Hallpike, side-lying, roll and Rose tests are described and illustrated. Considerations for modifying test techniques to assess and treat older or less mobile patients are also covered, including the utilisation of video recording of eye movements to remove optic fixation, improving diagnostic accuracy and the differentiation of peripheral from central positioning nystagmus. Potential test findings are summarised in tables to aid the diagnosis of conditions from the recorded nystagmus. This is a useful summary of positioning testing, with appropriate references for further reading.
The purpose of this study was to compare the VOR responses of healthy young adults for head impulses generated with outward and inward head thrusts. Two commercially available vHIT systems were used; the EyeSeeCam™ by Interacoustics was used to measure the degree of head movement, velocity and gain of the VOR. A slightly, though not significantly, lower VOR gain was recorded with inward head impulses, which the authors discussed in relation to alertness, muscular strain and the predictability of inward head impulses. In this small sample no significant differences were measured between the VOR responses recorded by either system, and both recordings demonstrated agreement when compared with the gold standard search-coil technique.
The aim of this study was to measure the relationship between horizontal VOR (hVOR) gain, velocity and age. The study included 63 participants between 20 and 80 years of age, and all hVOR measurements were recorded using the EyeSeeCam™ by Interacoustics. Normative data were collected for each decade of age, and the study established mean values for both 60 ms and 80 ms instantaneous gain using the EyeSeeCam™ vHIT system. The participants showed a reduction in hVOR gain with age, with lower gains recorded in older adults. The results of this study allow future comparisons between this normal group and patients with vestibular pathology. The authors conclude that the EyeSeeCam™ vHIT system allowed successful quantitative recording of eye and head movement during head impulses. This is an advantage over search-coil measurements, as it is less invasive, simple to set up, and readily available for clinicians to utilise.
This document offers guidance on the utilisation of eye movement recordings to establish the presence of peripheral vestibular or central pathological changes. The preparations and considerations prior to assessment are discussed, along with patient preparation. Techniques of eye movement recording (ENG/VNG) are reviewed, with guidance specific to each method. The minimum requirements for recording eye movements are outlined, including gaze, tracking and optokinetic testing, and the performance of static positioning testing and head-shaking testing is also reviewed. In summary, this is an important document to inform professionals in the assessment of eye movements, the methodologies involved, and the analysis and interpretation of findings in the assessment of the dizzy patient.
This study compared the measurement of the VOR in older adults using the 'gold standard' of search-coil measurement against the Interacoustics EyeSeeCam™. Six subjects aged 70 years and older were recruited, and simultaneous recordings were made with search coils and the EyeSeeCam™ during the same manual head impulses. Overt, covert and anti-compensatory eye movements were captured by both methods, and the VOR gain measurements demonstrated significant correlation between the two. In this small sample of older adults, vHIT using the EyeSeeCam™ thus demonstrated significant correlation with the 'gold standard' search-coil measurement. This study demonstrates the clinical utility of the EyeSeeCam™ as a portable, office-based assessment that extends physiologic vestibular testing to the older population.
This article discusses the clinical utility of two assessments of peripheral vestibular function when examining the vestibulo-ocular reflex. Caloric testing and measurement are explained in relation to the gold standard of bithermal irrigation testing, total caloric response, unilateral weakness, directional preponderance, and fixation suppression; the appropriate indications for the use of monothermal warm screening are also covered. Rotational chair testing is explored as a method of testing both peripheral vestibular systems simultaneously, and sinusoidal harmonic acceleration, velocity step (step rotation), and VOR fixation are explained relative to their clinical utility. This article is an excellent introduction to, or revision of, the strengths and weaknesses of these two peripheral vestibular assessment methods.
Dizzy patients present a distinct challenge for clinicians trying to establish a diagnosis for the symptom. This article explores the benefits of using VNG alongside the traditional vestibular/balance tests, and offers an introduction to vestibular anatomy and the role of peripheral vestibular function. The advantages of VNG (video recording of eye movements) over ENG (corneo-retinal potential recording of eye movements) are discussed, with reference to reduced preparation time and the avoidance of some of the technical challenges of ENG. VNG offers an increased ability to observe, capture and record eye movements with greater stability and higher resolution, and VNG recordings can be recalled and analysed without subjecting the patient to repeated testing to verify the eye movements. The article proposes a basic vestibular test battery protocol, with specific advice on appropriate case history questions to define the presence of a vestibular disorder, and discusses the VNG test battery with reference to gaze testing and positional and positioning testing. A differential diagnosis of vestibular disorders rarely depends on one specific test; however, the value and clinical utility of VNG are encapsulated within this easy-to-read article.
This study examined the results of video head impulse testing using the Interacoustics EyeSeeCam™ in 4 groups of peripheral vestibular disorders: 1) vestibular neuritis (VN), 2) vestibular schwannoma (VS), 3) Menière's disease (MD), and 4) bilateral vestibulopathy. The study group included 117 adults (65 females, 52 males). The findings indicated that vHIT was abnormal (reduced gain and refixation saccades) in 79% of cases, with the highest abnormality rates in VN and bilateral vestibulopathy and the greatest variability in responses in patients with MD and VS. However, no differentiation was made between acute and chronic vestibulopathy in the study population, and this may affect the presence of reduced gain and refixation saccades with vHIT. The EyeSeeCam™ vHIT is a reliable method of establishing peripheral vestibular changes in the hVOR.
This document provides valuable guidance on the elements that should be considered and addressed to perform a reliable caloric assessment. In particular, contraindications to testing, stimulus parameters, and considerations for eye movement recording techniques are discussed. The evaluation and calculation of canal paresis, directional preponderance and visual fixation are covered. Evidence and considerations for normative response data are also presented, with a focus on analysis, the reporting of results and appropriate test modification. In summary, a useful text for establishing a robust method of assessing peripheral vestibular function using thermal caloric irrigation techniques.