TEN stands for Threshold Equalising Noise. This is a type of masking noise presented to the ear being tested (as opposed to the non-test ear, which receives masking in conventional pure tone audiometry to prevent cross hearing of the sound).
TEN testing is performed when one suspects a dead region, i.e. a region of the cochlea where inner hair cell damage leaves that part of the cochlea non-functioning. Such a scenario would cause the audiologist to increase the sound level during pure-tone audiometry. At some point, a pure tone that would normally be heard in the dead region might still be detected via off-frequency listening, i.e. the sound vibration spreads to a functioning region of the cochlea. Off-frequency listening causes an under-estimation of the true hearing loss within the dead region, and this could ultimately affect patient care. Introducing TEN prevents off-frequency listening, so the audiologist is able to accurately diagnose the presence of a dead region.
To learn more about the TEN Test, please watch the following video tutorial.
References and caveats
Moore, B.C.J. (2004) Dead regions in the cochlea: Conceptual foundations, diagnosis and clinical applications. Ear and Hearing, 25, pages 98-116.
"We used a 1 kHz calibration tone to set the VU meter on the audiometer and then, to check the output from the audiometer, we measured the levels using a standard coupler-SLM arrangement and TDH 39 headphones connected to the audiometer."
1 kHz tone, 60 dB (dial) on the audiometer = 76 dB SPL on the SLM
Speech, 60 dB (dial) on the audiometer = 60 dB SPL on the SLM
Answer: I interpret this to mean you are presenting the 1 kHz reference signal through the speech circuit of the audiometer? In that case the RETSPL for TDH-39 is 19.5 dB, meaning the SPL measured on the SLM for the 60 dB dial setting should be 79.5 dB SPL for the 1 kHz pure tone. In other words, at 76 dB SPL you are just a fraction outside the ±3 dB tolerance with your reference tone.
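The arithmetic above can be sketched as a quick check (a minimal illustration; the 19.5 dB RETSPL is the TDH-39 value at 1 kHz quoted in the answer, and the function name is my own):

```python
# Sanity check of an audiometer calibration measurement.
# Expected coupler SPL = dial setting (dB HL) + RETSPL for that transducer
# and frequency.

RETSPL_TDH39_1KHZ = 19.5  # dB, TDH-39 reference equivalent threshold at 1 kHz
TOLERANCE_DB = 3.0        # permitted deviation either side of the expected level

def check_calibration(dial_db_hl, measured_db_spl, retspl=RETSPL_TDH39_1KHZ):
    """Return the expected SPL, the deviation, and whether it is in tolerance."""
    expected = dial_db_hl + retspl
    deviation = measured_db_spl - expected
    return expected, deviation, abs(deviation) <= TOLERANCE_DB

# The example from the question: 60 dB dial measured as 76 dB SPL.
expected, deviation, in_tolerance = check_calibration(60.0, 76.0)
print(expected, deviation, in_tolerance)  # 79.5 -3.5 False
```

The deviation of -3.5 dB is what makes the measurement "just a fraction outside" the ±3 dB tolerance.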
Aside from that, it seems that your speech level for 60 dB HL (dial) is indeed a little low based on the information provided.
IEC 60645-2:1993 Audiometers - Part 2: Equipment for speech audiometry
"We did run the calibration tone through the audiometer’s speech circuit (via Ext A), yes - that is a good point. So does this mean we could add a 3 dB correction factor when we use TDH headphones? Or, if we calibrated the VU meter to -3 dB, would that be similar? How does this work with the speech output, which measured at 60 dB SPL with the 60 dB HL dial on the audiometer? Should the measured output be different for the 1 kHz tone and the speech signal?"
Answer: Well, I would say that the most sensible approach would be to go into calibration mode and adjust the audiometer’s output by 3 dB to bring the reference tone within tolerance, rather than offsetting the VU meter.
According to the standard (IEC 60645-2), the specifications and test methods for speech audiometers are “based on the assumption that the calibration signal level of the recorded speech material is the same as the average level of the speech material when measured in a specified manner.”
So I would interpret this to say that in addition to the 3 dB adjustment, the measured output of your speech signal should match the 1 kHz calibration tone.
What that would then mean is that if you sent someone your speech stimuli (e.g. on a CD) and they put it through their own audiometer, they should end up playing the speech stimulus at the same average level as you did, with positive and negative peaks in the speech occurring at the same higher and lower levels accordingly (provided they calibrate the 1 kHz reference signal to the RETSPL level like you: 60 dB dial = 79.5 dB SPL).
However, if the reference tone and speech level do not match, then you just need to state the relationship of the speech level to the reference so that someone else with their own audiometer could still play the same speech at the same level as you did.
On this topic, 60645-2 says the following “If the speech and calibration signal are not at the same level the method of calibration shall be described. If the level of the calibration signal and the average level of the speech material are different, calibration and test methods should be modified as recommended by the producer of the speech test material.”
So, there’s really no “should” or “shouldn’t” as to whether the speech and 1 kHz reference tone output should be different; you should either match them or note the difference in order to maintain standardisation for any future studies.
By the way - the use of a 1 kHz pure tone as the reference signal deviates slightly from the standard. I know the technical reason often given is that it provides a stable signal for convenient measurement in a coupler, but it is a discrete, steady-state frequency, whereas the speech signal is broadband and fluctuating. 60645-2 says the calibration signal should be a weighted noise, a band of noise or a warble tone centred on 1 kHz with a 1/3 octave bandwidth.
You might consider expanding this question to include screening audiometers, alongside diagnostic and clinical audiometers. Functionality and sophistication increase along a spectrum from screening to clinical audiometers. To some extent the distinction between the different types in the modern range of audiometers is blurred, as there is a great deal of overlap, but in addition to features, certain other factors influence the classification, such as the frequency accuracy required and the range of hearing levels that can be tested.
Committees such as ANSI and IEC have published definitions of the minimum number of features required to meet each classification. The standards are in close agreement.
For example IEC 60645-1 (2017) outlines four types of audiometer (Type 1 Advanced clinical/research; Type 2 Clinical; Type 3 Basic Diagnostic; Type 4 Screening) and provides a breakdown of the various features that apply for each type.
References and caveats
IEC 60645-1 (2017) Electroacoustics - Audiometric equipment - Part 1: Equipment for pure-tone and speech audiometry
For a complete answer to this issue the reader is strongly urged to read Margolis et al. (2013) – citation below.
The short answer is that there appears to be an average air-bone gap at 4 kHz of around 10 dB in normally hearing people and around 14 dB in people with sensorineural hearing loss. It seems intuitive that, with no middle ear disorder, this air-bone gap should not be there. Traditionally it has been attributed to air-conducted radiation propagating down the ear canal, and plugging the ear when measuring bone conduction thresholds has been quite commonplace, although often ineffective (Tate Maltby and Gaszczyk, 2015). The paper by Margolis et al. suggests the real cause of the air-bone gap is a dependence of bone conduction thresholds at 4 kHz on the extent of sensorineural loss (hence why the effect increases from around 10 dB to around 14 dB with hearing loss). The paper describes how the Reference Equivalent Threshold Force Level (RETFL) used in the calibration of bone conduction instruments could be adjusted by the above figures to compensate.
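The compensation idea can be sketched as follows. This is only an illustration using the average gaps quoted above (10 dB for normal hearing, 14 dB for sensorineural loss); the actual RETFL adjustments proposed by Margolis et al. should be taken from the paper itself, and the function here is hypothetical:

```python
# Illustrative only: raise the 4 kHz bone-conduction threshold by the average
# false air-bone gap before computing the gap. Correction values are the
# averages quoted in the text, not the exact RETFL adjustments from the paper.

def corrected_air_bone_gap_4k(ac_db_hl, bc_db_hl, sensorineural=False):
    correction = 14.0 if sensorineural else 10.0  # average false ABG, dB
    corrected_bc = bc_db_hl + correction          # bone threshold adjusted up
    return ac_db_hl - corrected_bc

# A 10 dB apparent gap in a normal-hearing listener disappears entirely:
print(corrected_air_bone_gap_4k(ac_db_hl=10.0, bc_db_hl=0.0))  # 0.0
```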
References and caveats
Margolis, R.H.; Eikelboom, R.H. et al. (2013) False air-bone gaps at 4 kHz in listeners with normal hearing and sensorineural hearing loss. International Journal of Audiology, 52 (8), pages 526-532
Tate Maltby, M.; Gaszczyk, D. (2015) Is it necessary to occlude the ear in bone-conduction testing at 4 kHz, in order to prevent air-borne radiation affecting the results? International Journal of Audiology, 54 (12), pages 918-923
Stimulation of the ear via bone conducted vibrations is a well-established way to differentiate conductive from sensorineural hearing loss – but paradoxically, the accurate testing of bone conduction thresholds depends on normal middle ear function. Impairment of bone conducted vibrations, for example by stapes footplate fixation, results in raised thresholds by bone conduction. This effect arises across the frequency range and is related to the resonance properties of the ossicular chain. The result is a “notch” in the bone conduction audiogram which is more pronounced at 2 kHz as described by Carhart in 1950.
Carhart, R. (1950) Clinical application of bone conduction audiometry. Arch Otolaryngol 51, pages 798-808
Fortunately there is an Interacoustics Academy webinar which is aimed at addressing this very topic, and covers in detail several of the tests you mention above – i.e. their principle and clinical applications. Please see “Advanced Tests in Audiometry Parts 1 & 2”.
The short answer is that there are a number of test options included in many models of audiometer (clinical and diagnostic) for use in specific circumstances where routine Pure Tone Audiometry and/or Speech Audiometry is not sufficient to reach a diagnosis or guide audiological management decision making.
Stenger = a test with binaural sound presentation, which utilises a phenomenon of binaural interaction to indicate from which ear a sound is being perceived. This test is useful in cases of suspected unilateral non-organic hearing loss.
Langenbeck = This is a suprathreshold test where noise and pure tones are presented to the same ear, and the level at which the pure tone can be discriminated from the noise is measured in order to assess the listener's masking pattern.
Weber = A test of sound lateralisation when test signals are presented by bone conduction. This test is useful in identifying asymmetrical hearing loss as the sound will tend to lateralise to the better hearing ear.
Alternate Binaural Loudness Balance (ABLB) = a test with sounds (typically pure tones of the same frequency) alternated between ears. Those in one ear are adjusted in level to match the other for loudness – at which point sounds are said to be balanced. Amongst other applications, this test is useful as part of a test battery for diagnosing retrocochlear disorders. In particular, it is used as a preliminary test for Auditory Brainstem Response interaural latency and intensity comparisons.
Short Increment Sensitivity Index (SISI) = This tests the ability of the listener to detect small increases in loudness. This helps identify recruitment and, amongst other applications, this test is useful as part of a test battery for differentiating cochlear from retrocochlear disorders.
The ‘air-bone gap’ is the difference in sensitivity threshold when measured by air and bone conduction transducers. It is used to differentiate between sensorineural and conductive hearing losses (and combinations of the two). The specific pattern of thresholds can also aid in diagnosing certain causes of conductive or sensorineural losses, for example otosclerosis or noise induced loss. However, while not unheard of, relatively few types of conductive loss would be expected to selectively produce air-bone gaps at 4 kHz [1]. Moreover, in many cases such air-bone gaps are seen in the absence of any other evidence for conductive hearing loss such as positive symptoms, otoscopic examination and tympanometry, hence the phrase ‘false’.
There have been a number of investigations into the causes of this apparently false reading, centred on the Radioear B71 bone vibrator, but it was not until relatively recently that a full explanation was put forward (Margolis et al, 2013). The findings from this study attributed the false air-bone gap at 4 kHz to an error in the calibration reference levels (RETFLs). Other theories, such as airborne radiation from the bone transducer entering the ear canal where it can be heard via the air-conduction route, are less able to fully explain the phenomenon.
References and caveats
[1] Age-related changes in middle ear function have been described in the literature which may affect the high frequencies in particular (e.g. Feeney and Sanford, 2004), as have partial ear canal collapse/occlusion, partial ossicular disarticulation and non-organic hearing loss (e.g. Mustain and Hasseltine, 1981).
Feeney, M.P. and Sanford, C.A. (2004) Age effects in the human middle ear: Wideband acoustical measures. Journal of the Acoustical Society of America 116 (6) pages 3546-3558
Margolis, R.H. et al. (2013) False air-bone gaps at 4 kHz in listeners with normal hearing and sensorineural hearing loss. International Journal of Audiology, 52 (8), pages 526-532
Mustain, W.D. and Hasseltine, H.E. (1981) High frequency conductive hearing loss: a case presentation. The Laryngoscope 91, pages 599-603
The term ‘in-situ’ in the context of audiology is often used to refer to when a hearing instrument or other device is being worn. So, in-situ SPL would refer to the characteristics of a sound (sound pressure level at different frequencies), at or near the tympanic membrane when a hearing instrument is in place in (or behind) the ear.
An associated term is in-situ audiometry, which is determining the hearing thresholds of wearers while they are wearing their devices, using stimuli that are generated by the hearing instrument (rather than by the audiometer via headphones or inserts). This technique can bypass the calibration errors that might arise when indirectly calculating the SPL at the tympanic membrane from someone’s thresholds as measured in dB HL (since an average value is often used in this calculation). Acoustic characteristics of the hearing aid and its coupling (vents, leakage and ear mould characteristics, for example) are also taken into account automatically. Set against these technical advantages, the technique carries a potential time penalty in that thresholds might have to be measured twice (e.g. once for diagnosis of hearing loss, with thresholds in dB HL via headphones or inserts, and once again for verification of the hearing instrument, with thresholds in SPL via the hearing instrument).
It is simply a band of noise focused in a very narrow band of frequencies, with steep filter slopes, to produce a noise stimulus suitable for threshold estimation.
For example, this is a spectrum of a paediatric noise centred at 1000 Hz. Equivalent stimuli are available across the audiometric range, and are calibrated in dB HL.
These stimuli are useful in paediatric testing, where a noise-like stimulus can sometimes be more “attention grabbing”, and because in the soundfield (which is common for paediatric behavioural testing such as Visual Reinforcement Audiometry) the routinely used pure tones are not appropriate due to potential calibration errors (e.g. due to standing waves).
Please note that narrow band noise (1/3 octave bandwidth) is not appropriate for threshold estimation, as its frequency bandwidth is too wide; steeply sloping losses might be underestimated due to off-frequency listening.
For example, below is an illustration of the paediatric noise stimuli centred at 2 kHz, overlaid on an audiogram showing a steeply sloping loss.
It may be seen that a wider bandwidth stimulus could be heard at lower frequencies (e.g. ≤ 1 kHz) where hearing sensitivity is relatively good, but the paediatric noise stimuli do not span this frequency range.
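To put numbers on the bandwidth point, the nominal edge frequencies of a 1/3-octave band can be computed from its centre frequency (a small sketch using the standard base-2 band definition; the function name is my own):

```python
def third_octave_edges(fc_hz):
    """Nominal lower/upper edge frequencies of a 1/3-octave band centred on
    fc_hz, using the base-2 definition: edges lie 1/6 octave either side."""
    factor = 2 ** (1 / 6)
    return fc_hz / factor, fc_hz * factor

# For the 2 kHz example in the text:
lo, hi = third_octave_edges(2000.0)
print(round(lo), round(hi))  # 1782 2245, i.e. a band roughly 460 Hz wide
```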
It is well established in the scientific literature (e.g. Zwislocki in the 1960s) that the absolute threshold of a sound depends on its duration. If the sound lasts longer than 500 ms then absolute threshold is independent of duration, but below 500 ms the threshold increases (and loudness decreases) as the duration shortens. This is especially apparent below 200 ms, and is the reason why most normal hearing people can listen to a (broadband) click at quite high intensities (say 90-100 dB nHL) without finding it uncomfortably loud at all. The click is too short to integrate enough energy for the sound to become uncomfortably loud, even at high intensities. If the sound became continuous instead of transient then the click would essentially become a white noise, which might begin to feel uncomfortably loud pretty quickly. Intensity is measured as energy per unit time; the ear integrates this energy over time, which is known as temporal summation. Perceptual threshold improves by about 3 dB per doubling of duration up to 500 ms according to the Zwislocki model, but that is an average value and is very variable amongst individuals.
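The average trend described above can be written down directly. This is a rough sketch of the 3 dB-per-doubling rule of thumb, not a full implementation of Zwislocki's model, and individual results vary widely:

```python
import math

def threshold_shift_db(duration_ms, limit_ms=500.0, db_per_doubling=3.0):
    """Approximate elevation of absolute threshold (dB) for a short tone,
    relative to a long-duration tone: ~3 dB per halving of duration."""
    if duration_ms >= limit_ms:
        return 0.0  # beyond ~500 ms, threshold is independent of duration
    return db_per_doubling * math.log2(limit_ms / duration_ms)

for d in (500, 250, 125, 62.5):
    print(d, round(threshold_shift_db(d), 1))  # 0.0, 3.0, 6.0, 9.0
```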
Well, I’m not really sure they’re used “interchangeably” as such. Either one or the other scale would typically be used. In the sound field, the A-weighting for measuring sound pressure level is often applied to the sound level meter. This provides a measurement that has some resemblance to the 40-phon equal loudness contour, hence the measurements provided will bear a closer relationship to loudness than, say, Z-weighted (zero-weighted, i.e. unweighted) measures.
The Reference Equivalent Threshold Sound Pressure Levels (RETSPLs) for sound field testing are provided in ISO 389-7, which are thresholds given under binaural listening conditions and the values it contains are the basis for defining dB HL in these conditions.
Although sounds measured in dB HL are therefore derived using RETSPLs, and sounds measured in dB A are measured directly using an SLM, the difference between the two is nonetheless small - on the order of 3 dB across most of the audiometric range. This might not be considered a clinically significant amount, so it might be acceptable to consider thresholds measured in dB A “as if” they were hearing levels.
References and caveats
ISO 389-7:2005 Acoustics -- Reference zero for the calibration of audiometric equipment -- Part 7: Reference threshold of hearing under free-field and diffuse-field listening conditions
We are assuming you are talking about audiometry here (as opposed to areas such as vestibular assessment that are not typically associated with sound booths).
Guidance about the maximum permissible ambient sound pressure levels for AC and BC audiometry are provided in international standards (ISO 8253-1:2010).
Certain models of headphones (e.g. circum-aural models) offer a degree of sound attenuation even without a booth. However, a key consideration when no sound booth is available is the use of audiocups. These are enclosures of the transducer that act to effectively attenuate environmental noise in cases where a sound booth is not available.
Most audiometers with an appropriate transducer (e.g. DD45/TDH39/TDH49) can be calibrated with audiocups, but they are most commonly associated with screening audiometers, which are more likely to be used in settings without a booth, e.g. doctors' surgeries, elderly care homes, domestic visits, or industrial screening of workers in the workplace.
In pure tone audiometry it would not generally be considered acceptable to proceed routinely without a sound attenuating booth, but that is not always the reality, particularly in the above scenarios. Audiocups can be a great solution: they are cheaper than a booth and easily transportable. However, they do not address all of the issues one might encounter during audiometry, so they are not necessarily a complete alternative.
Some reasons why audiocups only offer a partial solution to lack of a sound booth are as follows:
Amplivox Audiocups. Retrieved from http://www.amplivox.ltd.uk/category/products/audiometry/audiocups/
ISO 8253-1:2010 Acoustics -- Audiometric test methods -- Part 1: Pure-tone air and bone conduction threshold audiometry
This is a technical question and, without a clear idea of the distortion and other details such as the calibration record of the instrument, we would be speculating. However, 20 kHz is a high frequency for testing - it is at the top of the frequency range both for the human ear and for most transducers used in audiological testing. If testing with “warble” then that is frequency modulation, so the signal has to centre on 20 kHz but then sweep above and below it by several hundred Hz. So when the instrument cycles up above 20 kHz, it may be reaching beyond the frequency response of the transducer. Perhaps that is why a distortion is being heard.
The reason 0 dB HL can be heard is that a listener with normal hearing should be able to hear down to the average threshold level for otologically normal young adults. If the listener is an older adult then it is indeed true that they may not be able to hear the sound, due to presbyacusis. However, a younger adult should be able to (assuming their hearing is at or close to the average), as by definition the sound is calibrated according to the average hearing sensitivity of normal hearing young adults. The concept is the same as asking why an older adult can't hear an 8 kHz pure tone at 0 dB HL - it is just that the age at which presbyacusis becomes apparent is much lower for high frequencies like 16 and 20 kHz.
The dynamic range of the transducers (e.g. headphones and inserts) is smaller at high frequencies, and it varies between transducers. The effect might differ slightly according to the specific make and model in question, but this is why the extended range falls off quite quickly. We would be happy to look into the specifications for your model on request.