Preprint (medRxiv) https://www.medrxiv.org/content/10.1101/2023.06.01.23290351v1
Peer-Review Report by Robert Eikelboom (Reviewer F) https://med.jmirx.org/2024/1/e55554
Peer-Review Report by Anonymous https://med.jmirx.org/2024/1/e55727
Peer-Review Report by Anonymous https://med.jmirx.org/2024/1/e55728
Authors' Response to Peer-Review Reports https://med.jmirx.org/2024/1/e55510
Abstract
Background: High-frequency hearing loss is one of the most common problems in the aging population and among those who have a history of exposure to loud noises. This type of hearing loss can be frustrating and disabling, making it difficult to understand speech communication and interact effectively with the world.
Objective: This study aimed to examine the impact of spatially unique haptic vibrations representing high-frequency phonemes on the self-perceived ability to understand conversations in everyday situations.
Methods: To address high-frequency hearing loss, a multi-motor wristband was developed that uses machine learning to listen for specific high-frequency phonemes. The wristband vibrates in spatially unique locations to represent which phoneme was present in real time. A total of 16 participants with high-frequency hearing loss were recruited and asked to wear the wristband for 6 weeks. The degree of disability associated with hearing loss was measured weekly using the Abbreviated Profile of Hearing Aid Benefit (APHAB).
Results: By the end of the 6-week study, the average APHAB benefit score across all participants reached 12.39 points, from a baseline of 40.32 to a final score of 27.93 (SD 13.11; N=16; P=.002, 2-tailed dependent t test). Those without hearing aids showed a 10.78-point larger improvement in average APHAB benefit score at 6 weeks than those with hearing aids (t14=2.14; P=.10, 2-tailed independent t test). The average benefit score across all participants for ease of communication was 15.44 (SD 13.88; N=16; P<.001, 2-tailed dependent t test). The average benefit score across all participants for background noise was 10.88 (SD 17.54; N=16; P=.03, 2-tailed dependent t test). The average benefit score across all participants for reverberation was 10.84 (SD 16.95; N=16; P=.02, 2-tailed dependent t test).
Conclusions: These findings show that vibrotactile sensory substitution delivered by a wristband that produces spatially distinguishable vibrations in correspondence with high-frequency phonemes helps individuals with high-frequency hearing loss improve their perceived understanding of verbal communication. Vibrotactile feedback provides benefits whether or not a person wears hearing aids, albeit in slightly different ways. Finally, individuals with the greatest perceived difficulty understanding speech experienced the greatest amount of perceived benefit from vibrotactile feedback.
doi:10.2196/49969
Introduction
Hearing loss affects 466 million people worldwide [ ]. High-frequency hearing loss is one of the most common types of hearing loss and renders high-pitched sounds, such as the voices of women and children, more difficult to hear [ , ]. It can affect people of any age but is more common among older adults and people who have been repeatedly exposed to loud noises [ - ]. This type of hearing loss can be frustrating and disabling, making it difficult to understand speech communication and interact effectively with the world, leading to a decline in quality of life and isolation [ , ].

Individuals with high-frequency hearing loss struggle to hear consonants with higher-frequency sound components, such as s, t, and f. As a result, speech is reported as sounding muffled, most noticeably in noisy environments. Commonly, people with high-frequency hearing loss report that they can hear but cannot understand [ ]. The loss is often noticed when a person has trouble understanding women's and children's voices or detecting other sounds, such as the ringing of a cell phone or the chirping of birds. Assistive hearing technologies such as hearing aids and cochlear implants can offer some assistance with understanding speech communication, but they have limitations. One of the most commonly reported disappointments among users of hearing aids and cochlear implants is that they still cannot understand speech, especially in complex environments [ , ].

To address the speech understanding limitations associated with high-frequency hearing loss, we have developed a vibrotactile sensory substitution solution in the form of a wristband [ - ]. This device delivers spatially unique vibrations to the wrist in correspondence with target phonemes that are commonly difficult for individuals with presbycusis to detect. The wristband receives sound from the environment through an onboard microphone and uses a machine learning algorithm to filter background noise (BN) and extract target phonemes from speech. Each phoneme signal is mapped to its own linear resonant actuator (LRA) in the strap of the wristband, where it is felt as a vibration on the skin. There are four LRAs embedded within the wristband strap, giving each target phoneme a unique spatial location on the wrist. Parts of speech that are audible to the user are unconsciously integrated with the spatially unique vibratory signals representing the inaudible portions of speech. The user is then able to understand a complete and meaningful message through the integration of the complementary sensory inputs [ - ].

Our prior work in this area demonstrated that when two words are algorithmically translated into spatiotemporal patterns of vibration on the skin of the wrist, they are distinguishable to individuals who are hard of hearing or deaf up to 83% of the time when the two words are similar and up to 100% of the time when they are not [ , ]. Further studies showed that sound-to-touch sensory substitution devices may help people with hearing impairments access sensory information that is otherwise inaccessible. Weisenberger and Russell [ ] used single-channel vibrotactile aids designed to translate acoustic stimuli into representative vibration patterns on the wrist to improve performance on environmental sound identification tests from 55% to 95% correct and on single-word identification tests from 60% to 90%.

In this study, we aimed to demonstrate that a simple wearable sensory substitution device that transforms speech sounds into haptic vibrations on the wrist can help individuals with high-frequency hearing loss perceive a greater ability to understand speech communication throughout their normal daily routine. With further development and refinement, this technology has the potential to improve the quality and productivity of their daily interactions, enable them to enjoy audio-based entertainment such as movies and podcasts, help them understand conversations in complicated acoustic environments, and fill the residual gaps of impairment left by their hearing aids.
Methods
Participants
Participants were recruited via web-based advertising for a paid study related to hearing loss. Eligibility required (1) an age between 18 and 80 years; (2) access to a mobile device (iOS or Android) and a computer; (3) English as a primary spoken language; and (4) meeting the following criteria for high-frequency hearing loss: a pure-tone audiogram (either from an audiologist within the past 24 months or from 2 audiogram mobile apps, Mimi and Hearing & Ear Age Test) had to show at least 55 dB of hearing loss at 4 kHz averaged across both ears (with neither ear's 4-kHz threshold below 40 dB of hearing loss) and no more than 35 dB of hearing loss averaged across both ears and across the 500-Hz and 1000-Hz tones. These specifications were chosen to capture individuals with hearing loss profiles in alignment with high-frequency hearing loss. Candidates who did not have an audiogram from an audiologist were required to provide audiograms from both audiogram mobile apps, which have been demonstrated to be comparable to in-clinic testing [ ].

A total of 16 eligible participants completed the study: 10 male participants, 5 female participants, and 1 nonbinary participant. The average age was 68.8 (SD 11.6) years. The type and severity of hearing loss were determined from pure-tone audiograms. A total of 9 participants provided audiograms from an audiologist, and 7 provided audiograms from the two mobile apps. The average pure-tone threshold across both ears at 500 Hz and 1000 Hz was 30 (SD 13) dB of hearing loss, and the average pure-tone threshold across both ears at 4000 Hz was 63 (SD 9) dB of hearing loss. Demographic data for the participants are shown in the table below.

Participant demographics and hearing loss^b in dB (right^c/left^d ear) at each pure-tone frequency.

| ID | Age (y) | Sex | Hearing aids | Years with hearing loss | Audiogram source^a | 250 Hz | 500 Hz | 1000 Hz | 2000 Hz | 4000 Hz | 8000 Hz |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B1 | 64 | Male | No | 4 | Audiologist | 10/15 | 10/15 | 15/20 | 15/20 | 50/55 | 55/80 |
| B2 | 75 | Male | Yes | 15 | Audiologist | 20/35 | 15/25 | 15/15 | 30/50 | 80/75 | 80/85 |
| B3 | 72 | Nonbinary | Yes | 22 | Audiologist | 10/25 | 25/40 | 45/45 | 50/55 | 65/70 | 100/100 |
| B4 | 69 | Female | No | 35 | Audiologist | 35/35 | 45/40 | 45/45 | 55/60 | 55/65 | 55/55 |
| B5 | 74 | Female | No | 15 | Mobile app | 30/28 | 33/28 | 20/20 | 40/35 | 55/53 | 70/70 |
| B6 | 78 | Female | Yes | 10 | Mobile app | 20/20 | 28/28 | 48/48 | 63/73 | 78/80 | 70/70 |
| B7 | 27 | Male | Yes | 10 | Audiologist | 10/10 | 5/5 | 0/0 | 5/5 | 80/80 | 80/80 |
| B8 | 73 | Male | No | 3 | Mobile app | 33/30 | 38/35 | 40/45 | 45/45 | 58/60 | 70/65 |
| B9 | 68 | Male | Yes | 15 | Mobile app | 33/25 | 40/33 | 45/38 | 48/43 | 58/58 | 70/65 |
| B10 | 67 | Female | Yes | 10 | Mobile app | 35/33 | 43/35 | 50/48 | 48/48 | 55/53 | 55/70 |
| B11 | 76 | Male | Yes | 25 | Mobile app | 33/30 | 18/23 | 13/38 | 58/60 | 68/68 | 70/70 |
| B12 | 66 | Female | No | 5 | Audiologist | 25/25 | 35/35 | 40/40 | 60/60 | 60/60 | 80/70 |
| B13 | 79 | Male | Yes | 15 | Audiologist | 40/35 | 35/35 | 40/35 | 65/50 | 65/60 | 65/60 |
| B14 | 67 | Male | Yes | 10 | Audiologist | 5/5 | 5/10 | 10/10 | 25/30 | 55/60 | 75/75 |
| B15 | 74 | Male | No | 5 | Mobile app | 28/20 | 43/30 | 33/25 | 45/45 | 65/68 | 65/70 |
| B16 | 71 | Male | No | 20 | Audiologist | 40/35 | 40/35 | 40/40 | 45/50 | 50/60 | 60/60 |

^a Audiogram source indicates where the audiogram originated from: audiologist indicates the audiogram was measured by an audiologist, and mobile app indicates the participant provided two audiograms measured by the Mimi and Hearing & Ear Age Test mobile apps.
^b Decibels of hearing loss at each pure tone in the right and left ears. Hearing loss values were measured without cochlear implants or hearing aids. Note that 90 dB of hearing loss is the most the test can detect.
^c R: right.
^d L: left.
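To make the audiometric inclusion rule concrete, the following is a minimal sketch in Python (not the authors' actual screening code; the function name and data layout are our own). Participant B7's thresholds from the table serve as the example input.

```python
def meets_hf_loss_criteria(audiogram: dict[int, tuple[int, int]]) -> bool:
    """Check the study's high-frequency hearing loss criterion.

    audiogram maps frequency (Hz) -> (right, left) thresholds in dB of
    hearing loss. Rule: at least 55 dB at 4 kHz averaged across both ears,
    neither ear below 40 dB at 4 kHz, and no more than 35 dB averaged
    across both ears and across the 500-Hz and 1000-Hz tones.
    """
    right_4k, left_4k = audiogram[4000]
    high_ok = (right_4k + left_4k) / 2 >= 55 and min(right_4k, left_4k) >= 40

    low_vals = [*audiogram[500], *audiogram[1000]]
    low_ok = sum(low_vals) / len(low_vals) <= 35

    return high_ok and low_ok

# Participant B7 (from the table above) passes both parts of the rule.
b7 = {500: (5, 5), 1000: (0, 0), 4000: (80, 80)}
print(meets_hf_loss_criteria(b7))  # True
```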
Device
Participants wore a haptic wristband that vibrated to indicate the occurrence of specific phonemes. The wristband contained four vibrating motors embedded in the wrist strap, a microphone, a power button, a microcontroller, and a battery.

The motors were LRAs that vibrated in a sine wave and were capable of rising from 0% to 50% of their maximum amplitude within 30 milliseconds. The motors vibrated at 175 Hz, the frequency at which human skin has the highest sensitivity [ ]. Each motor vibrated at 1.7 GRMS (root mean squared acceleration from gravity; 16.6 m/s²). The motors were separated from one another by center-to-center distances of 18.2 mm and 19.2 mm for the small and large wristband sizes, respectively. Each motor pad contacted the wearer's skin over a rectangular area measuring approximately 8.2 mm by 8.5 mm.

The top of the wristband was a module that contained the power button, a microphone, and a microcontroller. The microphone captured audio and sent the data to the microcontroller. The microcontroller processed the audio data through a phoneme-detection algorithm and vibrated the motors according to the output of the algorithm. Additional microphone characteristics are provided in the microphone characteristics appendix.

Algorithm
The algorithm processed incoming audio to determine when any target phoneme was detected. If a target phoneme was detected, the corresponding motor vibrated for 80 milliseconds.

The four target phonemes were /s/, /t/, /z/, and /k/. Each motor on the wristband was assigned to a different target phoneme (a hypothetical sketch of such a mapping follows below). The four phonemes were chosen based on a combination of the following three factors: (1) how difficult each phoneme is for hearing-impaired listeners to hear, (2) how frequently each phoneme occurs in spoken English, and (3) how well our algorithm can detect each phoneme. Difficulty was pooled from several studies of phoneme confusion in hearing-impaired listeners. Phatak et al [ ] asked older hearing-impaired listeners to identify the consonant in a presented consonant-vowel syllable. Woods et al [ ] presented the California Syllable Test, which uses consonant-vowel-consonant syllables, to older hearing-impaired listeners in both aided (with hearing aids) and unaided conditions. Sher and Owens [ ] presented a four-alternative forced-choice test with consonant-vowel-consonant syllables, where either the initial or final consonant differed between choices. Synthesizing the results of these three studies, we found that the following consonants are the most difficult to hear for a listener with presbycusis: /dh/, /th/, /ng/, /v/, /b/, /hh/, /f/, /z/, /s/, and /t/. Of these, /th/ and /ng/ are present in spoken English less than 1% of the time [ ], and our algorithm performed poorly on /dh/, /b/, /f/, and /hh/.
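The paper does not publish firmware, but the phoneme-to-motor dispatch described above is simple enough to sketch. In the following Python illustration, the mapping order, function name, and motor driver interface are all hypothetical; only the four target phonemes, the 80-ms vibration, and the 175-Hz drive frequency come from the text.

```python
# Hypothetical phoneme-to-motor mapping: each of the 4 target phonemes
# is assigned its own LRA, giving it a unique spatial location on the wrist.
# (The actual assignment used in the study is not reproduced here.)
MOTOR_FOR_PHONEME = {"s": 0, "t": 1, "z": 2, "k": 3}

VIBRATION_MS = 80          # per the paper: 80-ms vibration per detection
DRIVE_FREQUENCY_HZ = 175   # LRA resonance; peak skin sensitivity

def on_phoneme_detected(phoneme: str, motors) -> None:
    """Vibrate the motor assigned to a detected target phoneme.

    `motors` stands in for a list of LRA driver objects; the `vibrate`
    method is an assumed interface for illustration only.
    """
    index = MOTOR_FOR_PHONEME.get(phoneme)
    if index is None:
        return  # not a target phoneme; no vibration
    motors[index].vibrate(frequency_hz=DRIVE_FREQUENCY_HZ,
                          duration_ms=VIBRATION_MS)
```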
Phoneme Detection

The phoneme detection algorithm was trained using the Elastic Compute Cloud on Amazon Web Services. The training data consisted of a combination of pure LibriSpeech audio and LibriSpeech audio rerecorded through the onboard microphone of the wristband. LibriSpeech is a corpus of approximately 1000 hours of English speech with standard American accents, sampled at 16 kHz, that has been shown to produce excellent performance in speech recognition models trained with it [ ]. To produce a corpus of English read speech suitable for training speech recognition systems, LibriSpeech automatically aligns and segments audiobook recordings with the corresponding book text and then filters out portions with noisy transcripts. The purpose of using rerecorded data was to tune the algorithm's parameters to speech sounds representative of those it would encounter from the wristband's microphone.

The algorithm consisted of a feature extraction component and an inference engine component. The feature extraction module segmented the audio stream captured from the microphone into 32-millisecond frames with 16 milliseconds of overlap. Each audio frame underwent analysis to extract distinct features suitable for phoneme recognition. The features were subject to further processing that amplified the phoneme-specific information they contained and ensured robustness to continuously changing environmental conditions.
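As an illustration of the framing scheme described above, here is a minimal sketch, assuming 16-kHz audio (matching LibriSpeech) and using a log power spectrum as a stand-in for the unspecified feature set.

```python
import numpy as np

SAMPLE_RATE = 16_000                  # Hz, matching LibriSpeech
FRAME_LEN = int(0.032 * SAMPLE_RATE)  # 32-ms frames -> 512 samples
HOP_LEN = int(0.016 * SAMPLE_RATE)    # 16-ms overlap -> 256-sample hop

def frame_audio(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D audio signal into overlapping 32-ms frames."""
    n_frames = 1 + (len(signal) - FRAME_LEN) // HOP_LEN
    return np.stack([signal[i * HOP_LEN : i * HOP_LEN + FRAME_LEN]
                     for i in range(n_frames)])

def frame_features(frames: np.ndarray) -> np.ndarray:
    """Toy per-frame features: log power spectrum of a windowed frame.
    (The paper does not specify its features; this is a placeholder.)"""
    windowed = frames * np.hanning(FRAME_LEN)
    power = np.abs(np.fft.rfft(windowed, axis=-1)) ** 2
    return np.log(power + 1e-10)

# Example: 1 second of audio -> 61 frames of 257 spectral bins.
feats = frame_features(frame_audio(np.zeros(SAMPLE_RATE)))
print(feats.shape)  # (61, 257)
```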
The inference engine took these feature vectors as input and output phoneme predictions. The core of the inference engine was a neural network that used a temporal convolutional network structure optimized for real-time speech recognition. The full latency from phoneme onset to vibration onset was 170 milliseconds. The algorithm's performance is shown in the table below, following a brief sketch of this structure.
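The paper does not publish the model itself; the following is a minimal sketch of a causal temporal convolutional network of the general kind described, written in PyTorch, with layer sizes chosen purely for illustration.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block: left-pad so each output frame
    depends only on current and past frames (required for real-time use)."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding only
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = nn.functional.pad(x, (self.pad, 0))  # pad the past, not the future
        return self.relu(self.conv(x))

class PhonemeTCN(nn.Module):
    """Toy TCN mapping per-frame features to per-frame phoneme scores
    for the 4 target phonemes plus a 'none' class. Sizes are illustrative."""
    def __init__(self, n_features: int = 257, n_classes: int = 5):
        super().__init__()
        self.proj = nn.Conv1d(n_features, 64, kernel_size=1)
        self.blocks = nn.Sequential(
            CausalConvBlock(64, kernel_size=3, dilation=1),
            CausalConvBlock(64, kernel_size=3, dilation=2),
            CausalConvBlock(64, kernel_size=3, dilation=4),
        )
        self.head = nn.Conv1d(64, n_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, n_features) -> scores: (batch, time, n_classes)
        x = self.proj(feats.transpose(1, 2))
        return self.head(self.blocks(x)).transpose(1, 2)

scores = PhonemeTCN()(torch.randn(1, 61, 257))
print(scores.shape)  # torch.Size([1, 61, 5])
```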
| Phoneme | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| /k/ | 0.86 | 0.75 | 0.80 |
| /s/ | 0.86 | 0.89 | 0.88 |
| /t/ | 0.85 | 0.65 | 0.74 |
| /z/ | 0.86 | 0.72 | 0.78 |
| Macroaverage | 0.86 | 0.75 | 0.80 |
^a Precision is the ability of a classification model to return only the data points in a class. It is calculated by dividing the number of true positives by the sum of the true positives and false positives.
^b Recall is the ability of a classification model to identify all data points in a relevant class. It is calculated by dividing the number of true positives by the sum of the true positives and false negatives.
^c The F1-score is a single metric that combines recall and precision using the harmonic mean. It is calculated by dividing the number of true positives by the sum of the true positives plus half of the sum of the false positives and false negatives.
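As a quick check, the F1 column follows from the harmonic-mean definition above; a small sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Recompute the F1 column from the precision/recall columns of the table.
for phoneme, p, r in [("k", 0.86, 0.75), ("s", 0.86, 0.89),
                      ("t", 0.85, 0.65), ("z", 0.86, 0.72)]:
    print(phoneme, round(f1_score(p, r), 2))
# Prints: k 0.8, s 0.87, t 0.74, z 0.78. Small differences from the table
# (eg, 0.87 vs 0.88 for /s/) reflect rounding of the reported inputs.
```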
Paradigm
Participants wore the wristband every day for 6 weeks. Each day the participants were required to spend at least 1 hour watching television or listening to an audiobook, podcast, or other speech-based media while wearing the wristband and not wearing earbuds or headphones. The instructions were to choose something engaging so their attention would be directed toward understanding what was being said, while the wristband provided the assistive haptic feedback. No further guidelines were enforced for distance from the audio source or volume. The purpose of this required daily exercise was to ensure the participant was immersed in a minimum amount of active listening each day so the brain would learn to integrate the audible speech sounds with the haptic vibratory representations of the inaudible speech sounds to form a complete meaning. In addition to the required hour of practice, participants were encouraged to wear the wristband whenever engaged in conversation or active listening to speech communication.
Tasks
Abbreviated Profile of Hearing Aid Benefit
Before starting the study and at the end of each week during the study, participants completed a modified version of the Abbreviated Profile of Hearing Aid Benefit (APHAB) that did not include the 6 questions of the aversiveness subscale [ ]. These questions were removed because they ask about the unpleasantness of sounds heard through a hearing aid, which does not apply to our device. The remaining 18 APHAB questions ask about one's ability to understand verbal communication in different scenarios. For example, one of the questions is "When I am in a crowded grocery store, talking with the cashier, I can follow the conversation." In the conventional questionnaire, participants answer the questions independently about their experiences while using and while not using their hearing aids. In this study, participants answered the questions independently about their experiences while using and while not using the wristband. If the participant regularly wore hearing aids, "with the wristband" referred to wearing the wristband in addition to their hearing aids, and "without the wristband" referred to wearing their hearing aids alone. The test was administered through a web-based questionnaire that captured the data onto a datasheet for analysis. The benefit score is calculated by subtracting the final aided score at the conclusion of the trial from the baseline unaided score measured at the beginning of the trial. Lower raw APHAB scores indicate lower levels of disability associated with hearing loss; higher benefit scores indicate more perceived benefit from the intervention.
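A minimal sketch of the benefit-score arithmetic described above (the function and variable names are our own):

```python
def aphab_benefit(baseline_unaided: float, final_aided: float) -> float:
    """APHAB benefit = baseline unaided score minus final aided score.
    Positive values mean less perceived disability with the wristband."""
    return baseline_unaided - final_aided

# Using the study-wide averages reported in the Results section:
print(aphab_benefit(40.32, 27.93))  # 12.39 points
```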
Final Questionnaire

On the final day of the study, participants answered a questionnaire that asked 2 questions using a Likert scale from 1 to 10: "How much did the Clarify wristband help you understand speech?" and "How likely are you to recommend the Clarify wristband to a friend or colleague?"
Ethical Considerations
The study protocol was approved by Solutions IRB (Protocol #2016/01/7), an independent institutional review board accredited by the Association for the Accreditation of Human Research Protection Programs, Inc. All participants gave written informed consent following the Declaration of Helsinki. Upon completion of the study, participants were given a US $100 Amazon gift card for their time. At the conclusion of the study, all data were deidentified to safeguard participant information.
Results
After only 1 week of wearing the wristband daily, the average APHAB benefit score (unaided − aided) was 8.61 points, with a baseline score of 40.32 points that dropped to 31.71 points (SD 12.11; N=16; P=.01, 2-tailed dependent t test). Baseline was defined as the unaided APHAB score taken before starting to use the wristband. As a reminder, if participants regularly used hearing aids, they were asked to answer the unaided questions based on how they felt with their hearing aids on; if they never used hearing aids, they were asked to answer the unaided questions based on how they felt without any hearing assistance. The average aided APHAB score continued to trend down for the remaining 5 weeks of the study. By the end of the 6-week study, the average APHAB benefit score had reached a clinically meaningful and statistically significant value of 12.39 points [ ], from a baseline of 40.32 to a final score of 27.93 (SD 13.11; N=16; P=.002, 2-tailed dependent t test).

Time wearing the wristband and time exposed to speech were verified using backend logging that recorded when the wristband was turned on or off and when a phoneme was detected. Participants wore the wristband for an average of 12.9 (SD 8.1) hours per day and were exposed to speech for an average of 6.7 (SD 3.3) hours per day.

Simple linear regression analysis was used to test whether a participant's baseline APHAB score explains their APHAB benefit score after 6 weeks, which would indicate that those with greater subjective difficulty understanding speech may stand to benefit the most from the haptic assistance of the wristband.
The results of the regression indicate that the average baseline score explains 43% of the variation in the average APHAB benefit score at 6 weeks (F1,14=10.55; P=.006). These results are significant at the P<.05 level.

We compared participants who used hearing aids with those who did not. A total of 9 participants used hearing aids to help them understand speech, and 7 did not. Results showed a 10.78-point greater APHAB benefit score at 6 weeks for participants who did not use hearing aids than for participants who did (t14=2.14; P=.10, 2-tailed independent t test). While the difference in benefit score between the two subgroups was not statistically significant, it did reach the 10-point threshold for clinical relevance [ , ]. The small sample size rendered the study underpowered to detect this difference at P<.05, and further study is necessary to validate this finding. Additionally, while the subgroup without hearing aids started the study at a higher level of disability, they ended the study at a lower level of disability than those with hearing aids. The subgroup without hearing aids started with a baseline APHAB score of 44.09 (SD 16.66) points, while the subgroup with hearing aids started with a baseline score of 37.40 (SD 14.61) points. The subgroup without hearing aids concluded the study with an APHAB score of 25.63 (SD 12.51) points, while the subgroup with hearing aids concluded the study with a score of 29.72 (SD 12.01) points. Another noteworthy difference between the subgroups was that the group who did not wear hearing aids demonstrated both a statistically significant and clinically meaningful APHAB benefit score from baseline, while the subgroup that did wear hearing aids did not. The subgroup that did not wear hearing aids ended the study with an average APHAB benefit score from baseline of 18.45 points (SD 11.70; n=7; P=.005, 2-tailed dependent t test). The subgroup that wore hearing aids ended the study with an average APHAB benefit score from baseline of 7.67 points (SD 12.73; n=9; P=.11, 2-tailed dependent t test).

Subscale analyses were performed for ease of communication (EOC), BN, and reverberation. These subscales reflect speech communication under ideal conditions, in noisy environments, and in reverberant environments, respectively [ ]. The average benefit score for EOC was 15.44 (SD 13.88; N=16; P<.001, 2-tailed dependent t test). Those who wore hearing aids and those who did not had similar EOC benefit scores (t14=2.18; P=.60, 2-tailed independent t test). The average EOC benefit score for those with hearing aids was 13.57 (SD 15.71; n=9; P=.03, 2-tailed dependent t test), and the average EOC benefit score for those without hearing aids was 17.83 (SD 11.85; n=7; P=.01, 2-tailed dependent t test). The average benefit score for BN was 10.88 (SD 17.54; N=16; P=.03, 2-tailed dependent t test). The average BN benefit score for those without hearing aids was 16.99 points higher than for those with hearing aids (t14=2.14; P=.05, 2-tailed independent t test). The average BN benefit score for those with hearing aids was 3.44 (SD 17.5; n=9; P=.54, 2-tailed dependent t test), and the average BN benefit score for those without hearing aids was 20.43 (SD 15.1; n=7; P=.01, 2-tailed dependent t test). The average benefit score for reverberation was 10.84 (SD 16.95; N=16; P=.02, 2-tailed dependent t test). The average reverberation benefit score for those without hearing aids was 11.12 points higher than for those with hearing aids (t14=2.14; P=.20, 2-tailed independent t test). The average reverberation benefit score for those without hearing aids was 17.10 (SD 16.0; n=7; P=.03, 2-tailed dependent t test), and the average reverberation benefit score for those with hearing aids was 5.98 (SD 17.0; n=9; P=.32, 2-tailed dependent t test).

Three of our participants requested to continue using the wristband after the study ended and hence did not fill out the final questionnaire. Of those who did, some had criticisms ("I'm really unsure if the Clarify band was helpful or not") and some had praise ("It was very beneficial. Thank you"); however, the comments were too few to be statistically meaningful.
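A minimal sketch of the statistical procedures reported above, using SciPy; the arrays are synthetic stand-ins for the per-participant APHAB scores (which are not published in this article), so the sketch illustrates the tests rather than reproducing the reported values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins shaped like the study: 16 participants.
baseline = rng.normal(40.32, 15, size=16)             # unaided, week 0
final = baseline - rng.normal(12.39, 13.11, size=16)  # aided, week 6
benefit = baseline - final

# 2-tailed dependent (paired) t test: baseline vs final APHAB scores.
t_dep, p_dep = stats.ttest_rel(baseline, final)

# Simple linear regression: does baseline explain the 6-week benefit?
reg = stats.linregress(baseline, benefit)
print(f"paired t={t_dep:.2f}, P={p_dep:.3f}; R^2={reg.rvalue**2:.2f}")

# 2-tailed independent t test between hearing aid subgroups (n=9 vs n=7).
has_aids = np.array([True] * 9 + [False] * 7)
t_ind, p_ind = stats.ttest_ind(benefit[has_aids], benefit[~has_aids])
print(f"independent t={t_ind:.2f}, P={p_ind:.3f}")
```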
Discussion
In this study, we expanded on our prior work showing that deaf and hard of hearing individuals are capable of identifying sound categories through patterns of vibration applied to the wrist [ ]. Here, we demonstrated that individuals with high-frequency hearing loss can improve their subjective understanding of speech communication using vibrational representations of high-frequency speech sounds on the wrist. The results demonstrate that after 1 week of wearing the wristband, participants improved their subjective ability to understand conversations during daily interactions. They then continued to improve, at a slower rate, throughout the 6-week study. This reflects prior research findings of an innate ability of those with hearing loss to rapidly learn to interpret tactile vibrations as a substitute for audio information [ ]. The understanding of vibrations is further strengthened and refined over time with practice as the portions of the auditory cortex that respond to tactile vibration expand [ - ].

We further found that participants who started the study with a higher baseline APHAB score experienced a greater improvement in their subjective ability to understand speech by the end of the 6-week trial. Of 16 participants, 14 ended the study with an APHAB score of 40 or below (which translates to perceived difficulty understanding speech less than half of the time). A total of 5 participants started the study with an unaided APHAB score of 50 points or higher; for 3 of them, the final APHAB benefit score was >30 points. One potential explanation for why participants who started the trial with greater difficulty understanding speech experienced greater improvement is that more of their auditory cortex was available for the interpretation of tactile sound representation [ ]. It is also possible that participants who started the study with a higher APHAB score simply had more room for improvement, as higher APHAB scores indicate a higher degree of perceived disability. This could be an interesting topic for future research.

Participants without hearing aids demonstrated a trend toward higher self-reported benefit from vibrotactile sensory substitution for speech understanding, though this did not reach statistical significance. Given that this group started the study trending toward a higher APHAB score, we presume the difference arose because the hearing aid group already benefits from their technology and therefore has less room for improvement. It is difficult to predict the interaction between hearing aids and vibrotactile feedback because of the differing signal processing techniques used in digital hearing aid technologies. Digital hearing aids convert sound waves into numerical codes before amplifying them. This code contains information about a sound's frequency and amplitude, allowing the hearing aid to be specially programmed to amplify some frequencies more than others. Digital sound processing capabilities allow an audiologist to adjust the hearing aid to a user's needs and to different listening environments. Digital hearing aids can also be programmed to focus on sounds coming from a specific direction. The wristband may therefore represent sounds that differ significantly from those represented by the hearing aid. Future studies could explore directly connecting the wristband to the user's hearing aids through a Bluetooth signal so that the wristband's vibrations correspond directly with the sounds the user is hearing. For this study, the small sample size rendered it underpowered to detect differences between those who used hearing aids and those who did not at P<.05. Future studies will be designed to investigate this finding further.
Individuals with hearing impairment have great difficulty understanding speech in the presence of BN. It is one of the primary complaints expressed by people with hearing loss and one of the most difficult impairments to resolve. Individuals with hearing loss are unable to resolve the closely spaced harmonics of speech sounds to perform a spectral analysis with enough detail to extract the time-frequency portions of the speech that are relatively spared from corruption by the noise background [ ]. In hearing aids, BN modulators have not been shown to be highly effective in these situations [ ]. In this study, we demonstrated that the addition of vibrotactile feedback in the presence of BN enabled individuals who did not wear hearing aids to understand speech communication better, based on their subjective experience. Interestingly, the final average BN score for the subgroup without hearing aids was 28.95 (SD 16.15; n=7) and the final average BN score for the subgroup with hearing aids was 40.04 (SD 18.78; n=9), suggesting that those who use hearing aids may benefit from using vibrotactile feedback during conversations with BN instead of using their hearing aids. While our data do not offer conclusive evidence of this due to several limitations, it is an area worth exploring in larger studies.

Reverberation is the persistence of a sound after it is produced and is created when the sound reflects off surfaces or objects. It is most noticeable when the source of the sound has stopped but the reflections continue. As the sound is reflected off some surfaces and absorbed by others, its quality degrades. Every room or outdoor environment has a different level of reverberation due to the construction of the space, the reflectiveness of the materials, and the objects in it. Reverberation is natural to every area, but where it is very high, it can reduce speech intelligibility, especially when BN is also present. Individuals with hearing loss, including users of hearing aids, frequently report difficulty understanding speech in reverberant, noisy situations [ ]. Most hearing aids, both digital and analog, have limited ability to help individuals with hearing loss in areas of high reverberation [ ]. We found that the addition of haptic vibration to the wrist in reverberant environments tended to help the participants without hearing aids more than those with hearing aids, though the difference did not reach statistical significance. One possibility to be tested is that individuals who use hearing aids may find haptic vibrations more helpful in reverberant environments when the hearing aids are removed, because this would eliminate any conflict between the digital processing of the hearing aid and the vibrational signals, which convey information about the sounds of speech without processing.

In the context of the APHAB, EOC describes the effort involved in communication under relatively easy listening conditions. The interesting discovery from our results was that individuals who use hearing aids experienced a significant subjective improvement in their understanding of conversations under easy listening conditions. In easy listening environments, where hearing aids help the most and perform the least amount of digital signal processing, the addition of haptic vibrations added the greatest amount of additional benefit. Upon completion of the trial, the average EOC score for the subset of participants who used hearing aids was 14.65 (SD 6.99; n=9), indicating little to no subjective difficulty understanding speech in easy listening environments. For the subset of participants who did not use hearing aids, the average EOC score upon completion of the trial was 16.88 (SD 7.73; n=7). Even without the additional help of hearing aids, these participants ended the study with an equivalent subjective capability for understanding speech in easier listening environments, despite starting the trial with a higher level of disability.

There are limitations to this study. First, the small sample size prevents extrapolation of the results to larger populations; this will be addressed in future studies. We were also limited in our ability to collect speech comprehension data in a noise-controlled environment with standardized volume controls because the testing was done in participants' homes instead of a laboratory. As a result, this study depended on self-report data (APHAB), which always has the potential to be influenced by a placebo effect. Another limitation is that some participant audiograms were assessed via phone apps rather than at an audiologist's office; however, these appear to yield roughly equivalent results [ ]. We also note that the specific type of hearing loss was not controlled beyond meeting the audiogram requirements. Finally, participants could move their hand (and, hence, their wristband), meaning that the microphone placement was not standardized in a single position. We do not consider this a limitation of the study, as the study was meant to test whether a vibrotactile wristband can be used to detect sound. The positive results reported here suggest that the mobility of the microphone does not present a problem.

We have demonstrated that vibrotactile sensory substitution helps individuals with high-frequency hearing loss improve their subjective understanding of verbal communication. The device demonstrated here is a wristband that delivers spatially distinguishable vibrations to the wrist in correspondence with high-frequency phonemes. We found that while both hearing aid and non-hearing aid users with high-frequency hearing loss reported a benefit, vibrotactile feedback tended to be more beneficial for non-hearing aid users. However, the small sample size rendered the study underpowered to detect this difference at P<.05, and further study is necessary to validate this finding. Finally, our results also demonstrated that those who started the study with a higher baseline APHAB score (greater perceived hearing disability) experienced the greatest amount of benefit from vibrotactile feedback.
Conflicts of Interest
DME is the chief executive officer of Neosensory, a neurotech company. IK and TF are employees of Neosensory, and MVP is a former employee of Neosensory.
Microphone characteristics. (PDF File, 59 KB)

Abbreviated Profile of Hearing Aid Benefit summary statistics per week for all participants, the subgroup that did not wear hearing aids, and the subgroup that did wear hearing aids. The benefit score is the baseline score minus the final score. (PDF File, 83 KB)

Abbreviated Profile of Hearing Aid Benefit subscale summary statistics per week for all participants, the subgroup that did not wear hearing aids, and the subgroup that did wear hearing aids. (PDF File, 95 KB)

References
- Olusanya BO, Davis AC, Hoffman HJ. Hearing loss: rising prevalence and impact. Bull World Health Organ. Oct 1, 2019;97(10):646-646A. [CrossRef] [Medline]
- Chang TY, Liu CS, Huang KH, Chen RY, Lai JS, Bao BY. High-frequency hearing loss, occupational noise exposure and hypertension: a cross-sectional study in male workers. Environ Health. Apr 25, 2011;10:35. [CrossRef] [Medline]
- Turner CW, Cummings KJ. Speech audibility for listeners with high-frequency hearing loss. Am J Audiol. Jun 1999;8(1):47-56. [CrossRef] [Medline]
- Chen KH, Su SB, Chen KT. An overview of occupational noise-induced hearing loss among workers: epidemiology, pathogenesis, and preventive measures. Environ Health Prev Med. Oct 31, 2020;25(1):65. [CrossRef] [Medline]
- Hong O, Kerr MJ, Poling GL, Dhar S. Understanding and preventing noise-induced hearing loss. Dis Mon. Apr 2013;59(4):110-118. [CrossRef] [Medline]
- Michels TC, Duffy MT, Rogers DJ. Hearing loss in adults: differential diagnosis and treatment. Am Fam Physician. Jul 15, 2019;100(2):98-108. [Medline]
- Jayakody DMP, Almeida OP, Speelman CP, et al. Association between speech and high-frequency hearing loss and depression, anxiety and stress in older adults. Maturitas. Apr 2018;110:86-91. [CrossRef] [Medline]
- Feng Y, Yin S, Kiefte M, Wang J. Temporal resolution in regions of normal hearing and speech perception in noise for adults with sloping high-frequency hearing loss. Ear Hear. Feb 2010;31(1):115-125. [CrossRef] [Medline]
- Chung K. Challenges and recent developments in hearing aids. Part I. Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends Amplif. 2004;8(3):83-124. [CrossRef] [Medline]
- Hickson L, Meyer C, Lovelock K, Lampert M, Khan A. Factors associated with success with hearing aids in older adults. Int J Audiol. Feb 2014;53 Suppl 1:S18-S27. [CrossRef] [Medline]
- Novich SD, Eagleman DM. Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput. Exp Brain Res. Oct 2015;233(10):2777-2788. [CrossRef] [Medline]
- Perrotta MV, Asgeirsdottir T, Eagleman DM. Deciphering sounds through patterns of vibration on the skin. Neuroscience. Mar 15, 2021;458:77-86. [CrossRef] [Medline]
- Eagleman DM, Perrotta MV. The future of sensory substitution, addition, and expansion via haptic devices. Front Hum Neurosci. Jan 13, 2022;16:1055546. [CrossRef] [Medline]
- Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep. Feb 25, 2022;12(1):3206. [CrossRef] [Medline]
- Weisenberger JM, Russell AF. Comparison of two single-channel vibrotactile aids for the hearing-impaired. J Speech Hear Res. Mar 1989;32(1):83-92. [CrossRef] [Medline]
- Yesantharao LV, Donahue M, Smith A, Yan H, Agrawal Y. Virtual audiometric testing using smartphone mobile applications to detect hearing loss. Laryngoscope Investig Oto. Sep 28, 2022;7(6):2002-2010. [CrossRef] [Medline]
- Verrillo RT. Age related changes in the sensitivity to vibration. J Gerontol. Mar 1980;35(2):185-193. [CrossRef] [Medline]
- Phatak SA, Yoon YS, Gooler DM, Allen JB. Consonant recognition loss in hearing impaired listeners. J Acoust Soc Am. Nov 2009;126(5):2683-2694. [CrossRef] [Medline]
- Woods DL, Arbogast T, Doss Z, Younus M, Herron TJ, Yund EW. Aided and unaided speech perception by older hearing impaired listeners. PLoS One. Mar 2, 2015;10(3):e0114922. [CrossRef] [Medline]
- Sher AE, Owens E. Consonant confusions associated with hearing loss above 2000 Hz. J Speech Hear Res. Dec 1974;17(4):669-681. [CrossRef] [Medline]
- Mines MA, Hanson BF, Shoup JE. Frequency of occurrence of phonemes in conversational English. Lang Speech. Jul-Sep 1978;21(3):221-241. [CrossRef] [Medline]
- Panayotov V, Chen G, Povey D, Khudanpur S. Librispeech: an ASR corpus based on public domain audio books. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2015;5206-5210. [CrossRef]
- Cox RM, Alexander GC. The abbreviated profile of hearing aid benefit. Ear Hear. Apr 1995;16(2):176-186. [CrossRef] [Medline]
- Cox RM. Administration and application of the APHAB. Hearing J. Apr 1997;50(4):32. [CrossRef]
- Soto-Faraco S, Deco G. Multisensory contributions to the perception of vibrotactile events. Behav Brain Res. Jan 23, 2009;196(2):145-154. [CrossRef] [Medline]
- Auer ET, Bernstein LE, Sungkarat W, Singh M. Vibrotactile activation of the auditory cortices in deaf versus hearing adults. Neuroreport. May 7, 2007;18(7):645-648. [CrossRef] [Medline]
- Good A, Reed MJ, Russo FA. Compensatory plasticity in the deaf brain: effects on perception of music. Brain Sci. Oct 28, 2014;4(4):560-574. [CrossRef] [Medline]
- Levänen S, Jousmäki V, Hari R. Vibration-induced auditory-cortex activation in a congenitally deaf adult. Curr Biol. Jul 16, 1998;8(15):869-872. [CrossRef] [Medline]
- McArdle R, Wilson RH. Speech perception in noise: the basics. Perspect Hear Hear Disord Res Diagnostics. Feb 1, 2009;13(1):4-13. URL: https://pubs.asha.org/doi/abs/10.1044/hhd13.1.4 [Accessed 2024-01-01]
- Healy EW, Yoho SE. Difficulty understanding speech in noise by the hearing impaired: underlying causes and technological solutions. Annu Int Conf IEEE Eng Med Biol Soc. Aug 2016;2016:89-92. [CrossRef] [Medline]
- Cueille R, Lavandier M, Grimault N. Effects of reverberation on speech intelligibility in noise for hearing-impaired listeners. R Soc Open Sci. Aug 31, 2022;9(8):210342. [CrossRef] [Medline]
- Reinhart PN, Souza PE, Srinivasan NK, Gallun FJ. Effects of reverberation and compression on consonant identification in individuals with hearing impairment. Ear Hear. Mar-Apr 2016;37(2):144-152. [CrossRef] [Medline]
Abbreviations
APHAB: Abbreviated Profile of Hearing Aid Benefit
BN: background noise
EOC: ease of communication
GRMS: root mean squared acceleration from gravity
LRA: linear resonant actuator
Edited by Edward Meinert; submitted 14.06.23; peer-reviewed by Anonymous, Anonymous, Robert Eikelboom; final revised version received 23.11.23; accepted 13.12.23; published 09.02.24
Copyright© Izzy Kohler, Michael V Perrotta, Tiago Ferreira, David M Eagleman. Originally published in JMIRx Med (https://med.jmirx.org), 9.2.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIRx Med, is properly cited. The complete bibliographic information, a link to the original publication on https://med.jmirx.org/, as well as this copyright and license information must be included.