
Hearing aid users' listening experience

There are some things we take for granted once we have learned them, like riding a bike! But have you ever wondered what it is like having to learn how to hear again?

 

This is the reality for many people when they get a hearing aid. A hearing aid is fitted based on a combination of an audiogram and the conversation between the patient and the audiologist. To ensure a correct fit, the patient and the audiologist need a mutual understanding of what sounds right or wrong. But do these two groups understand and talk about sound in the same way?
As part of my master's program in engineering psychology, my group and I worked on developing scales of sound perception to help patients and audiologists gain a better shared language when talking about sound.

This project was made in collaboration with the hearing clinic at Aalborg University Hospital and the Better Hearing Rehabilitation (BEAR) project.

About this case

This project is about determining a common vocabulary for describing sound between new hearing aid users and the audiologists adjusting their hearing aids. 

The process in headlines

1. Observation & interviews

2. Word elicitation & card sorting

3. Scale development

4. User testing

1. Observation & interviews

Wait a second! You may know what a hearing aid is, but do you know how it works? If not, I think this is going to be very confusing! 

To put it simply, hearing is the result of the brain perceiving sound waves of different frequencies as different sounds. A modern hearing aid is designed to capture sound in the frequency range where a patient has a hearing impairment and transform it into frequencies that the patient is able to perceive. As a result, the world may sound very different to a new hearing aid user! So to get the best results from this movement of frequencies, the hearing aid needs to be properly fitted to the user.
 

With that out of the way, how does this actually happen? We didn't know, so we decided to go out and find out! 

See soundmeme.png

From hearing tests, the audiologist creates a graph called an audiogram to visualize and examine potential hearing impairments. So you could actually argue that audiologists are able to see sound!
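To make this concrete, here is a minimal sketch (not from the project) of how an audiogram can be represented as data: the quietest audible level in dB HL at each test frequency. The severity bands in `classify` follow one common WHO-style grading and are an assumption for illustration, not the clinic's scheme.

```python
# Illustrative only: an audiogram maps test frequencies (Hz) to hearing
# thresholds (dB HL). The severity bands below follow one common
# WHO-style grading and are an assumption, not the clinic's scheme.

def classify(threshold_db):
    """Label a hearing threshold with a rough severity band."""
    if threshold_db <= 25:
        return "normal"
    if threshold_db <= 40:
        return "mild loss"
    if threshold_db <= 60:
        return "moderate loss"
    if threshold_db <= 80:
        return "severe loss"
    return "profound loss"

# Hypothetical thresholds for one ear at standard audiometric frequencies
audiogram = {250: 15, 500: 20, 1000: 30, 2000: 45, 4000: 60, 8000: 70}

for hz, db in audiogram.items():
    print(f"{hz} Hz: {db} dB HL ({classify(db)})")
```

A sloping pattern like this hypothetical one, with worse thresholds at high frequencies, is exactly the kind of impairment a hearing aid fitting compensates for.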

To collect information about the process and communication during hearing examinations and hearing aid fittings, we participated in and observed fifteen sessions at the department of audiology at Aalborg University Hospital. Here we gathered insights into the interactions and communication between doctors, patients, and audiologists. To further qualify these insights, semi-structured interviews were carried out after the sessions.

The collected data was transcribed, and words and sentences describing aided listening experiences were extracted. This resulted in around 400 statements.

pi1.png

Would it be a true UX portfolio without Post-its? Above: all the statements collected at the department of audiology at Aalborg University Hospital

2. Word elicitation & card sorting

So there we were with approximately 400 statements to analyze! We did what any good UX practitioner would do: we brought out the Post-it notes!

To condense and concretize the collected data, we used an affinity diagram. The goal was to find suitable attributes for describing the listening experience.
The collected statements were grouped based on context, and each group was then assigned a headline. This left us with 80 groups, and thereby 80 words to describe the perception of sound in aided listening. But how do these groups relate to each other? We needed the help of the professionals.

Card sorting

To assign the 80 words describing the aided listening experience to possible adjustments in a hearing aid, a moderated workshop was held in which three audiologists carried out an open card sort. The audiologists worked together, discussing and categorizing the different sounds. This resulted in 13 categories, of which two (Tinnitus and Adaptation) were discarded because they were not assessed as relevant for this investigation, leaving 11 groups.

3. Scale development

After the card sorting we were left with 65 attributes distributed across the 11 groups, with 1 to 13 attributes in each. Now we wanted to investigate whether patients have a similar understanding of these attributes and the connections between them!

To do this, each of the attributes was transformed into a scale so we could test for internal consistency between the attributes and groups. 59 of the attributes were made into 7-point Likert scales ranging from strongly disagree to strongly agree. The remaining six attributes were paired up as three 7-point bipolar semantic differential scales. A total of 62 scales were created. These were then set up in a survey.
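As an illustrative sketch of what testing internal consistency can look like (not necessarily the analysis used in the study), Cronbach's alpha is one common measure: it compares the variance of the individual items in a group with the variance of their summed scores. The 7-point ratings below are made up, not the project's data.

```python
# Illustrative sketch: Cronbach's alpha as one common measure of
# internal consistency for a group of Likert-scale items.
# The ratings below are invented, not the study's data.

def cronbach_alpha(items):
    """items: list of per-item score lists, all rated by the same respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(item) for item in items)
    # Total score per respondent across the k items in the group
    totals = [sum(item[i] for item in items) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three hypothetical items from one attribute group, rated by five users
group = [
    [6, 5, 7, 4, 6],
    [5, 5, 6, 4, 7],
    [6, 4, 7, 3, 6],
]
print(round(cronbach_alpha(group), 2))  # high alpha: the items move together
```

When the items in a group track each other like this, a high alpha suggests they tap the same underlying perception; a low alpha would hint that an attribute belongs in a different group.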

 

Time to test these out on real people! 

survey.png

Preview of the survey used to test how the 65 attributes fit the 11 groups created from the card sort with the audiologists

4. User testing

To test out the scales, eight hearing aid users and two audiologists filled out the survey.
The surveys were followed up with semi-structured interviews to qualify the participants' understanding of the survey and its perceived usefulness. Attitudes toward using the scales as a starting point for dialogue during examinations were positive. Examples of positive statements were “easy to survey and use”, “quick to fill out” and “the checkboxes are good”.

Conclusion 

So what did we figure out? Our study showed acceptable internal consistency between most of the groups and attributes found through word elicitation and card sorting, and it showed that it is possible to find a common understanding of sound between patients and audiologists to support the adjustment of hearing aids. The work has been used as the basis for an article by Dorte Hammershøi and Anne Wolff published in Auditory Learning in Biological and Artificial Systems. The article can be found here.
