Current music experience
To understand the current music-listening journey of people with hearing loss, I reached out to individuals on various social platforms and asked for interviews. Specifically, I wanted to learn about their current experience, what music means to them, and how hearing loss affects their experience. I also wanted to test my hypothesis that assistive elements (music visualization, tactile feedback, sign language, etc.) could improve that experience.
I soon realized that my initial perception of the problem was superficial and built on inaccurate assumptions. Before I dug deeply into the hearing-impaired community's situation, I thought simply adding visual or tangible elements could solve the problem. However, the real problem isn't improving an already-existing music experience; it's establishing a reasonably enjoyable baseline music experience in the first place.
[interviews with hearing-impaired participants]
From my first round of interviews with 18 hearing-impaired participants, I discovered:
The deaf community is a vast demographic with diverse needs
Lyrics are critical to understanding music, and hearing-impaired listeners rely heavily on reading them
Many hearing-impaired listeners have trouble recognizing music (especially lyrics) amid environmental noise, which degrades their music experience
Defining target user group
Since the deaf community has such varied needs, I created a 2-dimensional diagram to pinpoint my target user group. The diagram uses two self-defined axes: “hearing aid vs. cochlear implant (CI)” and “music enjoyment vs. recognition”.
A surprise!🎁
During my interview process, I was honored to be contacted by Yang Yang, the chairman of the China Deaf Association, who expressed her interest in and support for my project. She connected me with the broader hearing-impaired community for interviews, and with experts for professional advice.
CI users’ problem
In my first round of interviews, most CI users described negative music experiences tied to being unable to fully recognize lyrics. I also learned that CI users usually go through one to two years of aural rehabilitation after surgery. However, current aural rehabilitation aims only to optimize speech-related abilities. The few rehab apps that do target music training focus on pitch, instrument timbre, or melody; none is dedicated to lyrics recognition training.
Cochlear CoPilot
[speech listening only]
Speech ID 2
[speech listening only]
Hear Beyond
[music (pitch) listening only]
To verify whether improving lyrics recognition is something CI users truly want, I created 3 storyboards to test the idea. The storyboards depict 3 different solutions for building a baseline music experience during aural rehab. 14 of the 18 CI users preferred the lyrics recognition training storyboard.
[a] Introducing singing into otherwise monotonous speech-focused aural rehab
[b] Providing a vast environmental-sound database, contributed to by and engaging the whole of society
[c] Lyrics recognition training
Moving to a solution
Having decided to create a novel music app that provides lyrics recognition training, I faced a new question: how do I design training that is both effective and efficient? I consulted an audiologist, who said: