Enhanced Smart Hearing Aid Simulation with Scene Classification and Frequency Compression

Jasper Flavia, J and Abhigeetha, K and Dharanika, V and Ashavikashini, K and Selvakumar, D (2025) Enhanced Smart Hearing Aid Simulation with Scene Classification and Frequency Compression. 2025 International Conference on Next Generation Computing Systems (ICNGCS). pp. 1-8.

Enhanced_Smart_Hearing_Aid_Simulation_with_Scene_Classification_and_Frequency_Compression.pdf - Published Version


Abstract

Contemporary hearing aids lack the capability to learn intelligently from dynamic real-world scenarios and do not provide personalized hearing improvement for people with varied types of hearing loss. This paper introduces an improved, real-time smart hearing aid system that incorporates acoustic scene classification, frequency compression, and multi-user personalization on the BioAid framework. Addressing the shortcomings of the existing literature, this research proposes an improved and more scalable method for assistive hearing.

The system starts with live audio capture through low-latency input devices and extracts Mel Frequency Cepstral Coefficients (MFCCs) and spectral features to identify the acoustic scene using machine learning algorithms such as Support Vector Machines (SVMs) and Random Forest classifiers. Once a scene is identified, the system selects a personalized hearing aid preset from an expanded list of BioAid options, dynamically matched to each individual's hearing profile and preferences.

To address the widespread issue of high-frequency hearing loss, an FFT-based frequency compression module is proposed. This module relocates critical high-frequency components into lower, more audible frequency bands and improves speech intelligibility without distorting the acoustic context. The model's inference pipeline is engineered for real-time performance and efficient deployment with TensorFlow Lite or TinyML, making it well suited to edge devices such as the Raspberry Pi or bespoke embedded hearing hardware.

The system is evaluated on public datasets, including DCASE2017 for environmental sound and LJ Speech for clean speech, together with a custom multi-user dataset for personalized preset mapping. Experimental results show improved scene classification accuracy, real-time adaptability, and effective frequency compression. This work advances intelligent assistive technology by providing a usable and efficient solution for real-world hearing aid deployment.
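The front-end described in the abstract — MFCC features feeding an SVM scene classifier — can be sketched as follows. This is a minimal illustration on synthetic audio, not the authors' implementation; the function names (mfcc_mean, mel_filterbank) and all parameter values (frame size, filterbank size, kernel choice) are assumptions for demonstration only.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular mel filterbank over the rFFT bins (textbook construction)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, cen, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, cen):
            fb[m - 1, k] = (k - lo) / max(cen - lo, 1)
        for k in range(cen, hi):
            fb[m - 1, k] = (hi - k) / max(hi - cen, 1)
    return fb

def mfcc_mean(x, sr, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Mean MFCC vector over all frames: window -> power spectrum ->
    mel filterbank -> log -> DCT, keeping the first n_mfcc coefficients."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    spec = np.empty((n_frames, n_fft // 2 + 1))
    for i in range(n_frames):
        spec[i] = np.abs(np.fft.rfft(x[i * hop:i * hop + n_fft] * win)) ** 2
    logmel = np.log(spec @ mel_filterbank(sr, n_fft, n_mels).T + 1e-10)
    return dct(logmel, type=2, axis=1, norm="ortho")[:, :n_mfcc].mean(axis=0)

# Toy "scenes": a tonal signal vs. broadband noise, one second each at 16 kHz.
rng = np.random.default_rng(0)
sr = 16000
tone = lambda: np.sin(2 * np.pi * 440 * np.arange(sr) / sr) + 0.05 * rng.standard_normal(sr)
noise = lambda: rng.standard_normal(sr)

X = np.array([mfcc_mean(tone(), sr) for _ in range(10)]
             + [mfcc_mean(noise(), sr) for _ in range(10)])
y = np.array([0] * 10 + [1] * 10)  # 0 = tonal scene, 1 = noisy scene
clf = SVC(kernel="rbf").fit(X, y)
```

In the paper's pipeline the classifier's predicted scene label would index into the per-user BioAid preset table; here the two synthetic classes simply stand in for acoustic scenes.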
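The FFT-based frequency compression idea — remapping spectral content above a cutoff into a lower, more audible band — can be illustrated with a crude whole-signal sketch. The cutoff frequency, compression ratio, and function name below are hypothetical; a real hearing aid would process short overlapping frames with windowing and overlap-add rather than one FFT over the entire signal.

```python
import numpy as np

def compress_high_frequencies(signal, sr, cutoff_hz=3000.0, ratio=2.0):
    """Remap energy at frequency f > cutoff_hz to cutoff_hz + (f - cutoff_hz) / ratio.
    Content at or below the cutoff passes through unchanged."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    out = np.zeros_like(spectrum)
    for i, f in enumerate(freqs):
        if f <= cutoff_hz:
            out[i] += spectrum[i]
        else:
            target = cutoff_hz + (f - cutoff_hz) / ratio
            j = int(round(target * n / sr))  # nearest destination bin
            if j < len(out):
                out[j] += spectrum[i]
    return np.fft.irfft(out, n=n)

# A 6 kHz tone (above the 3 kHz cutoff) should land at
# 3000 + (6000 - 3000) / 2 = 4500 Hz after compression.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 6000 * t)
shifted = compress_high_frequencies(tone, sr, cutoff_hz=3000.0, ratio=2.0)
peak_hz = np.argmax(np.abs(np.fft.rfft(shifted))) * sr / len(shifted)
```

Because low frequencies pass through untouched, this kind of mapping preserves the existing acoustic context while making high-frequency cues reachable for listeners with high-frequency loss, which is the stated goal of the paper's compression module.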

Item Type: Article
Subjects: C Computer Science and Engineering > Machine Learning
E Electronics and Communication Engineering > Signal Processing
Divisions: Electronics and Communication Engineering
Depositing User: Dr Krishnamurthy V
Date Deposited: 13 Dec 2025 10:44
Last Modified: 13 Dec 2025 10:45
URI: https://ir.psgitech.ac.in/id/eprint/1602
