Abstract
In this project, we explored machine learning approaches for predicting hearing loss thresholds from 3D gray-matter brain images. We addressed the problem in two phases. In the first phase, we trained a 3D CNN model to compress the high-dimensional input into a latent space and decode it back into the original image, so that the latent code represents the input in a rich feature space. In the second phase, we used this model to reduce the input to these rich features and trained standard machine learning models on them to predict hearing thresholds. We experimented with autoencoders and variational autoencoders for dimensionality reduction in the first phase, and with random forests, XGBoost, and a multi-layer perceptron for regressing the thresholds. Splitting the given data set into training and testing sets, we achieved test-set RMSEs of 8.80 for PT500 and 22.57 for PT4000, with the multi-layer perceptron yielding the lowest RMSE among the models. Our approach leverages the ability of VAEs to capture complex, non-linear relationships within high-dimensional neuroimaging data. We evaluated the models on several metrics, focusing on root mean squared error (RMSE); the results highlight the efficacy of the multi-layer perceptron, which outperformed the other techniques. This project advances the application of data mining in medical diagnostics and enhances our understanding of age-related hearing loss through innovative machine-learning frameworks.
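The two-phase pipeline described above (unsupervised dimensionality reduction, then regression on the latent features) can be sketched in miniature. This is an illustrative approximation, not the paper's actual implementation: random arrays stand in for flattened 3D gray-matter volumes and hearing thresholds, and a single-hidden-layer MLP trained to reconstruct its input stands in for the 3D CNN autoencoder/VAE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                       # stand-in for flattened 3D voxel data
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)  # hypothetical threshold target

# Phase 1: autoencoder-style dimensionality reduction.
# An MLP is trained to reconstruct its own input; its hidden layer is the latent code.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)

def encode(X):
    # Forward pass through the trained encoder half (input -> hidden activations).
    return np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])

Z = encode(X)  # 200 samples reduced from 64 to 8 latent features

# Phase 2: train a standard regressor on the latent features and score it with RMSE.
X_tr, X_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
reg.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, reg.predict(X_te)) ** 0.5
```

In the paper's setting, the Phase 2 regressor is swapped among random forest, XGBoost, and a multi-layer perceptron, and the RMSE comparison on the held-out split selects the best model.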
URL
https://arxiv.org/abs/2405.00142