Abstract
Image and multimodal machine learning tasks are very challenging to solve when data is poorly distributed. In particular, data availability and privacy restrictions exacerbate these hurdles in the medical domain. The state of the art in image generation quality is held by Latent Diffusion models, making them prime candidates for tackling this problem. However, a few key issues remain, such as the difficulty of generating data from under-represented classes and a slow inference process. To mitigate these issues, we propose a new method for image augmentation on long-tailed data that leverages the rich latent space of pre-trained Stable Diffusion models. We create a modified separable latent space in which to mix head- and tail-class examples. We build this space via Iterated Learning of underlying sparsified embeddings, which we apply to task-specific saliency maps via a K-NN approach. Code is available at this https URL
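To make the core augmentation idea concrete, here is a minimal sketch of mixing a tail-class latent with its nearest head-class latents via K-NN. This is an illustrative toy, not the paper's implementation: the function names, the Euclidean K-NN, and the convex-combination mixing rule (weighted by a hypothetical `alpha`) are all assumptions; the paper additionally uses Iterated Learning over sparsified embeddings and task-specific saliency maps, which are omitted here.

```python
import numpy as np

def knn_indices(query, bank, k):
    """Return indices of the k nearest vectors in `bank` to `query` (Euclidean distance)."""
    dists = np.linalg.norm(bank - query, axis=1)
    return np.argsort(dists)[:k]

def mix_latents(tail_latent, head_bank, k=3, alpha=0.7):
    """Augment a tail-class latent by blending it with the mean of its
    k nearest head-class latents; `alpha` weights the original tail example.
    (alpha and k are illustrative hyperparameters, not values from the paper.)"""
    idx = knn_indices(tail_latent, head_bank, k)
    head_mean = head_bank[idx].mean(axis=0)
    return alpha * tail_latent + (1.0 - alpha) * head_mean

# Toy example with 4-dimensional latents standing in for diffusion latents.
rng = np.random.default_rng(0)
head_bank = rng.normal(size=(100, 4))   # latents from a head (frequent) class
tail_latent = rng.normal(size=4)        # one latent from a tail (rare) class
augmented = mix_latents(tail_latent, head_bank)
print(augmented.shape)
```

In a full pipeline, the mixed latent would then be decoded by the pre-trained Stable Diffusion decoder to produce a synthetic image for the under-represented class.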
URL
https://arxiv.org/abs/2405.01705