Abstract
This study introduces a novel BERT-LSH model that incorporates Locality Sensitive Hashing (LSH) to approximate the attention mechanism in the BERT architecture. We examine the computational efficiency and performance of this model compared to a standard baseline BERT model. Our findings reveal that BERT-LSH significantly reduces the computational demand of the self-attention layer while unexpectedly outperforming the baseline model in both pretraining and fine-tuning tasks. These results suggest that the LSH-based attention mechanism not only offers computational advantages but may also enhance the model's ability to generalize from its training data. For more information, visit our GitHub repository: this https URL
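The abstract does not spell out the hashing scheme, so as a rough illustration of LSH-approximated attention, the sketch below buckets queries and keys with random-hyperplane LSH and lets each query attend only to keys in the same bucket. This is a generic construction under stated assumptions, not necessarily the paper's exact method; the names `lsh_attention` and `n_planes` are hypothetical.

```python
import torch
import torch.nn.functional as F

def lsh_attention(q, k, v, n_planes=8, seed=0):
    """Sketch of LSH-approximated self-attention (hypothetical helper).

    q, k, v: (seq_len, d) tensors. Hashes queries and keys with
    random-hyperplane LSH and masks attention to same-bucket pairs,
    so each query scores only a subset of keys.
    """
    seq_len, d = q.shape
    g = torch.Generator().manual_seed(seed)
    planes = torch.randn(d, n_planes, generator=g)  # shared random hyperplanes

    # Sign pattern of the projections -> one integer bucket id per position.
    powers = 2 ** torch.arange(n_planes)
    q_buckets = ((q @ planes) > 0).long() @ powers  # (seq_len,)
    k_buckets = ((k @ planes) > 0).long() @ powers

    # Query i may attend to key j only if their bucket ids match.
    mask = q_buckets.unsqueeze(1) == k_buckets.unsqueeze(0)  # (seq_len, seq_len)

    scores = (q @ k.T) / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    # A query whose bucket contains no keys would get an all -inf row;
    # fall back to uniform attention for those rows to avoid NaNs.
    empty = ~mask.any(dim=1)
    scores[empty] = 0.0
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 16 tokens, hidden size 64.
q, k, v = (torch.randn(16, 64) for _ in range(3))
out = lsh_attention(q, k, v)
print(out.shape)  # torch.Size([16, 64])
```

For clarity this sketch still materializes the full score matrix and merely masks cross-bucket pairs; the computational savings the abstract reports would come from computing only same-bucket pairs in the first place, e.g. by sorting positions by bucket id and attending within chunks.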
URL
https://arxiv.org/abs/2404.08836