Paper Reading AI Learner

LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency

2024-04-19 13:17:07
Zhaodonghui Li, Haitao Yuan, Huiming Wang, Gao Cong, Lidong Bing

Abstract

Query rewrite, which aims to generate more efficient queries by altering a SQL query's structure without changing the query result, has been an important research problem. To preserve equivalence between the rewritten query and the original one, traditional query rewrite methods follow certain rewrite rules. However, several problems remain. First, existing methods for finding the optimal choice or sequence of rewrite rules are limited, and the search process is often resource-intensive. Methods for discovering new rewrite rules typically require complicated proofs of structural logic or extensive user interactions. Second, current query rewrite methods usually rely heavily on DBMS cost estimators, which are often inaccurate. In this paper, we address these problems by proposing LLM-R2, a novel query rewrite method that adopts a large language model (LLM) to propose possible rewrite rules for a database rewrite system. To further improve the LLM's inference ability in recommending rewrite rules, we train a contrastive model via curriculum learning to learn query representations and select effective query demonstrations for the LLM. Experimental results show that our method can significantly improve query execution efficiency and outperform the baseline methods. In addition, our method enjoys high robustness across different datasets.
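As a concrete illustration of the rule-based rewriting the abstract describes, the sketch below applies an ordered sequence of named rewrite rules to a SQL string. The rule names (`FILTER_REDUCE`, `SORT_REMOVE`) and the string-level transformations are simplified stand-ins invented for this example; a real rewrite system of the kind LLM-R2 drives operates on relational-algebra plans rather than raw SQL text, and its rule catalogue is far richer.

```python
import re

# Each toy rule maps a SQL string to an equivalent one. The names loosely
# mirror the style of optimizer rule catalogues but are illustrative only.

def filter_reduce(sql: str) -> str:
    """Drop the always-true predicate `1 = 1` from a WHERE clause."""
    sql = re.sub(r"\bWHERE\s+1\s*=\s*1\s+AND\s+", "WHERE ", sql, flags=re.IGNORECASE)
    return re.sub(r"\s+AND\s+1\s*=\s*1\b", "", sql, flags=re.IGNORECASE)

def sort_remove(sql: str) -> str:
    """Drop ORDER BY inside an IN-subquery, where ordering cannot matter."""
    return re.sub(
        r"(IN\s*\(SELECT[^()]*?)\s+ORDER\s+BY\s+[^()]+\)",
        r"\1)",
        sql,
        flags=re.IGNORECASE,
    )

RULES = {"FILTER_REDUCE": filter_reduce, "SORT_REMOVE": sort_remove}

def rewrite(sql: str, rule_sequence: list) -> str:
    """Apply a recommended sequence of rewrite rules, in order."""
    for name in rule_sequence:
        sql = RULES[name](sql)
    return sql

query = ("SELECT id FROM orders WHERE 1 = 1 "
         "AND id IN (SELECT oid FROM items ORDER BY oid)")
print(rewrite(query, ["FILTER_REDUCE", "SORT_REMOVE"]))
# -> SELECT id FROM orders WHERE id IN (SELECT oid FROM items)
```

In this framing, the role of the LLM in LLM-R2 is to recommend which rules to apply and in what order (the `rule_sequence` above), guided by demonstrations of similar queries selected via the learned query representations.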

URL

https://arxiv.org/abs/2404.12872

PDF

https://arxiv.org/pdf/2404.12872.pdf
