
arXiv: 2410.11457
In supervised fine-tuning (SFT) for Text2SQL tasks, particularly for databases with numerous tables, encoding schema features requires excessive tokens, escalating GPU resource requirements during fine-tuning. To address this, we propose LR-SQL, a general dual-model SFT framework comprising a schema linking model and an SQL generation model. At the core of our framework lies the schema linking model, which is trained on a novel downstream task termed slice-based related table filtering. This task dynamically partitions a database into adjustable slices of tables and sequentially evaluates the relevance of each slice to the input query, thereby reducing token consumption per iteration. However, slicing fragments the database's structural information, impairing the model's ability to comprehend the complete database. We therefore integrate Chain of Thought (CoT) into training, enabling the model to reconstruct the full database context from discrete slices and thereby improving inference fidelity. Finally, the SQL generation model uses the tables selected by the schema linking model to generate the final SQL. Extensive experiments demonstrate that LR-SQL reduces total GPU memory usage by 40% compared to baseline SFT methods, with only a 2% drop in table prediction accuracy for the schema linking task and a negligible 0.6% decrease in overall Text2SQL Execution Accuracy.
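To make the slice-based related table filtering step concrete, the sketch below shows one plausible way to partition a schema into fixed-size slices and query a schema linking model slice by slice. The function names (slice_schema, filter_related_tables) and the score_slice callback are illustrative assumptions for this sketch, not the authors' released code or API.

    # Illustrative sketch of slice-based related table filtering
    # (hypothetical helper names, not the paper's implementation).
    from typing import Callable, Dict, List

    def slice_schema(tables: Dict[str, str], slice_size: int) -> List[Dict[str, str]]:
        """Partition the table set into slices of at most `slice_size` tables."""
        names = list(tables)
        return [
            {name: tables[name] for name in names[i:i + slice_size]}
            for i in range(0, len(names), slice_size)
        ]

    def filter_related_tables(
        question: str,
        tables: Dict[str, str],  # table name -> DDL / column description
        score_slice: Callable[[str, Dict[str, str]], List[str]],  # schema linking model call
        slice_size: int = 8,
    ) -> List[str]:
        """Collect tables judged relevant to the question, one slice at a time."""
        related: List[str] = []
        for schema_slice in slice_schema(tables, slice_size):
            # Each call encodes only one slice of the schema, keeping the
            # per-iteration token count (and GPU memory) bounded.
            related.extend(score_slice(question, schema_slice))
        return related

Under this view, the downstream SQL generation model would then receive only the question together with the filtered tables, rather than the full schema.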
Computer Science - Computation and Language, Computer Science - Databases, Computer Science - Artificial Intelligence, Computer Science - Information Retrieval
