[1] Jin, N., Siebert, J., Li, D., & Chen, Q. (2022). A Survey on Table Question Answering: Recent Advances. China Conference on Knowledge Graph and Semantic Computing.
[2] Zhao, W.X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J., & Wen, J. (2023). A Survey of Large Language Models. ArXiv, abs/2303.18223.
[3] Pasupat, P., & Liang, P. (2015). Compositional Semantic Parsing on Semi-Structured Tables. Annual Meeting of the Association for Computational Linguistics.
[4] Chen, W., Wang, H., Chen, J., Zhang, Y., Wang, H., Li, S., Zhou, X., & Wang, W.Y. (2019). TabFact: A Large-scale Dataset for Table-based Fact Verification. ArXiv, abs/1909.02164.
[5] Parikh, A.P., Wang, X., Gehrmann, S., Faruqui, M., Dhingra, B., Yang, D., & Das, D. (2020). ToTTo: A Controlled Table-To-Text Generation Dataset. ArXiv, abs/2004.14373.
[6] Yu, T., Zhang, R., Yang, K., Yasunaga, M., Wang, D., Li, Z., Ma, J., Li, I.Z., Yao, Q., Roman, S., Zhang, Z., & Radev, D.R. (2018). Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task. ArXiv, abs/1809.08887.
[7] Zha, L., Zhou, J., Li, L., Wang, R., Huang, Q., Yang, S., Yuan, J., Su, C., Li, X., Su, A., Tao, Z., Zhou, C., Shou, K., Wang, M., Zhu, W., Lu, G., Ye, C., Ye, Y., Ye, W., Zhang, Y., Deng, X., Xu, J., Wang, H., Chen, G., & Zhao, J.J. (2023). TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT. ArXiv, abs/2307.08674.
[8] Zhang, T., Yue, X., Li, Y., & Sun, H. (2023). TableLlama: Towards Open Large Generalist Models for Tables. ArXiv, abs/2311.09206.
[9] Zhong, R., Snell, C.B., Klein, D., & Eisner, J. (2022). Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL. Conference on Empirical Methods in Natural Language Processing.
[10] Yang, B., Tang, C., Zhao, K., Xiao, C., & Lin, C. (2023). Effective Distillation of Table-based Reasoning Ability from LLMs. ArXiv, abs/2309.13182.
[11] Bian, J., Qin, X., Zou, W., Huang, M., & Zhang, W. (2023). HELLaMA: LLaMA-based Table to Text Generation by Highlighting the Important Evidence. ArXiv, abs/2311.08896.
[12] Sun, R., Arik, S.Ö., Sinha, R., Nakhost, H., Dai, H., Yin, P., & Pfister, T. (2023). SQLPrompt: In-Context Text-to-SQL with Minimal Labeled Data. Conference on Empirical Methods in Natural Language Processing.
[13] Ni, A., Iyer, S., Radev, D.R., Stoyanov, V., Yih, W., Wang, S.I., & Lin, X.V. (2023). LEVER: Learning to Verify Language-to-Code Generation with Execution. ArXiv, abs/2302.08468.
[14] Li, Z., & Xie, T. (2024). Using LLM to select the right SQL Query from candidates. ArXiv, abs/2401.02115.
[15] Chen, W. (2022). Large Language Models are few(1)-shot Table Reasoners. ArXiv, abs/2210.06710.
[16] Chang, S., & Fosler-Lussier, E. (2023). Selective Demonstrations for Cross-domain Text-to-SQL. ArXiv, abs/2310.06302.
[17] Gao, D., Wang, H., Li, Y., Sun, X., Qian, Y., Ding, B., & Zhou, J. (2023). Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation. ArXiv, abs/2308.15363.
[18] Nan, L., Zhao, Y., Zou, W., Ri, N., Tae, J., Zhang, E., Cohan, A., & Radev, D.R. (2023). Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies. ArXiv, abs/2305.12586.
[19] Zhao, B., Ji, C., Zhang, Y., He, W., Wang, Y., Wang, Q., Feng, R., & Zhang, X. (2023). Large Language Models are Complex Table Parsers. Conference on Empirical Methods in Natural Language Processing.
[20] Sui, Y., Zou, J., Zhou, M., He, X., Du, L., Han, S., & Zhang, D. (2023). TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning. ArXiv, abs/2312.09039.
[21] Zhang, H., Cao, R., Chen, L., Xu, H., & Yu, K. (2023). ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought. Conference on Empirical Methods in Natural Language Processing.
[22] Ye, Y., Hui, B., Yang, M., Li, B., Huang, F., & Li, Y. (2023). Large Language Models are Versatile Decomposers: Decomposing Evidence and Questions for Table-based Reasoning. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval.
[23] Pourreza, M.R., & Rafiei, D. (2023). DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction. ArXiv, abs/2304.11015.
[24] Lei, F., Luo, T., Yang, P., Liu, W., Liu, H., Lei, J., Huang, Y., Wei, Y., He, S., Zhao, J., & Liu, K. (2023). TableQAKit: A Comprehensive and Practical Toolkit for Table-based Question Answering. ArXiv, abs/2310.15075.
[25] Kothyari, M., Dhingra, D., Sarawagi, S., & Chakrabarti, S. (2023). CRUSH4SQL: Collective Retrieval Using Schema Hallucination For Text2SQL. ArXiv, abs/2311.01173.
[26] Kong, K., Zhang, J., Shen, Z., Srinivasan, B., Lei, C., Faloutsos, C., Rangwala, H., & Karypis, G. (2024). OpenTab: Advancing Large Language Models as Open-domain Table Reasoners.
[27] Xue, S., Jiang, C., Shi, W., Cheng, F., Chen, K., Yang, H., Zhang, Z., He, J., Zhang, H., Wei, G., Zhao, W., Zhou, F., Qi, D., Yi, H., Liu, S., & Chen, F. (2023). DB-GPT: Empowering Database Interactions with Private Large Language Models. ArXiv, abs/2312.17449.
[28] Wang, T., Lin, H., Han, X., Sun, L., Chen, X., Wang, H., & Zeng, Z. (2023). DBCopilot: Scaling Natural Language Querying to Massive Databases. ArXiv, abs/2312.03463.
[29] Wang, B., Ren, C., Yang, J., Liang, X., Bai, J., Zhang, Q., Yan, Z., & Li, Z. (2023). MAC-SQL: A Multi-Agent Collaborative Framework for Text-to-SQL. ArXiv, abs/2312.11242.
[30] Jiang, J., Zhou, K., Dong, Z., Ye, K., Zhao, W.X., & Wen, J. (2023). StructGPT: A General Framework for Large Language Model to Reason over Structured Data. ArXiv, abs/2305.09645.
[31] Nan, L., Zhang, E., Zou, W., Zhao, Y., Zhou, W., & Cohan, A. (2023). On Evaluating the Integration of Reasoning and Action in LLM Agents with Database Question Answering. ArXiv, abs/2311.09721.
[32] Cao, Y., Chen, S., Liu, R., Wang, Z., & Fried, D. (2023). API-Assisted Code Generation for Question Answering on Varied Table Structures. ArXiv, abs/2310.14687.
[33] Cheng, Z., Xie, T., Shi, P., Li, C., Nadkarni, R., Hu, Y., Xiong, C., Radev, D.R., Ostendorf, M., Zettlemoyer, L., Smith, N.A., & Yu, T. (2022). Binding Language Models in Symbolic Languages. ArXiv, abs/2210.02875.
[34] Zhang, Y., Henkel, J., Floratou, A., Cahoon, J., Deep, S., & Patel, J.M. (2023). ReAcTable: Enhancing ReAct for Table Question Answering. ArXiv, abs/2310.00815.
[35] Saha, S., Yu, X.V., Bansal, M., Pasunuru, R., & Celikyilmaz, A. (2022). MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation. ArXiv, abs/2212.08607.
[36] Wang, Z., Zhang, H., Li, C., Eisenschlos, J.M., Perot, V., Wang, Z., Miculicich, L., Fujii, Y., Shang, J., Lee, C., & Pfister, T. (2024). Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding. ArXiv, abs/2401.04398.
[37] Yin, P., Neubig, G., Yih, W., & Riedel, S. (2020). TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. ArXiv, abs/2005.08314.