Prompt Optimization Methods for Large Language Models with Long Text Input

Authors

  • Yi Ren, Institute of Software, Chinese Academy of Sciences, Beijing, China
  • Shoubin Li, Institute of Software, Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.62677/IJETAA.2402109

Keywords:

Long text input, Large language model, Prompt, Question-answering system

Abstract

When faced with long text input, the results generated by large language models sometimes fail to meet user expectations. Because the input is long and complex, users often do not know how to modify it to obtain the desired results. To address this dilemma, we propose a prompt optimization method for large language models with long text input. The method determines the influence weight of each semantic segment on the generated result, guiding users in producing the desired text with large language models. Experimental results show that evaluating the importance of different semantic segments in the text of a military question-answering system and improving the input content accordingly enhances the quality and usability of the generated answers.
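The abstract does not state how the influence weights of semantic segments are computed. The sketch below is only one plausible realization, a leave-one-out ablation over segments of a long prompt, and is an assumption rather than the authors' method. The helpers `generate` and `score` are hypothetical placeholders standing in for a large language model call and an output-quality metric.

```python
# Minimal sketch (an assumption, not the paper's implementation): estimate each
# segment's influence weight as the drop in output quality when that segment is
# removed from the long prompt.

from typing import Callable, List


def segment_influence_weights(
    segments: List[str],
    generate: Callable[[str], str],   # hypothetical: calls an LLM on a prompt
    score: Callable[[str], float],    # hypothetical: rates output quality
) -> List[float]:
    """Return one influence weight per segment via leave-one-out ablation."""
    full_prompt = "\n".join(segments)
    baseline = score(generate(full_prompt))

    weights = []
    for i in range(len(segments)):
        ablated_prompt = "\n".join(s for j, s in enumerate(segments) if j != i)
        ablated_quality = score(generate(ablated_prompt))
        # A larger quality drop means the removed segment mattered more.
        weights.append(baseline - ablated_quality)
    return weights


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without an actual model.
    demo_segments = ["Background: ...", "Key constraint: ...", "Question: ..."]
    demo_generate = lambda prompt: prompt            # echo "model"
    demo_score = lambda output: float(len(output))   # length as a dummy quality proxy
    print(segment_influence_weights(demo_segments, demo_generate, demo_score))
```

Segments with the largest weights would then be the ones a user should refine first when the generated answer misses expectations.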

Published

2024-03-26

Issue

Vol. 1 No. 2 (2024)
Section

Research Articles

How to Cite

[1]
Y. Ren and S. Li, “Prompt Optimization Methods for Large Language Models with Long Text Input”, ijetaa, vol. 1, no. 2, pp. 26–33, Mar. 2024, doi: 10.62677/IJETAA.2402109.
