MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models

This article is part of a series on LLMs; it is a translation of "MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models".


Abstract

LLMs often exhibit limitations in absorbing new knowledge, a tendency to hallucinate, and a lack of transparency in their decision-making processes. In this paper, we explore prompting LLMs with knowledge graphs (KGs) as a remedy that keeps the LLM informed of up-to-date knowledge and elicits its reasoning pathways. Specifically, we build a prompting pipeline that equips the LLM to understand KG input and to make inferences by combining its implicit knowledge with retrieved external knowledge. We further study eliciting mind maps, on which the LLM performs reasoning and generates answers. The generated mind map exhibits the LLM's reasoning path grounded in the knowledge ontology, offering a way to probe and gauge LLM reasoning in production. Experiments on three question-answering datasets show that MindMap prompting brings substantial empirical gains; for example, prompting GPT-3.5 with MindMap consistently outperforms GPT-4. We also demonstrate that, by retrieving structured facts from KGs, MindMap can outperform a series of prompting methods based on document retrieval, benefiting from the more accurate, concise, and comprehensive knowledge in KGs. To reproduce our results and extend the framework, we have open-sourced the code at https://github.com/wylwilling/MindMap .
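The abstract's pipeline can be illustrated as a minimal sketch: retrieve an evidence sub-graph of KG triples matching the entities in a question, serialize it into the prompt, and ask the LLM to reason over it. This is not the authors' actual implementation; the toy KG, entity set, and function names below are all illustrative assumptions.

```python
# Hedged sketch of knowledge-graph prompting: a toy KG of (head, relation, tail)
# triples, a naive entity-match retriever, and a prompt builder. Assumed
# names/data throughout; the real MindMap pipeline is in the linked repository.

KG = [
    ("Fever", "is_symptom_of", "Influenza"),
    ("Cough", "is_symptom_of", "Influenza"),
    ("Influenza", "is_treated_by", "Oseltamivir"),
    ("Headache", "is_symptom_of", "Migraine"),
]

def retrieve_subgraph(question_entities, kg):
    """Collect triples whose head or tail matches a recognized question entity."""
    return [t for t in kg if t[0] in question_entities or t[2] in question_entities]

def build_prompt(question, triples):
    """Serialize the evidence sub-graph so the LLM can combine it with its own knowledge."""
    evidence = "\n".join(f"({h}) -[{r}]-> ({t})" for h, r, t in triples)
    return (
        "Answer using both your implicit knowledge and the evidence graph below.\n"
        f"Evidence graph:\n{evidence}\n"
        f"Question: {question}\n"
        "Show your reasoning path over the graph before the final answer."
    )

entities = {"Fever", "Cough"}  # assumed output of an entity-recognition step
prompt = build_prompt("What illness might cause fever and cough?",
                      retrieve_subgraph(entities, KG))
print(prompt)
```

The serialized sub-graph makes the reasoning path inspectable: the final answer can be traced back to specific retrieved triples rather than opaque model internals.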

1 Introduction

2 Related work

3 Methods

4 Experiments

5 Conclusion

This paper introduces Knowledge Graph (KG) prompting to 1) equip the LLM with the ability to understand KG input, and 2) combine implicit knowledge with retrieved external knowledge to facilitate the LLM's reasoning. We then explored eliciting mind maps, on which the LLM makes inferences and generates answers with graph-based rationales. Through extensive experiments on three question-answering datasets, we demonstrate that our method, MindMap, achieves significant empirical gains over vanilla LLM prompting and retrieval-augmented generation methods. We envision this work opening the door to reliable and transparent LLM inference in production.


Origin: blog.csdn.net/c_cpp_csharp/article/details/132888474