Author: Wen Yilin
Large language models (LLMs) such as GPT-3.5 have demonstrated impressive natural language capabilities. However, their reasoning process remains opaque and prone to hallucination, and fine-tuning an LLM is extremely expensive.
The recently proposed MindMap advances this line of work by introducing knowledge graphs (KGs) into LLMs, enabling them to understand KG input and combine internal and external knowledge for reasoning. The authors also study how to extract the mind maps of LLMs as the basis for their reasoning and answer generation.
Paper: MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models
Link: https://arxiv.org/pdf/2308.09729.pdf
The goal of this work is to establish a plug-and-play prompting method that elicits the mind-map reasoning capabilities of LLMs. It is called MindMap because it enables LLMs to understand graphical input and build their own mind maps, supporting evidence-grounded generation.
MindMap Conceptual Framework
A conceptual demonstration diagram of the MindMap framework is shown below. It consists of three main parts:
Evidence graph mining: first, identify the entity set Vq in the original input, then query the source knowledge graph G to construct multiple evidence subgraphs Gq.
Evidence graph aggregation: next, the LLM is prompted to understand and aggregate the retrieved evidence subgraphs into a reasoning graph Gm.
LLM reasoning on mind maps: finally, the LLM is prompted to consolidate the constructed reasoning graph with its implicit knowledge to generate answers, and to build mind maps that explain the reasoning process.
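The three steps above can be sketched in plain Python. The toy knowledge graph, entity names, and helper functions below are illustrative assumptions, not the paper's actual code; in the real pipeline the entity set Vq comes from an NER step and the final prompt is sent to an LLM.

```python
# Sketch of the MindMap three-step pipeline over a toy knowledge graph.
# The KG contents and helper names are hypothetical, for illustration only.

# Toy KG as an adjacency map: head entity -> list of (relation, tail) pairs
KG = {
    "Fatigue": [("symptom_of", "Anemia")],
    "Anemia": [("treated_by", "Iron supplement")],
}

def mine_evidence_paths(kg, query_entities, hops=2):
    """Step 1 (evidence graph mining): starting from each recognized
    entity in Vq, collect triples up to `hops` steps away as evidence."""
    triples = []
    frontier = [e for e in query_entities if e in kg]
    seen = set(frontier)
    for _ in range(hops):
        next_frontier = []
        for head in frontier:
            for rel, tail in kg.get(head, []):
                triples.append((head, rel, tail))
                if tail not in seen:
                    seen.add(tail)
                    next_frontier.append(tail)
        frontier = next_frontier
    return triples

def build_reasoning_prompt(question, triples):
    """Steps 2-3 (aggregation + reasoning): serialize the merged evidence
    subgraphs as text, then ask the LLM for an answer plus a mind-map
    style reasoning trace."""
    evidence = "\n".join(f"{h} -[{r}]-> {t}" for h, r, t in triples)
    return (f"Knowledge graph evidence:\n{evidence}\n\n"
            f"Question: {question}\n"
            "Combine this evidence with your own knowledge, answer the "
            "question, and show your reasoning path as a mind map.")

triples = mine_evidence_paths(KG, ["Fatigue"])
prompt = build_reasoning_prompt("How should fatigue be treated?", triples)
```

In the actual system, the serialized evidence is what lets the LLM merge external KG facts with its internal knowledge in a single prompt.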
Specifically, MindMap elicits the LLM's mind-map reasoning, which:
consolidates the facts retrieved from KGs with the tacit knowledge held by the LLM,
discovers new patterns in the input KGs,
produces the final output through the mind map.
We conduct experiments on three datasets to show that MindMap outperforms a range of prompting methods by a large margin. This work highlights how LLMs learn to reason collaboratively with KGs, combining implicit and explicit knowledge to achieve transparent and reliable reasoning. For implementation details, see the GitHub repository [1].
What is the contribution of this research?
The contribution of this study is to explore the reasoning capabilities of LLMs on graph inputs and to emphasize joint reasoning over implicit knowledge and external explicit knowledge.
At the same time, the paper raises a question worth studying. On general tasks (those requiring no additional retrieved information), an LLM such as GPT-3.5 performs better on its own, while retrieval methods perform very poorly. This suggests that retrieval methods ignore, to some extent, the knowledge the LLM has already learned. When designing a general-purpose LLM system, it is therefore necessary to effectively integrate the LLM's own knowledge with KG knowledge for collaborative reasoning.
Summary
MindMap provides an interpretable channel: by prompting the LLM to reason over structured knowledge-graph evidence, it tracks the LLM's reasoning through both its implicit knowledge and the aggregated KG evidence. This visualizes the rationale and factual basis of the model's thought process. Such a transparent reasoning graph can detect and avoid potential hallucinations while combining external and internal knowledge.
In this way, the advantages of structured knowledge, emergent reasoning, and neural language understanding can be combined for more powerful and explainable intelligence.
References
[1] MindMap GitHub repository: https://github.com/wyl-willing/MindMap