LLM Reasoners getting-started experiment: the 24-point game

LLM reasoners

Ber666/llm-reasoners

Experiment procedure

The experiment uses the 24-point game sample under examples/tot_game24; the proxy and the OpenAI API key are configured in inference.py.

First, install the dependencies:

git clone https://github.com/Ber666/llm-reasoners
cd llm-reasoners
pip install -e .

Among the many examples in the repository, this post uses the 24-point game experiment, because it calls GPT-3.5 by default and is therefore simpler than the other experiments, which require downloading model weights.

Put the dataset file in place. A single sample such as 1 2 3 4 is enough to start with (even with just one sample, the program takes several minutes to produce the final answer).
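If you do not have the dataset yet, a one-row CSV is enough. The snippet below is a sketch that assumes the same layout as the 24.csv shipped with the original tree-of-thought-llm repo (puzzles stored in a "Puzzles" column); check utils.read_data in the example to confirm which column name your copy expects.

import csv, os

os.makedirs('./data', exist_ok=True)
with open('./data/24.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Rank', 'Puzzles'])   # assumed header, matching the ToT dataset
    writer.writerow([1, '1 2 3 4'])        # the single sample used in this post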

Modify the paths in the code so that they point to the correct files, mainly the 24-point dataset and the prompt JSON file. The modified code is as follows:

dataset = utils.read_data(file='./data/24.csv')[0:1]   # keep only the first puzzle
...
def main(batch_size: int = 2,
         prompts: str = './prompts/game24.json',   # prompt templates for the 24-point game
         disable_log: bool = False,
         model: str = 'gpt-3.5-turbo',             # OpenAI model used by default
         temperature: float = 0.7,
         **kwargs):
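Besides the paths, the script also needs the proxy and OpenAI API key mentioned at the beginning. A hypothetical sketch of what that configuration usually looks like is below; the exact variable names and where they are set inside inference.py may differ.

import os

os.environ['OPENAI_API_KEY'] = 'sk-...'              # your OpenAI key
os.environ['http_proxy'] = 'http://127.0.0.1:7890'   # only needed if you are behind a proxy
os.environ['https_proxy'] = 'http://127.0.0.1:7890'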

Then configure and run the script; after waiting a few minutes and a few dozen API requests, the results finally appear.

Stepping through with a debugger shows that the program spends most of its time in the for loop of beam_search.py. From reading the code, my guess is that each beam is one search path: the program keeps planning and exploring these search paths, trying to find a correct 24-point formula, as sketched below.
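To make that guess concrete, here is a minimal, self-contained sketch of a beam search over 24-point game states. It is not the repository's actual implementation: in llm-reasoners the candidate steps are proposed and scored by the LLM, whereas this sketch uses a toy heuristic (distance of the remaining numbers from 24) as a stand-in, and all names below are illustrative.

from itertools import combinations

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b if b != 0 else None}

def expand(state):
    """Yield successor states: pick two numbers and combine them with one operator."""
    nums, trace = state
    for (i, a), (j, b) in combinations(list(enumerate(nums)), 2):
        rest = [n for k, n in enumerate(nums) if k not in (i, j)]
        for sym, fn in OPS.items():
            for x, y in ((a, b), (b, a)):
                val = fn(x, y)
                if val is not None:
                    yield (rest + [val], trace + [f"{x:g} {sym} {y:g} = {val:g}"])

def score(state):
    """Toy stand-in for the LLM evaluator: prefer states with a number close to 24."""
    nums, _ = state
    return -min(abs(n - 24) for n in nums)

def beam_search(numbers, beam_width=5):
    beams = [([float(n) for n in numbers], [])]   # each beam is one search path
    for _ in range(len(numbers) - 1):             # 4 numbers -> 3 combining steps
        candidates = [s for b in beams for s in expand(b)]
        beams = sorted(candidates, key=score, reverse=True)[:beam_width]
    for nums, trace in beams:                     # check whether any path reached 24
        if abs(nums[0] - 24) < 1e-6:
            return trace
    return None

print(beam_search([1, 2, 3, 4]))   # prints one sequence of steps that reaches 24

With a beam width of 1 this degenerates into greedy search; widening the beam trades more evaluations (in the real code, more LLM calls) for a better chance of reaching 24, which matches the dozens of requests observed above.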

Extended reading

I found that there are other repositories related to Tree of Thoughts (ToT), and they each have around 3K stars, far more than the roughly 300 stars of the current repository.

  • https://github.com/princeton-nlp/tree-of-thought-llm
  • https://github.com/kyegomez/tree-of-thoughts
  • https://www.youtube.com/watch?v=ut5kp56wW_4 (YK's explanation of Tree of Thoughts)

Originally published at blog.csdn.net/duoyasong5907/article/details/132122720