Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation (2024)

Zihao Wang (Peking University), zhwang@stu.pku.edu.cn
Anji Liu (University of California, Los Angeles), liuanji@cs.ucla.edu
Haowei Lin (Peking University), linhaowei@pku.edu.cn
Jiaqi Li (Beijing Institute for General Artificial Intelligence), ljqjane@gmail.com
Xiaojian Ma (Beijing Institute for General Artificial Intelligence), xiaojian.ma@ucla.edu
Yitao Liang (Peking University, corresponding author), yitaol@pku.edu.cn

Abstract

We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves the reasoning and generation ability of large language models in long-horizon generation tasks, while greatly mitigating hallucination. In particular, the proposed method, retrieval-augmented thoughts (RAT), revises each thought step one by one with retrieved information relevant to the task query, the current, and the past thought steps, after the initial zero-shot CoT is generated. Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA-7b substantially improves their performance on various long-horizon generation tasks: on average, relative rating scores increase by 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning.

The demo page can be found at https://craftjarvis.github.io/RAT.

[Figure 1: Overview of retrieval-augmented thoughts (RAT).]
[Figure 2: Comparison of RAT with CoT and RAG counterparts.]

1 Introduction

Large Language Models (LLMs) have achieved fruitful progress on various natural language reasoning tasks (Wei et al., 2022; Yao et al., 2022; Wang et al., 2023a; Zhou et al., 2023; Brown et al., 2020), especially when combining large-scale models (Team, 2022; OpenAI, 2023) with sophisticated prompting strategies, notably chain-of-thought (CoT) prompting (Wei et al., 2022; Kojima et al., 2022). However, there have been increasing concerns about the factual correctness of LLM reasoning, citing possible hallucinations in model responses (Rawte et al., 2023) or in the intermediate reasoning paths, i.e., CoTs (Dhuliawala et al., 2023). This issue becomes more significant with zero-shot CoT prompting, a.k.a. “let’s think step-by-step” (Kojima et al., 2022), and with long-horizon generation tasks that require multi-step and context-aware reasoning, including code generation, task planning, mathematical reasoning, etc. Factually valid intermediate thoughts can be critical to the successful completion of these tasks.

Several prompting techniques have been proposed to mitigate this issue. One promising direction, Retrieval Augmented Generation (RAG) (Lewis et al., 2020b), seeks insights from human reasoning (Holyoak and Morrison, 2012) and utilizes retrieved information to facilitate more factually grounded reasoning. In this paper, we explore how to synergize RAG with sophisticated long-horizon reasoning. Our intuition is that hallucination within the intermediate reasoning process could be alleviated with the help of outside knowledge. The resulting prompting strategy, retrieval-augmented thoughts (RAT), is illustrated in Figure 1. Our strategy comprises two key ideas. First, the initial zero-shot CoT produced by LLMs, along with the original task prompt, is used as a query to retrieve information that could help revise the possibly flawed CoT. Second, instead of retrieving and revising with the full CoT and producing the final response at once, we devise a progressive approach, where LLMs produce the response step by step following the CoT (a series of subtasks), and only the current thought step is revised based on information retrieved with the task prompt, the current, and the past CoTs. This strategy is analogous to the human reasoning process: we utilize outside knowledge to adjust our step-by-step thinking during complex long-horizon problem solving (Holyoak and Morrison, 2012). A comparison of RAT and its counterparts can be found in Figure 2.

We evaluate RAT on a wide collection of challenging long-horizon tasks, including code generation, mathematical reasoning, embodied task planning, and creative writing. We employ several LLMs of varied scales: GPT-3.5 (Brown et al., 2020), GPT-4 (OpenAI, 2023), and CodeLLaMA-7b (Rozière et al., 2023). The results indicate that combining RAT with these LLMs yields strong advantages over vanilla CoT prompting and RAG approaches. In particular, we observe new state-of-the-art performance across our selection of tasks: 1) code generation: HumanEval (+20.94%), HumanEval+ (+18.89%), MBPP (+14.83%), MBPP+ (+1.86%); 2) mathematical reasoning: GSM8K (+8.36%) and GSMHard (+31.37%); 3) Minecraft task planning (2.96 times on executability and +51.94% on plausibility); 4) creative writing (+19.19% on human score). Our additional ablation studies further confirm the crucial roles played by the two key ingredients of RAT: revising CoT using RAG, and progressive revision and generation. This work reveals how LLMs can revise their reasoning process in a zero-shot fashion with the help of outside knowledge, just as humans do.

2 Retrieval Augmented Thoughts

Our goal is to support long-horizon reasoning and generation while mitigating hallucination when using LLMs. Two ingredients are indispensable for satisfying performance on long-horizon tasks. First, access to factual information, which can be facilitated by retrieval. Second, appropriate intermediate steps that outline a scratchpad for finishing complex tasks, which can be facilitated by CoT. Yet a naive combination of the two would not necessarily yield improvements. Two questions persist: (1) what information is relevant to retrieve; (2) how to effectively correct reasoning steps with relevant factual information. To better appreciate our method and why it can address these two questions, we first provide a brief preliminary introduction to RAG and CoT.

Algorithm 1: Retrieval Augmented Thoughts (RAT)

1: Input: task prompt $I$, autoregressive large language model $p_\theta$
2: $T = \{T_1, T_2, \ldots, T_n\} \leftarrow p_\theta(\cdot \mid I)$  ▷ Generate zero-shot initial step-by-step thoughts $T$
3: $T^\star \leftarrow T_1$, $i \leftarrow 1$  ▷ Initialize the draft answer $T^\star$ with the first thought step $T_1$
4: repeat
5:   $Q_i \leftarrow \text{ToQuery}(I, T^\star)$  ▷ Generate query $Q_i$ based on the current draft answer $T^\star$
6:   $R_i \leftarrow \text{RetrieveFromCorpus}(Q_i)$  ▷ Retrieve information $R_i$ from a corpus or the Internet
7:   $T^\star \leftarrow p_\theta(\cdot \mid I, T^\star, R_i)$  ▷ Revise the draft answer $T^\star$ based on the retrieved text $R_i$
8:   $T^\star \leftarrow \text{CONCAT}(T^\star, T_{i+1})$  ▷ Append the next thought step $T_{i+1}$
9:   $i \leftarrow i + 1$  ▷ Begin the next revision round
10: until $i > n$  ▷ Repeat until all revised thoughts $T^\star_{1:n}$ are obtained
11: return $T^\star$  ▷ Output $T^\star$ as the final generation
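As a concrete illustration of Algorithm 1, the following is a minimal Python sketch of the RAT loop. The helpers generate_cot, to_query, retrieve, and revise are hypothetical placeholders for an LLM call that elicits a zero-shot CoT, a query builder, a retriever, and a revision prompt; they are not part of any specific library.

```python
# A minimal sketch of the RAT loop in Algorithm 1 (placeholders, not a
# definitive implementation).
from typing import Callable, List

def rat(
    task_prompt: str,
    generate_cot: Callable[[str], List[str]],        # zero-shot CoT: prompt -> thought steps T_1..T_n
    to_query: Callable[[str, str], str],              # (task prompt, draft answer) -> retrieval query
    retrieve: Callable[[str], str],                   # query -> retrieved text
    revise: Callable[[str, str, str], str],           # (task prompt, draft, retrieved text) -> revised draft
) -> str:
    thoughts = generate_cot(task_prompt)              # T = {T_1, ..., T_n}
    draft = thoughts[0]                               # T* initialized with the first thought step T_1
    for i in range(len(thoughts)):
        query = to_query(task_prompt, draft)          # Q_i from I and the current draft T*
        retrieved = retrieve(query)                   # R_i from a corpus or the Internet
        draft = revise(task_prompt, draft, retrieved) # revise T* using R_i
        if i + 1 < len(thoughts):
            draft = draft + "\n" + thoughts[i + 1]    # append the next thought step T_{i+1}
    return draft                                      # final generation T*
```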

2.1 Preliminary

Retrieval-Augmented Generation (RAG) targets the problem of generating fictitious facts by providing LLMs with relevant text extracted from trusted sources. It is primarily used in question-answering (QA) tasks (Lewis et al., 2020b). Specifically, given a set of $n$ candidate documents $R := \{R_i\}_{i=1}^{n}$, RAG aims to retrieve the most relevant ones w.r.t. a query $Q$, which can be the question/task prompt itself or relevant information generated by LLMs. To achieve this, RAG first extracts semantic-aware embeddings of the documents $r_i := \text{emb}(R_i) \in \mathbb{R}^{K}$ ($K$ is the embedding size) as well as the query $q := \text{emb}(Q) \in \mathbb{R}^{K}$. $\text{emb}(\cdot)$ can be implemented with various text embedding models, such as Sentence-BERT (Reimers and Gurevych, 2019). The relevance between the query and a document is measured by their cosine similarity:

\[
\text{sim}(Q, R_i) = \frac{q \cdot r_i}{\|q\|\,\|r_i\|}.
\]

Based on their relevance, the top-ranked $k$ documents are then fed into the prompt for the LLM to generate the final answer. With such rich and factual context, RAG mitigates the hallucination of LLMs. However, complex reasoning tasks (e.g., those requiring multi-step reasoning) can be difficult to translate into effective search queries, leading to challenges in finding relevant documents and making RAG less applicable. Moreover, RAG traditionally retrieves all relevant information at once, overlooking the fact that it is difficult to predict what “facts” or information will be required in subsequent reasoning and generation steps; the task prompt itself hardly provides enough clues for this.
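To make the retrieval step concrete, below is a minimal sketch of top-$k$ document selection by cosine similarity. The embed argument is a stand-in for any text embedding model (e.g., a Sentence-BERT encoder or an embedding API); its exact choice is an assumption of this sketch, not a detail fixed by the paper.

```python
# A minimal sketch of cosine-similarity retrieval: embed the documents and
# the query, rank by sim(Q, R_i), and keep the top-k documents.
import numpy as np
from typing import Callable, List

def top_k_documents(
    query: str,
    documents: List[str],
    embed: Callable[[str], np.ndarray],  # text -> embedding vector in R^K (assumed model)
    k: int = 5,
) -> List[str]:
    q = embed(query)
    doc_vecs = np.stack([embed(d) for d in documents])
    # Cosine similarity: sim(Q, R_i) = (q . r_i) / (||q|| ||r_i||)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    top = np.argsort(-sims)[:k]
    return [documents[i] for i in top]
```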

Chain of Thoughts (CoT) prompting is designed to enhance the performance of LLMs on tasks that require complex reasoning steps (Wei et al., 2022), such as multi-step math word problems. Specifically, instead of tasking LLMs to generate the correct answer directly, CoT prompting incentivizes LLMs to first output intermediate reasoning steps, termed thoughts, that serve as a scratch space for the task, before summarizing the thoughts into a final answer. Such behavior can either be stimulated zero-shot by prompting phrases that encourage CoT reasoning (e.g., “let’s think step by step”) (Kojima et al., 2022), or triggered by few-shot examples that perform CoT on similar tasks. However, since no direct supervision is imposed on the intermediate thoughts, LLMs can make errors due to a lack of relevant domain knowledge (Touvron et al., 2023) or be biased by hallucinations (Rawte et al., 2023).
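For reference, here is a minimal sketch of the zero-shot CoT step that RAT later builds on; call_llm is a hypothetical placeholder for any LLM API, the trigger phrase follows Kojima et al. (2022), and the one-thought-per-line convention is an assumption of this sketch.

```python
# A minimal sketch of zero-shot CoT prompting: append the trigger phrase to
# the task prompt and split the model output into thought steps.
from typing import Callable, List

COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(task_prompt: str, call_llm: Callable[[str], str]) -> List[str]:
    raw = call_llm(f"{task_prompt}\n{COT_TRIGGER}")
    # Assume one thought per line; keep non-empty lines only.
    return [line.strip() for line in raw.splitlines() if line.strip()]
```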

2.2 Our Approach

Our intuition to mitigate the issues of CoT prompting and RAG mentioned above is to apply RAG to revise every thought step generated by CoT prompting. An overview can be found in Figure 1 and Algorithm 1. Specifically, given a task prompt $I$, we first prompt the LLM to generate step-by-step thoughts in zero shot (“let’s think step-by-step”) $T := \{T_i\}_{i=1}^{n}$, where $T_i$ represents the $i$-th thought step. In long-horizon generation tasks, $T$ can either be the intermediate reasoning steps, e.g., pseudo code with comments in code generation, an article outline in creative writing, etc., or the draft response itself, e.g., a list of sub-goals in embodied task planning as shown in Figure 1.

Since $T$ could be flawed (e.g., contain hallucinations), we proceed to use RAG to revise every generated thought step before generating the final response from these thoughts. Specifically, assuming we have fixed the previous thought steps and are now about to revise $T_{1:i}$, we begin by converting the text $\{I, T_1, \dots, T_i\}$ into a query $Q_i$:

\[
Q_i = \text{ToQuery}(I, T_1, \dots, T_i),
\]

where $\text{ToQuery}(\cdot)$ can either be a text encoder or an LLM that translates the task prompt $I$ and the current and past thought steps $T_1, \dots, T_i$ into a query $Q_i$ that can be processed by the retrieval system. We adopt RAG to retrieve relevant documents $R_i$ using $Q_i$, which are then prepended to the prompt to generate a revised thought step $T^\star_i$:

\[
T^\star_{1:i} = p_\theta(\cdot \mid I, T_1, \dots, T_i, R_i).
\]

Finally, depending on the actual task, the revised thought steps $T^\star_{1:n}$ can simply be used as the final model response, e.g., in embodied task planning. For tasks like code generation or creative writing, the LLM will be further prompted to produce the complete response (code, passage) from each revised thought step in a step-by-step fashion.

Note that, when revising the $i$-th thought step $T_i$, instead of using the current step $T_i$ only, or the complete chain of thoughts $T_1, \dots, T_n$, to produce the query for RAG, we ensure the query $Q_i$ is produced from the current thought step $T_i$ and the previously revised thought steps $T^\star_{1:i-1}$, i.e., we adopt causal reasoning to revise the thoughts using RAG:

\[
\begin{aligned}
Q_i &= \text{ToQuery}(I, T^\star_{1:i-1}, T_i), \\
T^\star_{1:i} &= p_\theta(\cdot \mid I, T^\star_{1:i-1}, T_i, R_i).
\end{aligned}
\]

This allows for the correction of errors in the original thoughts $T$ by continually consulting different reference texts, and ensures that each step of reasoning is informed by the most accurate and relevant information, significantly improving the quality and reliability of the generated output.
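A minimal sketch of this causal query construction follows, assuming an LLM is used as $\text{ToQuery}(\cdot)$ (one of the two options mentioned above); the prompt wording and the call_llm helper are hypothetical illustrations, not the paper's exact prompts.

```python
# Sketch of causal query construction: Q_i is built from the task prompt I,
# the already revised steps T*_{1:i-1}, and the current unrevised step T_i.
from typing import Callable, List

def to_query(
    task_prompt: str,            # I
    revised_steps: List[str],    # T*_{1:i-1}
    current_step: str,           # T_i
    call_llm: Callable[[str], str],
) -> str:
    context = "\n".join(revised_steps + [current_step])
    prompt = (
        "Write a short search query that would help verify or complete "
        "the current reasoning step.\n"
        f"Task: {task_prompt}\n"
        f"Reasoning so far:\n{context}\n"
        "Search query:"
    )
    return call_llm(prompt).strip()
```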

Our hypothesis for why our method can address the two questions raised at the beginning of this section is as follows. First, the most straightforward way to know what information will be used in complex reasoning is to “see” the reasoning steps. Our approach leverages all the generated thoughts along with the task prompt to provide more clues for more effective retrieval. Second, some information cannot be directly retrieved, especially information related to the final answer of a hard, complex question. Instead, retrieving information relevant to intermediate questions, which are assumed to be easier, is more accessible. Thanks to the compositional nature of many reasoning tasks, an iterative retrieval process can also be more effective. Third, correcting potential hallucinations needs to be targeted: revising a complete CoT with RAG could introduce errors at otherwise already-correct steps, whereas revising every step one by one can be more reliable. The first two points address question (1) and the last point addresses question (2). Quantitative evidence can be found in our ablation studies in Section 3.4.

3 Experiments

We test our proposed method RAT on a diverse set of benchmarks that highlight long-horizon generation and reasoning. Existing methods traditionally struggle on these benchmarks; “hallucinated” steps are obvious in LLMs’ outputs. Those steps either fail to stick to the original query or are plainly invalid. We refer readers to Section 3.3 (case analysis) for a more detailed discussion. Due to space constraints, we do not introduce each benchmark setting, nor do we discuss our results on each benchmark at full length. Rather, this section provides a comprehensive demonstration of our method’s performance, along with a preliminary empirical analysis of why and when our method works and when it fails.

3.1 Experimental Setups

We adopt four groups of benchmarks.

Table 1: Code generation results (pass@1 / pass@5). Relative Improvement compares RAT against DIRECT.

Base Model   | Method               | HumanEval        | HumanEval+       | MBPP             | MBPP+            | Average
             |                      | pass@1   pass@5  | pass@1   pass@5  | pass@1   pass@5  | pass@1   pass@5  | pass@1   pass@5
CodeLlama-7b | DIRECT               | 33.78%   40.85%  | 30.85%   36.59%  | 39.27%   54.27%  | 41.22%   48.17%  | 36.28%   44.97%
             | CoT                  | 27.86%   29.58%  | 25.12%   27.83%  | 31.99%   55.91%  | 42.19%   47.51%  | 31.79%   40.21%
             | RAG (1 shot)         | 37.50%   47.65%  | 33.66%   41.83%  | 35.41%   51.63%  | 43.66%   50.09%  | 37.56%   47.80%
             | RAG (5 shot)         | 38.90%   47.90%  | 35.37%   42.75%  | 34.06%   53.90%  | 43.35%   51.08%  | 37.92%   48.91%
             | RAT                  | 39.57%   51.34%  | 36.22%   46.50%  | 40.86%   60.63%  | 39.14%   48.04%  | 38.95%   51.63%
             | Relative Improvement | 17.14%   25.68%  | 17.41%   27.08%  | 4.05%    11.72%  | -5.05%   -0.27%  | 7.35%    14.80%
GPT-3.5      | DIRECT               | 50.49%   72.56%  | 48.09%   70.55%  | 60.84%   72.95%  | 54.92%   64.09%  | 53.59%   70.04%
             | CoT                  | 47.31%   75.88%  | 41.72%   74.85%  | 55.19%   65.49%  | 47.69%   62.94%  | 47.98%   69.79%
             | RAG (1 shot)         | 50.61%   76.22%  | 48.22%   70.55%  | 55.23%   70.54%  | 53.62%   68.09%  | 51.92%   71.35%
             | RAG (5 shot)         | 45.49%   74.39%  | 42.58%   70.55%  | 54.39%   69.73%  | 55.98%   70.10%  | 49.61%   71.19%
             | RAT                  | 59.27%   80.49%  | 56.31%   76.07%  | 59.31%   74.74%  | 59.10%   72.61%  | 58.50%   75.98%
             | Relative Improvement | 17.39%   10.93%  | 17.09%   7.82%   | -2.51%   2.45%   | 7.61%    13.29%  | 9.17%    8.48%
GPT-4        | DIRECT               | 57.32%   78.66%  | 54.36%   76.69%  | 60.00%   76.07%  | 66.13%   78.53%  | 59.45%   77.49%
             | CoT                  | 54.87%   72.56%  | 51.90%   66.25%  | 61.22%   74.23%  | 64.42%   79.75%  | 58.10%   73.20%
             | RAG (1 shot)         | 61.10%   79.27%  | 58.04%   77.30%  | 58.53%   69.94%  | 65.77%   77.30%  | 60.86%   75.95%
             | RAG (5 shot)         | 62.80%   82.93%  | 59.51%   79.75%  | 60.12%   74.23%  | 63.56%   78.53%  | 61.50%   78.86%
             | RAT                  | 69.33%   88.40%  | 64.63%   82.21%  | 68.90%   79.85%  | 67.36%   82.14%  | 67.55%   83.15%
             | Relative Improvement | 20.94%   12.38%  | 18.89%   7.20%   | 14.83%   4.97%   | 1.86%    4.60%   | 13.63%   7.31%

Table 2: Results on mathematical reasoning (accuracy), creative writing, and embodied planning (higher is better for all metrics).

Method       | Math Reasoning Accuracy             | Creative Writing                               | Embodied Planning
             | GSM8K    GSMHard   Average (Δ)      | Win Rate   TrueSkill Rating (Δ)   Uncertainty  | Executability   Plausibility (Δ)   Uncertainty
DIRECT       | 65.85%   51.26%    58.56%           | 46.67%     24.39                  1.17         | 19.33±2.08%     20.57              2.05
CoT          | 63.82%   44.72%    54.27% (-7.32%)  | 41.67%     24.31 (-0.0%)          1.09         | 49.33±3.05%     25.75 (+25.2%)     2.33
RAG (1 shot) | 61.81%   51.26%    56.54% (+4.17%)  | 38.71%     23.99 (-1.6%)          1.11         | 31.00±5.29%     24.97 (+21.4%)     2.11
RAG (5 shot) | 61.81%   56.78%    59.30% (+4.88%)  | 31.67%     23.88 (-2.1%)          1.22         | 33.00±3.61%     25.02 (+21.6%)     2.11
RAT          | 71.36%   67.34%    69.35% (+16.96%) | 81.01%     29.07 (+19.2%)         1.08         | 76.67±8.02%     29.37 (+42.78%)    3.37

Code Generation includes HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023b), MBPP (Austin et al., 2021), and MBPP+ (Liu et al., 2023b). These benchmarks encompass a wide range of programming problems, from simple function implementations to more complex algorithmic challenges, providing a robust testbed for assessing generative capabilities.

Mathematical Reasoning evaluation is conducted on the GSM8K and GSM-Hard datasets, which comprise thousands of multi-step mathematical problems (Cobbe et al., 2021; Gao et al., 2022).

Creative Writing tasks evaluate the versatility of RAT, including surveys, summarization, etc., highlighting different aspects of open-ended text generation.

Embodied Planning tasks are evaluated in the open-ended environment Minecraft. A set of 100 tasks, ranging from simple objectives to challenging diamond objectives, is evaluated through MC-TextWorld (Lin et al., 2023).

Evaluation Metrics

For code generation, the classical pass@k rate is selected as the evaluation metric (Chen et al., 2021; Liu et al., 2023b), where $k$ denotes the number of samples. We compute accuracy for every question in the mathematical reasoning tasks, aligning with the established metric for GSM8K (Cobbe et al., 2021). For embodied planning tasks, we compute the plan execution success rate in MC-TextWorld as executability (Lin et al., 2023). We also conduct human Elo-style rating evaluation to compute TrueSkill rating scores (Herbrich et al., 2006) for embodied planning (as plausibility) and for creative writing tasks. Higher is better for all of these indicators.

Baselines

To establish a comprehensive and equitable comparison, we incorporate a suite of baseline methods. Our baselines include the original language models, referred to as DIRECT, and the Retrieval-Augmented Generation (RAG) methodology with $n$ retrieved examples, instantiated in both single-shot (1 shot) and multi-shot (5 shot) configurations, as documented by Lewis et al. (2020b). Additionally, we examine the zero-shot CoT (CoT) approach, as conceptualized by Kojima et al. (2022), which elicits a step-by-step reasoning process to facilitate complex problem solving with zero demonstrations. The same base language model is used for all methods. To ensure a fair comparison, none of the methods uses examples from the benchmark as demonstrations for in-context learning.

RAG Settings

RAT leverages the capabilities of retrieval-augmented generation, which enhances the performance of language models by integrating external knowledge sources. Specifically, we employ the codeparrot/github-jupyter dataset as our primary search vector library for code generation and mathematical reasoning tasks. For embodied planning tasks in Minecraft, we use the Minecraft Wiki (https://minecraft.wiki/) and DigMinecraft (https://www.digminecraft.com/) websites as the information sources accessible to the LLMs. For open-ended creative writing tasks, we use Google to search the query on the Internet. We use OpenAI’s text-embedding-ada-002 API service for all embedding calculations across different methods and base models.
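As an illustration of how the embedding step could be wired to the text-embedding-ada-002 service mentioned above, here is a hedged sketch assuming the OpenAI Python client (v1.x); the client setup and the embed helper name are our assumptions, not details from the paper. Its output could be plugged into the retrieval sketch shown earlier.

```python
# Sketch only: assumes the openai Python client (>= 1.0) and an OPENAI_API_KEY
# environment variable; batching and error handling are omitted.
from openai import OpenAI
import numpy as np

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Embed a single text with text-embedding-ada-002 (assumed service)."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=[text])
    return np.array(response.data[0].embedding)
```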

Acknowledging the risk of benchmark contamination (an issue where the code library may contain solutions to the exact problems being evaluated), we adopt a rigorous pre-processing methodology as described by Guo et al. (2024). The potential implications of benchmark contamination, along with the effectiveness of our pre-processing strategy, are discussed in detail in Appendix D.

3.2 Results

The code generation results presented in Table 1 and the results on the other benchmarks presented in Table 2 provide a comprehensive evaluation of RAT. RAT consistently outperforms the other methods across the majority of benchmarks and metrics, showcasing its superior ability in long-horizon generation. Notably, on the HumanEval and HumanEval+ code generation benchmarks, RAT achieves remarkable improvements in pass@1 and pass@5 rates, indicating a significant enhancement in first-attempt accuracy and in accuracy within the top five attempts. For example, on HumanEval, RAT improves pass@1 by up to 20.94% and pass@5 by up to 25.68% relative to the base models’ performance. This trend is observed across different underlying base models, highlighting RAT’s effectiveness regardless of the initial model’s capabilities. For mathematical reasoning tasks, RAT demonstrates a significant relative improvement, with an 8.37% increase in accuracy on GSM8K and a remarkable 31.37% increase on GSMHard, culminating in an overall average improvement of 18.44% when deployed on the GPT-3.5 model. RAT also significantly outperforms all other methods on open-ended embodied planning tasks in Minecraft, achieving the highest scores with 76.67±8.02% executability and a 29.37 human rating score for plausibility, demonstrating its superior ability to generate feasible and contextually appropriate plans in a complex open-world environment. RAT’s superior performance also holds across a broad spectrum of creative writing tasks, where its ability to generate high-quality content in diverse scenarios highlights its potential as a powerful tool for enhancing the general creative writing capabilities of LLMs in open-ended scenarios.

The tasks are extremely diverse, yet RAT delivers consistent improvements over all baselines. These results underline the advantages of RAT’s approach, which leverages iterative refinement of retrieval queries based on evolving reasoning thoughts. This strategy not only enhances the relevance and quality of the retrieved information but also significantly improves the accuracy and efficiency of the generated output.

3.3 Case Analysis

Here we use the embodied planning and creative writing tasks for case analysis.

In a manner analogous to multi-document question-answering tasks (Trivedi et al., 2022a), long-horizon planning in Minecraft is knowledge-dense, requiring consideration of various items for the completion of each task. However, open-world Minecraft knowledge on the Internet is fragmented, making task completion often dependent on information from multiple sources. We observed that while language models like ChatGPT can identify the necessary items through zero-shot CoT reasoning, inaccuracies in procedural steps are common. For example, ChatGPT inaccurately identified the materials for a crafting table as 4 wood blocks (the correct answer is 4 planks), indicating lower executability of CoT plans. Classical RAG algorithms, which retrieve knowledge with the question as the query and focus on the final target item, inadequately retrieve intermediate items, offering minimal task improvement. In contrast, RAT improves upon CoT’s initial answers by continuously refining thoughts with targeted retrieval, aligning closely with task progression and relevant item knowledge. This significantly enhances planning effectiveness by ensuring a comprehensive understanding and retrieval of all items involved in a plan, highlighting the synergy between structured reasoning and dynamic knowledge retrieval in addressing long-horizon planning challenges in Minecraft.

In addressing open-ended creative writing tasks, assessments of LM’s generations typically focus on completeness and accuracy. When tasked with “summarizing the American Civil War according to a timeline”, LMs under DIRECT and CoT prompts often produce significant hallucinations. For example, the statement “The Civil War officially began on April 12, 1860, when Confederate troops attacked Fort Sumter in South Carolina, a Union-held fort” contains incorrect information, where the year 1860 is erroneously mentioned instead of the correct year, 1861.

Direct queries to the internet for this task tend to retrieve limited events, frequently overlooking the accurate start date of the war, April 12, 1861. Moreover, the RAG approach, which tends to summarize content retrieved from searches, often misses this event in its responses, whether it’s RAG-1 or RAG-5. On the other hand, RAT bases its search on a language model’s draft answer, finding that hallucinations usually occur in details, such as specific dates, which do not hinder the search engine from identifying relevant information like “American Civil War starting date”. RAT utilizes the content retrieved to identify and correct errors in the draft answer rather than merely summarizing the retrieved content. Therefore, RAT can achieve a complete generation through reasoning and enhance the accuracy and credibility of the answer by leveraging retrieved knowledge. Experimental results validate the effectiveness of RAT.

3.4 Ablation Study

Table 3: Ablation on retrieval strategies in RAT.

Method    | HumanEval                       | HumanEval+
          | pass@1 (Δ) ↑     pass@5 (Δ) ↑   | pass@1 (Δ) ↑     pass@5 (Δ) ↑
Baseline  | 50.6%            76.2%          | 48.2%            70.5%
CoT+RAG   | 53.9% (+3.3)     76.8% (+0.6)   | 51.3% (+3.1)     69.3% (-1.2)
RAT       | 59.2% (+8.7)     80.4% (+7.9)   | 56.3% (+8.2)     76.0% (+5.5)

Ablation on retrieval in RAT

In this ablation study, we investigate the influence of different retrieval strategies on the efficacy of RAT, focusing on how content retrieval is optimized to improve generative outputs. The experimental results, detailed in Table 3, highlight the significant advantages achieved through the iterative refinement of retrieval queries in RAT compared to baseline methods. The baseline, denoted RAG-1, employs a direct approach, using the question itself as the retrieval query. CoT+RAG instead uses the entirety of the reasoning thoughts output by the language model as the query, aiming for a broader contextual understanding. RAT introduces a more dynamic method, employing continuously revised parts of the reasoning thoughts as queries, which allows for a more focused and relevant retrieval process. The comparative analysis shows that RAT surpasses both the baseline and CoT+RAG in pass@1 and pass@5 on the HumanEval and HumanEval+ benchmarks. Specifically, RAT demonstrates an 8.7 percentage point increase in pass@1 and a 7.9 percentage point increase in pass@5 over the baseline on HumanEval, with similarly impressive gains on HumanEval+. These improvements underscore the effectiveness of RAT’s retrieval strategy, which, by iteratively refining the next queries based on evolving reasoning thoughts and previous queries, ensures the retrieval of highly pertinent information. This not only enhances the relevance of the retrieved information but also significantly improves the quality and accuracy of the final generated outputs, establishing the superiority of RAT’s dynamic retrieval method in leveraging contextual nuances to drive more precise and effective generation.

Table 4: Ablation on causal reasoning in RAT (zero-shot CoT as the baseline).

Method      | HumanEval                       | HumanEval+
            | pass@1 (Δ) ↑     pass@5 (Δ) ↑   | pass@1 (Δ) ↑     pass@5 (Δ) ↑
Baseline    | 47.3%            75.8%          | 41.7%            74.8%
Non-Causal  | 57.3% (+10.0)    78.0% (+2.1)   | 54.9% (+13.2)    74.8% (+0.0)
Causal      | 59.2% (+11.9)    80.4% (+4.6)   | 56.3% (+14.6)    76.0% (+1.2)

Ablation on causal reasoning in RAT

In this ablation study, we systematically examine the impact of causal and non-causal reasoning on the performance of RAT, with zero-shot CoT serving as the baseline. Our findings, summarized in Table 4, reveal significant enhancements in generation capability when incorporating causal reasoning. Specifically, the causal approach, which iteratively interleaves reasoning and retrieval, leads to notable improvements in both pass@1 and pass@5 on the HumanEval and HumanEval+ benchmarks. For instance, the causal method outperforms the baseline (CoT) by 11.9 percentage points in pass@1 and by 4.6 percentage points in pass@5 on HumanEval. It contrasts with the non-causal method, which, although it also surpasses the baseline, leverages the initial reasoning thought to directly retrieve all necessary steps and generate the final answer. The causal method’s superior performance underscores the value of sequential reasoning and information retrieval in enhancing the accuracy and reliability of generated outputs. This iterative process likely helps refine the search and reasoning steps based on continuously updated context, allowing for more precise and relevant retrieval, which in turn supports more accurate final answers. These results establish the efficacy of causal reasoning in long-horizon problem-solving tasks.

3.5 Robustness of RAT

RAT was validated across a diverse set of tasks, including code generation, mathematical reasoning, creative writing, and embodied planning. This variety underscores the generalization capability of RAT, demonstrating robust performance across highly diverse challenges. Furthermore, all our experiments were conducted in a zero-shot manner: we did not design task-specific prompts for RAT, but rather used the simplest possible prompts (which can be found in Appendix B) to articulate questions or instructions for all methods. This ensures RAT’s generalization ability in open-ended scenarios.

The diversity of our evaluation was further enhanced by testing RAT across language models of differing capacities, including CodeLlama-7b (Rozière et al., 2023), ChatGPT (gpt-3.5-turbo) (Ouyang et al., 2022), and the more advanced GPT-4 (gpt-4) model (OpenAI, 2023). Remarkably, RAT maintained its generalization capability across different scales of language models, showing improvements on benchmarks such as HumanEval for code generation. Notably, the largest improvement was observed with GPT-4, attributed to its superior in-context learning from retrieved text. On MBPP+, CodeLlama-7b-based RAT shows a performance degradation, which could be due to the limited in-context learning ability of smaller language models.

For mathematical reasoning tasks, RAT demonstrated a significant relative improvement, with an overall average improvement of 18.44% when applied to the GPT-3.5 model. This trend of improvement persisted with GPT-4, which achieved a remarkable 10.26% relative improvement from DIRECT to RAT. These findings highlight RAT’s robustness and its effective enhancement of language models’ performance across a spectrum of computational and creative tasks.

4 Related Works

Retrieval-augmented Generation (RAG)

Recently, RAG has gained popularity for boosting the performance of LLMs by guiding their generation process with retrieved knowledge (Zhao et al., 2023). Without updating model parameters, which may be expensive (Lewis et al., 2020a) or unstable (Ke et al., 2022b, a), RAG is a cost-effective way for LLMs to interact with the external world (Gu et al., 2018; Lewis et al., 2020a). RAG is widely applied to downstream tasks such as code generation (Zhou et al., 2022b; Lu et al., 2022; Nashid et al., 2023), question answering (Baek et al., 2023; Siriwardhana et al., 2023), and creative writing (Wen et al., 2023; Asai et al., 2023).

Reasoning-enhanced RAG

Some recent works also leverage reasoning to enhance the performance of RAG (Li et al., 2023b). For example, IRCoT (Trivedi et al., 2022b) exploits CoT to generate better queries for retrieval, IRGR (Ribeiro et al., 2022) performs iterative retrieval to search for suitable premises for multi-hop QA, GEEK (Liu et al., 2023a) can choose to query external knowledge or perform a single logical reasoning step in long-horizon generation tasks, and ITRG (Feng et al., 2023a) performs retrieval based on the last-step generation. However, these previous RAG methods typically adopt a single query to retrieve knowledge for question-answering tasks (Gao et al., 2023; Feng et al., 2023b), while our proposed RAT performs retrieval using reasoning and draft answers in an autoregressive way, which significantly improves the performance of RAG on various tasks, as demonstrated in Figure 2.

Language Model for Reasoning

The advancement of reasoning in language models has seen notable methodologies emerge since CoT was proposed by Wei et al. (2022), which showcased LMs’ ability to generate self-derived problem-solving strategies. This foundational work spurred further innovations such as least-to-most prompting (Zhou et al., 2022a), zero-shot CoT (Kojima et al., 2022), self-consistency (Wang et al., 2022), and CoT reasoning without prompting (Wang and Zhou, 2024). Moving beyond basic prompting, Creswell et al. (2022) introduced the Selection-Inference framework, while Zelikman et al. (2022) developed STaR to refine reasoning through model finetuning. Creswell and Shanahan (2022) proposed a faithful reasoning model, segmenting reasoning into dedicated steps, similar to the Scratchpad approach by Nye et al. (2021) for enhancing multi-step computation. Tree-of-Thought (Yao et al., 2023) and Graph-of-Thought (Besta et al., 2023) also expand the reasoning paths into complex structures instead of a linear CoT. These methods usually aim to improve the reasoning ability of LLMs by designing prompts or providing feedback from the environment to assist in better planning and decision-making (Wang et al., 2023c; Yao et al., 2022; Shinn et al., 2023; Li et al., 2023a; Zhang et al., 2023). In contrast, RAT takes a different approach, using RAG to access external knowledge that can help the LLM with its reasoning process.

5 Conclusion

We have presented Retrieval Augmented Thoughts (RAT), a simple yet effective prompting strategy that synergizes chain-of-thought (CoT) prompting and retrieval-augmented generation (RAG) to address challenging long-horizon reasoning and generation tasks. Our key ideas involve revising the zero-shot chain of thoughts produced by LLMs through RAG with the thoughts as queries, and causally revising the thoughts and generating the response progressively. RAT, a zero-shot prompting approach, demonstrates significant advantages over vanilla CoT prompting, RAG, and other baselines on challenging code generation, mathematical reasoning, embodied task planning, and creative writing tasks.

Acknowledgments

We are grateful for a grant from the CCF-Tencent Rhino-Bird Open Research Fund. One author is funded in part by NSF grants #IIS-1943641, #IIS-1956441, #CCF-1837129, an SRA from Meta, a research gift from Amazon Alexa AI, and a gift from RelationalAI.

Limitations

In this section, we discuss three limitations of RAT.

One limitation of this work is that the performance of RAT relies on the chain-of-thought reasoning and in-context learning (or RAG) capability of the base LLM. Since this work does not involve any model training, the capability of the base LLM does not change when applying RAT. Although RAT achieves significant improvements on powerful LLMs such as GPT-3.5 and GPT-4, its effect on smaller and weaker LLMs such as GPT-2 is questionable. It is therefore interesting to further explore how to improve RAT by fine-tuning weaker LLMs (Ke et al., 2023; Lin et al., 2024).

Another limitation is that the performance of RAT also relies on the quality of the retrieved knowledge. With an inferior external knowledge base that is irrelevant to the user query, the retrieved knowledge may not help LLMs generate useful information. Moreover, even if we select a relatively large knowledge base that contains the relevant information, maintaining and retrieving from such a huge knowledge base is expensive and can hurt retrieval precision. An interesting and crucial direction is to study how to build, and how to evaluate the quality of, a knowledge base for efficient and effective retrieval.

It is noteworthy that the above two limitations also apply to traditional studies on retrieval-augmented generation (RAG). The last limitation of RAT is that we follow CoT to solve problems in an explicit step-by-step fashion. Step-by-step thinking may be redundant for straightforward questions, while some questions require more complex reasoning structures (e.g., tree-of-thoughts (Yao et al., 2023)). Exploring better reasoning methods for LLMs is an interesting direction for future work.

Ethics Statement

All datasets and models are publicly accessible except for OpenAI’s GPT series and the text embedding APIs. We have not identified any significant ethical considerations associated with this work. We believe our newly proposed RAT can improve the generation of LLMs in various fields and reduce LLMs’ hallucinations.


References

  • Asai etal. (2023)A.Asai, Z.Wu, Y.Wang, A.Sil, and H.Hajishirzi.Self-rag: Learning to retrieve, generate, and critique through self-reflection.arXiv preprint arXiv:2310.11511, 2023.
  • Austin etal. (2021)J.Austin, A.Odena, M.Nye, M.Bosma, H.Michalewski, D.Dohan, E.Jiang, C.Cai, M.Terry, Q.Le, etal.Program synthesis with large language models.arXiv preprint arXiv:2108.07732, 2021.
  • Baek etal. (2023)J.Baek, A.F. Aji, and A.Saffari.Knowledge-augmented language model prompting for zero-shot knowledge graph question answering.arXiv preprint arXiv:2306.04136, 2023.
  • Baker etal. (2022)B.Baker, I.Akkaya, P.Zhokhov, J.Huizinga, J.Tang, A.Ecoffet, B.Houghton, R.Sampedro, and J.Clune.Video pretraining (vpt): Learning to act by watching unlabeled online videos.arXiv preprint arXiv:2206.11795, 2022.
  • Besta etal. (2023)M.Besta, N.Blach, A.Kubicek, R.Gerstenberger, L.Gianinazzi, J.Gajda, T.Lehmann, M.Podstawski, H.Niewiadomski, P.Nyczyk, etal.Graph of thoughts: Solving elaborate problems with large language models.arXiv preprint arXiv:2308.09687, 2023.
  • Brown etal. (2020)T.Brown, B.Mann, N.Ryder, M.Subbiah, J.D. Kaplan, P.Dhariwal, A.Neelakantan, P.Shyam, G.Sastry, A.Askell, etal.Language models are few-shot learners.Advances in neural information processing systems, 33:1877–1901, 2020.
  • Cai etal. (2023a)S.Cai, Z.Wang, X.Ma, A.Liu, and Y.Liang.Open-world multi-task control through goal-aware representation learning and adaptive horizon prediction.2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13734–13744, 2023a.
  • Cai etal. (2023b)S.Cai, B.Zhang, Z.Wang, X.Ma, A.Liu, and Y.Liang.Groot: Learning to follow instructions by watching gameplay videos.arXiv preprint arXiv:2310.08235, 2023b.
  • Chen etal. (2021)M.Chen, J.Tworek, H.Jun, Q.Yuan, H.P. d.O. Pinto, J.Kaplan, H.Edwards, Y.Burda, N.Joseph, G.Brockman, etal.Evaluating large language models trained on code.arXiv preprint arXiv:2107.03374, 2021.
  • Cobbe etal. (2021)K.Cobbe, V.Kosaraju, M.Bavarian, M.Chen, H.Jun, L.Kaiser, M.Plappert, J.Tworek, J.Hilton, R.Nakano, C.Hesse, and J.Schulman.Training verifiers to solve math word problems.arXiv preprint arXiv:2110.14168, 2021.
  • Creswell and Shanahan (2022)A.Creswell and M.Shanahan.Faithful reasoning using large language models.arXiv preprint arXiv:2208.14271, 2022.
  • Creswell etal. (2022)A.Creswell, M.Shanahan, and I.Higgins.Selection-inference: Exploiting large language models for interpretable logical reasoning.arXiv preprint arXiv:2205.09712, 2022.
  • Dhuliawala etal. (2023)S.Dhuliawala, M.Komeili, J.Xu, R.Raileanu, X.Li, A.Celikyilmaz, and J.Weston.Chain-of-verification reduces hallucination in large language models.arXiv preprint arXiv: 2309.11495, 2023.
  • Feng etal. (2023a)Z.Feng, X.Feng, D.Zhao, M.Yang, and B.Qin.Retrieval-generation synergy augmented large language models.ArXiv, abs/2310.05149, 2023a.
  • Feng etal. (2023b)Z.Feng, X.Feng, D.Zhao, M.Yang, and B.Qin.Retrieval-generation synergy augmented large language models.arXiv preprint arXiv:2310.05149, 2023b.
  • Gao etal. (2022)L.Gao, A.Madaan, S.Zhou, U.Alon, P.Liu, Y.Yang, J.Callan, and G.Neubig.Pal: Program-aided language models.arXiv preprint arXiv:2211.10435, 2022.
  • Gao etal. (2023)Y.Gao, Y.Xiong, X.Gao, K.Jia, J.Pan, Y.Bi, Y.Dai, J.Sun, and H.Wang.Retrieval-augmented generation for large language models: A survey.arXiv preprint arXiv:2312.10997, 2023.
  • Gu etal. (2018)J.Gu, Y.Wang, K.Cho, and V.O. Li.Search engine guided neural machine translation.In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
  • Guo etal. (2024)D.Guo, Q.Zhu, D.Yang, Z.Xie, K.Dong, W.Zhang, G.Chen, X.Bi, Y.Wu, Y.K. Li, F.Luo, Y.Xiong, and W.Liang.Deepseek-coder: When the large language model meets programming – the rise of code intelligence.arXiv preprint arXiv:2401.14196, 2024.
  • Herbrich etal. (2006)R.Herbrich, T.Minka, and T.Graepel.Trueskill™: a bayesian skill rating system.Advances in neural information processing systems, 19, 2006.
  • Holyoak and Morrison (2012)K.J. Holyoak and R.G. Morrison.The Oxford handbook of thinking and reasoning.Oxford University Press, 2012.
  • Huang etal. (2022)W.Huang, P.Abbeel, D.Pathak, and I.Mordatch.Language models as zero-shot planners: Extracting actionable knowledge for embodied agents.ICML, 2022.
  • Ke etal. (2022a)Z.Ke, H.Lin, Y.Shao, H.Xu, L.Shu, and B.Liu.Continual training of language models for few-shot learning.arXiv preprint arXiv:2210.05549, 2022a.
  • Ke etal. (2022b)Z.Ke, Y.Shao, H.Lin, H.Xu, L.Shu, and B.Liu.Adapting a language model while preserving its general knowledge.In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10177–10188, 2022b.
  • Ke etal. (2023)Z.Ke, Y.Shao, H.Lin, T.Konishi, G.Kim, and B.Liu.Continual pre-training of language models.In The Eleventh International Conference on Learning Representations, 2023.
  • Kojima etal. (2022)T.Kojima, S.S. Gu, M.Reid, Y.Matsuo, and Y.Iwasawa.Large language models are zero-shot reasoners.Advances in neural information processing systems, 35:22199–22213, 2022.
  • Lewis etal. (2020a)P.Lewis, E.Perez, A.Piktus, F.Petroni, V.Karpukhin, N.Goyal, H.Küttler, M.Lewis, W.-t. Yih, T.Rocktäschel, etal.Retrieval-augmented generation for knowledge-intensive nlp tasks.Advances in Neural Information Processing Systems, 33:9459–9474, 2020a.
  • Lewis etal. (2020b)P.Lewis, E.Perez, A.Piktus, F.Petroni, V.Karpukhin, N.Goyal, H.Küttler, M.Lewis, W.-t. Yih, T.Rocktäschel, etal.Retrieval-augmented generation for knowledge-intensive nlp tasks.Advances in Neural Information Processing Systems, 33:9459–9474, 2020b.
  • Li etal. (2023a)C.Li, J.Liang, A.Zeng, X.Chen, K.Hausman, D.Sadigh, S.Levine, L.Fei-Fei, F.Xia, and B.Ichter.Chain of code: Reasoning with a language model-augmented code emulator, 2023a.
  • Li etal. (2023b)X.Li, R.Zhao, Y.K. Chia, B.Ding, S.Joty, S.Poria, and L.Bing.Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources.In The Twelfth International Conference on Learning Representations, 2023b.
  • Lifshitz etal. (2023)S.Lifshitz, K.Paster, H.Chan, J.Ba, and S.McIlraith.Steve-1: A generative model for text-to-behavior in minecraft.arXiv preprint arXiv:2306.00937, 2023.
  • Lin etal. (2023)H.Lin, Z.Wang, J.Ma, and Y.Liang.Mcu: A task-centric framework for open-ended agent evaluation in minecraft.arXiv preprint arXiv:2310.08367, 2023.
  • Lin etal. (2024)H.Lin, B.Huang, H.Ye, Q.Chen, Z.Wang, S.Li, J.Ma, X.Wan, J.Zou, and Y.Liang.Selecting large language model to fine-tune via rectified scaling law.arXiv preprint arXiv:2402.02314, 2024.
  • Liu etal. (2023a)C.Liu, X.Li, L.Shang, X.Jiang, Q.Liu, E.Y. Lam, and N.Wong.Gradually excavating external knowledge for implicit complex question answering.In Conference on Empirical Methods in Natural Language Processing, 2023a.
  • Liu etal. (2023b)J.Liu, C.S. Xia, Y.Wang, and L.Zhang.Is your code generated by chatGPT really correct? rigorous evaluation of large language models for code generation.In Thirty-seventh Conference on Neural Information Processing Systems, 2023b.
  • Lu etal. (2022)S.Lu, N.Duan, H.Han, D.Guo, S.-w. Hwang, and A.Svyatkovskiy.Reacc: A retrieval-augmented code completion framework.In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6227–6240, 2022.
  • Nashid etal. (2023)N.Nashid, M.Sintaha, and A.Mesbah.Retrieval-based prompt selection for code-related few-shot learning.In Proceedings of the 45th International Conference on Software Engineering (ICSE’23), 2023.
  • Nye etal. (2021)M.Nye, A.J. Andreassen, G.Gur-Ari, H.Michalewski, J.Austin, D.Bieber, D.Dohan, A.Lewkowycz, M.Bosma, D.Luan, etal.Show your work: Scratchpads for intermediate computation with language models.arXiv preprint arXiv:2112.00114, 2021.
  • OpenAI (2023)OpenAI.Gpt-4 technical report, 2023.
  • Ouyang etal. (2022)L.Ouyang, J.Wu, X.Jiang, D.Almeida, C.L. Wainwright, P.Mishkin, C.Zhang, S.Agarwal, K.Slama, A.Ray, etal.Training language models to follow instructions with human feedback.arXiv preprint arXiv:2203.02155, 2022.
  • Rawte etal. (2023)V.Rawte, A.Sheth, and A.Das.A survey of hallucination in large foundation models.arXiv preprint arXiv:2309.05922, 2023.
  • Reimers and Gurevych (2019)N.Reimers and I.Gurevych.Sentence-bert: Sentence embeddings using siamese bert-networks.arXiv preprint arXiv:1908.10084, 2019.
  • Ribeiro etal. (2022)D.Ribeiro, S.Wang, X.Ma, R.Dong, X.Wei, H.Zhu, X.Chen, Z.Huang, P.Xu, A.Arnold, etal.Entailment tree explanations via iterative retrieval-generation reasoner.arXiv preprint arXiv:2205.09224, 2022.
  • Rozière etal. (2023)B.Rozière, J.Gehring, F.Gloeckle, S.Sootla, I.Gat, X.Tan, Y.Adi, J.Liu, T.Remez, J.Rapin, A.Kozhevnikov, I.Evtimov, J.Bitton, M.P. Bhatt, C.C. Ferrer, A.Grattafiori, W.Xiong, A.D’efossez, J.Copet, F.Azhar, H.Touvron, L.Martin, N.Usunier, T.Scialom, and G.Synnaeve.Code llama: Open foundation models for code.ArXiv, abs/2308.12950, 2023.
  • Shinn etal. (2023)N.Shinn, B.Labash, and A.Gopinath.Reflexion: an autonomous agent with dynamic memory and self-reflection.arXiv preprint arXiv:2303.11366, 2023.
  • Siriwardhana etal. (2023)S.Siriwardhana, R.Weerasekera, E.Wen, T.Kaluarachchi, R.Rana, and S.Nanayakkara.Improving the domain adaptation of retrieval augmented generation (rag) models for open domain question answering.Transactions of the Association for Computational Linguistics, 11:1–17, 2023.
  • Team (2022) G. P. Team. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
  • Touvron et al. (2023) H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  • Trivedi et al. (2022a) H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509, 2022a.
  • Trivedi et al. (2022b) H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509, 2022b.
  • Wang and Zhou (2024) X. Wang and D. Zhou. Chain-of-thought reasoning without prompting. arXiv preprint arXiv:2402.10200, 2024.
  • Wang et al. (2022) X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
  • Wang et al. (2023a) X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, 2023a.
  • Wang et al. (2023b) Z. Wang, S. Cai, A. Liu, Y. Jin, J. Hou, B. Zhang, H. Lin, Z. He, Z. Zheng, Y. Yang, X. Ma, and Y. Liang. JARVIS-1: Open-world multi-task agents with memory-augmented multimodal language models. arXiv preprint arXiv:2311.05997, 2023b.
  • Wang et al. (2023c) Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023c.
  • Wei et al. (2022) J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022.
  • Wen et al. (2023) Z. Wen, Z. Tian, W. Wu, Y. Yang, Y. Shi, Z. Huang, and D. Li. GROVE: A retrieval-augmented complex story generation framework with a forest of evidence. arXiv preprint arXiv:2310.05388, 2023.
  • Yao et al. (2022) S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
  • Yao et al. (2023) S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
  • Yuan et al. (2023) H. Yuan, C. Zhang, H. Wang, F. Xie, P. Cai, H. Dong, and Z. Lu. Plan4MC: Skill reinforcement learning and planning for open-world Minecraft tasks. arXiv preprint arXiv:2303.16563, 2023.
  • Yuan et al. (2024) H. Yuan, Z. Mu, F. Xie, and Z. Lu. Pre-training goal-based models for sample-efficient reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024.
  • Zelikman et al. (2022) E. Zelikman, Y. Wu, J. Mu, and N. Goodman. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
  • Zhang et al. (2023) C. Zhang, K. Yang, S. Hu, Z. Wang, G. Li, Y. Sun, C. Zhang, Z. Zhang, A. Liu, S.-C. Zhu, et al. ProAgent: Building proactive cooperative AI with large language models. arXiv preprint arXiv:2308.11339, 2023.
  • Zhao et al. (2023) R. Zhao, H. Chen, W. Wang, F. Jiao, X. L. Do, C. Qin, B. Ding, X. Guo, M. Li, X. Li, and S. R. Joty. Retrieving multimodal information for augmented generation: A survey. arXiv preprint arXiv:2303.10868, 2023.
  • Zhou et al. (2022a) D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022a.
  • Zhou et al. (2023) D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. V. Le, and E. H. Chi. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, 2023.
  • Zhou et al. (2022b) S. Zhou, U. Alon, F. F. Xu, Z. Jiang, and G. Neubig. DocPrompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2022b.

Appendix A Task Details

A.1 Code Generation

Benchmarks

We select HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023b), MBPP (Austin et al., 2021), and MBPP+ (Liu et al., 2023b) as the code generation evaluation benchmarks. These benchmarks are commonly used to test the performance of code generation models and are briefly introduced below:

  • HumanEval consists of 164 Python programming problems, each with a function signature, docstring, body, and multiple unit tests (Chen et al., 2021).

  • HumanEval+ includes the same programming problems as HumanEval, but adds 80× more unit tests for each of the 164 problems (Liu et al., 2023b).

  • MBPP is a collection of approximately 1,000 Python programming problems intended to be solvable by beginner programmers. Each problem includes an English task description, a code solution, and three automated test cases. We assess the sampled test set from index 11 to 175 (Austin et al., 2021).

  • MBPP+ consists of 399 tasks (Liu et al., 2023b), a subset of the original MBPP dataset, with extra unit tests for each of the 399 problems (35× more than the original MBPP). We use the first 164 questions as our test set.

These benchmarks encompass a wide range of programming problems, from simple function implementations to more complex algorithmic challenges, providing a robust testbed for assessing the generative capabilities of various models.

Metrics

We adopt the pass@k metric to evaluate the efficacy of the various code generation algorithms, following the methodology proposed by Chen et al. (2021) and extended by Liu et al. (2023b). This metric quantifies the rate at which generated code snippets successfully execute and pass all test cases, where k represents the number of attempts or samples generated by the model for each problem. This allows us to rigorously assess the precision and reliability of code generation models in producing functionally correct code across a diverse set of programming challenges.
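
For reference, the unbiased pass@k estimator of Chen et al. (2021) can be computed as in the following sketch (the function is our own minimal implementation, not part of any benchmark tooling):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021).

    n: total number of samples generated for a problem
    c: number of samples that pass all unit tests
    k: number of attempts budgeted per problem
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a running product for numerical stability
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples per problem, 3 of them correct, report pass@5
print(pass_at_k(n=10, c=3, k=5))  # ≈ 0.917
```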

Baselines

To establish a comprehensive and equitable comparison, we include a suite of baseline methods and diverse code generation models. Our baselines include the original code generation language models, referred to as DIRECT, and the Retrieval-Augmented Generation (RAG) methodology with n retrieved examples, instantiated in both single-shot (1 shot) and multi-shot (5 shots) configurations (Lewis et al., 2020b). We also examine the zero-shot CoT (CoT) approach (Kojima et al., 2022), which elicits step-by-step reasoning without any demonstrations. To ensure a fair comparison, none of the methods uses examples from the benchmark as demonstrations for in-context learning.

The diversity of our evaluation is further enriched by testing across language models of differing capacities, including CodeLlama-7b (Rozière et al., 2023), ChatGPT (gpt-3.5-turbo) (Ouyang et al., 2022), and the more advanced GPT-4 (gpt-4) (OpenAI, 2023). Since models such as gpt-3.5-turbo and gpt-4 may produce code wrapped in markdown that is not immediately executable, we apply post-processing steps to convert the raw model outputs into a form that can be executed within a sandbox environment. This normalization ensures that all models are evaluated under uniform execution conditions, facilitating a fair and direct comparison of their code generation capabilities.
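
The normalization step can be as simple as stripping markdown fences before execution; the helper below is an illustrative sketch of this post-processing, not the exact script used in our experiments:

```python
import re

FENCE = "`" * 3  # the markdown code-fence delimiter

def extract_code(response: str) -> str:
    """Return executable Python code from a raw model response.

    Chat models often wrap code in markdown fences; if no fence is found,
    the response is assumed to already be plain code.
    """
    pattern = FENCE + r"(?:python)?\n(.*?)" + FENCE
    fenced = re.findall(pattern, response, flags=re.DOTALL)
    if fenced:
        # Keep the longest fenced block, which is usually the full solution.
        return max(fenced, key=len).strip()
    return response.strip()
```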

RAG Settings

RAT leverages Retrieval-Augmented Generation, which enhances the performance of language models by integrating external knowledge sources. Specifically, we employ the codeparrot/github-jupyter dataset as our search vector library. This dataset is a compilation of 452k markdown–code pairs extracted from Jupyter notebooks on GitHub (collected via BigQuery), representing a rich repository of programming knowledge and examples. We use OpenAI's text-embedding-ada-002 API service for all embedding calculations across the different methods and base models.
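
A minimal sketch of this retrieval setup is shown below; it assumes the corpus has already been embedded, uses version 1 of the OpenAI Python SDK, and the helper names are ours:

```python
import numpy as np
from openai import OpenAI  # assumes the v1 OpenAI Python SDK

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Embed a piece of text with text-embedding-ada-002."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def retrieve(query: str, doc_embeddings: np.ndarray, docs: list[str], top_k: int = 5) -> list[str]:
    """Return the top_k corpus documents ranked by cosine similarity to the query."""
    q = embed(query)
    sims = doc_embeddings @ q / (np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:top_k]]
```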

A.2 Mathematical Reasoning

Benchmarks

Our evaluation of mathematical reasoning uses two benchmarks: the GSM8K dataset, which comprises over 8,000 multi-step mathematical problems (Cobbe et al., 2021), and GSM-HARD, an adaptation of GSM8K in which the numbers in the questions are replaced with larger values to increase problem complexity (Gao et al., 2022). We follow the PAL methodology: the LLM parses the natural-language problem, generates an intermediate programmatic solution, and executes it with a Python interpreter. The test set for each benchmark consists of the samples from index 1 to 200. Unlike the original PAL setup, our approach does not use any examples for in-context learning.
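
Concretely, the PAL-style evaluation loop looks roughly like the sketch below, where `llm` stands for any callable that returns the model's generated program (the prompt wording and helper names are illustrative):

```python
def evaluate_pal(llm, question: str, gold_answer: float) -> bool:
    """Ask the model for a programmatic solution, run it, and check the answer."""
    prompt = (
        "Write a Python function solution() that returns the numeric answer "
        f"to the following problem:\n{question}"
    )
    code = llm(prompt)        # the generated program, post-processed to plain code
    namespace: dict = {}
    try:
        exec(code, namespace)                 # define solution() in a fresh namespace
        predicted = namespace["solution"]()
    except Exception:
        return False                          # non-executable programs count as failures
    return abs(float(predicted) - float(gold_answer)) < 1e-6
```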

Metrics and Baselines

Accuracy serves as our principal metric, in line with the established metric for the GSM8K benchmark. Each question undergoes three execution attempts, and the average score is recorded as the final result. The baselines, including DIRECT, CoT, RAG (1 shot), and RAG (5 shots), are consistent with those outlined for code generation, as are the RAG settings.

A.3 Embodied Planning

We further conduct experiments on embodied planning benchmarks in the open-ended environment Minecraft (Lin et al., 2023).

Benchmarks

The complexity and vast item interconnectivity of open-world Minecraft present an ideal testbed for evaluating an LLM's capability to generate long-horizon plans (Yuan et al., 2023; Wang et al., 2023c, b). With thousands of items and intricate relationships between them, obtaining a specific item in survival mode from scratch may involve dozens of intermediate items and their quantitative relationships, such as crafting 1 crafting table from 4 planks. This setting rigorously tests the planning abilities of LLMs rather than low-level control policies (Cai et al., 2023b; Baker et al., 2022; Cai et al., 2023a; Lifshitz et al., 2023; Yuan et al., 2024). Moreover, Wang et al. (2023b) have identified instances of hallucination about Minecraft knowledge in OpenAI's ChatGPT and a general scarcity of Minecraft-related knowledge in open-source language models, making this task a suitable benchmark for assessing the RAG algorithm's effectiveness.

The planning prompts are aligned with those used in DEPS (Wang et al., 2023c), structured as Python templates and evaluated using MC-TextWorld as detailed by Lin et al. (2023). A set of 100 tasks was randomly selected for the test set, ranging from simple objectives like obtaining a crafting table to more complex goals such as crafting an iron helmet, and even more challenging ones such as making an enchanting table. The task instruction is formulated as:

  • Give you nothing in the inventory, generate a step-by-step plan for the task of obtaining a {placeholder:acacia_boat} in Minecraft survival mode, and describe the object Minecraft item and its number at every step. For every step, start with ’STEP’ as start.

  • Give you nothing in the inventory, generate a step-by-step plan for the task of obtaining a {placeholder:diamond_pickaxe} in Minecraft survival mode, and describe the object Minecraft item and its number at every step. For every step, start with ’STEP’ as start.

There are over 100 tasks involving different Minecraft items.

RAG Settings

For the retrieval component of the RAG algorithm, we use the Minecraft Wiki (https://minecraft.wiki/) and DigMinecraft (https://www.digminecraft.com/) websites as the information sources accessible to the LLMs. Data from these websites was cleaned and formatted into markdown text, then segmented into chunks of at most 2000 tokens each, with embedding calculations performed using OpenAI's text-embedding-ada-002 API service.
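
The chunking step can be sketched as follows, using tiktoken to count tokens; this is an illustrative helper rather than the exact preprocessing script:

```python
import tiktoken

def chunk_markdown(text: str, max_tokens: int = 2000) -> list[str]:
    """Split markdown text into chunks of at most max_tokens tokens,
    breaking on paragraph boundaries where possible."""
    enc = tiktoken.encoding_for_model("text-embedding-ada-002")
    chunks, current = [], []
    for paragraph in text.split("\n\n"):
        candidate = "\n\n".join(current + [paragraph])
        if current and len(enc.encode(candidate)) > max_tokens:
            # Close the current chunk and start a new one with this paragraph.
            chunks.append("\n\n".join(current))
            current = [paragraph]
        else:
            current.append(paragraph)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```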

Evaluation Metrics

Based on the methodology of Huang et al. (2022), our evaluation of open-ended, long-horizon planning in Minecraft focuses on both executability and plausibility. Executability examines whether a plan can be carried out, including the accuracy of each step's preconditions and effects, and is calculated automatically using MC-TextWorld (Lin et al., 2023). However, executability only evaluates whether an objective-level plan can be executed, without considering the specific details involved in executing individual objectives. For instance, crafting a wooden pickaxe requires placing a crafting table and arranging three planks and two sticks in a particular pattern, which are important details for human execution but are not assessed by MC-TextWorld. We therefore complement the evaluation with human ratings to assess the plausibility of plans.
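
For intuition, the objective-level executability check can be approximated by simulating the inventory step by step, as in the simplified sketch below (this mimics the idea behind MC-TextWorld, not its actual implementation; the step format is our own):

```python
def is_executable(plan: list[dict], initial_inventory: dict | None = None) -> bool:
    """Simplified objective-level check: each step is a dict such as
    {"requires": {"oak_planks": 4}, "produces": {"crafting_table": 1}}.
    Tools that are not consumed can simply be re-listed under "produces"."""
    inventory = dict(initial_inventory or {})
    for step in plan:
        for item, count in step.get("requires", {}).items():
            if inventory.get(item, 0) < count:
                return False                  # a precondition is violated
            inventory[item] -= count
        for item, count in step.get("produces", {}).items():
            inventory[item] = inventory.get(item, 0) + count
    return True
```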

A.4 Creative Writing

To further understand the potential of Retrieval-Augmented Generation (RAG) models in enhancing the creativity and relevance of generated content, we extend our investigation to open-ended text generation tasks within the realm of creative writing.

Benchmarks

The versatility of RAT was tested through a series of creative writing tasks, each chosen to highlight different aspects of open-ended text generation. These tasks include:

  • Write a survey paper to summarize the placeholder:Retrieval-augmented Generation methods for Large Language Models.

  • Describe placeholder:Jin-Yong’s life.

  • Summarize the placeholder:American Civil War according to the timeline.

For each task, three variants for placeholder were created to ensure a comprehensive evaluation of the model’s performance across different contexts and requirements.

RAG Settings

Differing from the previous tasks, creative writing is an open-ended generation task that demands a broader scope of information retrieval to aid content generation. To accommodate this, Google was used as the search engine, with the top-k web pages converted into markdown text to assist the LLM in generating outputs. This approach allows the LLM to leverage a wide array of information sources.
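
A rough sketch of this web-retrieval step is given below; `search_fn` is a placeholder for whatever search backend is used, and markdownify is only one possible HTML-to-markdown converter:

```python
import requests
from markdownify import markdownify

def retrieve_web_context(query: str, search_fn, top_k: int = 3) -> list[str]:
    """Fetch the top_k result pages for `query` and convert them to markdown.

    `search_fn` stands in for the search backend (e.g., a SERP API wrapper);
    it must take a query string and return a list of result URLs.
    """
    pages = []
    for url in search_fn(query)[:top_k]:
        html = requests.get(url, timeout=10).text
        pages.append(markdownify(html))   # strip the HTML down to markdown text
    return pages
```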

Baselines and Evaluations

To benchmark RAT’s performance, we compared it against the DIRECT, RAG (1 shot), and RAG (5 shots) methods, all based on the gpt-3.5-turbo model. The evaluation was conducted by human experts, employing the TrueSkill rating system (Herbrich et al., 2006) to calculate scores for each method. This framework enables a comprehensive assessment of each method's creative output quality, accuracy, relevance, and innovativeness.

Appendix B Prompt Details

Our prompts consist of three parts: a prompt for generating the initial answer, a prompt for generating search queries, and a prompt for revising the answer according to the retrieved context.

Query generation is omitted in code generation tasks: instead, we use the generated code draft itself as the query and compute its embedding with the OpenAI embedding service. For embodied planning and creative writing tasks, we generate an additional explicit query.
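
The overall per-step revision loop can therefore be sketched as follows (an illustrative outline with our own helper names, not the verbatim prompts or code used in the experiments):

```python
def rat_revise_step(llm, retriever, task: str, past_steps: list[str],
                    current_step: str, task_type: str) -> str:
    """Revise one thought step with retrieved context (illustrative outline)."""
    if task_type == "code":
        # Code generation: the draft itself is used as the retrieval query.
        query = current_step
    else:
        # Embodied planning / creative writing: ask the model for an explicit query.
        query = llm(f"Write a search query to verify this step of the task "
                    f"'{task}':\n{current_step}")
    context = retriever(query)
    return llm(
        f"Task: {task}\nPrevious steps: {' '.join(past_steps)}\n"
        f"Retrieved context: {context}\n"
        f"Revise the following step so that it is consistent with the retrieved context:\n"
        f"{current_step}"
    )
```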

Appendix C TrueSkill Evaluation Framework

Some tasks in embodied planning and creative writing rely on human labeling. Human labelers have four choices: “A is better”, “B is better”, “Tie”, or “Both are bad”; both “Tie” and “Both are bad” are counted as a tie. For each task group, more than 10 professional annotators provide labels. We use the Python “trueskill” package to calculate win rates and scores, with the default score for every method set to 25. To facilitate understanding and selection, annotators are also shown the task prompts when entering the labeling system.
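
A minimal sketch of the rating update with the trueskill package is shown below; the method names and match bookkeeping are illustrative:

```python
import trueskill

env = trueskill.TrueSkill(mu=25.0)          # every method starts at the default score of 25
ratings = {m: env.create_rating() for m in ["DIRECT", "RAG-1", "RAG-5", "RAT"]}

def record_comparison(winner: str, loser: str, tie: bool = False) -> None:
    """Update ratings after one human judgment; 'Tie' and 'Both are bad' map to tie=True."""
    ratings[winner], ratings[loser] = trueskill.rate_1vs1(
        ratings[winner], ratings[loser], drawn=tie, env=env
    )

record_comparison("RAT", "DIRECT")              # "A is better"
record_comparison("RAG-5", "RAG-1", tie=True)   # "Tie" / "Both are bad"
print({m: round(r.mu, 2) for m, r in ratings.items()})
```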

[Figure 3]

Appendix D Discussion on Benchmark Contamination

To avoid the code library containing solutions to the exact problems being evaluated in the code generation benchmarks, we adopted a rigorous pre-processing methodology as described by Guo et al. (2024). This process removes any direct matches or overly similar code snippets from our search vector library, ensuring that the evaluation remains fair and uncontaminated by pre-existing solutions. This examination underscores the importance of maintaining the integrity of the evaluation process while utilizing external knowledge sources to augment the capabilities of language models in code generation tasks.

Method      HumanEval pass@1   HumanEval pass@5   HumanEval+ pass@1   HumanEval+ pass@5
DIRECT      40.85%             53.65%             37.43%              48.78%
FINETUNE    29.02%             40.24%             26.34%              35.98%
RAT         45.73%             59.75%             43.29%              53.66%

To further explore potential benchmark contamination, we also conducted additional finetuning of CodeLLaMA-7B-Python on the retrieval code corpus; the results are shown in the table above (FINETUNE).
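
The decontamination pass described above can be approximated by a near-duplicate filter of the sort sketched below (our own illustrative criterion, not necessarily the exact procedure of Guo et al. (2024)):

```python
def ngram_set(code: str, n: int = 10) -> set[tuple[str, ...]]:
    """Collect the n-gram shingles of a whitespace-tokenized code snippet."""
    tokens = code.split()
    return {tuple(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 1))}

def is_contaminated(corpus_snippet: str, benchmark_solutions: list[str],
                    threshold: float = 0.5) -> bool:
    """Flag a corpus snippet whose 10-gram overlap with any benchmark solution
    exceeds the threshold; flagged snippets are removed from the search library."""
    snippet_grams = ngram_set(corpus_snippet)
    if not snippet_grams:
        return False
    for solution in benchmark_solutions:
        overlap = len(snippet_grams & ngram_set(solution)) / len(snippet_grams)
        if overlap >= threshold:
            return True
    return False
```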

Appendix E More Results

E.1 Embodied Planning

Embodied planning involves multiple steps, each of which relies on specific world knowledge and causal knowledge (i.e., preceding steps are usually preconditions for subsequent steps), including recipes for items in Minecraft, tools for performing related actions, and quantity restrictions. Therefore, although the plan generated by ChatGPT may appear complete and correct, there are often errors within the steps that can affect the proper execution of the plan.

We mark the errors found in the generated plan in red.

Although the Zero-shot CoT has generated a step-by-step plan overall, there are many factual errors within it. These include recipe errors in STEP 2, where the crafting table requires planks instead of wood; missing raw materials in STEP 4, as the wooden pickaxe needs 2 sticks but lacks the relevant step in the plan; absence of instructions to use a stone pickaxe to mine iron ore in STEP 8; and an incorrect recipe for golden apple in STEP 12, which should include gold ingots and an apple rather than a water bucket.

There are still errors in the plan generated by RAT, such as the missing tool before "STEP 8: smelt iron ore into iron ingots": a step such as "Mine 8 cobblestone to craft 1 furnace" should precede it. However, compared with the errors made by ChatGPT, the error rate in the plan is significantly reduced.

We also list the links of the retrieved pages involved in the different steps. The text sources retrieved at each step generated by RAT are usually highly related to the item synthesized in that step. Traditional RAG retrieves with the instruction alone and can therefore only find pages for the final step, whereas RAT retrieves links related to all intermediate items, which greatly improves accuracy and plausibility.

Step  Item                Recipe                     Link
1     4x Oak Log          -                          https://minecraft.fandom.com/wiki/Log
2     16x Oak Planks      4x Oak Log                 https://www.digminecraft.com/basic_recipes/make_oak_wood_plank.php
3     4x Stick            2x Oak Planks              https://www.digminecraft.com/basic_recipes/make_stick.php
4     1x Wooden Pickaxe   3x Oak Planks, 2x Stick    https://www.digminecraft.com/tool_recipes/make_wooden_pickaxe.php
5     3x Cobblestone      Wooden Pickaxe             https://minecraft.fandom.com/wiki/Cobblestone
6     1x Stone Pickaxe    3x Cobblestone, 2x Stick   https://www.digminecraft.com/tool_recipes/make_stone_pickaxe.php
7     3x Iron Ore         Stone Pickaxe              https://minecraft.fandom.com/wiki/Iron_Ore
8     3x Iron Ingot       3x Iron Ore                https://www.digminecraft.com/basic_recipes/make_iron_ingot.php
9     1x Iron Pickaxe     3x Iron Ingot, 2x Stick    https://www.digminecraft.com/tool_recipes/make_iron_pickaxe.php
10    8x Gold Ore         Iron Pickaxe               https://minecraft.fandom.com/wiki/Gold_Ore
11    8x Gold Ingot       8x Gold Ore                https://www.digminecraft.com/basic_recipes/make_gold_ingot.php
12    1x Apple            -                          https://minecraft.fandom.com/wiki/Apple
13    1x Golden Apple     8x Gold Ingot, 1x Apple    https://www.digminecraft.com/food_recipes/make_golden_apple.php

E.2 Creative Writing
