[LLM] Prompt Engineering

Prompt Engineering Guide

Few-shot prompts

Few-shot prompting provides exemplars in the prompt to steer the model towards better performance.
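A minimal sketch of assembling such a prompt (the sentiment task, the exemplars, and the `complete` helper are illustrative assumptions, not from the guide):

```python
# Build a few-shot prompt: labelled exemplars followed by the new input.
exemplars = [
    ("This movie was a waste of time.", "negative"),
    ("Absolutely loved the soundtrack.", "positive"),
    ("The plot was predictable but fun.", "positive"),
]

def build_few_shot_prompt(exemplars, new_input):
    blocks = [f"Text: {text}\nSentiment: {label}\n" for text, label in exemplars]
    blocks.append(f"Text: {new_input}\nSentiment:")
    return "\n".join(blocks)

prompt = build_few_shot_prompt(exemplars, "The acting felt wooden and flat.")
# answer = complete(prompt)  # placeholder for whatever LLM client is in use
```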

Chain-of-Thought (CoT) Prompting

  • Instructing the model to reason about the task when responding
  • Can be combined with few-shot prompting to get better results (see the sketch after the example below)
  • Useful for tasks that require reasoning
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
Answer: Adding all the odd numbers (15, 5, 13, 7, 1) gives 41. The answer is false.
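To combine CoT with few-shot prompting, the worked exemplar above is placed before a new, unanswered question in the same format; a minimal sketch (the new number list and the `complete` helper are illustrative assumptions):

```python
# Few-shot CoT: a worked exemplar with explicit reasoning, then a new question
# in the same format so the model imitates the step-by-step answer style.
cot_exemplar = (
    "The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.\n"
    "Answer: Adding all the odd numbers (15, 5, 13, 7, 1) gives 41. The answer is false.\n"
)
new_question = (
    "The odd numbers in this group add up to an even number: 3, 6, 7, 10, 12.\n"
    "Answer:"
)
prompt = cot_exemplar + "\n" + new_question
# answer = complete(prompt)  # placeholder for the actual LLM call
```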

Zero-shot CoT

Zero-shot CoT appends “Let’s think step by step” to the original prompt, so the model reasons through the task without any exemplars.
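A minimal sketch (the question and the `complete` helper are only illustrative):

```python
# Zero-shot CoT: no exemplars, just append the trigger phrase to the question.
question = (
    "I bought 10 apples, gave 2 to the neighbor and 2 to the repairman, "
    "then bought 5 more and ate 1. How many apples do I have?"
)
prompt = question + "\nLet's think step by step."
# answer = complete(prompt)  # placeholder for the actual LLM call
```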

Generated Knowledge Prompting

  • Using additional knowledge provided as part of the context to improve results on complex tasks such as commonsense reasoning
  • The knowledge is first generated by a model and then included in the prompt to make a prediction: each generated knowledge sample is prepended to the question to form a knowledge-augmented prompt, and answer proposals are collected from these prompts (see the sketch below)
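A rough sketch of the two-stage pipeline, assuming a generic `complete(prompt)` placeholder for the underlying model (the prompt wording and function names are assumptions):

```python
# Generated knowledge prompting in two stages:
#   1. ask the model to generate knowledge statements about the question,
#   2. prepend each knowledge sample to the question and collect answer proposals.

def complete(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    raise NotImplementedError

def generated_knowledge_answers(question: str, num_samples: int = 3) -> list[str]:
    # Stage 1: sample several independent pieces of knowledge.
    knowledge_samples = [
        complete(
            "Generate a short fact relevant to the question.\n"
            f"Question: {question}\nKnowledge:"
        )
        for _ in range(num_samples)
    ]
    # Stage 2: build knowledge-augmented prompts and collect answer proposals.
    return [
        complete(f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:")
        for knowledge in knowledge_samples
    ]
```

The final answer is then selected from the returned proposals, typically the most confident or most frequent one.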

Program-aided Language Model (PAL)

Program-aided language models (PAL) use an LLM to read problems and generate programs as the intermediate reasoning steps.
PAL offloads the solution step to a runtime such as a Python interpreter.
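A minimal sketch of this idea, assuming the model returns plain Python code for a hypothetical prompt format (the `complete` helper is a placeholder, not a real API):

```python
# PAL: the model writes a small program as its reasoning; Python runs it.

def complete(prompt: str) -> str:
    """Placeholder for an actual LLM call that returns Python code as text."""
    raise NotImplementedError

def pal_answer(question: str):
    prompt = (
        "Write Python code that computes the answer to the question "
        "and stores it in a variable named `answer`.\n"
        f"Question: {question}\nCode:"
    )
    code = complete(prompt)   # e.g. "answer = (16 - 3) * 2"
    namespace = {}
    exec(code, namespace)     # offload the solution step to the interpreter
    return namespace["answer"]
```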

ReAct

  • ReAct is a framework where LLMs are used to generate both reasoning traces and task-specific actions in an interleaved manner
    • Generating reasoning traces allows the model to induce, track, and update action plans, and to handle exceptions
    • The action step allows the model to interface with and gather information from external sources such as knowledge bases or environments
  • ReAct allows LLMs to interact with external tools to retrieve additional information, leading to more reliable and factual responses (see the sketch below)
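A rough sketch of the interleaved loop, assuming a hypothetical `search` tool and a `complete` placeholder for the model (the Thought/Action/Observation format and the crude parsing are illustrative):

```python
# ReAct loop: the model interleaves Thought / Action steps; each Action is
# executed against an external tool and its Observation is fed back in.

def complete(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    raise NotImplementedError

def search(query: str) -> str:
    """Placeholder for an external tool, e.g. a knowledge-base lookup."""
    raise NotImplementedError

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # The model produces the next Thought and Action given the transcript so far.
        step = complete(transcript + "Thought:")
        transcript += "Thought:" + step + "\n"
        if "Action: finish[" in step:
            # The final answer is embedded in the finish action (crude parsing).
            return step.split("Action: finish[", 1)[1].split("]", 1)[0]
        if "Action: search[" in step:
            query = step.split("Action: search[", 1)[1].split("]", 1)[0]
            observation = search(query)  # gather information from an external source
            transcript += f"Observation: {observation}\n"
    return transcript  # no final answer within the step budget
```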