Autonomous LLM Agent Prompt Libraries

Hamze Ghalebi · Development · 9 min read · Apr 22, 2025


Auto-GPT (Significant-Gravitas/Auto-GPT)

  • GitHub: Significant-Gravitas/Auto-GPT

  • Description: Auto-GPT is one of the earliest fully autonomous agent experiments. The user gives the agent a name, a role, and a list of goals, and Auto-GPT assembles a system prompt that combines those goals with fixed sections for Constraints (e.g. memory and word limits), Commands (the tools it may invoke, such as web search, file operations, and code execution), Resources, and Performance Evaluation, ending with an instruction to respond only in a fixed JSON format (Behind Auto-GPT: A Contact-based Prompt Engineering Practice | by Andy Yang | Medium). That JSON carries the agent's thoughts, reasoning, plan, self-criticism, and the next command to run; the main loop parses it, executes the command, feeds the result back into the prompt, and repeats until the goals are met.
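To make that response contract concrete, here is a small sketch of the JSON shape that classic (v0.x) Auto-GPT prompts the model to emit, and how the loop extracts the next command. Field names follow the classic format and have changed across versions, so treat this as illustrative:

    import json

    # Example of the JSON shape Auto-GPT's classic system prompt requests.
    raw = """{
      "thoughts": {
        "text": "I should search for recent papers on prompt engineering.",
        "reasoning": "The goal asks for an up-to-date literature summary.",
        "plan": "- search the web\\n- summarize findings\\n- write to a file",
        "criticism": "I should avoid redundant searches.",
        "speak": "Searching the web for recent papers."
      },
      "command": {"name": "google", "args": {"query": "prompt engineering papers"}}
    }"""

    reply = json.loads(raw)
    command = reply["command"]  # the loop dispatches this to the matching tool
    print(command["name"], command["args"])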

LangChain Agents (LangChain Framework)

  • GitHub: langchain-ai/langchain (see langchain/agents module)

  • Description: LangChain provides a framework for building LLM-powered agents that can use tools. It includes pre-defined agent classes (such as ReAct agents and conversational agents) that ship with prompt templates instructing the LLM how to format its reasoning and tool usage (How do I customize the prompt for the zero shot agent ? · Issue #4044 · langchain-ai/langchain · GitHub). For example, the Zero-Shot ReAct agent uses the ReAct (Reason + Act) prompting strategy to let the model interleave reasoning steps and tool calls. LangChain's agents are highly configurable: developers can use the default prompts or supply custom ones, and the library formats the final prompt by injecting the available tool names/descriptions and the conversation or scratchpad of previous steps.

  • Prompt Template: A typical LangChain agent prompt consists of a prefix explaining the task, a list of tools, instructions on the format, and a suffix with the user’s question. For the standard ReAct agent (a.k.a. “zero-shot react” in LangChain), the built-in prompt is roughly:

    **System (prompt to agent):**
    Answer the following question as best you can. You have access to the following tools:
    {tool_list with name and description of each tool}

    Use the following format:
    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [{tool_names}]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation sequence can repeat N times) ...
    Thought: I now know the final answer
    Final Answer: the final answer to the original question

    Begin!

    Question: {input}
    Thought: {agent_scratchpad}
    
    

    In this template, LangChain replaces the placeholders with the actual tool names and the running “scratchpad” of the agent (its prior thoughts and observations) (How do I customize the prompt for the zero shot agent ? · Issue #4044 · langchain-ai/langchain · GitHub). The LLM sees a system message constructed like this and is expected to follow the Thought → Action → Observation loop. On each iteration, the LLM’s output is parsed: if it produces an Action, the LangChain executor calls the indicated tool (e.g. search, calculator, API call) and feeds the result back as an Observation, appending it to the prompt for the next iteration. When the LLM finally outputs a Final Answer, that is taken as the agent’s answer. This prompt chaining lets LangChain agents perform tasks like factual Q&A with web search, math calculations, and code execution, all guided by the structured prompt format. Developers can customize these prompts (e.g. to change the tone or format) via agent_kwargs, or implement entirely new agent prompts while still leveraging LangChain’s tooling and parsing logic, as the sketch below shows.
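    As a rough illustration, here is how a custom prefix and suffix might be supplied to the zero-shot ReAct agent through LangChain's legacy initialize_agent API. Exact imports and signatures vary across LangChain versions, so treat this as a sketch rather than canonical usage:

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain_openai import ChatOpenAI  # assumes the langchain-openai package

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    tools = load_tools(["llm-math"], llm=llm)  # a built-in calculator tool

    # agent_kwargs overrides parts of the built-in ReAct prompt.
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        agent_kwargs={
            "prefix": "You are a meticulous research assistant. "
                      "Answer the following question as best you can. "
                      "You have access to the following tools:",
            "suffix": "Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}",
        },
        verbose=True,  # print each Thought/Action/Observation step
    )

    print(agent.run("What is 17 raised to the power of 0.5?"))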

BabyAGI (Task-Driven Autonomous Agent)

  • GitHub: yoheinakajima/babyagi

  • Description: BabyAGI is a lightweight Python script that demonstrates an autonomous task-management loop. It uses an LLM to create tasks, execute tasks, and reprioritize the task list in order to fulfill an overarching objective provided by the user (Deep Dive Part 2: How does BabyAGI actually work?). Unlike tool-using agents, BabyAGI’s prompts are about task planning and result processing. It maintains a list of tasks and a memory of past results (optionally via a vector store like Pinecone). Three kinds of prompts are used in each loop iteration, corresponding to the three “agents” in the system: the Execution Agent, the Task Creation Agent, and the Prioritization Agent.

  • Prompt Templates: Each stage of BabyAGI is driven by a distinct prompt template, passed to the LLM with the relevant context:

    • Task Execution Prompt: Given the current objective, the context of previously completed tasks, and a specific task to execute, BabyAGI asks GPT to perform the task and return the result. For example:

      You are an AI who performs one task based on the following objective: {objective}.
      Take into account these previously completed tasks: {context}.
      Your task: {task}
      Response:

      This prompt provides the AI with the high-level goal and any context (e.g. summaries of recent results), then the specific task at hand, ending with “Response:” to cue the assistant to output the result of the task (Deep Dive Part 2: How does BabyAGI actually work?). The assistant’s answer (as plain text) is captured as the task result.

    • Task Creation Prompt: After execution, BabyAGI creates new tasks based on the objective and the result of the last task. It might prompt GPT with something like:

      You are a task creation AI that uses the result of an execution agent to create new tasks with the following objective: {objective}.
      The last completed task has the result: {result}. This result was based on this task description: {task_description}.
      These are incomplete tasks: {task_list}.
      Based on the result, create new tasks to be completed by the AI system that do not overlap with incomplete tasks.
      Return the tasks as an array.

      This instructs the model to propose new tasks in light of the recent outcome, avoiding duplicates (Deep Dive Part 2: How does BabyAGI actually work?). The output is parsed into new task entries.

    • Task Prioritization Prompt: BabyAGI then reprioritizes the task list. It might prompt:

      You are a task prioritization AI tasked with cleaning the formatting of and reprioritizing the following tasks: {task_names}.
      Consider the ultimate objective of your team: {objective}.
      Do not remove any tasks. Return the result as a numbered list, starting the task list with number {next_task_id}.

      This makes the AI reorder the task list (and fix formatting) while preserving all tasks (Deep Dive Part 2: How does BabyAGI actually work?). The model’s answer (a numbered list of tasks) replaces the old task list, and the loop repeats with the next task.

    Each of these prompt templates is relatively short but provides the crucial context GPT needs to do its part. By chaining them, BabyAGI can autonomously break a big objective into smaller tasks, execute them one by one, and adjust the plan as it goes (Deep Dive Part 2: How does BabyAGI actually work?). Use cases demonstrated with BabyAGI include researching a topic via iterative subtasks, or any objective that can be decomposed and executed stepwise without external tool use. The prompts show how role and context can be set up for specialized behaviors (task creator vs. executor), all within a single script; the loop itself is sketched below.
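    The control flow is simple enough to sketch in a few lines. The following is a simplified, illustrative version of the loop, not babyagi.py itself (which adds vector-store memory and more careful parsing); the prompt strings are abridged from the templates above:

    from collections import deque
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    OBJECTIVE = "Research the health benefits of green tea and write a summary"
    tasks = deque(["Develop a task list"])
    results: list[str] = []  # stands in for BabyAGI's vector-store memory

    def run_llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    for _ in range(5):  # BabyAGI loops indefinitely; cap iterations for the sketch
        if not tasks:
            break
        task = tasks.popleft()

        # 1. Execution agent: perform the task given the objective and recent results.
        result = run_llm(
            f"You are an AI who performs one task based on the following objective: {OBJECTIVE}.\n"
            f"Take into account these previously completed tasks: {results[-3:]}.\n"
            f"Your task: {task}\nResponse:"
        )
        results.append(result)

        # 2. Task creation agent: propose new, non-overlapping tasks.
        created = run_llm(
            f"You are a task creation AI with the following objective: {OBJECTIVE}. "
            f"The last completed task had the result: {result}. "
            f"These are incomplete tasks: {list(tasks)}. "
            "Create new tasks that do not overlap with incomplete tasks. "
            "Return one task per line."
        )
        tasks.extend(line.strip() for line in created.splitlines() if line.strip())

        # 3. Prioritization agent: reorder the full task list (naive parsing below).
        reordered = run_llm(
            f"You are a task prioritization AI reprioritizing these tasks: {list(tasks)}. "
            f"Consider the ultimate objective of your team: {OBJECTIVE}. "
            "Do not remove any tasks. Return the result as a numbered list."
        )
        tasks = deque(
            line.split(".", 1)[1].strip()
            for line in reordered.splitlines() if "." in line
        )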

OpenAgents (XLang OpenAgents Platform)

  • GitHub: xlang-ai/OpenAgents

  • Description: OpenAgents is an open platform that provides multiple specialized agents through a unified chat interface (GitHub - xlang-ai/OpenAgents: [COLM 2024] OpenAgents: An Open Platform for Language Agents in the Wild). It was developed to bring autonomous agent capabilities to everyday users. It implements three agents: a Data Agent (data analysis with Python/SQL), a Plugins Agent (access to 200+ third-party plugins/APIs), and a Web Agent (autonomous web browsing via a headless Chrome extension) (OpenAgents/README.md at main · xlang-ai/OpenAgents · GitHub). Each agent is powered by prompt templates and methods tailored to its domain. The backend uses Flask, and each agent has its own prompt logic and toolkit (for example, the Data Agent can execute code, and the Web Agent can click links or fill forms through the browser plugin) (OpenAgents/README.md at main · xlang-ai/OpenAgents · GitHub). The prompts are structured to clearly define each agent’s identity and capabilities to the LLM, so the user can interact in natural language while the agent decides how to use its tools.

  • Prompt Design: For each agent, OpenAgents defines a system prompt that establishes the agent’s role and how it should behave. According to the project’s documentation, the system messages emphasize a “friendly and intuitive assistant” persona and inform the model about the tools at its disposal. For example:

    • Plugins Agent – System prompt (excerpt): “You are XLang Plugins Agent, a friendly and intuitive assistant developed by the XLang team to guide you through every aspect of your work and daily life.” The agent is made aware of the plugins it has and is instructed to use them appropriately and in the right order ([PDF] OpenAgents: An Open Platform for Language Agents in the Wild, arXiv:2310.10634). In practice, the LLM receives a message explaining that it can call plugin APIs (the backend likely intercepts certain formats or keywords to invoke them) and should decide which plugin to use based on the user’s request, with an auto-selection mechanism for convenience (OpenAgents/README.md at main · xlang-ai/OpenAgents · GitHub). The prompt encourages the agent to be proactive and helpful with its variety of tools.

    • Web Agent – System prompt (excerpt): “You are Open Web Agent, a friendly and intuitive assistant to guide you through every aspect of your work and your daily life.” ([PDF] OpenAgents: An Open Platform for Language Agents in the Wild) This agent’s prompt would also mention its browsing ability (e.g. that it can navigate web pages, click buttons, input text). By setting this context, the model knows it can respond with actions like “Opening the website…” or “Filling the form with provided info”, which the OpenAgents system then carries out via the browser extension.

    • Data Agent – System prompt likely defines it as an expert in data manipulation: for instance, “You are XLang Data Agent, an assistant that can analyze data. You can write and execute Python or SQL code to accomplish the user’s requests, and output results or charts.” (While the exact wording isn’t given here, the idea is the prompt grants the AI the ability to use a coding tool and reminds it to output results in a helpful format (OpenAgents/README.md at main · xlang-ai/OpenAgents · GitHub).)

    Under the hood, each agent’s prompt is combined with the user’s queries in the chat. The OpenAgents paper notes that these prompts help the agents handle “data analysis, API tools, and web browsing” tasks in a user-friendly way (OpenAgents: An Open Platform for Language Agents in the Wild | OpenReview). For example, a user asking the Data Agent to “Analyze this CSV file for trends” triggers the agent’s system prompt (with its coding abilities) plus the user request, so the model might produce Python code as a response; the OpenAgents platform then executes that code and returns the output. Similarly, the Web Agent’s prompt allows it to autonomously decide to click links or extract information when the user says “Find me the latest news on climate change”. In summary, OpenAgents illustrates prompt-driven specialization: one LLM, but different system prompts tailor it into a data scientist, a plugin-hub assistant, or a web navigator as needed; the general pattern is sketched below.
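    This specialization pattern is easy to reproduce outside OpenAgents. The sketch below is not OpenAgents code; it only shows the general idea of routing one model through different system prompts (the persona strings are paraphrased, and the ask helper is invented for illustration):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Paraphrased persona prompts; the real OpenAgents prompts live in its backend.
    SYSTEM_PROMPTS = {
        "data": "You are a data analysis assistant. You may write Python or SQL "
                "code to answer the user's request; wrap code in fenced blocks.",
        "plugins": "You are a plugins assistant with access to third-party APIs. "
                   "State which plugin you would call and with what arguments.",
        "web": "You are a web-browsing assistant. Describe the navigation steps "
               "(open URL, click, type) needed to fulfil the request.",
    }

    def ask(agent: str, user_message: str) -> str:
        """Send the same user message through a different persona."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPTS[agent]},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    print(ask("data", "Analyze this CSV file for trends: sales.csv"))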

CAMEL (Communicative Agents for “Mind” Exploration)

  • GitHub: camel-ai/camel

  • Description: CAMEL is an open-source framework for multi-agent collaboration via role-playing prompts (CAMEL Role-Playing Autonomous Cooperative Agents). Unlike a single agent, CAMEL sets up two (or more) AI agents with complementary roles (e.g. AI User and AI Assistant) and has them converse to solve a given task. This is achieved entirely through carefully crafted system prompts that define each agent’s identity and the ground rules of interaction. CAMEL provides a library of prompt templates for various roles and scenarios (e.g. software engineer, doctor) and a coordinator to initialize a conversation between the agents (camel.prompts package — CAMEL 0.2.46 documentation). The approach has been used to get more robust or creative outcomes, since one agent can ask clarifying questions or propose approaches while the other executes them.

  • Prompt Templates: In CAMEL, each agent gets a system prompt called an “inception prompt” that firmly instructs it on its role. For example, one agent might be the Assistant (who solves the task) and the other the User (who gives instructions). The CAMEL prompt templates could be exemplified as:

    • Assistant’s system prompt:

      Never forget you are a {assistant_role} and I am a {user_role}. Never flip roles! Never instruct me!
      We share a common interest in collaborating to successfully complete a task. You must help me to complete the task.
      Here is the task: {task}. Never forget our task!
      I must give you one instruction at a time. You must write a specific solution that appropriately solves the requested instruction.
      Unless I say the task is completed, you should always start with:
      Solution: <YOUR_SOLUTION>
      Always end <YOUR_SOLUTION> with: Next request.

      This system message (filled in with concrete roles, e.g. “software engineer” as assistant_role and “project manager” as user_role, plus the concrete task) primes the Assistant agent to never deviate from its role, to focus on the given task, and to expect iterative instructions (camel.prompts package — CAMEL 0.2.46 documentation). It establishes a protocol: the assistant provides a solution and then prompts the user for the next instruction until the task is done.

    • User’s system prompt: (given to the AI acting as the “user” in the chat)

      Never forget you are a {user_role} and I am a {assistant_role}. Never flip roles! You will always instruct me.
      We share a common interest in collaborating to successfully complete a task. I must help you to complete it.
      Here is the task: {task}. Never forget our task!
      You must instruct me based on my expertise and your needs, one instruction at a time, using the format:
      Instruction: <YOUR_INSTRUCTION>
      Input: <YOUR_INPUT or None>
      When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.

      This prompt ensures the second agent behaves like a human user or boss figure, providing incremental instructions rather than dumping the whole task or going silent.

    Once these system prompts are set, the two AIs start chatting: the AI User gives an instruction, the AI Assistant replies with a detailed solution (ending in "Next request."), the user agent gives the next instruction, and so on. The prompt rules (no role flipping, one instruction at a time, etc.) are reinforced in every message to keep the conversation on track (camel.prompts package — CAMEL 0.2.46 documentation). CAMEL’s library includes many preset roles and even a “critic” role for feedback. This prompt-driven dialog strategy has been used for complex tasks like code generation (where the AI user acts as a problem describer and the AI assistant writes code), and the rigid turn format helps keep both agents on task. It showcases how dynamic composition of prompts (two system messages plus the evolving conversation) can coordinate multiple agents toward a common goal; a minimal version of the loop is sketched below.
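    To make the mechanics concrete, here is a bare-bones two-agent role-playing loop. This is an illustrative reimplementation, not CAMEL’s actual API; the system prompts are abridged from the inception prompts above and the stop condition is simplified:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o-mini"

    task = "Design a command-line to-do list app in Python"
    assistant_sys = (
        "Never forget you are a software engineer and I am a project manager. "
        f"Never flip roles! Help me complete this task: {task}. "
        "Start each reply with 'Solution:' and end it with 'Next request.'"
    )
    user_sys = (
        "Never forget you are a project manager and I am a software engineer. "
        f"Never flip roles! Instruct me, one instruction at a time, to complete: {task}. "
        "Reply only '<CAMEL_TASK_DONE>' when the task is finished."
    )

    def chat(system: str, history: list[dict]) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": system}] + history,
        )
        return resp.choices[0].message.content

    # Each agent sees the shared transcript from its own point of view.
    user_view, assistant_view = [], []
    last_solution = "Please give me your first instruction."
    for _ in range(5):  # cap the number of turns for the sketch
        # AI User: sees solutions as incoming messages, produces an instruction.
        user_view.append({"role": "user", "content": last_solution})
        instruction = chat(user_sys, user_view)
        user_view.append({"role": "assistant", "content": instruction})
        if "<CAMEL_TASK_DONE>" in instruction:
            break

        # AI Assistant: sees instructions as incoming messages, produces a solution.
        assistant_view.append({"role": "user", "content": instruction})
        last_solution = chat(assistant_sys, assistant_view)
        assistant_view.append({"role": "assistant", "content": last_solution})
        print(instruction, "\n", last_solution, "\n", sep="")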

Each of the above projects demonstrates the power of prompt engineering in autonomous agents. By structuring system prompts with role definitions, tools, and formatting rules, these frameworks enable LLMs to go beyond single-turn Q&A and instead engage in goal-directed behavior – whether it’s browsing the web, writing code, or managing a task list. In summary, open-source agent frameworks use carefully designed prompt templates (often split into sections like role, instructions, tools, examples, format, etc.) and fill in user-specific details at runtime to dynamically compose prompts. This allows the agent to interpret user objectives, interact with external resources, and iteratively produce outputs that accomplish complex tasks with minimal human intervention.

Sources: The example prompts and descriptions above were drawn from the official repositories and documentation of each project, including Auto-GPT’s prompt configuration (Behind Auto-GPT: A Contact-based Prompt Engineering Practice | by Andy Yang | Medium), LangChain’s agent prompt formats (How do I customize the prompt for the zero shot agent ? · Issue #4044 · langchain-ai/langchain · GitHub), BabyAGI’s task-handling prompts (Deep Dive Part 2: How does BabyAGI actually work?), OpenAgents’ project README and research paper (OpenAgents/README.md at main · xlang-ai/OpenAgents · GitHub), and the CAMEL role-playing prompt definitions (camel.prompts package — CAMEL 0.2.46 documentation). Each illustrates a unique style of prompt composition for autonomous LLM agents.
