A Simple Prototype System for Prompt Enhancement Engineering (PEE)

Okay, based on the concept of Prompt Enhancement Engineering (PEE), we can build a simple prototype system. We will demonstrate how to use a Large Language Model (LLM) itself to automatically enhance a user's raw prompt.

We will use Python and the OpenAI API for this example, as they are simple to use and powerful.

System Overview

This simple PEE system will follow the core process outlined in our white paper draft:

  1. Input: Receive a user's raw prompt.

  2. Parsing and Planning: Use an LLM (LLM-as-a-Planner) to analyze the raw prompt and generate an enhancement plan.

  3. Enhancement: Based on the enhancement plan, automatically construct a more complete and optimized prompt.

  4. Execution and Output: Send the enhanced prompt to an LLM and get the final result.

Preparation

Before you begin, make sure you have Python installed and an OpenAI API Key.

  1. Install the library:

    Bash
    pip install openai
    
  2. Set up the API Key:

    Python
    import openai
    import os
    
    # It's recommended to use environment variables
    openai.api_key = os.environ.get("OPENAI_API_KEY")
    
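Since every later step depends on the API key being present, it can help to fail fast with a clear message rather than hit an authentication error mid-pipeline. The helper below is a hypothetical addition (the function name `require_api_key` is not part of the OpenAI library):

```python
import os

def require_api_key():
    """Fail fast with a clear message if the API key is missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running the examples."
        )
    return key
```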

Core Code Implementation

We can break down the system into a few independent functions, with each function responsible for a specific step.

Module 1: Prompt Parsing and Planning

This module is the brain of the PEE system. We use a dedicated "system prompt" to instruct the LLM to act as a "Prompt Optimization Expert."

Goal: Have the LLM analyze the user's raw prompt and output a structured enhancement plan, for example, in JSON format.

Python
import json

def get_enhancement_plan(raw_prompt):
    """
    Uses an LLM to analyze the raw prompt and generate an enhancement plan.
    """
    system_prompt_planner = """
    You are a prompt enhancement expert. Your task is to analyze the user's raw prompt and generate an enhancement plan for it.
    Please output in JSON format, with the following fields:
    - "task_type": The type of task (e.g., "Content Creation", "Data Analysis", "Code Generation").
    - "clarity_suggestions": A list of suggestions to clarify or add details.
    - "cot_required": A boolean value indicating whether to add a "Chain-of-Thought" instruction.
    - "persona_suggestions": A suggested role for the LLM (e.g., "Senior Software Engineer", "Marketing Expert").
    - "output_format": The recommended output format (e.g., "Markdown", "JSON", "Code Block").
    """

    response = openai.chat.completions.create(
        model="gpt-4o", # Use a more capable model for planning; JSON mode requires a model that supports response_format
        messages=[
            {"role": "system", "content": system_prompt_planner},
            {"role": "user", "content": f"Please generate an enhancement plan for the following prompt:\n\n'{raw_prompt}'"}
        ],
        response_format={"type": "json_object"}
    )
    return json.loads(response.choices[0].message.content)
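JSON mode makes malformed output unlikely but not impossible (e.g., truncated responses), and `json.loads` will raise on anything it cannot parse. A defensive variant could parse onto a set of neutral defaults so the downstream modules never see a missing key. This is a sketch, not part of the system above; `safe_parse_plan` and `DEFAULT_PLAN` are names introduced here for illustration:

```python
import json

# Neutral fallback plan used when the planner's output cannot be parsed (illustrative defaults).
DEFAULT_PLAN = {
    "task_type": "General",
    "clarity_suggestions": [],
    "cot_required": False,
    "persona_suggestions": None,
    "output_format": None,
}

def safe_parse_plan(raw_text):
    """Parse the planner's JSON output, falling back to a neutral plan on failure."""
    try:
        plan = json.loads(raw_text)
    except (json.JSONDecodeError, TypeError):
        return dict(DEFAULT_PLAN)
    # Merge onto the defaults so missing keys do not crash downstream modules.
    merged = dict(DEFAULT_PLAN)
    merged.update(plan if isinstance(plan, dict) else {})
    return merged
```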

Module 2: Prompt Enhancement and Construction

This module automatically constructs a new, enhanced prompt based on the plan generated in the previous step.

Goal: Translate the suggestions from the plan into actual prompt text.

Python
def build_enhanced_prompt(raw_prompt, plan):
    """
    Constructs the final prompt based on the enhancement plan.
    """
    enhanced_prompt_parts = []

    # 1. Add Persona
    if plan.get("persona_suggestions"):
        enhanced_prompt_parts.append(f"You are a {plan['persona_suggestions']}.")

    # 2. Add Task Details and Clarity
    if plan.get("clarity_suggestions"):
        enhanced_prompt_parts.append(
            "Please strictly adhere to the following instructions:\n" + "\n".join([f"- {s}" for s in plan['clarity_suggestions']])
        )

    # 3. Add the Original Task
    enhanced_prompt_parts.append(f"Your main task is: {raw_prompt}")

    # 4. Add Chain-of-Thought (CoT)
    if plan.get("cot_required"):
        enhanced_prompt_parts.append("To ensure accuracy, think through your answer step by step before giving your final conclusion.")

    # 5. Add Output Format requirements
    if plan.get("output_format"):
        enhanced_prompt_parts.append(f"Please use {plan['output_format']} format for your output.")

    return "\n\n".join(enhanced_prompt_parts)
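Unlike the other two modules, this one is deterministic and makes no API calls, so it can be exercised directly. The sketch below repeats the Module 2 function so it runs standalone and feeds it a hand-written plan; the plan values are made up for illustration, standing in for what the planner might return:

```python
def build_enhanced_prompt(raw_prompt, plan):
    """Constructs the final prompt based on the enhancement plan (copy of Module 2)."""
    enhanced_prompt_parts = []
    if plan.get("persona_suggestions"):
        enhanced_prompt_parts.append(f"You are a {plan['persona_suggestions']}.")
    if plan.get("clarity_suggestions"):
        enhanced_prompt_parts.append(
            "Please strictly adhere to the following instructions:\n"
            + "\n".join(f"- {s}" for s in plan["clarity_suggestions"])
        )
    enhanced_prompt_parts.append(f"Your main task is: {raw_prompt}")
    if plan.get("cot_required"):
        enhanced_prompt_parts.append("To ensure accuracy, think through your answer step by step before giving your final conclusion.")
    if plan.get("output_format"):
        enhanced_prompt_parts.append(f"Please use {plan['output_format']} format for your output.")
    return "\n\n".join(enhanced_prompt_parts)

# A hand-written plan, standing in for the planner's output.
plan = {
    "persona_suggestions": "Senior Software Engineer",
    "clarity_suggestions": ["Target an audience of junior developers."],
    "cot_required": True,
    "output_format": "Markdown",
}
prompt = build_enhanced_prompt("Explain dependency injection.", plan)
print(prompt)
```

Running this prints the assembled prompt: persona first, then the clarity instructions, the original task, the step-by-step instruction, and the format requirement.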

Module 3: Execution and Result Retrieval

This is the final step, where the enhanced prompt is sent to the LLM and the final result is returned.

Python
def execute_enhanced_prompt(enhanced_prompt):
    """
    Executes the enhanced prompt and returns the result.
    """
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo", # You can choose a different model as needed
        messages=[
            {"role": "user", "content": enhanced_prompt}
        ]
    )
    return response.choices[0].message.content

Full Running Example

Now, let's connect all the modules to create a complete workflow.

Python
if __name__ == "__main__":
    # The user's raw prompt
    user_prompt = "Write an article about the future trends of AI."

    print("----- Original Prompt -----")
    print(user_prompt)
    print("\n" + "="*30 + "\n")

    # Step 1: Generate an enhancement plan
    print("----- Generating a prompt enhancement plan... -----")
    enhancement_plan = get_enhancement_plan(user_prompt)
    print("Enhancement Plan:\n", enhancement_plan)
    print("\n" + "="*30 + "\n")

    # Step 2: Build the enhanced prompt based on the plan
    print("----- Building the enhanced prompt... -----")
    enhanced_prompt = build_enhanced_prompt(user_prompt, enhancement_plan)
    print("Enhanced Prompt:\n", enhanced_prompt)
    print("\n" + "="*30 + "\n")

    # Step 3: Execute the enhanced prompt and get the final output
    print("----- Calling the LLM to generate the final content... -----")
    final_output = execute_enhanced_prompt(enhanced_prompt)
    print("Final Output:\n", final_output)

How to Extend This System?

This simple PEE system is just a starting point. You can expand and improve it in the following ways:

  • Introduce a Prompt Database: Store common enhancement templates and strategies in a database for easy management and reuse.

  • Support More Enhancement Strategies: Add more modules, such as:

    • Dynamic context insertion (e.g., retrieving relevant information from a vector database).

    • Task decomposition (breaking a complex task into multiple subtasks and generating prompts for each).

    • Multi-turn conversation management.

  • Add Evaluation and Feedback Mechanisms: Automatically or manually evaluate the LLM's output quality and feed the results back into the PEE system for self-optimization.

  • Integrate Multiple LLMs: Support switching between different models (e.g., GPT-4, Claude, Gemini) and provide model-specific optimization features.
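As a concrete starting point for the "Prompt Database" idea, the database could begin life as a simple in-memory mapping from the planner's `task_type` to an enhancement template, swapped for a real database later. Everything below (`PROMPT_TEMPLATES`, `apply_template`, the template texts) is a hypothetical sketch:

```python
# A minimal in-memory "prompt database": task type -> enhancement template.
PROMPT_TEMPLATES = {
    "Code Generation": "You are a Senior Software Engineer. {task}\n\nReturn the answer as a Code Block.",
    "Content Creation": "You are an experienced writer. {task}\n\nUse Markdown format.",
}

def apply_template(task_type, raw_prompt):
    """Look up a template for the task type; fall back to the raw prompt unchanged."""
    template = PROMPT_TEMPLATES.get(task_type)
    return template.format(task=raw_prompt) if template else raw_prompt
```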
