Introduction

A prompt is the input you provide to a Large Language Model (LLM) to get a specific output. Crafting an effective prompt involves model choice, wording, structure, and context—it’s a creative and iterative process.

Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate and relevant outputs.

LLM Configuration

Most LLMs come with various configuration options that control the output. Effective prompt engineering requires setting these optimally for your task.

Temperature

Controls the degree of randomness in the output. Lower temperatures (e.g., 0.1) are good for prompts that expect a more deterministic, factual response. Higher temperatures (e.g., 0.9) can lead to more diverse or creative results.

Top-K

Restricts the model's output to the K most likely tokens. A low Top-K value makes the output more predictable, while a high value allows for more creativity.

Top-P

Selects tokens from the smallest set whose cumulative probability reaches a threshold P (nucleus sampling). Because the size of the candidate pool adapts to the shape of the distribution, it provides a more dynamic way to control randomness than the fixed cutoff of Top-K.
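All three settings can be illustrated with a small, self-contained sketch. This is plain Python over a toy next-token distribution, not any particular model API; the function names are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    # Keep only the k most likely tokens, then renormalize.
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

def top_p_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability reaches p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = [], 0.0
    for i in order:
        keep.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

logits = [2.0, 1.0, 0.5, 0.1]        # toy next-token logits
probs = softmax(logits, temperature=0.7)
print(top_k_filter(probs, k=2))      # only two candidates survive
print(top_p_filter(probs, p=0.9))
```

Note how a lower temperature concentrates probability on the top token, while Top-K fixes the pool size and Top-P lets it grow or shrink with the distribution.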


Basic Prompting Techniques

Zero-Shot Prompting

The simplest prompt type: a task description only, with no examples.

Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.

Review: "Her" is a disturbing masterpiece. I wish there were more movies like this.
Sentiment:
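In code, a zero-shot prompt is just a template filled with the input. A minimal sketch (the `build_zero_shot` helper is hypothetical; no particular model API is assumed):

```python
def build_zero_shot(review: str) -> str:
    # Zero-shot: task description plus the input, no examples.
    return (
        "Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.\n\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

prompt = build_zero_shot('"Her" is a disturbing masterpiece.')
print(prompt)
```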

One-Shot & Few-Shot Prompting

Provide one (one-shot) or multiple (few-shot) examples to teach the model a pattern.

Parse the pizza order into JSON.

EXAMPLE:
I want a small pizza with cheese and pepperoni.
JSON: {"size": "small", "ingredients": ["cheese", "pepperoni"]}

Now, I would like a medium pizza with mushrooms.
JSON:
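Few-shot prompts like the one above are usually assembled programmatically from a list of example pairs. A sketch, assuming a hypothetical `build_few_shot` helper:

```python
def build_few_shot(examples, query):
    # Each (order, parsed) pair teaches the model the input -> JSON pattern.
    parts = ["Parse the pizza order into JSON.\n"]
    for order, parsed in examples:
        parts.append(f"EXAMPLE:\n{order}\nJSON: {parsed}\n")
    parts.append(f"Now, {query}\nJSON:")
    return "\n".join(parts)

examples = [
    ("I want a small pizza with cheese and pepperoni.",
     '{"size": "small", "ingredients": ["cheese", "pepperoni"]}'),
]
print(build_few_shot(examples, "I would like a medium pizza with mushrooms."))
```

Adding more pairs to `examples` turns the same one-shot prompt into a few-shot prompt.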

Advanced Prompting Techniques

Chain-of-Thought (CoT) Prompting

Encourage the model to think step-by-step for complex reasoning tasks.

When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let's think step by step.
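The reasoning this prompt should elicit can be checked directly: at age 3 the partner was 9, so the age gap is 6 years, and at 20 the partner is 26. A quick arithmetic check:

```python
my_age_then = 3
partner_then = 3 * my_age_then        # partner was 3x my age: 9
age_gap = partner_then - my_age_then  # age gaps stay constant: 6
my_age_now = 20
partner_now = my_age_now + age_gap
print(partner_now)  # 26
```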

Step-back Prompting

Prompt the LLM to first consider a general question related to the specific task, then feed that answer into a subsequent prompt.

Self-Consistency

Run the same prompt multiple times to generate diverse reasoning paths, then choose the most common answer.
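The majority vote at the end is a one-liner with `collections.Counter`. A sketch, with a deterministic stand-in for repeated high-temperature model calls:

```python
from collections import Counter

def self_consistent_answer(sample_fn, n=5):
    # Sample n reasoning paths and keep the most common final answer.
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy sampler: pretend the model answered "26" three times and "23" twice.
samples = iter(["26", "23", "26", "26", "23"])
print(self_consistent_answer(lambda: next(samples), n=5))  # 26
```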


Code Prompting

LLMs can write, explain, translate, and debug code. Be specific in your requests.

Example: Writing a Bash Script

Write a code snippet in Bash, which asks for a folder name. Then it takes the contents of the folder and renames all the files inside by prepending the name 'draft' to the file name.
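One plausible response to this prompt looks like the script below. It is a sketch, not the only correct answer; the `draft_` prefix and the interactive-only guard are choices made here:

```shell
#!/bin/bash
# Prepend 'draft_' to every regular file in the given folder.
rename_drafts() {
  local folder="$1"
  if [ ! -d "$folder" ]; then
    echo "Folder '$folder' does not exist." >&2
    return 1
  fi
  local f
  for f in "$folder"/*; do
    [ -f "$f" ] || continue          # skip subdirectories
    mv "$f" "$folder/draft_$(basename "$f")"
  done
}

# Only prompt for a folder name when run interactively.
if [ -t 0 ]; then
  read -rp "Enter folder name: " folder
  rename_drafts "$folder"
fi
```

Quoting `"$folder"` and `"$f"` keeps the script safe for names containing spaces.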

Best Practices

  • Provide Examples: Use few-shot prompts to guide formatting.
  • Design with Simplicity: Keep prompts clear and concise.
  • Be Specific About the Output: Define structure and style.
  • Use Instructions over Constraints: Tell the model what to do rather than what to avoid.
  • Experiment: Vary wording, order, and examples.
  • Document Your Attempts: Track results for iterative improvement.