A prompt is the input you provide to a Large Language Model (LLM) to get a specific output. Crafting an effective prompt involves model choice, wording, structure, and context—it’s a creative and iterative process.
Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate and relevant outputs.
Most LLMs come with various configuration options that control the output. Effective prompt engineering requires setting these optimally for your task.
Temperature
Controls the degree of randomness in the output. Lower temperatures (e.g., 0.1) are good for prompts that expect a more deterministic, factual response. Higher temperatures (e.g., 0.9) can lead to more diverse or creative results.
Top-K
Restricts the model's output to the K most likely tokens. A low Top-K value makes the output more predictable, while a high value allows for more creativity.
Top-P
Selects tokens by cumulative probability: the model samples only from the smallest set of tokens whose combined probability reaches P. Because this set adapts to the shape of the distribution, it is a more dynamic way to control randomness than Top-K.
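As a minimal sketch of how these settings are passed in practice, here is a call using the google-generativeai Python SDK (the SDK choice, model name, and values are assumptions; any LLM API with sampling controls is configured similarly):

```python
# Minimal sketch, assuming the google-generativeai SDK and a placeholder
# model name; adjust the values to match your task.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "List three facts about photosynthesis.",
    generation_config={
        "temperature": 0.1,  # low: more deterministic, factual output
        "top_k": 40,         # sample only among the 40 most likely tokens
        "top_p": 0.95,       # ...whose cumulative probability is <= 0.95
    },
)
print(response.text)
```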
Zero-shot prompting is the simplest prompt type: a task description only, with no examples.
Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.
Review: "Her" is a disturbing masterpiece. I wish there were more movies like this.
Sentiment:
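Sending that zero-shot prompt programmatically might look like this (again a sketch with the google-generativeai SDK; a low temperature suits single-label classification):

```python
# Zero-shot classification sketch (google-generativeai SDK assumed).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = (
    "Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.\n"
    'Review: "Her" is a disturbing masterpiece. I wish there were more '
    "movies like this.\n"
    "Sentiment:"
)
response = model.generate_content(
    prompt,
    generation_config={"temperature": 0.1},  # keep the label stable
)
print(response.text.strip())  # expected: POSITIVE
```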
One-shot and few-shot prompting provide one (one-shot) or several (few-shot) examples to teach the model the pattern to follow.
Parse the pizza order into JSON.
EXAMPLE:
I want a small pizza with cheese and pepperoni.
JSON: {"size": "small", "ingredients": ["cheese", "pepperoni"]}
Now, I would like a medium pizza with mushrooms.
JSON:
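A sketch of assembling that few-shot prompt and parsing the reply (SDK and model name as before; note that json.loads will raise an error if the model wraps its answer in extra text):

```python
# Few-shot prompt assembly and JSON parsing sketch
# (google-generativeai SDK assumed).
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

examples = [
    ("I want a small pizza with cheese and pepperoni.",
     '{"size": "small", "ingredients": ["cheese", "pepperoni"]}'),
]
order = "Now, I would like a medium pizza with mushrooms."

prompt = "Parse the pizza order into JSON.\n\n"
for text, parsed in examples:
    prompt += f"EXAMPLE:\n{text}\nJSON: {parsed}\n\n"
prompt += f"{order}\nJSON:"

response = model.generate_content(prompt)
print(json.loads(response.text))  # raises if the model adds extra text
```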
Chain-of-thought (CoT) prompting encourages the model to reason step by step through complex tasks.
When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let's think step by step.
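As a sketch, the trigger phrase is simply appended to the question (SDK and model as before; a temperature of 0 keeps the arithmetic stable):

```python
# Chain-of-thought sketch (google-generativeai SDK assumed).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

question = (
    "When I was 3 years old, my partner was 3 times my age. "
    "Now, I am 20 years old. How old is my partner?"
)
response = model.generate_content(
    question + " Let's think step by step.",
    generation_config={"temperature": 0.0},  # greedy decoding for reasoning
)
# The model should reason: at age 3 the partner was 9, a 6-year gap,
# so the partner is now 20 + 6 = 26.
print(response.text)
```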
Step-back prompting: first ask the LLM a more general question related to the specific task, then feed that answer into a follow-up prompt for the task itself.
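A two-call sketch of this pattern (SDK and model as before; the two prompts here are illustrative, not from the original):

```python
# Step-back prompting sketch (google-generativeai SDK assumed).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Step 1: ask the general, "stepped-back" question.
principles = model.generate_content(
    "What makes a product announcement email engaging and clear?"
).text

# Step 2: feed that answer into the specific task.
final = model.generate_content(
    "Using these principles:\n" + principles +
    "\n\nWrite a short announcement email for a new note-taking app."
).text
print(final)
```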
Self-consistency: run the same prompt multiple times at a higher temperature to generate diverse reasoning paths, then choose the most common final answer.
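A sketch of self-consistency applied to the age puzzle above (SDK and model as before; extract_answer is a hypothetical helper that takes the last number in the reply as the answer):

```python
# Self-consistency sketch (google-generativeai SDK assumed).
import re
from collections import Counter
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = (
    "When I was 3 years old, my partner was 3 times my age. "
    "Now, I am 20 years old. How old is my partner? "
    "Let's think step by step."
)

def extract_answer(text: str) -> str:
    # Hypothetical helper: treat the last number in the reply as the answer.
    numbers = re.findall(r"\d+", text)
    return numbers[-1] if numbers else ""

votes = Counter()
for _ in range(5):  # several diverse reasoning paths
    reply = model.generate_content(
        prompt,
        generation_config={"temperature": 0.9},  # high: diverse paths
    )
    votes[extract_answer(reply.text)] += 1

print(votes.most_common(1)[0][0])  # the majority answer wins
```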
LLMs can write, explain, translate, and debug code. Be specific in your requests.
Write a code snippet in Bash, which asks for a folder name. Then it takes the contents of the folder and renames all the files inside by prepending the name 'draft' to the file name.
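For reference, here is a sketch of what the requested script does, written in Python to match the other examples in this section (the prompt itself asks the model for a Bash version; the underscore separator is a readability choice, not part of the prompt):

```python
# Python sketch of the task the Bash prompt describes: ask for a folder,
# then prepend "draft" to every file name inside it.
import os

folder = input("Enter a folder name: ")
for name in os.listdir(folder):
    src = os.path.join(folder, name)
    if os.path.isfile(src):
        os.rename(src, os.path.join(folder, "draft_" + name))
```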