[Prompt]
A prompt is a guide that a large language model (LLM) uses to generate a response or complete a task.
How you write this prompt will affect the accuracy and format of the answers your model returns.
In other words, “prompt engineering” is the process of effectively creating and refining prompts for an artificial intelligence (AI) model in order to achieve the desired results.
The basics of writing a prompt are as follows.
Creating a prompt:
- Select an appropriate model that can provide the desired answer
- Understand the rules for creating prompts according to the model
- Derive the desired result through iterative prompt engineering
- Start with a simple zero-shot prompt and iteratively refine it to improve the accuracy of the answers
- If zero-shot is not enough, add a few examples (few-shot) to pin down the accuracy and format of the answers (see the sketch after this list)
- If the answers are still not satisfactory after adjusting the prompt's context and examples, fine-tune the model.
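As a rough illustration of the zero-shot to few-shot progression, here is a minimal sketch that builds both kinds of prompts as plain strings. The sentiment-classification task and the function names are assumptions chosen purely for illustration; any task and any LLM client can be substituted.

```python
# A minimal sketch of the zero-shot -> few-shot progression described above.
# The task (sentiment classification) and the helper names are assumptions
# made for illustration only.

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: only the instruction and the input, no worked examples.
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str) -> str:
    # Few-shot: the same instruction plus a handful of worked examples
    # that pin down both the accuracy and the output format.
    examples = (
        "Review: The battery lasts all day.\nSentiment: Positive\n"
        "Review: It broke after two days.\nSentiment: Negative\n"
    )
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        + examples
        + f"Review: {text}\nSentiment:"
    )

print(zero_shot_prompt("The screen is gorgeous."))
print(few_shot_prompt("The screen is gorgeous."))
```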
[Format definition for writing prompts]
To get the desired answer, the prompt must contain not only the “question” you want to convey but also “prior information” (context).
Let's look at two example formats; a short code sketch that fills in both follows below.
First, you can specify an output format made up of a question, a context, and an answer.
Not all of these components are required, but including them lets you give the model background information along with the question.
Question :
Context :
Answer :
Second, a prompt can be defined as a plain question and answer, without constructing a separate context.
In this case there is no predefined context, so the question alone determines how the answer is provided.
Q:
A:
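To make the two formats concrete, the sketch below fills each template in as a Python string; the question and the context are invented purely for illustration.

```python
# Filled-in versions of the two prompt formats above.
# The question and the context are invented for illustration only.

# Format 1: Question / Context / Answer
prompt_with_context = (
    "Question: When was the company founded?\n"
    "Context: The company is a fictional startup founded in 2015 in Seoul.\n"
    "Answer:"
)

# Format 2: plain Q / A, with no separate context
prompt_qa = (
    "Q: What is the capital of France?\n"
    "A:"
)

print(prompt_with_context)
print(prompt_qa)
```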
Now, looking at the elements of a prompt in more detail, a prompt contains two kinds of elements: Context and Instruction.
Both are ways to get the answers you want.
Context
refers to the background information or overall situation provided to the model.
This provides additional context for a given situation, helping the model produce more accurate and meaningful responses.
Typical examples are previous conversation history, a description of the situation, external information and explanations, the user's purpose and intent, and domain-specific knowledge.
Instruction
refers to the part that explicitly instructs the model to perform a specific task or generate a response in a desired direction. Instruction plays an important role in controlling the model's behavior and achieving the desired results.
Therefore, when writing an instruction, use clear and concise statements and specify the desired output format.
[Prompt Sample]
The example below is a sample prompt that asks for the three most important emails in my inbox (a short code sketch of how to assemble it follows after the sample).
- Instruction: instructs the model to list the subject lines of the high-importance emails
- Context: conveys the user's intent, namely the request to find the high-importance emails in their inbox
Prompt with Instruction:
“Please find the top 3 most important emails you have received and list their subject lines.”
Context:
- User: "Find and tell me the important things in my email."
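As a sketch of how the instruction and context above could be combined into a single request, the code below concatenates them and hands the result to a placeholder `call_llm` function. That function, its signature, and the exact layout of the combined prompt are assumptions made for illustration, not a specific vendor's API.

```python
# A sketch of combining the Instruction and the Context above into one request.
# `call_llm` is a placeholder for whatever LLM client you actually use;
# its name and signature are assumptions made for illustration.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace this with a real LLM API call.")

instruction = (
    "Please find the top 3 most important emails you have received "
    "and list their subject lines."
)
context = 'User: "Find and tell me the important things in my email."'

prompt = f"Context:\n{context}\n\nInstruction:\n{instruction}"
print(prompt)
# response = call_llm(prompt)  # uncomment once call_llm is implemented
```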
[Focus the prompt on what you want]
Another important point when writing a prompt is to be explicit about what you are asking for.
In other words, focus clearly on what you want the model to do rather than on what you don't want it to do.
Of the two prompts below, the first is the more effective one.
- Please explain the concept of LLM to a 15-year-old child in 3-5 sentences.
- Explain LLMs. Keep it short and avoid using overly complex language.
[Adjusting the model's generation parameters after tuning the prompt]
When answers are created through this prompt-writing process, you may ultimately also need to adjust how the model generates its output. In general, there are a few key generation parameters, and the answers are tuned through them; a code sketch follows at the end of this section.
Temperature:
The lower the temperature, the more deterministic and fact-based the results. With a low temperature the most probable next token is chosen more often (at 0 it is always chosen). For fact-based question answering, it is recommended to set the temperature to a low value such as 0.
top_p:
This parameter controls nucleus sampling, a decoding technique that balances consistency and variety by sampling only from the smallest set of tokens whose cumulative probability exceeds top_p. It is typically set to a value between 0 and 1. To get deterministic, fact-based answers, set top_p to a low value close to 0.
Repetition Penalty:
Repetition penalty reduces repetition in the model's output. High values minimize repetition and increase variety, while low values allow repetition.
Stop Sequence:
You can use a stop sequence to control the output: when a specific string is generated, generation stops at that point and any further output is discarded.
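Here is a minimal sketch of these parameters in use, assuming a Hugging Face transformers causal language model; the model name ("gpt2") and the specific values are placeholders, not recommendations.

```python
# A minimal sketch of the generation parameters above, assuming a Hugging Face
# transformers causal LM. "gpt2" and the parameter values are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: swap in the model you actually use
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Low temperature and low top_p favor deterministic, fact-based answers;
# repetition_penalty > 1.0 discourages repeated phrases.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.2,         # lower = more deterministic
    top_p=0.1,               # nucleus sampling: keep only the most probable tokens
    repetition_penalty=1.2,  # >1 penalizes repetition
    # A stop sequence can be approximated with eos_token_id or a StoppingCriteria.
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that with do_sample=True a temperature of exactly 0 is not valid, so a small value such as 0.2 is used here; for fully greedy decoding, set do_sample=False instead.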
This post is getting too long... there's no end in sight... so I'll stop here.