Creating a Unique Personality for Your OpenAI Chatbot

You can shape your chatbot's personality by crafting effective system instructions for your OpenAI Assistant.

In this post, you’ll learn how to craft system instructions—often called the “system prompt” or “system message”—and see how these instructions work differently across OpenAI models like GPT-4 and GPT-3.5.

Haven't created your Assistant on the OpenAI Platform yet? Check out this guide to get started. If you’ve already set it up, let’s dive in!
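
Throughout this post, the code sketches assume the official openai Python SDK (v1.x) with an OPENAI_API_KEY environment variable set; the names and instruction text in them are illustrative placeholders, not a prescribed setup. For instance, here is roughly where system instructions live when you create an Assistant:

```python
# A minimal sketch of creating an Assistant with system instructions,
# assuming the official `openai` Python SDK (v1.x). The name and
# instruction text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="English Tutor",  # placeholder name
    model="gpt-4",         # or "gpt-3.5-turbo"
    instructions=(
        "You are a friendly English tutor who explains grammatical "
        "rules in plain language."
    ),
)
print(assistant.id)  # keep this ID to reuse or update the Assistant later
```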

1. System Instructions Best Practices

Effective system instructions are foundational to prompt engineering. By carefully crafting them, you can guide powerful language models—like GPT‑4 or GPT‑3.5—to stay within desired bounds, produce higher-quality responses, and consistently represent the persona or expertise you need.

A. Be Clear and Specific

  1. Define the Role

    • Example: “You are a friendly English tutor who explains grammatical rules in plain language.”
    • Why: This helps the model adopt the right persona—whether that’s a teacher, legal advisor, or comedian.
  2. Outline the Style / Tone

    • Example: “Use simple, concise sentences. Provide step-by-step reasoning. Avoid unnecessary jargon.”
    • Why: Keeps your assistant consistent in how it sounds and how it presents information.
  3. Set Limitations and Boundaries

    • Example: “Do not provide disallowed content such as personal medical diagnoses or legal rulings. Instead, provide general information or disclaimers.”
    • Why: Helps ensure the model does not accidentally violate content policies or produce off-topic content.
  4. Specify Output Format

    • Example: “Answer in bullet points where possible, and include references at the end. Use markdown formatting for code blocks.”
    • Why: Promotes consistent, predictable results that follow a set structure. (The sketch after this list combines all four elements into one system message.)
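
Putting the four elements together, here is a sketch of a single system message built from a role, a tone, boundaries, and an output format, sent through the Chat Completions endpoint (again assuming the openai Python SDK, v1.x; the user question is illustrative):

```python
# A sketch combining all four elements (role, tone, boundaries, format)
# into a single system message. Assumes the `openai` Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()

system_message = "\n".join([
    # 1. Define the role
    "You are a friendly English tutor who explains grammatical rules "
    "in plain language.",
    # 2. Outline the style / tone
    "Use simple, concise sentences and avoid unnecessary jargon.",
    # 3. Set limitations and boundaries
    "Do not give personal medical or legal advice; provide general "
    "information and a disclaimer instead.",
    # 4. Specify the output format
    "Answer in bullet points where possible, and use Markdown "
    "formatting for code blocks.",
])

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "When should I use 'fewer' vs. 'less'?"},
    ],
)
print(response.choices[0].message.content)
```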

B. Align with Policy and Compliance

  • If your project or domain requires disclaimers (legal, medical, financial), instruct the model to include them.
  • Remind the model to avoid giving definitive advice in regulated fields. Instead, it should provide general information and prompt the user to consult a professional if necessary; one way to standardize this is sketched below.
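
One way to keep that compliance language consistent is to maintain the clause separately and append it to each base instruction; a minimal sketch, with purely illustrative wording:

```python
# A minimal sketch of appending a standard compliance clause to a base
# instruction. The disclaimer wording is illustrative, not legal advice.
BASE_INSTRUCTIONS = "You are a financial-literacy assistant."

COMPLIANCE_CLAUSE = (
    " Never give definitive investment advice. Provide general "
    "information only, and remind the user to consult a licensed "
    "professional when appropriate."
)

system_message = BASE_INSTRUCTIONS + COMPLIANCE_CLAUSE
```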

C. Provide Context Upfront

  • If the conversation requires specialized knowledge (e.g., content related to nuclear physics or advanced coding tasks), you can embed relevant context in the system message.
  • The more context you give, the less the model has to “guess” from its general training data; one way to embed such context is sketched below.
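
For example, here is a minimal sketch of embedding domain context directly in the system message; the context string, names, and "facts" are made up for illustration:

```python
# A sketch of embedding specialized context in the system message so
# the model relies on it instead of guessing from its training data.
# Every name and fact below is illustrative.
domain_context = (
    "Reference notes: our public API allows 60 requests per minute, "
    "and all timestamps are UTC in ISO-8601 format."
)

system_message = (
    "You are a support assistant for our developer platform. "
    "Ground every answer in the following context:\n"
    f"{domain_context}"
)
```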

D. Keep It Concise but Complete

  • The system instructions should be thorough yet not overly verbose. Overly complex or contradictory instructions can confuse the model.
  • A good rule of thumb: provide enough detail so the model knows precisely how to behave, but not so much that it overwhelms or introduces conflicts.

E. Iterate and Test

  • Prompt engineering often requires experimentation.
  • Test your system instructions with various user queries to ensure the assistant behaves the way you expect.
  • Refine iteratively based on the output; a simple test loop like the sketch below helps.
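
A minimal test pass might look like the following sketch (assuming the openai Python SDK, v1.x). The queries are illustrative, and the last one deliberately probes the boundaries you set:

```python
# A sketch of a simple test pass: run representative user queries
# against one draft of the system instructions and review the output.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTIONS = "You are a friendly English tutor..."  # your draft

test_queries = [
    "Explain the difference between 'affect' and 'effect'.",
    "Is it ever okay to end a sentence with a preposition?",
    "Diagnose this rash for me.",  # should trigger your boundaries
]

for query in test_queries:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": query},
        ],
    )
    print(f"Q: {query}\nA: {response.choices[0].message.content}\n")
```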

2. How System Instructions Relate to Different Models

A. GPT‑4 vs. GPT‑3.5

  1. GPT‑4

    • Generally more powerful in following complex instructions, maintaining context, and reasoning carefully.
    • More tolerant of nuanced or intricate prompts.
    • Often requires less explicit steering if you want advanced reasoning or creativity, but clear constraints still help maintain focus and compliance.
  2. GPT‑3.5

    • Very capable, but can be more prone to misinterpretation or “hallucination” if instructions aren’t specific.
    • May need more strict and explicit instructions to reliably follow certain constraints (e.g., a required format).

B. Consistency Across Models

  • Regardless of which GPT model you use, the hierarchy of instructions remains the same:

    1. System instructions (highest priority)
    2. User instructions
    3. Assistant messages (past conversation context)
  • System instructions work the same way in GPT‑4 and GPT‑3.5; the difference is how effectively each model can follow and apply them. The sketch below shows the shared message structure.
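
Concretely, the message layout is identical across models; only the model argument changes. A sketch of that shared structure:

```python
# A sketch of the shared message layout: the system message comes first,
# followed by the conversation turns. Switching between GPT-4 and
# GPT-3.5 changes only the `model` argument, not this structure.
messages = [
    {"role": "system", "content": "You are a formal research assistant."},
    {"role": "user", "content": "Summarize this abstract for me."},
    {"role": "assistant", "content": "Here is a concise summary: ..."},
    {"role": "user", "content": "Now compress it to one sentence."},
]
# client.chat.completions.create(model="gpt-4", messages=messages)
# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```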


3. Practical Examples

Below are two example system prompts, illustrating different levels of detail and constraints.


Example 1: Brief System Instruction

System Instruction:

You are a helpful assistant that provides clear, concise answers. Use friendly and professional language. If asked about legal or medical issues, include a disclaimer that you are not a licensed professional.

Notes:

  • Clarity: States the tone/style (“friendly and professional”).
  • Boundaries: Has a broad instruction about disclaimers for regulated fields.
  • Simplicity: Relatively short, leaving flexibility for the model.
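
As a quick smoke test, you could send this instruction a question that should trigger the disclaimer; a sketch assuming the openai Python SDK (v1.x), with an illustrative user question:

```python
# A sketch that smoke-tests Example 1's instruction with a question
# that should trigger the legal disclaimer.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant that provides clear, concise "
                "answers. Use friendly and professional language. If asked "
                "about legal or medical issues, include a disclaimer that "
                "you are not a licensed professional."
            ),
        },
        {"role": "user", "content": "Can I break my apartment lease early?"},
    ],
)
print(response.choices[0].message.content)  # expect a disclaimer up front
```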

Example 2: More Detailed System Instruction

System Instruction:

You are an expert research assistant with a formal writing style. Your task is to help users by providing well-explained answers to academic or technical questions. When answering:

  1. Always provide context or definitions for specialized terms.
  2. When coding, include fully functional code samples in Markdown code blocks.
  3. If asked for legal or medical advice, politely explain that you are not a professional in those fields and offer generic best practices or disclaimers.
  4. Use bullet points or enumerated lists to organize complex explanations.
  5. Never share personal opinions—remain neutral and factual.

Notes:

  • Specific: Tells the assistant exactly what to do in key areas (e.g., coding, disclaimers, style).
  • Well-Bounded: Includes what not to do (“never share personal opinions”).

4. Check for Unintended Consequences

Sometimes a prompt can unintentionally restrict the model too much or contain contradictory instructions.

  • Balance: If your instructions are extremely strict, the model might produce limited or repetitive content. If they’re too broad, it can drift from your intended style or domain.
  • Periodically review: As your project and conversations evolve, make sure the system instructions still match your goals. You can update them if your direction changes, as sketched below.
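
If you registered your instructions as an Assistant, you can revise them in place rather than recreating it; a sketch assuming the openai Python SDK (v1.x), with a placeholder Assistant ID and an illustrative new constraint:

```python
# A sketch of updating an existing Assistant's instructions when your
# direction changes, assuming the `openai` Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()

client.beta.assistants.update(
    "asst_XXXX",  # placeholder: your Assistant's ID
    instructions=(
        "You are an expert research assistant with a formal writing "
        "style. Keep answers under 300 words."  # illustrative constraint
    ),
)
```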