Machine Learning & AI · Beginner

Prompt Engineering: The Art of Talking to LLMs

Master the techniques that make LLMs perform — chain-of-thought, few-shot learning, system prompts, structured output, and the patterns used by top AI engineers.

18 min read
March 15, 2026
Prompt Engineering · Chain-of-Thought · Few-Shot · LLM · Python

Why Prompting Matters

The same LLM can produce brilliant or terrible outputs depending entirely on how you prompt it. Prompt engineering is the skill of crafting inputs that reliably elicit the desired behavior. Before fine-tuning, before RAG, before building a complex pipeline — try better prompting. It's free, instant, and often sufficient.

Zero-Shot, Few-Shot, and Many-Shot

Zero-shot: describe the task without examples. Few-shot: provide 2-5 examples of input-output pairs. Many-shot: provide dozens of examples. Few-shot dramatically improves performance on structured tasks like classification, formatting, and extraction. The examples teach the model your exact expectations.

```python
# Zero-shot (works for simple tasks)
zero_shot = "Classify this review as positive or negative: 'The food was amazing!'"

# Few-shot (much more reliable for consistent format)
few_shot = """Classify each review as positive or negative.

Review: "The food was terrible and cold."
Classification: negative

Review: "Excellent service, will come back!"
Classification: positive

Review: "Mediocre at best, overpriced."
Classification: negative

Review: "The pasta was divine, best I've had in years."
Classification:"""

# The model will output "positive" with high confidence
```

Chain-of-Thought (CoT) Reasoning

Adding "Let's think step by step" to a prompt, or providing worked reasoning traces, dramatically improves performance on math, logic, and multi-step problems. CoT forces the model to show its work, catching errors that slip through when it jumps straight to an answer. This is one of the most impactful prompting techniques.

```python
# Without CoT — model often gets wrong answer
prompt_no_cot = "If a store has 3 apples and gets 2 shipments of 5 apples each, then sells 7, how many are left?"

# With CoT — model reasons through it correctly
prompt_cot = """If a store has 3 apples and gets 2 shipments of 5 apples each,
then sells 7, how many are left?

Let's solve this step by step:
1. Starting apples: 3
2. Apples from shipments: 2 × 5 = 10
3. Total before selling: 3 + 10 = 13
4. After selling 7: 13 - 7 = 6

The answer is 6."""

# For new problems, just add "Let's think step by step:" at the end
# The model will generate its own reasoning chain
```

CoT works best on models with 7B+ parameters. Smaller models may generate plausible-looking but incorrect reasoning chains. Always validate CoT outputs for critical applications.

System Prompts and Role Setting

System prompts set the model's persona, constraints, and output format before any user interaction. They're the most powerful lever for controlling behavior. Be specific about what the model should and shouldn't do, the format you expect, and how to handle edge cases.

````python
system_prompt = """You are a senior Python code reviewer. Your job is to:

1. Identify bugs, security issues, and performance problems
2. Suggest specific fixes with code examples
3. Rate severity as: CRITICAL, WARNING, or SUGGESTION
4. Be concise — no praise, no filler, just actionable feedback

Output format:
## [SEVERITY] Issue title
**Line:** <line number>
**Problem:** <one sentence>
**Fix:**
```python
<corrected code>
```

If the code has no issues, respond with: "No issues found."
Do NOT add features, refactor style, or suggest improvements beyond correctness."""

# This produces consistent, structured, actionable reviews
# compared to a generic "Review this code" prompt
````
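A system prompt only takes effect if it is actually sent as the `system` message on every request. A minimal, provider-agnostic sketch of assembling the message list (the helper name is ours):

```python
def build_messages(system_prompt: str, user_code: str) -> list[dict]:
    """Pair the fixed system prompt with a per-request user message.

    The returned list plugs directly into a chat-completions call,
    e.g. client.chat.completions.create(model=..., messages=...).
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Review this code:\n\n{user_code}"},
    ]

messages = build_messages(
    system_prompt="You are a senior Python code reviewer.",
    user_code="def divide(a, b): return a / b",
)
# messages[0] carries the persona and rules; messages[1] carries only the task input
```

Keeping the system prompt fixed and the user message minimal is what makes the reviews consistent from request to request.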

Structured Output with JSON Mode

When you need machine-parseable output, explicitly request JSON and provide the schema. Most modern LLMs support JSON mode natively. Combined with few-shot examples, this creates reliable structured extraction pipelines.

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": """Extract entities from text. Return JSON with this schema:
{
    "people": [{"name": str, "role": str}],
    "organizations": [{"name": str, "type": str}],
    "locations": [{"name": str, "context": str}]
}"""
        },
        {
            "role": "user",
            "content": "Satya Nadella, CEO of Microsoft, announced a new AI lab in London."
        }
    ]
)

entities = json.loads(response.choices[0].message.content)
# {"people": [{"name": "Satya Nadella", "role": "CEO"}],
#  "organizations": [{"name": "Microsoft", "type": "technology company"}],
#  "locations": [{"name": "London", "context": "new AI lab location"}]}
```

Top prompting patterns to master: (1) Role + Task + Format, (2) Few-shot examples, (3) Chain-of-thought, (4) Structured output with schema, (5) Self-consistency (generate multiple answers, take majority vote). These five cover 90% of production use cases.
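Of these, self-consistency is the one pattern not illustrated above. The idea: sample the same chain-of-thought prompt several times at nonzero temperature, extract each final answer, and return the majority. A minimal sketch with a stubbed sampler standing in for the LLM call (the function names are illustrative, not from any library):

```python
from collections import Counter

def self_consistent_answer(sample_fn, prompt: str, n: int = 5) -> str:
    """Sample n chain-of-thought completions and majority-vote the final answers.

    sample_fn(prompt) -> extracted answer string; in practice it wraps an
    LLM call at temperature ~0.7 followed by answer extraction.
    """
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub standing in for a real LLM: 4 of 5 sampled chains agree on "6"
fake_runs = iter(["6", "6", "8", "6", "6"])
print(self_consistent_answer(lambda _: next(fake_runs), "apples problem"))  # 6
```

The vote filters out occasional bad reasoning chains at the cost of n times the tokens, so reserve it for questions where a single wrong answer is expensive.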