Guide

Prompt engineering for SEOs

Andrew Askins
#SEO #AI

In this post, I’m going to go over specific prompt engineering principles you can use as an SEO to get more out of Large Language Models (LLMs) like GPT-4 and Claude. But before we get into that, it’s important to understand how LLMs work.

If there’s one thing I hope you’ll take away from this blog post it’s this: LLMs don’t store facts, they store probabilities (I stole this phrase from my friend Mark Hennings).

LLMs don’t store facts, they store probabilities - Mark Hennings

Think of an LLM as a sophisticated slot machine. Each time you pull the lever (enter a prompt), the machine doesn’t produce a pre-determined answer but rather spins through billions of probabilistic connections to generate the most likely appropriate response.

This probabilistic nature means that prompting is about increasing the likelihood of getting useful outputs, not guaranteeing perfect results. Good prompts give the LLM more information (or context) which helps to narrow its potential range of responses.

But you will never narrow the range to one; there is always an element of randomness involved. And like a slot machine, each LLM has biases inherent in its probabilities that you may have to overcome.

Understanding this fundamentally changes how we approach AI tools: getting the most from them requires knowing how to tilt those probabilities in your favor, while remembering that you're still working with a sophisticated slot machine.

Ways to influence the LLM slot machine (and narrow its range of outputs)

Prompt engineering isn’t the only way we can influence LLMs.

Prompt Engineering

The art and science of crafting inputs to get better outputs. This is the focus of this post and the most accessible approach for most SEOs — it doesn’t require any coding or special tools (although tools can help).

Retrieval Augmented Generation (RAG)

This combines the general knowledge of LLMs with specific information retrieved from data sources you choose. In the same way that you do research before writing a blog post, gathering information and providing it to the LLM is a good way to influence the outputs.

If you’ve ever used Perplexity, it first runs an old-school search based on your prompt to find relevant information, then summarizes the results with an LLM. That’s Retrieval Augmented Generation.

But don’t forget the probabilities that are still involved in generating the output. You are essentially just providing a very large prompt with lots of additional context to the slot machine.
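
Here’s a minimal sketch of that pattern in Python, assuming the openai package and an OPENAI_API_KEY environment variable. The retrieved snippets are hardcoded placeholders you’d swap for a real search API, your own crawl data, or a vector database.

# A minimal RAG sketch: retrieve context first, then pass it to the model.
# Assumes the openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Placeholder snippets -- in practice these come from a search API, crawl data, or a vector DB.
retrieved_snippets = [
    "Our organic dog food is made with free-range chicken and no fillers.",
    "Ships in 5, 15, and 30 lb bags. Subscriptions save 10%.",
]

question = "Write a 150-character meta description for our organic dog food page."

# Stuff the retrieved context into the prompt -- it's still just a (very large) prompt.
context = "\n".join(f"- {snippet}" for snippet in retrieved_snippets)
prompt = f"Use only the facts below to answer.\n\nFacts:\n{context}\n\nTask: {question}"

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in whichever model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)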

Fine-tuning

Fine-Tuning is the process of training an existing “pre-trained,” general purpose LLM on additional data to specialize it for specific tasks. You’re essentially creating a copy of the original model with a new set of weights or probabilities influenced by the additional training material you provide. This is more technical and expensive than either prompt engineering or RAG.

For most SEO applications, prompt engineering offers the best balance of accessibility and results. That’s why we’ll focus there for the rest of this post.

The five core principles of prompt engineering (I read the book so you don’t have to)

When we started building MetaMonster, I knew I needed to get WAY better at prompt engineering. So I bought the book, “Prompt Engineering for Generative AI,” by James Phoenix and Mike Taylor. In it, they lay out five core principles of prompt engineering which closely mirror the recommendations in OpenAI’s own guide to prompt engineering.

If you learn these principles and keep coming back to them, you should start to see improvements in the outputs you’re getting pretty quickly.

1. Give Direction

LLMs perform better when they have detailed instructions on what you are looking for, the style they should follow, and what constitutes success. A good shortcut for tweaking the style of the output is role-playing: give the LLM a persona to mimic.

Techniques: describe the task in detail, give the LLM a persona to play, and spell out what a successful output looks like.

SEO Example:

You are an expert SEO copywriter who specializes in writing meta descriptions that improve click-through rates. Your meta descriptions should:
1. Include the primary keyword naturally
2. Create a sense of urgency
3. Be between 120-155 characters
4. Include a clear value proposition

Based on this product page about organic dog food, write 3 alternative meta descriptions.

This prompt establishes expertise, sets parameters, and gives clear direction on what makes a good output.
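
If you run prompts like this through an API instead of a chat window, the persona usually goes in the system message and the task in the user message. A rough sketch, assuming the openai Python package (the page copy is a made-up placeholder):

# Persona in the system message, task plus page context in the user message.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are an expert SEO copywriter who specializes in writing meta descriptions "
    "that improve click-through rates. Your meta descriptions should:\n"
    "1. Include the primary keyword naturally\n"
    "2. Create a sense of urgency\n"
    "3. Be between 120-155 characters\n"
    "4. Include a clear value proposition"
)

page_copy = "Organic dog food made with free-range chicken..."  # placeholder page content

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in your model
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": (
            "Based on this product page about organic dog food, "
            f"write 3 alternative meta descriptions.\n\n{page_copy}"
        )},
    ],
)
print(response.choices[0].message.content)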

2. Specify Format

Similar to giving direction, defining the specific structure you want the output to be in will improve consistency and usability of LLM outputs.

SEO Example:

Create a content brief for an article targeting the keyword "best gaming laptops 2025." Structure your response as follows:

## Primary Keyword
[Primary keyword and search volume]

## Secondary Keywords
[List 5-7 related keywords with search volumes]

## Search Intent
[Analyze the search intent in 2-3 sentences]

## Suggested H1
[Propose an H1 tag]

## Suggested Outline
[Create a detailed outline with H2s and H3s]

## Competitors to Beat
[List 3 top-ranking URLs with brief notes on their approach]

## Content Differentiators
[Suggest 3 ways to make this content stand out]

By specifying exactly what you want in the output, you’ll get a structured content brief that’s immediately useful rather than a wall of text you need to reformat.
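
If you’re calling a model through the API and want the structure to be machine-readable rather than markdown, OpenAI’s JSON mode is one option (other platforms have similar features). A sketch, assuming the openai package; the field names here are just one way you might structure a brief:

# Ask for the brief as JSON so it can be parsed programmatically.
# Note: JSON mode requires that the prompt itself mention JSON.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    'Create a content brief for an article targeting the keyword "best gaming laptops 2025". '
    "Respond in JSON with these keys: primary_keyword, secondary_keywords, search_intent, "
    "suggested_h1, suggested_outline, competitors_to_beat, content_differentiators."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in your model
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)

brief = json.loads(response.choices[0].message.content)
print(brief["suggested_h1"])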

3. Provide Examples

One of the most powerful techniques in prompt engineering is showing the model examples of what you want. In the world of prompt engineering this is called “few-shotting” or “multi-shotting.”

SEO Example:

Generate 5 SEO-optimized title tags for a blog post about prompt engineering for content marketers. Each title should include the keyword "prompt engineering," have an emotional hook, and be under 60 characters.

Examples of great title tags:
1. "5 SEO Tools That Actually Boost Rankings in 2025" (48 chars)
2. "Why Your Google Analytics Setup Is Costing You Money" (52 chars)
3. "How We Increased Organic Traffic 172% in 90 Days" (47 chars)

4. Evaluate Quality

Remember that prompts will always have a degree of randomness, which means you can luck into a good output with a bad prompt. To see how well your prompt is really working, you need to test it multiple times and then have a standard way of evaluating quality.

Techniques: run the same prompt multiple times, score the outputs against a consistent rubric, and test at scale where you can.

SEO Example:

Imagine you have a prompt to generate a page title. Rather than running it one page at a time inside of ChatGPT, you could run it across your entire website with Screaming Frog’s custom JavaScript feature (or with the custom prompt feature we’re building for MetaMonster). Then sort the responses alphabetically and you’ll be able to spot biases in the outputs more easily.
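
You can script the same kind of test yourself. Here’s a rough sketch in Python, assuming the openai package; the page list is a placeholder you’d replace with your own crawl data. It generates a title for every page, then sorts the results alphabetically so repeated openings and other biases jump out.

# Run the same title prompt across many pages, then sort outputs to spot biases.
from openai import OpenAI

client = OpenAI()

# Placeholder pages -- in practice, pull these from your crawl or CMS export.
pages = {
    "/home-gym-guide": "A beginner's guide to building a home gym on a budget.",
    "/peloton-alternatives": "A comparison of exercise bikes that cost less than a Peloton.",
    "/small-apartment-gym": "Tips for setting up workout space in a small apartment.",
}

titles = []
for url, summary in pages.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in your model
        messages=[{"role": "user", "content": f"Write one SEO title tag under 60 characters for this page: {summary}"}],
    )
    titles.append((response.choices[0].message.content.strip(), url))

# Sorting alphabetically makes patterns obvious, e.g. every title starting with "Discover".
for title, url in sorted(titles):
    print(f"{title}  <- {url}")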

5. Divide Labor

Complex tasks often benefit from being broken down into steps. This “chain of thought” approach helps LLMs tackle problems more systematically. You see this at play in a lot of the popular new reasoning models that “think out loud.”

SEO Example:

Help me analyze these 5 keywords for a client in the home fitness equipment niche. For each keyword:

Step 1: Identify the search intent (informational, transactional, navigational, or commercial investigation)
Step 2: Estimate the difficulty to rank (consider the likely competitors)
Step 3: Suggest the most appropriate content format (blog post, product page, comparison guide, etc.)
Step 4: Recommend one strategic advantage to focus on to outrank competitors

Keywords:
1. "affordable home gym equipment"
2. "peloton alternatives under $500"
3. "how to set up a home gym in a small apartment"
4. "best strength training equipment for beginners"
5. "home gym maintenance tips"

By explicitly asking the LLM to work through steps, you get more thoughtful, comprehensive analysis than if you’d simply asked for “thoughts on these keywords.”

Bonus: Generate multiple outputs in a single prompt

Another way I often use the probabilistic nature of LLMs to my advantage is by telling the AI to generate multiple outputs rather than a single output. I can then evaluate the options, and select the one that I think is best like I might when working with a junior designer. This is easier to implement for one-off tasks than for large-scale work.

SEO Example:

Generate 5 SEO-optimized title tags for a blog post about prompt engineering for content marketers. Each title should include the keyword "prompt engineering," have an emotional hook, and be under 60 characters.

Examples of great title tags:
1. "5 SEO Tools That Actually Boost Rankings in 2025" (48 chars)
2. "Why Your Google Analytics Setup Is Costing You Money" (52 chars)
3. "How We Increased Organic Traffic 172% in 90 Days" (47 chars)

This same example from earlier asks for 5 title tags instead of just one. This gives you a chance to evaluate multiple options, and increases the likelihood that one will work for what you need. Or better yet, mix and match the best parts of multiple options!
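
If you’re calling the API directly, many providers also let you request several completions of the same prompt in one call; OpenAI exposes this as the n parameter. A quick sketch, assuming the openai Python package:

# Ask for several completions of the same prompt and compare them.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in your model
    n=5,  # return five independent completions
    temperature=0.9,  # a bit more randomness so the options differ
    messages=[{"role": "user", "content": (
        'Write one SEO title tag under 60 characters, including the keyword "prompt engineering".'
    )}],
)

for i, choice in enumerate(response.choices, start=1):
    print(f"{i}. {choice.message.content.strip()}")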

You can use this principle easily in MetaMonster: click generate multiple times until you find an option you like, then tweak it and hit publish or export!

Putting It All Together: A Comprehensive SEO Prompt

Let’s combine these principles into a single, powerful prompt for SEO content creation:

You are an expert SEO content strategist who specializes in creating content that ranks well while providing exceptional value to readers. Your content is known for being comprehensive, well-structured, and engaging.

Create a rough draft of a 1,500-word blog post targeting the keyword "prompt engineering for SEO." The post should explain how SEOs can use prompt engineering to improve their work.

Step 1: Analyze the search intent behind this keyword and identify 5 related keywords to incorporate
Step 2: Create a detailed outline with H2s and H3s
Step 3: Write an introduction that hooks the reader and clearly states what they'll learn
Step 4: Write the full article following your outline
Step 5: Suggest a meta title and description for this post

Format your response as follows:
## Search Intent Analysis
[Your analysis]

## Related Keywords
[List of 5 keywords]

## Content Outline
[Detailed outline]

## Full Article
[Complete article with proper headings]

## Metadata
Title: [SEO title under 60 characters]
Description: [Meta description under 155 characters]

Examples of great blog post introductions:
1. "When I started using AI for content creation, I wasted hours on poorly constructed prompts. After three months of testing and 200+ articles, I've discovered what actually works for SEO-focused content. Here's everything I've learned about prompt engineering for search optimization..."

2. "Prompt engineering isn't just for AI researchers anymore. It's becoming a critical skill for SEOs who want to leverage AI without sacrificing rankings. In this guide, I'll show you exactly how to craft prompts that generate content Google (and your visitors) will love..."

This comprehensive prompt gives direction with an expert persona, divides the labor into explicit steps, specifies the exact output format, and provides examples of strong introductions to imitate.

Advanced prompt parameters

Beyond the core principles we’ve discussed, many AI platforms allow you to adjust a few technical parameters alongside your prompts. Think of these as the dials and knobs that let you fine-tune how the LLM’s probability machine works. They aren’t super necessary for most use-cases, but can be helpful if you aren’t getting the output you want by just tweaking the text of your prompt.

Temperature controls randomness and the perception of “creativity.” Higher values (0.7-1.0) produce more varied, random outputs, while lower values (0.1-0.3) yield more predictable, focused responses. For factual SEO tasks like meta description generation, try lower temperatures; if the wording feels too repetitive, or you’re brainstorming content angles, go higher.

Top P (nucleus sampling) determines how many probable next tokens the model considers. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered. This is an alternative to temperature for controlling output variability.

Top K limits the model to sample from only the K most likely next tokens. This helps avoid unlikely or nonsensical completions. Try increasing it and see what kind of wild results you get back.

Max Tokens caps how long the response can be. Setting appropriate limits can help focus the AI on delivering concise, relevant information.

Presence Penalty reduces repetition by penalizing tokens that have already appeared in the text. This can be helpful when generating varied content ideas.

Frequency Penalty discourages the model from repeating the same phrases too often, which is particularly useful for longer content generation.

In practice, temperature is the parameter most SEOs should experiment with first. Start with 0.3 for factual tasks and 0.7 for creative ones, then adjust based on results.
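
Here’s how those dials look in an actual API call. This is a sketch using the openai Python package; note that Top K isn’t exposed by OpenAI’s chat API, though Anthropic and some other providers offer it.

# The same prompt with the main sampling dials made explicit.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in your model
    messages=[{"role": "user", "content": "Write a meta description for a page about organic dog food."}],
    temperature=0.3,        # low for factual tasks, ~0.7 for brainstorming
    top_p=1.0,              # nucleus sampling; usually adjust this OR temperature, not both
    max_tokens=120,         # cap the response length
    presence_penalty=0.3,   # discourage reusing tokens that have already appeared
    frequency_penalty=0.3,  # discourage repeating the same phrases
)
# top_k is not an OpenAI chat parameter; on Anthropic's API you would pass top_k=... instead.
print(response.choices[0].message.content)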

Balancing AI assistance with human expertise

At MetaMonster we fundamentally believe that the best uses for AI are getting humans unstuck and freeing us up to do more creative, meaningful work (not less). Even with the best prompts, AI output will be more generic, less opinionated, and less interesting than something created by a thoughtful, creative marketer.

Generic is fine for documentation, summaries (hello meta descriptions), and even rough drafts. But it doesn’t work for breaking through the noise, and there’s about to be a lot more noise.

As you experiment with these techniques, keep track of what works. Build your own library of effective prompts. And most importantly, bring your unique human perspective to refine and improve what the AI suggests.

Let MetaMonster handle the prompt engineering for you

While prompt engineering is an awesome skill to have that can significantly improve your SEO workflows, it’s still time-consuming to craft the perfect prompts for every task. And maybe it’s just not your jam.

That’s exactly why we built MetaMonster—an SEO crawler that fixes issues for you (or at least gives you a really solid starting point).

With MetaMonster, you can generate meta descriptions and page titles with a single click, without worrying about the perfect temperature setting or prompt construction.

We’ve spent lots of time testing (principle number 4) and refining our prompts based on thousands of real-world examples, so you don’t have to. Our tool automatically extracts the relevant context from your pages and applies carefully engineered prompts that follow all the principles outlined above.

And we’re adding more generation types soon, plus custom prompts for those of you who do want to dive deeper and master the craft of prompt engineering.

Clean up missing metadata in minutes instead of months.

Generate SEO-optimized metadata for your clients fast with the SEO crawler that fixes issues for you.

Start 7-day free trial