
Surviving the Promptpocalypse: A Guide to Better LLM Conversations


Lee Boonstra just released a beautiful 68-page Prompt Engineering whitepaper that I will summarize here.

You’ve surely heard the buzz about LLMs and Agentic AI over the last couple of years. Think of them as that brilliant, slightly chaotic intern you once had. Super smart, tons of potential, but if you mumble your instructions, you’ll get a beautifully written report on the wrong topic. Prompt engineering is basically learning to talk to this intern – your instruction manual for the AI. You’re giving it a starting point and hoping it predicts the next words in a way that makes sense for *your* task.


Taming the Beast: LLM Output Settings

Before you even *think* about typing your prompt, you gotta fiddle with the LLM’s knobs and dials – output length, temperature, top-K, and top-P. It’s like tuning your guitar before a gig – essential stuff. Temperature controls randomness (low = focused and deterministic, high = creative and occasionally unhinged), while top-K and top-P restrict which candidate tokens the model is allowed to sample from.
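To build some intuition for what the temperature dial actually does, here’s a minimal sketch of temperature-scaled softmax over some made-up next-token logits – pure illustration, not any particular model’s internals:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more room for surprises

print(cold[0] > hot[0])  # -> True: low temperature concentrates probability mass
```

Low temperature is what you want for factual or classification tasks; crank it up only when you actually want creative variety.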

Your Prompting Toolkit: Tricks of the Trade

Alright, settings locked in. Time to actually *write* something. Here are some techniques that are more effective than just yelling at your screen:

1. Zero-Shot Prompting: The “Figure It Out Yourself” Method

The simplest of all. Just give the LLM a task description and let it rip. Example: “Classify this customer feedback as positive or negative.” Keep that temperature low if you don’t want any funny business.

Zero-Shot Example (Movie Review Classification):

Prompt: Classify movie reviews as POSITIVE, NEUTRAL, or NEGATIVE.
Review: “Her” is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. I wish there were more movies like this masterpiece.
Sentiment:
Output: POSITIVE
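In code, a zero-shot prompt is nothing more than string assembly: task description, input, and a cue for the answer. A minimal sketch (the helper name is mine, not from the whitepaper):

```python
def zero_shot_prompt(review: str) -> str:
    """Build a zero-shot classification prompt: task description only, no worked examples."""
    return (
        "Classify movie reviews as POSITIVE, NEUTRAL, or NEGATIVE.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

print(zero_shot_prompt('"Her" is a disturbing study... I wish there were more movies like this masterpiece.'))
```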

 

2. One-Shot & Few-Shot Prompting: “Here, Let Me Show You”

Sometimes, the intern needs an example. Or a few. One-shot gives one example; few-shot gives several. This is your golden ticket for getting the output in a specific format or style.

Few-Shot Example (JSON from Pizza Orders):

Prompt: Parse a customer’s pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.

JSON Response:
```json
{
  "size": "small",
  "type": "normal",
  "ingredients": [["cheese", "tomato sauce", "pepperoni"]]
}
```
Now, I would like a large pizza, with the first half cheese and mozzarella. And the other tomato sauce, ham and pineapple.
JSON Response:

Output:
```json
{
  "size": "large",
  "type": "half-half",
  "ingredients": [["cheese", "mozzarella"], ["tomato sauce", "ham", "pineapple"]]
}
```

Tip: Use good, varied examples. One wonky example can send your LLM down a rabbit hole. Include those pesky edge cases too!
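Programmatically, few-shot prompting just means prepending worked examples before the new input. A sketch along the lines of the pizza prompt above (the example data and helper name are mine):

```python
import json

# One worked example; in practice you'd want three to five, plus edge cases.
EXAMPLES = [
    (
        "I want a small pizza with cheese, tomato sauce, and pepperoni.",
        {"size": "small", "type": "normal",
         "ingredients": [["cheese", "tomato sauce", "pepperoni"]]},
    ),
]

def few_shot_prompt(order: str) -> str:
    """Prepend worked examples so the model imitates their output structure."""
    parts = ["Parse a customer's pizza order into valid JSON:\n"]
    for text, parsed in EXAMPLES:
        parts.append(f"EXAMPLE:\n{text}\nJSON Response:\n{json.dumps(parsed, indent=2)}\n")
    parts.append(f"{order}\nJSON Response:")
    return "\n".join(parts)

print(few_shot_prompt("Now, I would like a large, half-cheese, half-pineapple pizza."))
```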

 

3. Setting the Scene: System, Contextual, and Role Prompting

These three set the stage before the actual question: a system prompt defines the model’s overall purpose, a contextual prompt supplies background for the task at hand, and a role prompt hands the model a persona to play.

Role Prompting Example (Humorous Travel Guide):

Prompt: I want you to act as a travel guide. I will write to you about my location, and you will suggest 3 places to visit near me in a humorous style. My suggestion: “I am in Manhattan.”
Output Snippet: 1. Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire State Building… Prepare to feel like King Kong atop the Big Apple, minus the giant ape-sized banana.
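In chat-style APIs, role and system instructions usually travel as separate messages rather than one big string. A minimal sketch using the common system/user message convention (exact field names vary by provider):

```python
# The system message pins down persona and style; the user message carries the task.
messages = [
    {"role": "system",
     "content": "You are a travel guide. Answer in a humorous style and suggest 3 places."},
    {"role": "user", "content": "I am in Manhattan."},
]

def render(messages):
    """Flatten chat messages into one prompt string, for models without a chat API."""
    return "\n".join(f"{m['role'].upper()}: {m['content']}" for m in messages)

print(render(messages))
```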

 

4. Chain of Thought (CoT) Prompting: “Show Your Work, AI!”

This one’s a biggie for tasks that need some actual thinking. You get the LLM to spell out its reasoning steps *before* it gives the final answer. Just adding “Let’s think step by step” can work wonders, especially for math problems where LLMs tend to, shall we say, *improvise* the answers.

CoT Example (Solving a Word Problem):

Prompt: When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let’s think step by step.

Output Snippet:
1. Let’s assume my current age is “x” years.
2. When I was 3 years old, my age was 3 years.
3. My partner’s age at that time was 3 times my age, which means my partner’s age was 3 * 3 = 9 years.

…
6. Therefore, my partner’s current age is 9 + 17 = 26 years.

Why is CoT cool? It’s pretty easy to do, often works well, and you can see *how* the AI got its answer (or where it went off the rails).
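Two small pieces of plumbing make CoT practical: appending the trigger phrase, and pulling the final answer out of the reasoning trace. A sketch (the last-number heuristic is a common convention, not the whitepaper’s):

```python
import re

def add_cot_trigger(question: str) -> str:
    """Zero-shot CoT: just append the magic phrase."""
    return f"{question} Let's think step by step."

def extract_final_number(model_output: str):
    """Take the last number in the trace as the answer – a common, imperfect heuristic."""
    numbers = re.findall(r"-?\d+", model_output)
    return int(numbers[-1]) if numbers else None

# Hypothetical reasoning trace for the age puzzle above
trace = "When I was 3, my partner was 3 * 3 = 9. That was 17 years ago, so 9 + 17 = 26."
print(extract_final_number(trace))  # -> 26
```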

 

5. The Advanced Playbook (When You’re Feeling Brave):

Beyond CoT there are heavier hitters: self-consistency (sample several reasoning paths and take a majority vote), Tree of Thoughts, and ReAct (Reason & Act), which lets the model interleave reasoning steps with actions like calling a search tool.

ReAct Gist (Metallica Kids Counter):

Prompt: How many kids do the band members of Metallica have?

LLM’s Inner Monologue (Simplified):
Thought: Okay, Metallica has 4 members.
Action: Search “How many kids does James Hetfield have?”
Observation: He has three kids.
Thought: Cool, 1 down, 3 to go. Next: Lars Ulrich… (and so on, until it adds them all up)
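A real ReAct setup calls the model each turn and parses Thought/Action/Observation blocks out of its reply, handing the actions to tools like a search API. The control flow can be sketched with everything stubbed out (the lookup table hard-codes the counts the whitepaper’s example run finds, totalling 10):

```python
# Stubbed "search tool": in reality this would hit a search API.
KIDS = {"James Hetfield": 3, "Lars Ulrich": 3, "Kirk Hammett": 2, "Robert Trujillo": 2}

def search(query: str) -> int:
    for name, kids in KIDS.items():
        if name in query:
            return kids
    return 0

def react_loop(members):
    """Interleave Thought -> Action -> Observation per member, accumulating the answer."""
    total = 0
    for member in members:
        # Thought: I still need this member's count.
        observation = search(f"How many kids does {member} have?")  # Action: call the tool
        total += observation  # Observation folds back into the running state
    return total

print(react_loop(list(KIDS)))  # -> 10
```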

Your AI Coding Buddy (Handle With Care)

These LLMs aren’t just for words; they can sling code too – write it, explain it, translate it between languages, even help you debug your messes.

Example (Getting a Bash Script Written):

Prompt: Write a code snippet in Bash that asks for a folder name. Then, it takes the contents of the folder and renames all the files inside by prepending the name draft to the file name.

Output: (Lo and behold, a fairly decent, commented Bash script!)
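The post doesn’t reproduce the generated script, so here’s the equivalent renaming logic sketched in Python rather than Bash (to keep these examples in one language; the function name is mine):

```python
import os

def prepend_draft(folder: str) -> list:
    """Rename every file in `folder` by prepending 'draft' to its name; return the new names."""
    renamed = []
    for name in sorted(os.listdir(folder)):
        src = os.path.join(folder, name)
        if os.path.isfile(src):  # leave sub-folders alone
            new_name = f"draft{name}"
            os.rename(src, os.path.join(folder, new_name))
            renamed.append(new_name)
    return renamed
```

E.g. `prepend_draft("holiday_pics")` turns `IMG_01.jpg` into `draftIMG_01.jpg` – after you’ve tested it on a throwaway folder, of course.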

Crucial Warning: ALWAYS, ALWAYS review and test code generated by an AI! It’s a smart tool, not a magic wand. It can and will make mistakes.

The Data Struggler’s Guide to Prompting Glory (aka Best Practices)

Want to suck less at prompting? Here’s the cheat sheet, straight from the whitepaper’s best-practices chapter:

1. Provide examples – one-shot/few-shot is the single most effective trick in the box.
2. Design with simplicity – if the prompt confuses you, it’ll confuse the model.
3. Be specific about the output you want: format, length, style.
4. Prefer positive instructions over long lists of “don’t do this” constraints.
5. Use variables/placeholders so prompts are reusable.
6. Experiment – with input formats, writing styles, models, and settings.
7. Document your attempts: prompt, settings, result. Future you will thank you.

The Punchline

Prompt engineering isn’t magic. It’s an iterative grind of testing, tweaking, and figuring out how to have a sensible conversation with these incredibly powerful (and occasionally bizarre) AI models. It’s part science, part art, and a whole lot of “let’s try this and see what happens.”

So, go forth and prompt. May your outputs be relevant, your hallucinations minimal, and your data struggles slightly less struggle-y.
