
Everything I know about AI, I learned from a genie

A content designer’s guide to not wasting your wishes on something stupid

I Dream of Jeannie promotional picture via Amazon Prime

I was a lucky child. My parents and grandparents spent hours reading and reciting fairytales to me. Most of them weren’t age-appropriate, of course, involving children being eaten and loved ones dying on every other page. But there was always enough moral to the story to distract from the horrors and drive home a message. Trying to understand what it all meant kept me up at night ruminating.

I recently had an epiphany: these stories were excellent preparation for what I’m currently facing in my career. Dealing with artificial intelligence. Understanding it. Giving it instructions it simply cannot misinterpret. Getting something valuable from it in return. Judging the result.

In fact, everything I know about AI can be traced back to early stories about genies with magical powers and how to treat them.

Aladdin’s 3 rules for wishes.

In Disney’s Aladdin, the genie has three simple rules:

The genie cannot kill anyone, make people fall in love, or bring people back from the dead. Today’s LLMs come with similar baseline guardrails and constraints that set the scene.

  • Rule #1: Can’t kill anyone. The genie won’t be your assassin, and neither will an LLM. Ask it for step-by-step instructions for making a weapon or synthesizing something dangerous, and it will refuse. It has a hard line around content that could hurt someone, no matter how cleverly you phrase the wish.
  • Rule #2: Can’t make people fall in love (but it might try anyway). The genie can’t manufacture real feelings. And LLMs, too, are built with guardrails against generating content designed to psychologically manipulate or deceive (flattery, false urgency, emotional exploitation). In theory. In practice, this is the rule the genie bends most. Ask an LLM to write a “compelling” anything and it will reach for every rhetorical lever available: validation, social proof, a sense of scarcity. It won’t fall in love with you, but it will absolutely tell you your idea is great when it isn’t, if that’s what your prompt implies you want to hear. The wish gets granted. Just not honestly. As I’ve argued in a previous article, most LLMs actually need stronger guardrails for this rule.
  • Rule #3: Can’t bring people back from the dead (can’t create something truly “real”). The genie can’t cross the line between the magical and the real. LLMs similarly can’t reach into the real world and do things: they can’t actually fix your relationship, cook your meals, or verify that what they’re telling you is true. They generate plausible-sounding output. What’s real, and what happens next, is still on you: your judgment, your abilities. More on that later.

You get 3 wishes, and that’s it.

In Aladdin, as in many other stories involving genies, there’s a limit to the number of wishes you get. Usually, it’s 3.

This mirrors an LLM’s context window limit.

There’s only so much the LLM can hold in mind at once, just as a genie only grants you a fixed number of wishes. Once the limit is reached, you have to start a new session (with your LLM) or move on (from your genie, duh).

There’s also a harder constraint hiding inside this one: you can’t wish for more wishes.

LLMs can’t rewrite themselves mid-conversation, learn from your chat, or become smarter on the fly. The model you’re talking to is frozen in time. And once you hit your token limit for the session — or the day, or the month — that’s it until you pay more or wait. At least the genie just goes back in the lamp. It doesn’t charge you a subscription fee to come back out.
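
To make the wish budget concrete, here’s a minimal sketch in Python (assuming the tiktoken library; the encoding name and prompt are placeholders) that counts how many tokens a wish spends before you make it:

    import tiktoken

    # Load an OpenAI-style tokenizer; cl100k_base underlies many recent models.
    encoding = tiktoken.get_encoding("cl100k_base")

    prompt = "Rewrite this error message so a non-technical user knows what to do next."

    # Every token in the prompt (and in the reply) spends part of the context
    # window, so even a large window is a budget, not a bottomless lamp.
    token_count = len(encoding.encode(prompt))
    print(f"This wish costs {token_count} tokens.")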

So much for the baseline comparison between genies and LLMs. But it gets even better, because the fairytales teach us a lot about prompting, too.

Choose your words carefully.

A good prompt is well thought out, follows a clear information hierarchy, and provides just the right amount of context: too much and you burn unnecessary tokens, too little and the output sucks. The language is plain and simple, with no vagueness and no room for misinterpretation.

How do I know this? I watched enough I Dream of Jeannie growing up.

So what does a well-formed wish actually look like?

Be specific about the outcome, not just the action. Don’t say “make this shorter.” Say “cut this to three sentences while keeping the main point and the warning in paragraph two.” You’re stating what done looks like.

Give it a role. Genies respond to context. So do LLMs. “Rewrite this” produces something generic. “Rewrite this as a senior content designer simplifying a legal clause for a non-native English speaker” is likely to produce something useful.

State what you don’t want. The genie has no idea you didn’t want Monopoly money. You only said “a million dollars.” Tell AI what failure looks like: “don’t add bullet points,” “don’t soften the tone,” “don’t invent statistics.” Negative constraints are often more powerful and to the point than positive instructions.

A good wish (and a good prompt) is an exercise in precision. As a content designer, this is already your job.
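
Put together, those three moves might look like this. A sketch using the OpenAI Python client; the model name, role description, and clause are illustrative placeholders, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from your environment

    legal_clause = (
        "The Subscriber may terminate this Agreement by providing written "
        "notice no fewer than thirty (30) days prior to the renewal date."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Give it a role: context shapes the output.
            {
                "role": "system",
                "content": "You are a senior content designer simplifying "
                "legal copy for non-native English speakers.",
            },
            # Be specific about the outcome, and state what you don't want.
            {
                "role": "user",
                "content": (
                    "Rewrite the clause below in plain language, in three "
                    "sentences or fewer. Keep the 30-day cancellation "
                    "deadline. Don't add bullet points, don't soften the "
                    "tone, don't invent details.\n\n" + legal_clause
                ),
            },
        ],
    )

    print(response.choices[0].message.content)

The same discipline applies whether you’re typing into a chat box or calling an API: the role, the outcome, and the negative constraints all travel with the wish.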

The lamp was made for you.

Yet there are many ways AI (or a genie) can misinterpret your words and go off the rails:

Literal interpretation:

  • Asking a genie to “live forever” might result in being turned into a statue or living in an unending state of torment, rather than (your imagined) eternal youth and happiness.
  • Asking the AI to “make this email shorter” might make it delete sentences until the email is three words long. Technically shorter. It did exactly what you said, just not what you meant.
  • Or: you ask it to “fix the bug in my code”, and it deletes the failing test instead of fixing the underlying problem. No test, no failure. Wish granted. Has happened to me more than once…

Physical misinterpretation:

  • Wishing for “super speed” could result in burning up due to air friction, as the genie grants the speed but not safety precautions.
  • Asking AI to “write a persuasive version of this” works as instructed, but it strips out all the nuance, caveats, and honesty that made your original argument trustworthy. You didn’t ask for credible.
  • Or: you ask it to “simplify this medical information for patients”, and it simplifies so aggressively that it omits a critical dosage warning. Simpler, yes. Safe, no.

Sometimes, you might get a magic carpet. Other times, you might lose it all.

The fairytale genie isn’t malicious. It’s not trying to trick you. It just has no theory of mind about what you actually want. Neither does an LLM. It’s doing exactly what you asked. And doing that is sometimes the most dangerous thing of all.

Screenshot from Disney’s Aladdin showing the magic carpet in the air

Sometimes, things go smoothly and you’re impressed with how quickly you and your lil AI friend can produce something great. It’s like a magic carpet ride. I can show you the world…

Other times, the output obviously doesn’t meet the quality bar, or worse: it’s full of hallucinations and half-baked arguments delivered in a confident tone of voice. Whether the output is valuable and factual depends on many factors: the quality of the model, the context you provided, and the prompt itself, to name a few.

It always takes strong human judgment to ensure everything is correct and makes sense. And it ends the same way every genie story ends:

Not with the wishes themselves, but with what the wisher learned from making them.

The greedy ones waste all three and end up back where they started. The clever ones get something real. Why? Because they knew exactly what they wanted before they opened their mouth.

AI isn’t magic. But it rewards the same thing the old stories always did: clarity of thought, precision of language, and enough humility to know that the output is only as good as the wish.

If you’re a content designer, you’ve been practicing for this your entire career.

So: apply common sense, fact check, review, rewrite, and be careful out there, or you might just end up back where you started.

Nicole is a Content Designer turned Design Director based in Stockholm, Sweden. She potters, writes poetry, and raises little girls in a house by a meadow. You can follow her writing here or get it delivered straight to your inbox via her publication, eggwoman. Nicole is on LinkedIn.

