
Screenshot to Code: How to Turn Any UI Into a Working Prompt

March 14, 2026 · 7 min read · by Pedro

You see a UI somewhere — a Dribbble shot, a competitor's product, a design someone posted on X. It is exactly what you want to build. So you take a screenshot and paste it into Claude or GPT-4 with the message "build this."

What comes back is close but wrong. The colors are off. The spacing is different. The hover states are invented. The font weights are guessed. You spend the next hour describing things that were already visible in the screenshot.

The problem is not that the AI cannot read screenshots. It can. The problem is that "build this" is not a prompt — it is a request without a spec. The AI sees the screenshot and still has to make dozens of decisions you did not make for it.

What you actually need to extract

Before you can write a prompt from a screenshot, you need to extract the right information. Here is the full list:

Colors

Every background, border, text, shadow, and gradient — as hex or rgba values. Use a color picker tool if needed.

Typography

Font family, size in px, weight (400/500/600/700/800), line-height, letter-spacing, and opacity for each text element.

Spacing

Padding and margin in px. If you cannot measure exactly, estimate based on the visible proportions.

Border & radius

Border width, color, and border-radius in px for every element.

Effects

Box-shadow layers, backdrop-filter blur values, opacity, and any gradients.

Interaction states

What changes on hover, focus, active, and disabled — even if the screenshot only shows the default state.

Animation

If anything moves or transitions, estimate the duration in ms and the easing curve.
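Before writing any prose, it helps to collect everything from the checklist above into one structured spec. A minimal sketch in TypeScript — the field names, shape, and values here are illustrative assumptions, not a fixed schema:

```typescript
// Illustrative shape for the values extracted from a screenshot.
// Every field name and sample value below is an assumption, not a standard.
interface UISpec {
  colors: Record<string, string>;       // hex or rgba values
  typography: Record<string, string>;   // size / weight / line-height per element
  spacing: Record<string, string>;      // padding and margin in px
  effects: string[];                    // shadows, blurs, gradients
  interactions: Record<string, string>; // hover, focus, active, disabled
}

const spec: UISpec = {
  colors: {
    background: "rgba(12,12,18,0.92)",
    primaryText: "#ffffff",
  },
  typography: {
    title: "18px, weight 700, letter-spacing -0.03em",
    subtitle: "13px, weight 400, line-height 1.6",
  },
  spacing: {
    container: "max-width 380px, padding 24px",
  },
  effects: [
    "backdrop-filter: blur(32px)",
    "box-shadow: 0 20px 60px rgba(0,0,0,0.5)",
  ],
  interactions: {
    cardHover: "border rgba(255,255,255,0.14), translateY(-2px)",
  },
};
```

Once the spec exists as data, turning it into prompt sections is mechanical — which is exactly what makes the result reproducible instead of guessed.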

How to structure the prompt

Once you have extracted the details, structure the prompt like this:

Act as a world-class frontend engineer. Recreate this UI exactly (React + Tailwind):

LAYOUT
— Container: max-width 380px, padding 24px
— Border-radius: 20px

COLORS
— Background: rgba(12,12,18,0.92)
— Border: 1px solid rgba(255,255,255,0.07)
— Primary text: #ffffff, opacity 0.92
— Secondary text: rgba(255,255,255,0.45)

TYPOGRAPHY
— Title: 18px, weight 700, letter-spacing -0.03em
— Subtitle: 13px, weight 400, line-height 1.6

EFFECTS
— backdrop-filter: blur(32px)
— box-shadow: 0 20px 60px rgba(0,0,0,0.5)

INTERACTIONS
— Card hover: border → rgba(255,255,255,0.14), transform translateY(-2px)
— Button hover: opacity 0.85
— Transition: all 0.2s ease

Single file. No external deps. Production-ready.

The part most people skip

Interaction states. The screenshot shows you the default. It does not show you hover, focus, active, or loading states. But your prompt needs to specify all of them — otherwise the AI invents them and they never match your intent.

For any interactive element, always add: what changes on hover, what the transition timing is, and what the active/pressed state looks like. Three extra lines that prevent an hour of corrections.
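Those three lines can be generated rather than remembered. A hypothetical TypeScript helper — the function name, parameters, and default values are all illustrative, not part of any library:

```typescript
// Hypothetical helper: emits the interaction-state lines a screenshot
// never shows, in the same "—" bullet format as the prompt above.
// All parameter names and sample values are illustrative assumptions.
function interactionLines(
  element: string,
  hover: string,
  transition: string,
  active: string,
): string[] {
  return [
    `— ${element} hover: ${hover}`,
    `— Transition: ${transition}`,
    `— ${element} active: ${active}`,
  ];
}

const buttonStates = interactionLines(
  "Button",
  "opacity 0.85",
  "all 0.2s ease",
  "scale(0.98)",
);
console.log(buttonStates.join("\n"));
```

Append the result to the INTERACTIONS section of the prompt for every clickable element, and the AI no longer has room to invent its own states.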

The faster way

Extracting all of this manually from every screenshot takes 10-15 minutes. tknctrl does it in under 40 seconds — upload the screenshot, and it extracts every color, spacing value, typography detail, effect, and interaction state automatically, then writes the full prompt ready to paste into Claude, GPT-4, Cursor, or v0.

Upload a screenshot. Get a production prompt.

tknctrl extracts every detail and writes the prompt for you.

Try it free →