
How to use Draftship with the OpenAI API

Use OpenAI's Chat Completions or Responses API to generate Draftship-ready email blocks. Tool use, structured outputs, and per-recipient batch generation patterns.

HTML import path: API only
Merge syntax: generate placeholder variables; substitute via your ESP
Image hosting: external
Best for: engineering teams already on OpenAI's stack who need batch email drafting at scale.
Watch out for: free-form output drifts. Always use Structured Outputs (JSON Schema) when generating Draftship blocks.

OpenAI's API is the most common LLM behind email-drafting code. The right pattern is Structured Outputs: define a JSON schema for your Draftship block format, pass it on every call, and the model returns conforming JSON. No parsing free-form text.

Why Structured Outputs matter

Free-form prompts produce drift. One run gives you "subject:" and another gives you a paragraph that buries the subject in the body. Structured Outputs (introduced in 2024) let you bind the model to a JSON schema. The output validates against the schema before returning to your code.

Draftship and OpenAI structured output flow:

1. Define the block JSON schema (subject, preheader, blocks[]).
2. Send the prompt plus schema to the API (response_format: json_schema).
3. GPT returns conforming JSON, parsed automatically.
4. Map the JSON to Draftship blocks, per block.type.
5. Save to a template and send via your ESP (Outlook-safe).

The schema for a Draftship-ready email

json
{
  "type": "object",
  "properties": {
    "subject": { "type": "string", "maxLength": 70 },
    "preheader": { "type": "string", "maxLength": 100 },
    "blocks": {
      "type": "array",
      "items": {
        "oneOf": [
          {
            "type": "object",
            "properties": {
              "type": { "const": "heading" },
              "level": { "type": "integer", "minimum": 1, "maximum": 3 },
              "text": { "type": "string" }
            },
            "required": ["type", "level", "text"]
          },
          {
            "type": "object",
            "properties": {
              "type": { "const": "text" },
              "html": { "type": "string" }
            },
            "required": ["type", "html"]
          },
          {
            "type": "object",
            "properties": {
              "type": { "const": "button" },
              "label": { "type": "string", "maxLength": 30 },
              "href": { "type": "string", "format": "uri" }
            },
            "required": ["type", "label", "href"]
          }
        ]
      }
    }
  },
  "required": ["subject", "preheader", "blocks"]
}

This shape mirrors Draftship's internal block model. After parsing, each entry maps one-to-one onto a Draftship block.
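As a sketch of that mapping step, here is one way to walk the parsed JSON and produce typed block records. The output field names (`kind`, etc.) are illustrative placeholders, not Draftship's actual API:

```python
import json

def map_blocks(payload: dict) -> list[dict]:
    """Convert the model's JSON output into a list of typed block dicts."""
    mapped = []
    for block in payload["blocks"]:
        kind = block["type"]
        if kind == "heading":
            mapped.append({"kind": "heading", "level": block["level"], "text": block["text"]})
        elif kind == "text":
            mapped.append({"kind": "text", "html": block["html"]})
        elif kind == "button":
            mapped.append({"kind": "button", "label": block["label"], "href": block["href"]})
        else:
            # Fail loudly on anything outside the schema's oneOf.
            raise ValueError(f"Unknown block type: {kind}")
    return mapped

raw = json.loads(
    '{"subject": "Launch day", "preheader": "It is here", '
    '"blocks": [{"type": "heading", "level": 1, "text": "Hello"}]}'
)
blocks = map_blocks(raw)
```

Failing on unknown block types is deliberate: with strict Structured Outputs it should never happen, so if it does, something upstream changed.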

API call with Structured Outputs

bash
curl -X POST https://api.openai.com/v1/chat/completions \
  -H 'Authorization: Bearer sk-...' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      { "role": "system", "content": "You write marketing emails." },
      { "role": "user", "content": "Draft a launch email for..." }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "draftship_email",
        "strict": true,
        "schema": { "...your schema..." }
      }
    }
  }'

With strict: true, the API uses constrained decoding: the model can only emit tokens that keep the output valid against your schema, so there is no free-form text to parse. Note that strict mode requires "additionalProperties": false on every object in the schema, and your code should still handle refusals and truncated responses.
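If you build the request in code rather than curl, the body is the same shape. A minimal sketch, with the schema abbreviated and no network call made (pass `payload` to your HTTP client or the openai SDK):

```python
# Abbreviated schema; in practice use the full block schema from above.
EMAIL_SCHEMA = {
    "type": "object",
    "properties": {
        "subject": {"type": "string", "maxLength": 70},
        "preheader": {"type": "string", "maxLength": 100},
        "blocks": {"type": "array", "items": {"type": "object"}},
    },
    "required": ["subject", "preheader", "blocks"],
    "additionalProperties": False,  # required by strict mode
}

def build_request(user_prompt: str) -> dict:
    """Assemble a Chat Completions request body with Structured Outputs."""
    return {
        "model": "gpt-4.1",
        "messages": [
            {"role": "system", "content": "You write marketing emails."},
            {"role": "user", "content": user_prompt},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "draftship_email",
                "strict": True,
                "schema": EMAIL_SCHEMA,
            },
        },
    }

payload = build_request("Draft a launch email for our new analytics dashboard.")
```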

Prompt patterns that ship

The prompt body is more important than the model choice. Three patterns:

  • Brand-anchored: include 2-3 real past emails as voice anchors. Cuts generic-marketing tone immediately.
  • Constraint-heavy: "Subject under 50 chars, no emoji, no exclamation." More constraints = less drift.
  • Negative examples: "Avoid these words: leverage, unlock, journey, synergy." Direct list of banned terms.
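The three patterns compose into a single system prompt. A sketch, where the anchor emails and banned-word list are placeholders for your own:

```python
BANNED_WORDS = ["leverage", "unlock", "journey", "synergy"]
VOICE_ANCHORS = ["<past email 1, pasted verbatim>", "<past email 2, pasted verbatim>"]

def build_system_prompt() -> str:
    """Combine brand anchors, hard constraints, and negative examples."""
    parts = [
        "You write marketing emails in our brand voice.",
        "Match the tone of these past emails:",
        *VOICE_ANCHORS,
        "Constraints: subject under 50 characters, no emoji, no exclamation marks.",
        "Never use these words: " + ", ".join(BANNED_WORDS) + ".",
    ]
    return "\n".join(parts)

system_prompt = build_system_prompt()
```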

Image generation via DALL-E or GPT Image

OpenAI's image models can generate hero images for email. For Draftship's Image block, the workflow:

1. Prompt DALL-E or the image API for an asset at 1200x600 (2x retina-ready).
2. Save the URL or download to your CDN.
3. Paste the URL into Draftship's Image block.

Don't hotlink to OpenAI's URLs in production; they expire. Always download and host on your own CDN.
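A sketch of that re-hosting step. Writing to a local directory stands in for your CDN upload, and the CDN base URL is hypothetical; in production you would first fetch the bytes from the temporary OpenAI URL before it expires:

```python
import pathlib

CDN_BASE = "https://cdn.example.com/email-assets"  # placeholder for your CDN

def rehost_image(image_bytes: bytes, filename: str, out_dir: str) -> str:
    """Persist the asset (stand-in for a CDN upload) and return its public URL."""
    path = pathlib.Path(out_dir) / filename
    path.write_bytes(image_bytes)
    return f"{CDN_BASE}/{filename}"
```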

Test send checklist

  • Validate JSON structure before mapping to Draftship blocks.
  • Run the rendered email through the size checker.
  • Send a test through your ESP.
  • Verify all AI-generated links are real, not hallucinated.
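The last check can be automated: require every button href to come from an allow-list you supplied in the prompt, so a hallucinated URL fails fast. A minimal sketch with placeholder URLs:

```python
# URLs you actually passed in the prompt; anything else is suspect.
ALLOWED_URLS = {"https://example.com/launch", "https://example.com/docs"}

def find_bad_links(blocks: list[dict]) -> list[str]:
    """Return every button href that was not explicitly supplied."""
    return [
        b["href"]
        for b in blocks
        if b.get("type") == "button" and b["href"] not in ALLOWED_URLS
    ]

bad = find_bad_links([
    {"type": "button", "label": "Read more", "href": "https://example.com/launch"},
    {"type": "button", "label": "Sign up", "href": "https://example.com/hallucinated"},
])
```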

When to use OpenAI vs Claude vs Gemini

All three produce passable email copy with the right prompts. Choose based on:

  • Pricing tier you're already on
  • Voice fit (test with the same prompt across all three)
  • Tooling integration (Structured Outputs, tool use, file handling)

For a side-by-side, see Use Draftship with Claude and Use Draftship with Gemini.

FAQ

Frequently asked questions

Should I use Chat Completions or the Responses API?
The Responses API is the newer endpoint and supports more features. For pure email generation, Chat Completions still works fine. Use Responses if you need tool use and persistent conversation state.
Will GPT hallucinate URLs?
Yes. Always pass the real URL in the prompt and require the model to use that exact URL. Validate links before publishing.
Can I generate emails in multiple languages with one prompt?
Yes. Add 'Generate in {{ language }}' to the prompt and pass the language as input. For multilingual sends, generate per-language variants and store them as separate Draftship templates.
How do I keep token costs down?
Use gpt-4.1-mini or smaller models for high-volume drafting. Reserve the larger models for high-stakes one-off emails. Cache prompts that don't change between runs.
Can OpenAI write Outlook-safe HTML directly?
Yes, with a careful prompt. But the output rarely matches what Draftship's exporter produces. The cleaner path is to generate JSON blocks, paste them into Draftship, and let Draftship handle the Outlook-safe export.
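The multilingual pattern from the FAQ above is a simple templating loop: one prompt template, one variant per language, each stored as its own Draftship template. A sketch with a placeholder prompt:

```python
PROMPT_TEMPLATE = "Draft a launch email for our product. Generate in {language}."

def build_language_prompts(languages: list[str]) -> dict[str, str]:
    """One fully-substituted prompt per target language."""
    return {lang: PROMPT_TEMPLATE.format(language=lang) for lang in languages}

prompts = build_language_prompts(["English", "German", "Japanese"])
```

Each resulting prompt is sent as its own API call with the same schema, so every language variant comes back as the same validated block structure.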
Try it yourself

Design in Draftship. Paste into OpenAI API.