Day 2 – Prompt as an API Contract (Structured Outputs)

On Day 1, we learned that LLMs are stateless text prediction engines accessed through APIs. Today, we go deeper into something critical for production systems:

A prompt is not a question. It is an API contract.

As backend engineers, especially those coming from Drupal and PHP, we must stop thinking of prompts as "chat messages" and start treating them as strict input-output contracts.

This shift is what separates hobby AI usage from production AI engineering.


What Does "Prompt as an API Contract" Mean?

In Drupal or Symfony, when we build an API endpoint, we define:

  • Request structure
  • Validation rules
  • Response schema
  • Error handling

Example in Drupal (conceptually):

  • Controller receives request
  • Service processes logic
  • Returns structured JSON
  • Validation ensures safe output

With LLMs, the prompt replaces traditional backend logic. The structure you define inside the prompt determines the reliability of the output.

If your prompt is vague → output is unpredictable.
If your prompt defines a schema → output becomes structured and usable.
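The difference shows up immediately when you try to parse the output. A minimal sketch (the two replies below are illustrative, not real API output):

```python
import json

# A vague prompt invites prose; a contract prompt pins down the shape.
prose_reply = "Well, this article touches on several security concerns..."
json_reply = '{"category": "security", "confidence_score": 0.85}'

def try_parse(reply: str):
    """Attempt to parse a reply as JSON; return None on failure."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None

print(try_parse(prose_reply))  # None: prose cannot be machine-parsed
print(try_parse(json_reply))   # a usable Python dict
```

The prose reply may be "correct" to a human, but it is useless to downstream code. Only the schema-constrained reply survives parsing.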


Why Structured Output Is Critical

In production systems:

  • We cannot parse random prose.
  • We cannot rely on natural language.
  • We must enforce JSON.

Structured output allows:

  • JSON parsing
  • Type validation
  • Safe storage in database
  • Predictable integration with Drupal entities

Think of it as defining a DTO (Data Transfer Object) in PHP.
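The DTO analogy can be made literal with a Python dataclass. A sketch, assuming the schema used later in this post (the field names and allowed categories mirror that contract):

```python
from dataclasses import dataclass

ALLOWED_CATEGORIES = {"security", "content", "configuration", "unknown"}

@dataclass
class ContentAnalysis:
    """DTO for the LLM response, like a typed value object in PHP 8."""
    summary: str
    category: str
    confidence_score: float

    @classmethod
    def from_llm(cls, data: dict) -> "ContentAnalysis":
        """Validate and coerce a raw dict, falling back to safe defaults."""
        category = data.get("category")
        if category not in ALLOWED_CATEGORIES:
            category = "unknown"
        score = data.get("confidence_score")
        if not isinstance(score, (int, float)) or not 0 <= score <= 1:
            score = 0.0
        return cls(
            summary=str(data.get("summary") or ""),
            category=category,
            confidence_score=float(score),
        )

dto = ContentAnalysis.from_llm(
    {"summary": "Role misconfiguration", "category": "security", "confidence_score": 0.9}
)
print(dto)
```

Just like a DTO in PHP, the raw payload is untrusted until it passes through this boundary.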


RPA, Agents, LLM, Machine Learning – Where They Fit

While learning AI, it is easy to mix up terminology. Let's draw the boundaries:

LLM (Large Language Model)

  • Predicts next tokens
  • Text-based processing
  • Used for summarization, classification, extraction

You used this on Day 1.

Machine Learning (General ML)

  • Broader field
  • Includes regression, clustering, neural networks
  • Often requires training models

We are not training models. We are consuming them via APIs.

AI Agent

  • Uses LLM as reasoning engine
  • Can call tools
  • Can perform multi-step workflows
  • Maintains short-term memory

We will build agents later in the roadmap.

RPA (Robotic Process Automation)

  • Automates repetitive tasks
  • Traditionally rule-based
  • When combined with LLM → becomes intelligent automation

Example:

  • Read email
  • Extract structured data
  • Trigger Drupal update

That is AI + RPA combined.
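A sketch of that pipeline, with every step stubbed out (read_email, extract_structured_data, and trigger_drupal_update are hypothetical placeholders; in production the middle step calls the LLM and the last one hits a Drupal endpoint):

```python
def read_email() -> str:
    """Stub: a real RPA flow would poll an inbox (IMAP, webhook, ...)."""
    return "Customer #4521 reports a broken permissions page on /admin/people."

def extract_structured_data(email_body: str) -> dict:
    """Stub: in production this calls the LLM with a JSON-contract prompt.
    A canned response here just shows the expected data shape."""
    return {"customer_id": 4521, "category": "security",
            "summary": "Broken permissions page"}

def trigger_drupal_update(payload: dict) -> str:
    """Stub: in production this POSTs to a Drupal JSON:API endpoint."""
    return f"Queued update for customer {payload['customer_id']}"

# The RPA loop: rule-based plumbing around one intelligent extraction step.
email = read_email()
data = extract_structured_data(email)
result = trigger_drupal_update(data)
print(result)
```

Only the extraction step is "AI"; everything around it stays deterministic and auditable.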


Practical: Building a Strict Structured Prompt

We will now modify our Python function to behave like a real API contract.

Below is the improved version with deep comments explaining Python structure and comparing to PHP/Drupal.

import os
import json
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables from .env file
# Similar to reading settings.php environment variables in Drupal
load_dotenv()

# Read API key from environment
# Equivalent to getenv() in PHP
api_key = os.getenv("OPENAI_API_KEY")

# Defensive validation
# Like checking required configuration before service bootstraps
if not api_key:
    raise ValueError("Missing OPENAI_API_KEY in environment.")

# Create client object
# Comparable to instantiating a service in Drupal
client = OpenAI(api_key=api_key)


def analyze_content(text: str) -> dict:
    """
    Python function definition.

    text: str  -> Type hint (like declaring string in PHP 8)
    -> dict    -> Return type hint (like : array in PHP)

    In Drupal terms:
    - This is like a service method.
    - It processes data and returns structured output.
    """

    # Prompt as contract
    # We define EXACT schema
    prompt = f"""
You are a backend content classification API.

Return ONLY valid JSON with this exact structure:

{{
  "summary": string,
  "category": one of ["security", "content", "configuration", "unknown"],
  "confidence_score": number between 0 and 1
}}

Content:
{text}
"""

    # Make API call
    # Equivalent to Guzzle HTTP client in PHP
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a strict JSON-only API."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.2
    )

    # Extract text response
    # message.content can be None in the SDK typings; default to ""
    # so a missing payload falls into the JSON error path below
    raw_output = response.choices[0].message.content or ""

    try:
        # Convert JSON string to Python dictionary
        parsed = json.loads(raw_output)

        # Optional validation step (production mindset)
        if not isinstance(parsed.get("confidence_score"), (int, float)):
            parsed["confidence_score"] = 0.0

        return parsed

    except json.JSONDecodeError:
        # Fallback pattern
        # Similar to returning safe default in Drupal service
        return {
            "summary": None,
            "category": "unknown",
            "confidence_score": 0.0,
            "error": "Invalid JSON returned"
        }


if __name__ == "__main__":
    test_input = "Improper role permissions may expose sensitive configuration settings."

    result = analyze_content(test_input)
    print(json.dumps(result, indent=2))
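One failure mode worth handling separately: models sometimes wrap JSON in markdown code fences even when told not to. A defensive parsing helper, sketched independently of the function above:

```python
import json
from typing import Optional

def parse_llm_json(raw: str) -> Optional[dict]:
    """Strip optional markdown fences, then parse JSON; None on failure."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (e.g. ```json) and the closing fence.
        lines = cleaned.splitlines()
        cleaned = "\n".join(lines[1:-1]) if len(lines) > 2 else ""
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None

print(parse_llm_json('```json\n{"category": "security"}\n```'))
```

Newer OpenAI chat models also accept response_format={"type": "json_object"} on the API call, which constrains the output to valid JSON at the API level, but defensive parsing still belongs in production code.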

Python Structure Compared to PHP / Drupal

  Python Concept            | PHP Equivalent      | Drupal Equivalent
  def function()            | function name()     | Service method
  dict                      | associative array   | Render array / config array
  try/except                | try/catch           | Exception handling
  os.getenv()               | getenv()            | Environment config
  client.chat.completions   | Guzzle HTTP call    | External service call

This makes Python easy for you to pick up: it is not a new paradigm, only new syntax.


Engineering Mindset Upgrade Today

We learned:

  • Prompts must define structure
  • AI output must be validated
  • Temperature affects reliability
  • Defensive programming is mandatory

We also clarified differences between:

  • LLM
  • ML
  • Agent
  • RPA

This removes confusion as we go deeper into advanced topics.


Why This Matters for Drupal Engineers

In a Drupal system:

AI should:

  • Classify content
  • Extract metadata
  • Assist workflows

AI should NOT:

  • Decide permissions
  • Override business rules
  • Modify critical system state without validation

AI is a processing layer.
Drupal remains the authority.
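One way to enforce that boundary in code: treat the AI result as a suggestion and gate every write behind deterministic rules. A sketch (the field names and threshold are illustrative, not Drupal API):

```python
ALLOWED_CATEGORIES = {"security", "content", "configuration", "unknown"}
CONFIDENCE_THRESHOLD = 0.7  # assumed policy value; tune per use case

def apply_ai_suggestion(node: dict, ai_result: dict) -> dict:
    """Apply an AI classification only if it passes deterministic checks.
    Drupal-side rules, not the model, decide whether the write happens."""
    category = ai_result.get("category")
    score = ai_result.get("confidence_score", 0)

    valid = (
        category in ALLOWED_CATEGORIES
        and isinstance(score, (int, float))
        and score >= CONFIDENCE_THRESHOLD
    )
    if valid:
        node["field_category"] = category   # validated, safe write
    else:
        node["field_category"] = "unknown"  # safe default
        node["needs_review"] = True         # flag for a human editor
    return node

print(apply_ai_suggestion({"nid": 42},
                          {"category": "security", "confidence_score": 0.9}))
```

The model proposes; the deterministic gate disposes. Anything outside the allowed vocabulary or below the threshold lands in a human review queue instead of the database.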


Summary of Day 2

Today we transformed prompts into structured API contracts.

We moved from:
"Ask AI a question"

to:
"Define strict response schema and validate it like a backend engineer."

This mindset is foundational for:

  • RAG systems
  • AI agents
  • Enterprise automation
  • Drupal + AI integrations

Day 3 will focus on:

Building this into a FastAPI service with request validation and preparing it for Drupal integration.

The journey continues with structure and discipline.