Add invisible, cryptographic watermarking to every piece of AI-generated content your platform produces. Three endpoints, zero storage, full provenance.
Lyra embeds self-authenticating watermarks into AI-generated content — text, images, and audio. The watermark is invisible to humans but machine-verifiable. All metadata (model origin, timestamp, HMAC authentication tag) lives inside the content itself, so verification requires zero server-side state.
Text & Code
KGW Z-score statistical bias + invisible Unicode steganography. Works with any language.
Method: `kgw_statistical_payload_steganography`

Images (PNG)
DCT frequency-domain perturbation + R-channel LSB payload. Imperceptible to humans.
Method: `dct_lsb_dual_layer`

Audio (WAV)
FFT mid-frequency band embedding + sample LSB payload. Inaudible modification.
Method: `fft_lsb_dual_layer`

| Bytes | Field | Description |
|---|---|---|
| [0:2] | Magic | 0x574D ("WM") — identifies Lyra payloads |
| [2:6] | Timestamp | Unix uint32 big-endian — when the watermark was created |
| [6:26] | Model Name | UTF-8, zero-padded to 20 bytes — AI model identifier |
| [26:30] | Auth Tag | HMAC-SHA256(bytes[0:26], K)[:4] — 32-bit integrity tag |
WM_ID = SHA256(K ‖ ts_bytes ‖ model_bytes) — deterministic, same on embed and verify.
Get watermarking working in under 2 minutes.
curl -X POST https://hackeurope-lyra.onrender.com/api/watermark \
  -H "Content-Type: application/json" \
  -d '{
    "data_type": "text",
    "data": "The transformer architecture revolutionized NLP...",
    "watermark_strength": 0.8,
    "model_name": "GPT-4o"
  }'

curl -X POST https://hackeurope-lyra.onrender.com/api/verify \
  -H "Content-Type: application/json" \
  -d '{
    "data_type": "text",
    "data": "<paste the watermarked_data from step 1>"
  }'

⚠ Security Critical
The WATERMARK_SECRET_KEY is the root of trust. Anyone with this key can forge or verify watermarks. In production:
export WATERMARK_SECRET_KEY="your-256-bit-hex-key-here"
# Generate a strong key:
python3 -c "import secrets; print(secrets.token_hex(32))"

POST /api/watermark
Embed a self-authenticating watermark into content. Returns the watermarked content with cryptographic metadata.
| Field | Type | Required | Description |
|---|---|---|---|
| data_type | string | ✓ | "text" "image" "audio" |
| data | string | ✓ | UTF-8 text or base64-encoded binary (image/audio) |
| watermark_strength | float | — | 0.0 – 1.0 (default 0.8). Higher = more detectable but less invisible. |
| model_name | string \| null | — | AI model identifier (e.g. "GPT-4o", "Claude 3.5"). Embedded in the payload. |
{
  "watermarked_data": "The transformer architecture...",
  "watermark_metadata": {
    "watermark_id": "e06e3676784...",
    "embedding_method": "kgw_statistical_payload_steganography",
    "cryptographic_signature": "c0def6e455a...",
    "fingerprint_hash": "bf29ff33711...",
    "model_name": "GPT-4o"
  },
  "integrity_proof": {
    "algorithm": "HMAC-SHA256",
    "timestamp": "2026-02-21T14:30:58.555Z"
  }
}

POST /api/verify
Verify whether content contains a watermark. Completely stateless — all proof is extracted from the content itself.
| Field | Type | Required | Description |
|---|---|---|---|
| data_type | string | ✓ | "text" "image" "audio" |
| data | string | ✓ | The content to verify (same format as watermark request) |
| model_name | string \| null | — | Optional hint (the payload already contains the real model name) |
{
  "verification_result": {
    "watermark_detected": true,
    "confidence_score": 0.9412,
    "matched_watermark_id": "e06e3676784...",
    "model_name": "GPT-4o"
  },
  "forensic_details": {
    "signature_valid": true,
    "tamper_detected": false,
    "statistical_score": 4.827351
  },
  "analysis_timestamp": "2026-02-21T15:12:03.221Z"
}

- watermark_detected — true if either the statistical test or the payload HMAC passes
- confidence_score — 0.0–1.0, combines statistical and cryptographic signals
- signature_valid — true if the embedded HMAC tag verified with your key
- tamper_detected — true if a statistical signal exists but the HMAC is broken (content was modified)
- statistical_score — Z-score (text) or correlation coefficient ρ (image/audio)
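Taken together, these fields map to three practical verdicts: authentic, tampered, or unwatermarked. The function below is an illustrative client-side sketch against the response shape shown above, not part of any official SDK:

```python
def classify(result: dict) -> str:
    """Map a /api/verify response to a human-readable verdict (illustrative logic)."""
    v = result["verification_result"]
    f = result["forensic_details"]
    if f["tamper_detected"]:
        # statistical watermark is present but the HMAC failed: content was modified
        return "tampered"
    if v["watermark_detected"] and f["signature_valid"]:
        return f"authentic ({v['model_name']}, confidence {v['confidence_score']:.2f})"
    return "no watermark"
```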
Health check endpoint for load balancers and monitoring.
{ "status": "ok", "mode": "stateless", "registry": "none" }

Watermark every LLM response before it reaches the end user. Drop this middleware into your API gateway or backend.
import openai
import httpx

LYRA_URL = "https://hackeurope-lyra.onrender.com"
client = openai.OpenAI()

def generate_watermarked(prompt: str, model: str = "gpt-4o") -> dict:
    """Generate text with OpenAI, then watermark it before returning."""
    # 1. Generate with OpenAI
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    raw_text = completion.choices[0].message.content
    # 2. Watermark the output before it leaves your server
    resp = httpx.post(f"{LYRA_URL}/api/watermark", json={
        "data_type": "text",
        "data": raw_text,
        "model_name": model,
        "watermark_strength": 0.8,
    })
    result = resp.json()
    # 3. Return watermarked text to the end user
    return {
        "text": result["watermarked_data"],
        "watermark_id": result["watermark_metadata"]["watermark_id"],
    }

# Usage
response = generate_watermarked("Explain quantum computing")
print(response["text"])          # ← user sees this (invisibly watermarked)
print(response["watermark_id"])  # ← you store this for audit

import OpenAI from "openai";
const LYRA_URL = "https://hackeurope-lyra.onrender.com";
const openai = new OpenAI();

// Middleware: watermark every AI response before sending
async function watermarkMiddleware(
  rawText: string,
  modelName: string
): Promise<{ text: string; watermarkId: string }> {
  const res = await fetch(`${LYRA_URL}/api/watermark`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      data_type: "text",
      data: rawText,
      model_name: modelName,
      watermark_strength: 0.8,
    }),
  });
  const result = await res.json();
  return {
    text: result.watermarked_data,
    watermarkId: result.watermark_metadata.watermark_id,
  };
}

// Express route example
app.post("/api/chat", async (req, res) => {
  const { prompt } = req.body;
  // Generate
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  const rawText = completion.choices[0].message.content!;
  // Watermark before responding
  const { text, watermarkId } = await watermarkMiddleware(rawText, "gpt-4o");
  res.json({ text, watermarkId });
});

💡 Key Principle
Always watermark server-side, before the response leaves your backend. If you watermark on the client, the user already has the un-watermarked text. The watermarking step should sit between your AI model call and the HTTP response.
Watermark AI-generated images before serving them. Send the image as base64-encoded PNG.
import openai
import httpx
import base64

client = openai.OpenAI()
LYRA_URL = "https://hackeurope-lyra.onrender.com"

def generate_watermarked_image(prompt: str) -> bytes:
    """Generate image with DALL·E, watermark it, return PNG bytes."""
    # 1. Generate image (base64 response)
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        response_format="b64_json",
        size="1024x1024",
    )
    image_b64 = response.data[0].b64_json
    # 2. Watermark the image
    resp = httpx.post(f"{LYRA_URL}/api/watermark", json={
        "data_type": "image",
        "data": image_b64,
        "model_name": "dall-e-3",
        "watermark_strength": 0.8,
    })
    result = resp.json()
    # 3. Return watermarked PNG bytes
    watermarked_b64 = result["watermarked_data"]
    return base64.b64decode(watermarked_b64)

# Usage
png_bytes = generate_watermarked_image("A sunset over mountains")
with open("output.png", "wb") as f:
    f.write(png_bytes)

# Later: verify the image
with open("output.png", "rb") as f:
    verify_resp = httpx.post(f"{LYRA_URL}/api/verify", json={
        "data_type": "image",
        "data": base64.b64encode(f.read()).decode(),
    })
print(verify_resp.json())  # watermark_detected: true

⚠ Image Format
Images must be PNG (lossless). JPEG compression destroys the LSB payload layer. If your pipeline outputs JPEG, convert to PNG before watermarking.
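A cheap pre-flight check on the file's signature bytes catches this mistake before the API call. The helper below is an illustrative guard, not part of the Lyra API:

```python
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # standard 8-byte PNG signature
JPEG_MAGIC = b"\xff\xd8\xff"      # JPEG SOI marker

def assert_png(data: bytes) -> None:
    """Raise if the buffer is not a PNG (e.g. a JPEG slipped through the pipeline)."""
    if data.startswith(JPEG_MAGIC):
        raise ValueError(
            "JPEG detected: convert to PNG before watermarking, "
            "lossy compression destroys the LSB payload"
        )
    if not data.startswith(PNG_MAGIC):
        raise ValueError("unrecognized format: Lyra image watermarking expects PNG")
```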
Watermark text-to-speech or AI-generated audio. Send WAV files as base64.
import openai
import httpx
import base64

client = openai.OpenAI()
LYRA_URL = "https://hackeurope-lyra.onrender.com"

def generate_watermarked_audio(text: str) -> bytes:
    """Generate TTS audio, watermark it, return WAV bytes."""
    # 1. Generate speech
    response = client.audio.speech.create(
        model="tts-1-hd",
        voice="alloy",
        input=text,
        response_format="wav",
    )
    audio_bytes = response.content
    audio_b64 = base64.b64encode(audio_bytes).decode()
    # 2. Watermark the audio
    resp = httpx.post(f"{LYRA_URL}/api/watermark", json={
        "data_type": "audio",
        "data": audio_b64,
        "model_name": "tts-1-hd",
        "watermark_strength": 0.8,
    })
    result = resp.json()
    # 3. Return watermarked WAV bytes
    return base64.b64decode(result["watermarked_data"])

# Usage
wav_bytes = generate_watermarked_audio("Hello, this is AI-generated speech.")
with open("speech.wav", "wb") as f:
    f.write(wav_bytes)

⚠ Audio Format
Audio must be WAV (uncompressed PCM). MP3/AAC/OGG lossy compression will destroy the LSB payload. Convert to WAV before watermarking.
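As with images, a pre-flight format check avoids silent payload loss. The helper below uses Python's stdlib `wave` module to confirm a buffer is an uncompressed PCM WAV before sending it; it is an illustrative guard, not part of the Lyra API:

```python
import io
import wave

def assert_pcm_wav(data: bytes) -> None:
    """Raise unless the buffer is an uncompressed PCM WAV file."""
    try:
        with wave.open(io.BytesIO(data)) as w:
            # PCM WAV reports compression type "NONE"
            if w.getcomptype() != "NONE":
                raise ValueError("compressed WAV: re-encode as PCM first")
    except wave.Error as e:
        raise ValueError(f"not a WAV file (MP3/AAC/OGG?): {e}") from e
```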
The API is a simple REST interface — no SDK required. Here are ready-to-use examples for common languages.
import requests

LYRA = "https://hackeurope-lyra.onrender.com"

# Embed
resp = requests.post(f"{LYRA}/api/watermark", json={
    "data_type": "text",
    "data": "AI-generated content here...",
    "model_name": "gpt-4o",
})
watermarked = resp.json()["watermarked_data"]

# Verify
resp = requests.post(f"{LYRA}/api/verify", json={
    "data_type": "text",
    "data": watermarked,
})
print(resp.json()["verification_result"]["watermark_detected"])  # True

const LYRA = "https://hackeurope-lyra.onrender.com";
// Embed
const embedRes = await fetch(`${LYRA}/api/watermark`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    data_type: "text",
    data: "AI-generated content here...",
    model_name: "gpt-4o",
  }),
});
const { watermarked_data } = await embedRes.json();

// Verify
const verifyRes = await fetch(`${LYRA}/api/verify`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    data_type: "text",
    data: watermarked_data,
  }),
});
const result = await verifyRes.json();
console.log(result.verification_result.watermark_detected); // true

package main
import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const lyraURL = "https://hackeurope-lyra.onrender.com"

func watermark(text, model string) (string, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"data_type":          "text",
		"data":               text,
		"model_name":         model,
		"watermark_strength": 0.8,
	})
	resp, err := http.Post(lyraURL+"/api/watermark", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	return result["watermarked_data"].(string), nil
}

func main() {
	wm, _ := watermark("Hello from Go!", "gpt-4o")
	fmt.Println(wm)
}

# 1. Embed watermark
WATERMARKED=$(curl -s -X POST https://hackeurope-lyra.onrender.com/api/watermark \
  -H "Content-Type: application/json" \
  -d '{"data_type":"text","data":"Hello world","model_name":"GPT-4o"}' \
  | jq -r '.watermarked_data')

# 2. Verify watermark
curl -s -X POST https://hackeurope-lyra.onrender.com/api/verify \
  -H "Content-Type: application/json" \
  -d "{\"data_type\":\"text\",\"data\":\"$WATERMARKED\"}" \
  | jq .

Architecture:

    AI Model (GPT-4o, DALL·E, TTS)
              │
              ▼
    Lyra API /api/watermark   (embeds: payload, HMAC tag, model ID)
              │
              ▼
    End User (sees normal content) ──▸ content republished anywhere
              │
              ▼
    Lyra API /api/verify   (extracts: model name, timestamp, tamper flag)

    No database needed. Data carries proof.
Deploy the Lyra API behind your AI pipeline. Three endpoints, zero storage, full content provenance for every piece of AI-generated content your platform produces.