
HTTP API Reference

TapPass exposes an OpenAI-compatible REST API. You can use it from any language or tool that speaks HTTP.

Use the URL provided by your platform team:

https://tappass.example.com

All requests require an API key in the Authorization header:

Authorization: Bearer tp_abc123...

Get your API key from your platform team.

POST /v1/chat/completions

OpenAI-compatible chat completions endpoint. Works with any OpenAI client library in any language.

curl -X POST https://tappass.example.com/v1/chat/completions \
  -H "Authorization: Bearer tp_abc123..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What are the GDPR requirements?"}
    ]
  }'

Response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The GDPR has several key requirements..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 145,
    "total_tokens": 157
  },
  "tappass": {
    "classification": "PUBLIC",
    "blocked": false,
    "session_id": "ses_abc123",
    "turn_index": 1
  }
}

The tappass field is appended to the standard OpenAI response and carries governance metadata: the content classification, whether the request was blocked, and the session and turn identifiers.
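The governance metadata can be read like any other JSON field. A minimal Python sketch against the response shape shown above (the `governance_info` helper is illustrative, not part of any SDK):

```python
# Pull TapPass governance metadata out of a chat completion response.
# Field names match the example response in this document.

def governance_info(response: dict) -> dict:
    """Return classification, blocked flag, and session id (with safe defaults)."""
    meta = response.get("tappass", {})
    return {
        "classification": meta.get("classification"),
        "blocked": meta.get("blocked", False),
        "session_id": meta.get("session_id"),
    }

# Trimmed-down version of the response shown above:
sample = {
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
    "tappass": {
        "classification": "PUBLIC",
        "blocked": False,
        "session_id": "ses_abc123",
        "turn_index": 1,
    },
}
print(governance_info(sample)["classification"])  # PUBLIC
```

Using `.get()` with defaults means the same code also works against a plain OpenAI backend that has no tappass field at all.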

To stream responses as they are generated, add "stream": true to the request body:

curl -X POST https://tappass.example.com/v1/chat/completions \
  -H "Authorization: Bearer tp_abc123..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": true
  }'

Streaming uses Server-Sent Events (SSE), identical to the OpenAI format.
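Each SSE event carries a JSON chunk on a `data:` line, with `data: [DONE]` as the end-of-stream sentinel. A Python sketch of extracting content deltas from raw SSE lines (`delta_from_sse_line` is a hypothetical helper; production code would normally let an OpenAI client library handle this):

```python
# Parse one OpenAI-format SSE line into a content delta (or None).
import json
from typing import Optional

def delta_from_sse_line(line: str) -> Optional[str]:
    """Return the text delta carried by one SSE line, if any."""
    if not line.startswith("data: "):
        return None  # blank keep-alives, comments, event names
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

line = 'data: {"choices": [{"delta": {"content": "Hel"}}]}'
print(delta_from_sse_line(line))  # Hel
```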

Governance flags are passed per request via the X-TapPass-Flags header:

curl -X POST https://tappass.example.com/v1/chat/completions \
  -H "Authorization: Bearer tp_abc123..." \
  -H "X-TapPass-Flags: mode=observe, pii=mask, budget=dev" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

See the Governance Flags guide for all available flags.
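The header value is a comma-separated list of key=value pairs, as in the curl example above. A small Python sketch of assembling it from a dict (`format_flags` is illustrative; the flag names come from the example above):

```python
# Build the X-TapPass-Flags header value from a dict of flags.

def format_flags(flags: dict) -> str:
    """Serialize flags as comma-separated key=value pairs."""
    return ", ".join(f"{k}={v}" for k, v in flags.items())

headers = {
    "Authorization": "Bearer tp_abc123...",
    "X-TapPass-Flags": format_flags({"mode": "observe", "pii": "mask", "budget": "dev"}),
    "Content-Type": "application/json",
}
print(headers["X-TapPass-Flags"])  # mode=observe, pii=mask, budget=dev
```

With the official OpenAI SDKs, a custom header like this is typically attached via the client's extra-headers option.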

When a request violates policy, the API returns HTTP 403 with a structured error body:

{
  "error": {
    "type": "policy_block",
    "reason": "PII detected in request",
    "classification": "RESTRICTED"
  }
}
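A client can distinguish a policy block from other failures by checking the status code and the error type. A Python sketch against the error shape above (`PolicyBlocked` and `raise_for_policy` are hypothetical names, not part of any SDK):

```python
# Turn a TapPass policy-block response (HTTP 403) into a typed exception.
import json

class PolicyBlocked(Exception):
    """Raised when TapPass rejects a request on policy grounds."""
    def __init__(self, reason: str, classification: str):
        super().__init__(reason)
        self.reason = reason
        self.classification = classification

def raise_for_policy(status: int, body: str) -> None:
    """Raise PolicyBlocked for a 403 policy_block error; otherwise do nothing."""
    if status != 403:
        return
    err = json.loads(body).get("error", {})
    if err.get("type") == "policy_block":
        raise PolicyBlocked(err.get("reason", ""), err.get("classification", ""))

# With the example body above:
body = ('{"error": {"type": "policy_block", '
        '"reason": "PII detected in request", "classification": "RESTRICTED"}}')
try:
    raise_for_policy(403, body)
except PolicyBlocked as e:
    print(e.classification)  # RESTRICTED
```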
Health endpoints:

Endpoint         Method   Purpose
/health/ready    GET      Readiness check
/health/live     GET      Liveness check
curl https://tappass.example.com/health/ready
# {"status": "ok"}
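A readiness probe can be scripted with the standard library alone. A sketch, assuming the {"status": "ok"} body shown above (`parse_health` and `is_ready` are illustrative helpers):

```python
# Probe /health/ready and interpret its JSON body.
import json
import urllib.request

def parse_health(body: bytes) -> bool:
    """True when the health body reports {"status": "ok"}."""
    try:
        return json.loads(body).get("status") == "ok"
    except ValueError:
        return False

def is_ready(base_url: str, timeout: float = 2.0) -> bool:
    """Hit /health/ready; any network error counts as not ready."""
    try:
        with urllib.request.urlopen(f"{base_url}/health/ready", timeout=timeout) as resp:
            return parse_health(resp.read())
    except OSError:
        return False

# is_ready("https://tappass.example.com")  -> True once the gateway is up
```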
JavaScript (fetch):

const response = await fetch("https://tappass.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer tp_abc123...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
Go:

// Use any OpenAI Go client, just change the base URL
config := openai.DefaultConfig("tp_abc123...")
config.BaseURL = "https://tappass.example.com/v1"
client := openai.NewClientWithConfig(config)
Ruby:

require "net/http"
require "json"

uri = URI("https://tappass.example.com/v1/chat/completions")
req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer tp_abc123..."
req["Content-Type"] = "application/json"
req.body = { model: "gpt-4o", messages: [{ role: "user", content: "Hello" }] }.to_json
res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
puts JSON.parse(res.body)["choices"][0]["message"]["content"]
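For completeness, the same request in Python using only the standard library; any OpenAI Python SDK pointed at the TapPass base URL works equally well (`build_request` and `chat` are illustrative helpers):

```python
# Call the chat completions endpoint with urllib, mirroring the curl examples.
import json
import urllib.request

def build_request(base_url: str, api_key: str, messages: list,
                  model: str = "gpt-4o") -> urllib.request.Request:
    """Assemble the POST request the curl examples in this document send."""
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def chat(base_url: str, api_key: str, messages: list) -> dict:
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(base_url, api_key, messages)) as resp:
        return json.loads(resp.read())

# resp = chat("https://tappass.example.com", "tp_abc123...",
#             [{"role": "user", "content": "Hello"}])
# print(resp["choices"][0]["message"]["content"])
```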

TapPass implements the OpenAI API spec. Any client that supports a custom base_url works:

export OPENAI_BASE_URL=https://tappass.example.com/v1
export OPENAI_API_KEY=tp_abc123...

This works with Cursor, GitHub Copilot, Continue, LangChain, CrewAI, LlamaIndex, and any other OpenAI-compatible tool.