SDKs & Tools

Vercel AI SDK

The Spendplane Adapter for the Vercel AI SDK lets your Edge and Serverless Functions reach internal VPC services with minimal cold-start overhead.


Integration Pattern

Wrapping your AI provider with the Spendplane Shield ensures that all tool calls (function calling) are routed through a secure, PII-redacting tunnel.

Next.js / Vercel AI SDK Adapter
```typescript
import { openai } from "@ai-sdk/openai";
import { spendplane } from "@spendplane/ai-sdk";

// Wrap the base provider so every call flows through Spendplane
export const apiAdapter = spendplane(openai("gpt-4-turbo"));
```
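The wrapping pattern above can be sketched in isolation. The `Model` shape, `wrapWithGateway` helper, and `x-gateway-url` header below are illustrative assumptions, not the real `@spendplane/ai-sdk` API; they only show how a provider wrapper can proxy a model and tag each request before delegating.

```typescript
// Hypothetical sketch of a provider-wrapping adapter like spendplane():
// it proxies a model object and injects gateway routing metadata on each call.
type ModelCall = { prompt: string; headers: Record<string, string> };

interface Model {
  doGenerate(call: ModelCall): string;
}

function wrapWithGateway(model: Model, gatewayUrl: string): Model {
  return {
    doGenerate(call: ModelCall): string {
      // Tag the request so it is routed through the gateway, then delegate
      const routed: ModelCall = {
        ...call,
        headers: { ...call.headers, "x-gateway-url": gatewayUrl },
      };
      return model.doGenerate(routed);
    },
  };
}

// Toy model that echoes which gateway handled the request
const baseModel: Model = {
  doGenerate: (call) =>
    `handled via ${call.headers["x-gateway-url"] ?? "direct"}`,
};

const shielded = wrapWithGateway(baseModel, "https://gateway.internal");
console.log(shielded.doGenerate({ prompt: "hi", headers: {} }));
// → handled via https://gateway.internal
```

The base model is untouched; only the wrapped handle routes through the gateway, which is why the real adapter can be dropped in without changing existing call sites.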

Edge-Optimized

The Spendplane adapter is fully compatible with Vercel Edge Functions. We use gRPC-over-HTTP/2 to minimize connection setup time and maintain high-throughput tunnels.
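The connection-reuse idea behind that claim can be sketched as follows: dial a tunnel once at module scope and reuse it across invocations, rather than paying setup cost per request. The `Tunnel` class and `getTunnel` helper are illustrative, not part of the adapter's API.

```typescript
// Sketch: one long-lived connection reused across invocations, the pattern
// that keeps per-request setup cost low on Edge runtimes.
class Tunnel {
  static dials = 0;
  constructor() {
    Tunnel.dials++; // count how many times we actually dialed
  }
  send(msg: string): string {
    return `sent:${msg}`;
  }
}

// Module scope: created once per isolate, shared by every invocation
let tunnel: Tunnel | null = null;
function getTunnel(): Tunnel {
  if (!tunnel) tunnel = new Tunnel();
  return tunnel;
}

// Simulate three invocations hitting the same isolate
const replies = ["a", "b", "c"].map((m) => getTunnel().send(m));
console.log(replies, Tunnel.dials); // one dial serves all three sends
```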

Function Calling Shield

When your LLM agent decides to "Call a Function" (e.g. `get_user_financials`), our adapter automatically routes that call through the Spendplane gateway for real-time traffic inspection.
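A minimal sketch of that inspection step, under stated assumptions: the `Inspector` hook, `redactPII` logic, and `shieldedInvoke` helper below are hypothetical stand-ins for the gateway's real inspection pipeline, and `get_user_financials` is the example tool named above.

```typescript
// Sketch of a function-calling shield: before a tool executes, the call
// passes through an inspection hook that can redact sensitive arguments.
type ToolCall = { name: string; args: Record<string, unknown> };
type Inspector = (call: ToolCall) => ToolCall;

// Hypothetical PII-redacting inspector: strips a raw SSN argument
const redactPII: Inspector = (call) => {
  const { ssn, ...safe } = call.args;
  return { ...call, args: safe };
};

function shieldedInvoke(
  call: ToolCall,
  inspect: Inspector,
  tools: Record<string, (args: Record<string, unknown>) => string>
): string {
  const inspected = inspect(call); // gateway-side inspection happens first
  return tools[inspected.name](inspected.args);
}

const tools = {
  get_user_financials: (args: Record<string, unknown>) =>
    `fields:${Object.keys(args).join(",")}`,
};

const out = shieldedInvoke(
  { name: "get_user_financials", args: { userId: "u1", ssn: "000-00-0000" } },
  redactPII,
  tools
);
console.log(out); // → fields:userId
```

The tool itself never sees the redacted field, which is the point of putting the inspector between the model's tool call and its execution.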

Shadow Context Propagation

The Spendplane adapter automatically propagates Internal Tracing Context through the Vercel AI SDK headers. This allows you to track an LLM generation from the UI all the way to your internal PostgreSQL queries in your own VPC dashboard.
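The propagation idea can be sketched as carrying one trace id in headers through each hop. The `x-spendplane-trace-id` header name and the layer functions below are assumptions for illustration, not the adapter's documented header.

```typescript
// Sketch of trace-context propagation: a trace id set at the edge is
// forwarded unchanged through each layer, so an LLM generation can be
// correlated with the downstream query that served it.
const TRACE_HEADER = "x-spendplane-trace-id"; // assumed header name

function withTrace(headers: Record<string, string>, traceId: string) {
  return { ...headers, [TRACE_HEADER]: traceId };
}

// Each layer forwards the incoming headers rather than minting new ones
function llmLayer(headers: Record<string, string>): string {
  return dbLayer(headers);
}
function dbLayer(headers: Record<string, string>): string {
  // The query layer logs the same trace id the UI generated
  return `query traced as ${headers[TRACE_HEADER]}`;
}

console.log(llmLayer(withTrace({}, "trace-123")));
// → query traced as trace-123
```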

Trace Hub: Manual v1.0.4 / Vercel Verification: DEPLOYED