Java SDK · Zero Dependencies

Build AI apps
with zero friction

A lightweight Java SDK supporting 13 providers—OpenAI, Anthropic, Gemini, Mistral, Groq, DeepSeek, and more—with a unified API for requests, conversations, streaming, pipelines, tool use, and MCP. Zero dependencies.

QuickStart.java
AIClient client = AIClient.builder()
        .model(AIModel.CLAUDE_SONNET_4_6)
        .build();

AIResponse response = client.newRequest()
        .addInput("What is the capital of France?")
        .build()
        .execute();

System.out.println(response.text());
0 · External dependencies
4 · Interaction patterns
0 · Frameworks required
Java 11+ · Minimum version
13+ · Providers supported

Everything you need. Nothing you don't.

Four interaction patterns, provider abstraction, and zero bloat.

Single Requests

One-shot AI calls with text, images, and PDFs. Structured output deserializes JSON responses directly into Java objects.

🔄

Multi-Turn Conversations

Stateful dialogue with full message history. Pluggable persistence lets any service instance resume any conversation.
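A minimal sketch of what a stateful conversation could look like, modeled on the request builder shown above. `newConversation()`, `send(...)`, `id()`, and `resumeConversation(...)` are assumed names for illustration, not confirmed API.

```java
AIClient client = AIClient.builder()
        .model(AIModel.CLAUDE_SONNET_4_6)
        .build();

// Assumed: newConversation() returns a stateful AIConversation.
AIConversation chat = client.newConversation();

AIResponse first = chat.send("My name is Ada.");
AIResponse second = chat.send("What is my name?"); // history carried automatically

// Assumed: with pluggable persistence, another service instance
// can resume the same conversation by its ID.
String id = chat.id();
AIConversation resumed = client.resumeConversation(id);
```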

⚙️

Pipeline Orchestration

Chain sequential, parallel, conditional, loop, and retry steps—each targeting a different AI provider. Compose pipelines inside pipelines.
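A hedged sketch of how a composed pipeline might read; the step methods (`step`, `parallel`, `retry`) and template syntax are illustrative assumptions, not documented API.

```java
// Hypothetical pipeline builder: each step can target a different provider.
AIPipeline pipeline = AIPipeline.builder()
        .step("summarize", s -> s.model(AIModel.GPT_4O)
                .prompt("Summarize: {{input}}"))
        .step("critique", s -> s.model(AIModel.CLAUDE_SONNET_4_6)
                .prompt("Critique this summary: {{summarize}}"))
        .retry("summarize", 3)   // assumed: per-step retry wrapper
        .build();

AIResponse result = pipeline.execute("Long article text...");
```

Because pipelines compose, a whole pipeline could in principle be used where a single step is expected.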

⏱️

Streaming

Real-time token-by-token output from any request or pipeline. Register listeners and get the full response when complete.
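A sketch of what listener-based streaming might look like; `onToken`, `onComplete`, and `executeStreaming()` are assumed callback and method names.

```java
// Hypothetical streaming request: print tokens as they arrive,
// then receive the assembled response when the stream completes.
client.newRequest()
        .addInput("Write a haiku about Java.")
        .onToken(token -> System.out.print(token))
        .onComplete(resp -> System.out.println("\nFull text: " + resp.text()))
        .build()
        .executeStreaming();
```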

🔧

Tool Use & MCP

Define tools, detect tool calls, run manual or automatic tool loops. Connect to MCP servers via stdio or HTTP transport.
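A sketch of tool definition and an automatic tool loop. `AITool.builder()`, `parameter`, `handler`, `addTool`, and the loop-mode constant are assumed names; `lookupWeather` is your own application code.

```java
// Hypothetical tool definition: name, description, parameters, and a handler.
AITool weather = AITool.builder()
        .name("get_weather")
        .description("Current weather for a city")
        .parameter("city", String.class)
        .handler(args -> lookupWeather((String) args.get("city")))
        .build();

// Assumed: in automatic mode the SDK runs the call/result loop
// until the model produces a final answer.
AIResponse response = client.newRequest()
        .addInput("What's the weather in Paris?")
        .addTool(weather)
        .toolLoop(AIToolLoop.AUTOMATIC)
        .build()
        .execute();
```

In manual mode you would instead inspect the response for tool calls, run them yourself, and feed the results back.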

🔒

Zero Dependencies

Uses only java.net.http and built-in JSON. Drop it into any Java project—microservice, Android, CLI, legacy monolith.

🏷️

Declarative AI Services

Define annotated interfaces and let the SDK generate implementations via JDK proxies. Template variables, streaming, and async—zero boilerplate.
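A hedged sketch of the declarative style: you define only an interface, and a JDK proxy supplies the implementation. The annotation names (`@AIPrompt`, `@Var`) and the `AIServices.create` factory are illustrative assumptions.

```java
// Hypothetical annotated interface; the SDK generates the implementation.
public interface TravelAssistant {

    @AIPrompt("Suggest a 3-day itinerary for {{city}}.")
    String itinerary(@Var("city") String city);

    @AIPrompt("Translate to French: {{text}}")
    CompletableFuture<String> translateAsync(@Var("text") String text);
}

// Assumed factory: binds the interface to a client via a JDK dynamic proxy.
TravelAssistant assistant = AIServices.create(TravelAssistant.class, client);
System.out.println(assistant.itinerary("Kyoto"));
```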

🔀

Provider Agnostic

13 built-in providers—OpenAI, Anthropic, Gemini, Mistral, Groq, DeepSeek, Together, Fireworks, xAI, Azure OpenAI, Vertex AI, AWS Bedrock, and Azure AI Foundry.

🛠️

Resilience Built In

Configurable retry policies with exponential backoff, jitter, status code filtering, and Retry-After header support.
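A sketch of how such a policy might be configured; `RetryPolicy` and its setters are assumed names that mirror the features listed above.

```java
// Hypothetical retry configuration covering the listed features.
RetryPolicy retry = RetryPolicy.builder()
        .maxAttempts(5)
        .exponentialBackoff(Duration.ofMillis(200), Duration.ofSeconds(10))
        .jitter(true)                          // randomize delays to avoid thundering herd
        .retryOnStatus(429, 500, 502, 503)     // status-code filtering
        .respectRetryAfterHeader(true)         // honor the server's Retry-After
        .build();

AIClient client = AIClient.builder()
        .model(AIModel.GPT_4O)
        .retryPolicy(retry)
        .build();
```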

📊

Configuration Hierarchy

Four-level config cascade: base → client → pipeline → request. Override at any level with full control.
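One way the cascade could look in code, with the most specific level winning; `AIConfig`, `temperature`, and `timeout` are assumed setter names used only to illustrate the override order.

```java
// Hypothetical illustration of base → client → pipeline → request.
AIConfig base = AIConfig.defaults()
        .temperature(0.7);                     // base-level default

AIClient client = AIClient.builder()
        .config(base)
        .timeout(Duration.ofSeconds(30))       // client-level override
        .build();

AIResponse resp = client.newRequest()
        .temperature(0.2)                      // request-level override wins
        .addInput("Be precise.")
        .build()
        .execute();
```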

Code that speaks for itself

Clean, expressive APIs for every interaction pattern.

Structured Output

Deserialize AI responses directly into typed Java objects.

Example.java
AIClient client = AIClient.builder()
        .model(AIModel.GPT_4O)
        .build();

record CityInfo(String name, String country, int population) {}

CityInfo city = client.newRequest()
        .addInput("Info about Tokyo")
        .outputType(CityInfo.class)
        .build()
        .execute()
        .as(CityInfo.class);

System.out.println(city.name());

Clean architecture, no magic

A clear separation between public API, internal engine, and provider layer.

Your Application
↓
Protify AI SDK
  Public API: AIClient · AIRequest · AIConversation · AIPipeline · AITool · MCPClient
  Internal engine: Config · Pipeline Engine · HTTP · JSON
↓
Providers: OpenAI · Anthropic · Gemini · Mistral · Groq · DeepSeek · Together · Fireworks · xAI · Azure OpenAI · Vertex AI · AWS Bedrock · Azure AI Foundry

How it compares

Side-by-side with the most popular Java AI libraries.

| Feature                     | Protify AI | LangChain4j | Spring AI | Vendor SDKs |
|-----------------------------|------------|-------------|-----------|-------------|
| Zero dependencies           | ✓          | ×           | ×         | ×           |
| Multi-provider support      | ✓          | ✓           | ✓         | ×           |
| Streaming                   | ✓          | ✓           | ✓         | ~           |
| Tool use / function calling | ✓          | ✓           | ✓         | ~           |
| MCP support                 | ✓          | ×           | ×         | ×           |
| Pipeline orchestration      | ✓          | ✓           | ×         | ×           |
| Declarative AI services     | ✓          | ×           | ✓         | ×           |
| Structured output           | ✓          | ✓           | ✓         | ~           |
| Multi-turn conversations    | ✓          | ✓           | ✓         | ~           |
| Retry / resilience          | ✓          | ~           | ✓         | ×           |
| Config hierarchy            | ✓          | ×           | ~         | ×           |
| Image / PDF input           | ✓          | ✓           | ✓         | ~           |
| Java 11+ compatible         | ✓          | ×           | ×         | ~           |
| No Spring required          | ✓          | ✓           | ×         | ✓           |

13 providers. One API.

First-class support for every major provider and cloud platform, plus a pluggable interface for your own.

OpenAI · GPT-5 family
Anthropic · Claude 4 family
Google Gemini · Gemini Pro & Flash
Mistral · Mistral Large & Small
Groq · Ultra-fast inference
DeepSeek · DeepSeek V3 & R1
Together · Open-source models
Fireworks · Optimized inference
xAI · Grok models
Azure OpenAI · Enterprise deployments
Google Vertex AI · GCP managed
AWS Bedrock · AWS managed
Azure AI Foundry · Azure managed
Custom · Any LLM API

Start building in minutes

Add the dependency, initialize, and make your first AI call.

implementation 'ai.protify:protifyai:0.1.4'
App.java
import ai.protify.ai.*;

public class App {
    public static void main(String[] args) {
        AIClient client = AIClient.builder()
                .model(AIModel.CLAUDE_SONNET_4_6)
                .build();

        AIResponse response = client.newRequest()
                .addInput("What is the capital of France?")
                .build()
                .execute();

        System.out.println(response.text());
    }
}

Get in Touch

Questions, feedback, or enterprise inquiries? Drop us a line.