Usage

Once you have a NilAI API key and node base URL, you can start using SecretLLM with any OpenAI-compatible library.

Getting Started with SecretLLM

Getting started with SecretLLM is straightforward:

  1. Query the /v1/models endpoint to check available models
  2. Select a model and use it with the /v1/chat/completions endpoint

Since SecretLLM is OpenAI-compatible, you can use any OpenAI client library. Here's an example that queries the Llama-3.1-8B model:

from openai import OpenAI

# Initialize the OpenAI client
# (replace <node> with the specific node identifier)
client = OpenAI(
    base_url="https://nilai-<node>.nillion.network/v1/",
    api_key="YOUR_API_KEY",
)

# Send a chat completion request
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is your name?"},
    ],
    stream=False,
)

# Print the generated reply
print(response.choices[0].message.content)

SecretLLM Endpoints

SecretLLM provides the following endpoints:

| Name | Endpoint | Description |
| --- | --- | --- |
| Chat | /v1/chat/completions | Generate AI responses |
| Models | /v1/models | List available models |
| Attestation | /v1/attestation/report | Get cryptographic proof of environment |
| Usage | /v1/usage | Track your token usage |
| Health | /v1/health | Check service status |
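
The non-chat endpoints can be called directly over HTTP. The sketch below uses `requests`; the Bearer-token header and JSON response bodies are assumptions based on the API-key usage above, so check the responses from your node.

```python
import requests

BASE_URL = "https://nilai-<node>.nillion.network"  # replace <node> as above
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed bearer-token auth

# Check service status
health = requests.get(f"{BASE_URL}/v1/health", headers=HEADERS, timeout=10)
print(health.status_code)

# Track your token usage for this API key
usage = requests.get(f"{BASE_URL}/v1/usage", headers=HEADERS, timeout=10)
print(usage.json())

# Fetch the attestation report (cryptographic proof of the environment)
report = requests.get(f"{BASE_URL}/v1/attestation/report", headers=HEADERS, timeout=10)
print(report.json())
```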