Andy API Documentation

Complete guide to using the Andy API - a distributed AI compute pool for running language models with automatic load balancing and failover.

Overview

The Andy API is a distributed AI compute pool that allows you to access language models across multiple hosts with automatic load balancing, failover, and scaling. It provides OpenAI-compatible endpoints for seamless integration with existing applications.

Key Features

Base URL

https://mindcraft.riqvip.dev

Quick Start

Get Started in 30 Seconds

Here's a complete example using the recommended sweaterdog/andy-4:latest model:

curl -X POST "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sweaterdog/andy-4:latest",
    "messages": [
      {
        "role": "user",
        "content": "Hello! Can you help me understand how AI models work?"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'

Python Example

import requests

# Andy API endpoint
url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"

# Request payload
payload = {
    "model": "sweaterdog/andy-4:latest",
    "messages": [
        {
            "role": "user", 
            "content": "Explain quantum computing in simple terms"
        }
    ],
    "temperature": 0.7,
    "max_tokens": 200
}

# Make the request (a timeout guards against a hung connection)
response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
result = response.json()

# Print the response
print(result["choices"][0]["message"]["content"])

JavaScript/Node.js Example

// Node.js 18+ ships a global fetch; on older versions, install node-fetch first:
const fetch = require('node-fetch');

async function callAndyAPI() {
    const response = await fetch('https://mindcraft.riqvip.dev/api/andy/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            model: 'sweaterdog/andy-4:latest',
            messages: [
                {
                    role: 'user',
                    content: 'Write a short poem about artificial intelligence'
                }
            ],
            temperature: 0.8,
            max_tokens: 100
        })
    });
    
    const data = await response.json();
    console.log(data.choices[0].message.content);
}

callAndyAPI();

Authentication

The Andy API supports optional API key authentication for higher rate limits and priority access. Without an API key, basic rate limits apply.

Rate Limits

Authentication   Concurrent Requests   Daily Limit   Priority
No API Key       5                     1,000         Standard
With API Key     10                    Unlimited     High

Using API Keys

Include your API key in the Authorization header:

curl -X POST "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions" \
  -H "Authorization: Bearer your-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{ ... }'

API Endpoints

GET /api/andy/v1/models

List all available models across the compute pool.
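For example, the model list can be fetched and reduced to plain IDs in a few lines. This assumes the response follows the OpenAI-style {"data": [{"id": ...}]} shape:

```python
import requests

BASE = "https://mindcraft.riqvip.dev"

def extract_model_ids(payload):
    """Pull model IDs out of an OpenAI-style list response (assumed shape)."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(base=BASE):
    """Fetch /api/andy/v1/models and return the available model IDs."""
    resp = requests.get(f"{base}/api/andy/v1/models", timeout=15)
    resp.raise_for_status()
    return extract_model_ids(resp.json())

# Usage: print(list_models())
```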

POST /api/andy/v1/chat/completions

Create a chat completion using the distributed model pool.

POST /api/andy/v1/embeddings

Generate embeddings using available embedding models.
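A sketch of an embeddings request, mirroring the OpenAI embeddings schema. The default model name below is a placeholder of ours, not a documented pool model; query /api/andy/v1/models to see which embedding models are actually available:

```python
import requests

EMBED_URL = "https://mindcraft.riqvip.dev/api/andy/v1/embeddings"

def build_embedding_payload(texts, model="nomic-embed-text"):
    """Build an OpenAI-style embeddings payload (schema assumed; the default
    model name is a placeholder)."""
    return {"model": model, "input": texts}

def embed(texts, model="nomic-embed-text"):
    """Return one embedding vector (a list of floats) per input string."""
    resp = requests.post(
        EMBED_URL, json=build_embedding_payload(texts, model), timeout=30
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

# Usage: vectors = embed(["Hello, world"]); print(len(vectors[0]))
```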

GET /api/andy/pool_status

Get current status of the compute pool including active hosts and load.
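A quick way to poll this endpoint and print a one-line summary. The field names used here ("active_hosts", "total_load") are assumptions, since the response schema is not documented above; inspect the raw JSON for the real keys:

```python
import requests

POOL_STATUS_URL = "https://mindcraft.riqvip.dev/api/andy/pool_status"

def summarize_pool(status):
    """Render a one-line summary, falling back gracefully if the assumed
    keys ("active_hosts", "total_load") are absent."""
    hosts = status.get("active_hosts", "unknown")
    load = status.get("total_load", "unknown")
    return f"active hosts: {hosts}, load: {load}"

def pool_summary():
    resp = requests.get(POOL_STATUS_URL, timeout=15)
    resp.raise_for_status()
    return summarize_pool(resp.json())

# Usage: print(pool_summary())
```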

Chat Completions Parameters

Parameter     Type      Required   Description
model         string    Yes        Model name (e.g., "sweaterdog/andy-4:latest")
messages      array     Yes        Array of message objects with "role" and "content"
temperature   number    No         Sampling temperature (0-2, default: 1)
max_tokens    integer   No         Maximum tokens to generate
stream        boolean   No         Stream response chunks (default: false)
stop          array     No         Stop sequences

Advanced Examples

Streaming Response

import requests
import json

url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"
payload = {
    "model": "sweaterdog/andy-4:latest",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": True,
    "temperature": 0.8
}

response = requests.post(url, json=payload, stream=True)

for line in response.iter_lines():
    if line:
        line = line.decode('utf-8')
        if line.startswith('data: '):
            data = line[6:]
            if data != '[DONE]':
                chunk = json.loads(data)
                content = chunk['choices'][0]['delta'].get('content', '')
                print(content, end='', flush=True)

With Custom Parameters

const response = await fetch('https://mindcraft.riqvip.dev/api/andy/v1/chat/completions', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer your-api-key'
    },
    body: JSON.stringify({
        model: 'sweaterdog/andy-4:latest',
        messages: [
            {
                role: 'system',
                content: 'You are a helpful coding assistant.'
            },
            {
                role: 'user',
                content: 'Write a Python function to calculate factorial'
            }
        ],
        temperature: 0.3,
        max_tokens: 300,
        stop: ['```\n\n']
    })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Error Handling

import requests
import time

def call_andy_api_with_retry(payload, max_retries=3):
    url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"
    
    for attempt in range(max_retries):
        try:
            response = requests.post(url, json=payload, timeout=30)
            
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:
                # Rate limited, wait and retry
                wait_time = 2 ** attempt
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
                continue
            elif 400 <= response.status_code < 500:
                # Client error (bad request, invalid key, etc.); retrying won't help
                print(f"Error {response.status_code}: {response.text}")
                return None
            else:
                # Server error; fall through and retry
                print(f"Error {response.status_code}: {response.text}")
                
        except requests.exceptions.RequestException as e:
            print(f"Request failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
                
    return None

# Usage
result = call_andy_api_with_retry({
    "model": "sweaterdog/andy-4:latest",
    "messages": [{"role": "user", "content": "Hello!"}]
})

if result:
    print(result["choices"][0]["message"]["content"])

Host Setup Guide

Want to contribute compute power to the Andy API pool? Here's how to set up your own host and join the distributed network.

Prerequisites

Quick Host Setup

# 1. Install Ollama (if not already installed)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Download a model (recommended: Andy-4)
ollama pull sweaterdog/andy-4:latest

# 3. Download the host client
wget https://raw.githubusercontent.com/mindcraft-ce/mindcraft-ce/main/local_client/andy_host_client.py

# 4. Install dependencies (the official Ollama client is published on PyPI as "ollama")
pip install requests ollama

# 5. Join the pool
python andy_host_client.py --name "my-host" --andy-url https://mindcraft.riqvip.dev

Advanced Host Configuration

# Specify which models to share
python andy_host_client.py \
  --name "my-gpu-host" \
  --url http://localhost:11434 \
  --andy-url https://mindcraft.riqvip.dev \
  --allowed-models "sweaterdog/andy-4:latest" "llama3:8b" \
  --capabilities vision code math

Monitor Your Host

Once your host is running, you can monitor it through the Pool Dashboard and track performance metrics in the Metrics page.

Host Benefits

Support & Community

Need help or want to contribute? Join our community: