Making Your First Request

Learn how to make your first API request to ModelProxy.ai

This guide will walk you through making your first API request to ModelProxy.ai. We'll use the Chat Completions API, which is the primary way to interact with AI models through our platform.

Prerequisites

Before you begin, make sure you have:

  1. Created an account on ModelProxy.ai
  2. Generated an API key
  3. Added credits to your account (new accounts come with $10 in free credits)

Simple Request Using cURL

The easiest way to test the API is using cURL. Here's a simple example:

curl -X POST https://modelproxy.theitdept.au/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello! Can you give me a brief introduction to ModelProxy.ai?"}
    ]
  }'

Replace YOUR_API_KEY with the API key you generated earlier.

Understanding the Response

If your request is successful, you'll receive a response similar to this:

{
  "id": "chatcmpl-123abc",
  "object": "chat.completion",
  "created": 1683123456,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! ModelProxy.ai is a unified API service that allows you to access various AI language models through a single, consistent interface. Instead of integrating with multiple AI providers separately, you can use ModelProxy.ai to route your requests to different models like GPT-4, Claude, or Gemini with the same API structure. It offers features like token-based billing, usage metering, and automatic provider fallback for enhanced reliability. This makes it easier to build and maintain AI-powered applications while giving you flexibility in model selection."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

The response contains:

  • A unique identifier for the completion
  • The model used
  • An array of "choices" (typically just one for chat completions)
  • Each choice contains a message with the AI's response
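
In code, the reply text lives at `choices[0].message.content`. A minimal sketch in Python, assuming the JSON shape shown above (the content string here is truncated for brevity):

```python
# A parsed Chat Completions response, abbreviated from the example above.
response = {
    "id": "chatcmpl-123abc",
    "object": "chat.completion",
    "model": "openai/gpt-4o",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hello! ModelProxy.ai is ..."},
            "finish_reason": "stop",
            "index": 0,
        }
    ],
}

# The first (and usually only) choice holds the assistant's message.
reply = response["choices"][0]["message"]["content"]
print(reply)
```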

Including Usage Information

To see token usage and cost information, include a usage object with "include": true in your request:

curl -X POST https://modelproxy.theitdept.au/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello! Can you give me a brief introduction to ModelProxy.ai?"}
    ],
    "usage": {
      "include": true
    }
  }'

This will add a usage object to the response:

"usage": {
  "prompt_tokens": 14,
  "completion_tokens": 102,
  "total_tokens": 116,
  "cost": 0.00116
}
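
As a quick sanity check on the example numbers above: total_tokens is the sum of prompt and completion tokens, and dividing cost by total_tokens gives the blended price this request paid per token. A small sketch using only the figures shown:

```python
# Usage object from the example response above.
usage = {"prompt_tokens": 14, "completion_tokens": 102, "total_tokens": 116, "cost": 0.00116}

# total_tokens is prompt_tokens + completion_tokens.
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]

# Effective blended price per 1,000 tokens for this particular request.
per_1k = usage["cost"] / usage["total_tokens"] * 1000
print(f"${per_1k:.5f} per 1K tokens")  # → $0.01000 per 1K tokens
```

Note that actual pricing varies by model; this only back-calculates the rate implied by one response.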

Using Different Models

ModelProxy.ai supports various AI models. To use a different model, simply change the model parameter:

curl -X POST https://modelproxy.theitdept.au/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "claude-3-opus-20240229",
    "messages": [
      {"role": "user", "content": "Hello! Can you give me a brief introduction to ModelProxy.ai?"}
    ]
  }'

For a list of available models, see our Models API Reference.
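
From code, switching models is just a matter of swapping the model string in the payload; everything else stays the same. A sketch in Python (the anthropic/claude-3-opus identifier is illustrative; confirm exact model names in the Models API Reference):

```python
def build_request(model, prompt):
    """Build a Chat Completions payload targeting the given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

prompt = "Hello! Can you give me a brief introduction to ModelProxy.ai?"
payload = build_request("openai/gpt-4o", prompt)

# Only the model field changes when targeting a different provider.
other = build_request("anthropic/claude-3-opus", prompt)
```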

Using with Programming Languages

Python Example

import requests

api_key = "YOUR_API_KEY"
url = "https://modelproxy.theitdept.au/api/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

data = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "Hello! Can you give me a brief introduction to ModelProxy.ai?"}
    ]
}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # surface HTTP errors (401, 402, 429, ...) as exceptions
print(response.json())

JavaScript Example

async function callModelProxy() {
  const response = await fetch('https://modelproxy.theitdept.au/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      model: 'openai/gpt-4o',
      messages: [
        {role: 'user', content: 'Hello! Can you give me a brief introduction to ModelProxy.ai?'}
      ]
    })
  });
  
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  const result = await response.json();
  console.log(result);
}

callModelProxy();

Common Errors and Troubleshooting

401 Unauthorized

This typically means your API key is invalid or missing. Check that:

  • You've included the Authorization header
  • The format is Bearer YOUR_API_KEY
  • The API key is correct and not revoked

402 Payment Required

This means your account doesn't have enough credits. Add more credits in your dashboard.

404 Not Found

The model you requested doesn't exist or isn't available. Check the model name for typos and refer to the list of available models.

429 Too Many Requests

You've exceeded your rate limits. Wait a moment and try again, or check your API key's rate limit settings.
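
A common way to handle 429s programmatically is to retry with exponential backoff. A minimal sketch, where `send` stands in for your actual request call (the stub below only simulates rate limiting so the pattern can be demonstrated):

```python
import time

def with_backoff(send, max_retries=5, base_delay=1.0):
    """Call `send` (a zero-arg callable returning (status, body)),
    retrying on HTTP 429 and doubling the delay after each attempt."""
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body  # give up after max_retries attempts

# Demo with a stub that rate-limits the first two calls.
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, "ok")

status, body = with_backoff(fake_send, base_delay=0)  # base_delay=0 just to skip waiting here
print(status, body)  # → 200 ok
```

In production, also respect a Retry-After header if the server sends one, rather than relying solely on the computed delay.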

Next Steps

Now that you've made your first request, you can: