# Chat Completions

## Request

**Chat Completions (POST)**

Headers

| Name          | Value            |
| ------------- | ---------------- |
| Authorization | `Bearer <token>` |

Body

| Name                    | Type                                         | Description                                                               |
| ----------------------- | -------------------------------------------- | ------------------------------------------------------------------------- |
| `model`\*               | string                                       | ID of the model to use                                                    |
| `messages`\*            | list                                         | Message history for the conversation                                      |
| `stream`                | boolean                                      | Whether to stream the response back in chunks                             |
| `temperature`           | float                                        | Sampling temperature; higher values produce more creative output          |
| `max_completion_tokens` | integer                                      | Maximum number of tokens to generate                                      |
| `frequency_penalty`     | float                                        | Penalizes tokens based on how often they have appeared so far             |
| `logprobs`              | boolean                                      | Whether to return log probabilities of the output tokens                  |
| `presence_penalty`      | float                                        | Penalizes tokens that have already appeared in the text                   |
| `reasoning_effort`      | string \["minimal", "low", "medium", "high"] | Reasoning effort for reasoning models                                     |
| `response_format`       | dict                                         | Format of the response (e.g. JSON mode)                                   |
| `tool_choice`           | string \["none", "auto", "required"]         | Controls whether and which tool is called                                 |
| `tools`                 | list                                         | Tools (functions) the model may call                                      |
| `top_logprobs`          | integer                                      | Number of most likely tokens to return per position (requires `logprobs`) |
| `top_p`                 | float                                        | Nucleus sampling: probability mass of tokens to consider                  |
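As a sketch of how the optional body parameters combine, the following builds a request payload locally; the parameter values are illustrative, not recommended defaults:

```python
# Build a Chat Completions request body locally to show how the
# optional parameters fit together. Values are illustrative only.
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hi"}],
    "temperature": 0.7,
    "max_completion_tokens": 256,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

# The same dict can be passed as client.chat.completions.create(**payload)
# or sent as the JSON body of the POST request directly.
print(sorted(payload))
```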

## Python

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_TOKEN", base_url="https://api.gpt4-all.xyz/v1")

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "hi"}],
    stream=False,
)

print(response.choices[0].message.content)
```

## Python Stream

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_TOKEN", base_url="https://api.gpt4-all.xyz/v1")

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "hi"}],
    stream=True,
)

for chunk in response:
    # The final chunk's delta.content may be None, so guard before printing.
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
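Each streamed chunk carries an incremental `delta`, and the terminal chunk's `delta.content` is `None`, so a guard is needed when accumulating the full reply. A local sketch of the accumulation pattern using mock chunk objects (no API call is made):

```python
from types import SimpleNamespace

# Mock chunks shaped like the streaming response: each exposes
# choices[0].delta.content; the last delta is None (end of stream).
def mock_chunk(text):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

stream = [mock_chunk("Hel"), mock_chunk("lo!"), mock_chunk(None)]

parts = []
for chunk in stream:
    content = chunk.choices[0].delta.content
    if content is not None:  # skip the terminal empty delta
        parts.append(content)

full_reply = "".join(parts)
print(full_reply)  # Hello!
```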

## Python Vision

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_TOKEN", base_url="https://api.gpt4-all.xyz/v1")

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    },
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```
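The `tools` and `tool_choice` body parameters follow the OpenAI function-calling schema. Below is a sketch that defines a hypothetical `get_weather` tool (the tool name and parameters are illustrative, not part of this API):

```python
# Hypothetical get_weather tool in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative name, not a real service
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# Passed alongside tool_choice, e.g.:
# client.chat.completions.create(
#     model="gpt-4.1-mini", messages=..., tools=tools, tool_choice="auto"
# )
print(tools[0]["function"]["name"])
```

With `tool_choice="auto"` the model decides whether to call the tool; `"required"` forces a tool call and `"none"` disables tools.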
