Client events

These are events that the OpenAI Realtime WebSocket server will accept from the client.

session.update

Send this event to update the session’s default configuration. The client may send this event at any time to update the session configuration, and any field may be updated at any time except for "voice". The server will respond with a session.updated event that shows the full effective configuration. Only fields that are present are updated; thus, the correct way to clear a field like "instructions" is to pass an empty string.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be session.update.

session: Realtime session object configuration.

OBJECT session.update
{
    "event_id": "event_123",
    "type": "session.update",
    "session": {
        "modalities": ["text", "audio"],
        "instructions": "You are a helpful assistant.",
        "voice": "sage",
        "input_audio_format": "pcm16",
        "output_audio_format": "pcm16",
        "input_audio_transcription": {
            "model": "whisper-1"
        },
        "turn_detection": {
            "type": "server_vad",
            "threshold": 0.5,
            "prefix_padding_ms": 300,
            "silence_duration_ms": 500
        },
        "tools": [
            {
                "type": "function",
                "name": "get_weather",
                "description": "Get the current weather...",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": { "type": "string" }
                    },
                    "required": ["location"]
                }
            }
        ],
        "tool_choice": "auto",
        "temperature": 0.8,
        "max_response_output_tokens": "inf"
    }
}
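
The sketch below shows one way to send this event from Python. It is a minimal sketch, assuming `ws` is an already-open WebSocket connection to the Realtime server with a blocking send(str) method (for example, from the websocket-client package); the helper name is illustrative.

import json

def update_session(ws, instructions):
    event = {
        "event_id": "event_123",            # optional client-generated ID
        "type": "session.update",
        "session": {
            # Only fields that are present are updated; pass "" to clear a
            # string field such as "instructions".
            "instructions": instructions,
            "turn_detection": {"type": "server_vad"},
        },
    }
    ws.send(json.dumps(event))
    # The server replies with a session.updated event containing the full
    # effective configuration.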

input_audio_buffer.append

Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.

The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be input_audio_buffer.append.

audio: Base64-encoded audio bytes. This must be in the format specified by the input_audio_format field in the session configuration.

OBJECT input_audio_buffer.append
{
    "event_id": "event_456",
    "type": "input_audio_buffer.append",
    "audio": "Base64EncodedAudioData"
}
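
A minimal sketch of streaming audio in smaller chunks, assuming `pcm16_bytes` already holds audio in the session's input_audio_format and `ws` is an open WebSocket connection; the chunk size and helper name are illustrative, not part of the API.

import base64
import json

def stream_audio(ws, pcm16_bytes, chunk_size=32_000):
    # Smaller chunks keep each event well under the 15 MiB limit and can make
    # server VAD more responsive; the server sends no confirmation for appends.
    for offset in range(0, len(pcm16_bytes), chunk_size):
        chunk = pcm16_bytes[offset:offset + chunk_size]
        ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": base64.b64encode(chunk).decode("ascii"),
        }))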

input_audio_buffer.commit

Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically.

Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be input_audio_buffer.commit.

OBJECT input_audio_buffer.commit
{
    "event_id": "event_789",
    "type": "input_audio_buffer.commit"
}
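
A minimal sketch of a manual turn when Server VAD is disabled: append the audio, then commit it to create the user message item. `ws` and stream_audio are the illustrative helpers from the sketches above.

import json

def finish_turn(ws, pcm16_bytes):
    stream_audio(ws, pcm16_bytes)
    # Committing triggers input audio transcription (if enabled) but does not
    # by itself create a model response; the server replies with
    # input_audio_buffer.committed.
    ws.send(json.dumps({"type": "input_audio_buffer.commit"}))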

input_audio_buffer.clear

Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be input_audio_buffer.clear.

OBJECT input_audio_buffer.clear
{
    "event_id": "event_012",
    "type": "input_audio_buffer.clear"
}

conversation.item.create

Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.

If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be conversation.item.create.

previous_item_id: The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation. If set, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added.

item: The item to add to the conversation.

OBJECT conversation.item.create
{
    "event_id": "event_345",
    "type": "conversation.item.create",
    "previous_item_id": null,
    "item": {
        "id": "msg_001",
        "type": "message",
        "role": "user",
        "content": [
            {
                "type": "input_text",
                "text": "Hello, how are you?"
            }
        ]
    }
}
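
A minimal sketch of populating a "history" of prior user text messages before the first response; it assumes `ws` is an open connection and that the item IDs are client-chosen, illustrative values.

import json

def seed_history(ws, user_turns):
    for i, text in enumerate(user_turns):
        ws.send(json.dumps({
            "type": "conversation.item.create",
            "previous_item_id": None,   # None appends to the end of the conversation
            "item": {
                "id": f"msg_{i:03d}",
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": text}],
            },
        }))
    # The server replies with one conversation.item.created event per item.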

conversation.item.truncate

Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.

Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.

If successful, the server will respond with a conversation.item.truncated event.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be conversation.item.truncate.

item_id: The ID of the assistant message item to truncate. Only assistant message items can be truncated.

content_index: The index of the content part to truncate. Set this to 0.

audio_end_ms: Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error.

OBJECT conversation.item.truncate
{
    "event_id": "event_678",
    "type": "conversation.item.truncate",
    "item_id": "msg_002",
    "content_index": 0,
    "audio_end_ms": 1500
}
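
A minimal sketch of truncating assistant audio on interruption, assuming the client recorded time.monotonic() when playback of the assistant item started; `ws` and the helper name are illustrative.

import json
import time

def handle_interruption(ws, assistant_item_id, playback_started_at):
    # How many milliseconds of the assistant audio the user has actually heard.
    played_ms = int((time.monotonic() - playback_started_at) * 1000)
    ws.send(json.dumps({
        "type": "conversation.item.truncate",
        "item_id": assistant_item_id,   # only assistant message items
        "content_index": 0,
        "audio_end_ms": played_ms,      # must not exceed the actual audio duration
    }))
    # The server replies with conversation.item.truncated and drops the
    # server-side transcript of the unheard audio.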

conversation.item.delete

Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be conversation.item.delete.

item_id: The ID of the item to delete.

OBJECT conversation.item.delete
{
    "event_id": "event_901",
    "type": "conversation.item.delete",
    "item_id": "msg_003"
}

response.create

This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.

A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history.

The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.

The response.create event includes inference configuration such as instructions and temperature. These fields will override the Session's configuration for this Response only.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be response.create.

response: Inference configuration for the Response; fields set here override the Session's configuration for this Response only.

OBJECT response.create
{
    "event_id": "event_234",
    "type": "response.create",
    "response": {
        "modalities": ["text", "audio"],
        "instructions": "Please assist the user.",
        "voice": "sage",
        "output_audio_format": "pcm16",
        "tools": [
            {
                "type": "function",
                "name": "calculate_sum",
                "description": "Calculates the sum of two numbers.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "a": { "type": "number" },
                        "b": { "type": "number" }
                    },
                    "required": ["a", "b"]
                }
            }
        ],
        "tool_choice": "auto",
        "temperature": 0.7,
        "max_output_tokens": 150
    }
}
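
A minimal sketch of manually triggering inference (for example, when Server VAD is disabled), overriding a few session defaults for this Response only; `ws` and the helper name are illustrative.

import json

def request_response(ws, instructions="Please assist the user."):
    ws.send(json.dumps({
        "type": "response.create",
        "response": {
            # These fields apply to this Response only.
            "modalities": ["text", "audio"],
            "instructions": instructions,
            "temperature": 0.7,
            "max_output_tokens": 150,
        },
    }))
    # Expect response.created, then item/content events, then response.done.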

response.cancel

Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.

event_id: Optional client-generated ID used to identify this event.

type: The event type, must be response.cancel.

OBJECT response.cancel
{
    "event_id": "event_567",
    "type": "response.cancel"
}
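
A minimal sketch of handling a "barge-in": cancel the in-progress response as soon as the user starts speaking, then truncate the audio already sent to the client. handle_interruption is the illustrative truncate helper sketched above, and `ws` is an open connection.

import json

def on_user_started_speaking(ws, assistant_item_id, playback_started_at):
    ws.send(json.dumps({"type": "response.cancel"}))
    handle_interruption(ws, assistant_item_id, playback_started_at)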