AIDevKit - AI Suite for Unity
Chatbot (Assistants API)

Use this component to integrate OpenAI's Assistants API into your Unity project. It lets AI agents run tools, maintain conversation threads, and perform multi-step actions using memory hosted by OpenAI.



What it does

This component enables:

  • Text input & response using the OpenAI Assistants API

  • Voice input via Speech-to-Text (optional)

  • Voice output via Text-to-Speech (optional)

  • Image generation when requested by the assistant (optional)

  • Tool/function calling through OpenAI’s built-in tool invocation system

  • Multi-step reasoning using threads, runs, and memory management provided by the Assistants API
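
As a rough illustration, a gameplay script can forward player input to the component and let it handle the thread, run, and response internally. The type and method names below (`ChatbotAssistant`, `SubmitUserMessage`) are placeholders, not the actual AIDevKit API — check the API Reference for the real signatures:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: relays text from an InputField to the
// Chatbot (Assistants API) component. 'ChatbotAssistant' and
// 'SubmitUserMessage' are illustrative placeholder names,
// not the actual AIDevKit types.
public class ChatInputRelay : MonoBehaviour
{
    [SerializeField] private ChatbotAssistant chatbot; // placeholder type
    [SerializeField] private InputField input;

    // Hook this up to the InputField's onSubmit or a Send button.
    public void OnSubmit()
    {
        string text = input.text;
        if (string.IsNullOrWhiteSpace(text)) return;

        chatbot.SubmitUserMessage(text); // placeholder method
        input.text = string.Empty;
    }
}
```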


1. Assistant Setup

This section defines the identity and behavior of your assistant.

| Field | Description |
| --- | --- |
| Selected Assistant | Reference to an `Assistant` object that defines the assistant's configuration and tools. |
| Assistant Name | Display name of this assistant in the Editor, for UI purposes. |
| Description | Optional description shown in the Editor or UI. |
| Instructions | System prompt that defines the assistant's personality and role. |
| Tools | List of tools (function names) the assistant can use. Set in the `Assistant` object. |


2. Assistant Behavior

These options control how the assistant responds.

| Field | Description |
| --- | --- |
| Model | OpenAI model to use (e.g., GPT-4o). Set per run. |
| Response Format | Format of the assistant's response: `text`, `json`, `json_schema`, or `auto`. Use `text` for natural-language replies and `json` or `json_schema` for structured responses; `auto` lets the model choose the most suitable format. |
| Stream | Enable to receive streaming responses from the assistant. |
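
For reference, OpenAI's `json_schema` response format expects a named JSON Schema. A minimal example (the `npc_reply` name and its fields are illustrative, not part of AIDevKit):

```json
{
  "name": "npc_reply",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "reply": { "type": "string" },
      "mood": { "type": "string", "enum": ["friendly", "neutral", "hostile"] }
    },
    "required": ["reply", "mood"],
    "additionalProperties": false
  }
}
```

With a schema like this, the assistant's reply is constrained to a JSON object with exactly these fields, which can be deserialized directly in Unity.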


3. Advanced Options

Optional fine-tuning of model behavior.

| Field | Description |
| --- | --- |
| Temperature | Controls randomness. Higher values produce more varied, creative output (OpenAI accepts 0–2). |
| Top P | Nucleus-sampling cutoff controlling diversity. Lower values produce more focused output (0–1). |


4. Modules

Optional components that enhance the assistant’s capabilities.

| Module | Description |
| --- | --- |
| Function Manager | Executes Unity functions when the assistant calls a tool. |
| Speech-to-Text | Enables voice input. |
| Text-to-Speech | Converts assistant responses to voice. |
| Image Generator | Allows the assistant to generate images when appropriate. |
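
As a sketch of what a Unity-side tool function might look like — the class below is a plain illustration, and how it gets registered with the Function Manager is not shown here; see the Creating custom functions page for the actual AIDevKit workflow:

```csharp
using UnityEngine;

// Sketch of a Unity-side tool function that the Function Manager
// could invoke when the assistant calls a matching tool.
// The method name and signature are illustrative placeholders.
public class WeatherTools : MonoBehaviour
{
    // Hypothetical tool: looks up in-game weather for a city and
    // returns a JSON string the assistant can parse as a tool result.
    public string GetWeather(string city)
    {
        return "{\"city\":\"" + city + "\",\"forecast\":\"sunny\"}";
    }
}
```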


5. Event Receivers

Use these to trigger UnityEvents in response to assistant activity.

| Event Type | Description |
| --- | --- |
| Chat Event Receiver | Triggered when a message is sent or received. |
| Tool Call Receiver | Triggered when the assistant requests a tool call. |
| Streaming Event Receiver | Triggered as streamed tokens arrive. |
| Error Receiver | Triggered on any exception during chat. |
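
In practice, receivers are typically wired to your own handler methods in the Inspector. A sketch of such a listener component follows; the `string` parameter types are an assumption — the actual events may pass richer payload objects:

```csharp
using UnityEngine;

// Sketch: methods intended to be assigned to the receivers'
// UnityEvents in the Inspector. Parameter types are assumptions;
// the real events may pass structured payloads instead of strings.
public class ChatUiListener : MonoBehaviour
{
    // Assign to the Chat Event Receiver.
    public void OnChatMessage(string message)
    {
        Debug.Log($"Assistant: {message}");
    }

    // Assign to the Streaming Event Receiver.
    public void OnToken(string token)
    {
        // e.g., append partial tokens to a UI Text element as they arrive
    }

    // Assign to the Error Receiver.
    public void OnChatError(string error)
    {
        Debug.LogError($"Chat error: {error}");
    }
}
```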


6. Life Cycle Event Receivers

Receive callbacks for lower-level events in the Assistants API flow.

| Receiver Type | Description |
| --- | --- |
| Assistant Event Receiver | Called when the assistant is loaded or updated. |
| Thread Event Receiver | Called when a thread is created or updated. |
| Run Event Receiver | Called when a run starts or completes. |
| Message Event Receiver | Called when a message is created or received. |


7. Required Action Handling

When a run pauses on a required action (e.g., the assistant is waiting for a tool call to complete), this section controls how Unity should respond.

| Field | Description |
| --- | --- |
| Ignore Required Actions | If enabled, the assistant won't wait for function results. |
| Required Action Timeout | Time (in seconds) before a required action times out. |
| On Required Action | List of UnityEvents triggered when a required action (e.g., a tool call) is detected. |
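
A handler assigned to On Required Action might run the requested Unity function and hand the result back so the paused run can resume. The payload type and submit call below are placeholders for whatever AIDevKit actually passes:

```csharp
using UnityEngine;

// Sketch of an On Required Action handler. 'ToolCallInfo' and
// 'SubmitToolResult' are illustrative placeholders; the actual
// payload and submission API may differ.
public class RequiredActionHandler : MonoBehaviour
{
    public void OnRequiredAction(ToolCallInfo call) // placeholder type
    {
        if (call.Name == "get_player_position")
        {
            Vector3 p = transform.position;
            string result = $"{{\"x\":{p.x},\"y\":{p.y},\"z\":{p.z}}}";

            // Return the output so the paused run can continue.
            call.SubmitToolResult(result); // placeholder method
        }
    }
}
```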