
How it works

The Assistants API is designed to help developers build powerful AI assistants capable of performing a variety of tasks.

Assistants can call OpenAI’s models with specific instructions that tune their personality and capabilities. They can access multiple tools in parallel, which can be OpenAI-hosted tools like code_interpreter and file_search, or tools you build and host yourself via function calling. Assistants can also access persistent threads, which store message history and manage context length. Furthermore, they can handle files in various formats for tasks that require file manipulation.

Understanding objects related to the Assistants API

  • Assistant: A purpose-built AI that uses OpenAI’s models and calls tools.

  • Thread: Represents a conversation session between an Assistant and a user, storing messages and handling truncation.

  • Message: A message created by an Assistant or a user, which can include text, images, and other files.

  • Run: An invocation of an Assistant on a Thread, where the Assistant uses its configuration and the Thread’s messages to perform tasks.

  • Run Step: A detailed list of steps taken by the Assistant during a Run, which includes calling tools and creating messages.
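
The AI DevKit wraps all of this for you, but it can help to see how these objects map onto OpenAI’s underlying REST endpoints. Below is a minimal raw-HTTP sketch (not AI DevKit code); the ids such as thread_123 and asst_123 are placeholders you would parse from each response:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class AssistantsRestFlow
{
    public static async Task DemoAsync(string apiKey)
    {
        using var http = new HttpClient { BaseAddress = new Uri("https://api.openai.com/v1/") };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
        http.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");

        // Assistant: a purpose-built AI defined by a model and instructions.
        await http.PostAsync("assistants", Json("{\"model\":\"gpt-4o\",\"instructions\":\"You are helpful.\"}"));

        // Thread: a conversation session that stores messages.
        await http.PostAsync("threads", Json("{}"));

        // Message: user input appended to a thread ("thread_123" is a placeholder id
        // that would come from the thread-creation response above).
        await http.PostAsync("threads/thread_123/messages", Json("{\"role\":\"user\",\"content\":\"Hello!\"}"));

        // Run: invokes the assistant on the thread; its Run Steps are then listed under the run.
        await http.PostAsync("threads/thread_123/runs", Json("{\"assistant_id\":\"asst_123\"}"));
    }

    private static StringContent Json(string body) => new(body, Encoding.UTF8, "application/json");
}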

Context Window Management

The API automatically manages truncation to stay within the model's maximum context length. You can customize this by specifying max_prompt_tokens and max_completion_tokens.
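
For example, these limits can be passed as fields when creating a run. A sketch against the raw REST endpoint (the AI DevKit wrapper may expose equivalent options; ids are placeholders):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class RunLimitsSketch
{
    // "http" is assumed to be configured as in the REST sketch above.
    public static Task CreateCappedRunAsync(HttpClient http) =>
        http.PostAsync("threads/thread_123/runs", new StringContent(
            "{\"assistant_id\":\"asst_123\"," +
            "\"max_prompt_tokens\":2000," +        // caps tokens drawn from thread history
            "\"max_completion_tokens\":500}",      // caps tokens generated during the run
            Encoding.UTF8, "application/json"));
}

If a run hits either cap, it ends in an incomplete state (see the run lifecycle below).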

Understanding Run Lifecycle

Run objects can have multiple statuses:

  • RunStatus.Queued: The run is waiting to start.

  • RunStatus.InProgress: The run is currently executing.

  • RunStatus.Completed: The run successfully completed.

  • RunStatus.RequiresAction: The run requires additional information.

  • RunStatus.Expired: The run expired due to time constraints.

  • RunStatus.Cancelling: The run is being cancelled.

  • RunStatus.Cancelled: The run was successfully cancelled.

  • RunStatus.Failed: The run failed due to an error.

  • RunStatus.Incomplete: The run ended due to token limits being reached.
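
These statuses are typically consumed in a polling loop until a terminal state is reached. A minimal sketch, assuming a hypothetical GetRunStatusAsync helper on the wrapper (the actual AI DevKit API may handle polling internally):

using Cysharp.Threading.Tasks;
using Glitch9.Apis.OpenAI.AssistantsAPI;

public static class RunPollingSketch
{
    // GetRunStatusAsync is an assumed helper, not a confirmed AI DevKit method.
    public static async UniTask<RunStatus> WaitForTerminalStatusAsync(AssistantsAPIv2 api, string runId)
    {
        while (true)
        {
            RunStatus status = await api.GetRunStatusAsync(runId);

            switch (status)
            {
                case RunStatus.Queued:
                case RunStatus.InProgress:
                case RunStatus.Cancelling:
                    await UniTask.Delay(500);  // still running; poll again shortly
                    break;
                case RunStatus.RequiresAction:
                    // Submit the requested tool outputs here, then continue polling.
                    await UniTask.Delay(500);
                    break;
                default:
                    return status;  // Completed, Failed, Cancelled, Expired, or Incomplete
            }
        }
    }
}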

Data Access Guidance

Assistants, Threads, Messages, and Vector Stores created via the API are scoped to the Project they are created in. Implement authorization and restrict API key access to ensure data security.

Example C# Code

using Cysharp.Threading.Tasks;
using Glitch9.Apis.OpenAI.AssistantsAPI;
using System.Collections.Generic;
using System.Linq;

// Define your custom object for handling chat responses and media attachments.
public class Chat
{
    public ChatRole Role { get; set; }  // Must be ChatRole.User for input messages.
    public string Message { get; set; }  // Text message content.
    public List<OpenAIFile> Files { get; set; }  // Files attached to the message.
    public List<UnityImageFile> Images { get; set; }  // Images attached to the message.

    // Converts the Chat object into a MessageRequest for API processing.
    public MessageRequest ToMessageRequest()
    {
        MessageRequest.Builder builder = new MessageRequest.Builder();
        builder.SetPrompt(Message);
        if (Images != null && Images.Any())
            builder.SetImages(Images.ToArray());
        if (Files != null && Files.Any())
            builder.SetFiles(Files.ToArray());
        return builder.Build();
    }
}

// Integration class for OpenAI's Assistants API v2 using Unity.

namespace Glitch9.Apis.OpenAI
{
    public class ChatGPT
    {
        private static class DefaultValues
        {
            internal const GPTModel MODEL = GPTModel.GPT4o;  // Default GPT model used.
        }

        private static readonly AssistantOptions k_Options = new()
        {
            Id = "chat_gpt",
            Model = DefaultValues.MODEL,
            Name = "ChatGPT",
            Description = "A chatbot that can help you with anything you need.",
            Instructions = "You are a helpful chatbot. You can help users with any questions they have.",
            ResponseFormat = ChatFormat.Text  // The format of chat responses.
        };

        public AssistantsAPIv2 Api;

        public ChatGPT()
        {
            // Initialize the API with predefined options.
            Api = new AssistantsAPIv2(k_Options);
            Api.InitializeAsync().Forget();
        }

        // Processes a chat message and manages tool execution if needed.
        public async UniTask<Chat> SubmitMessageAsync(Chat inputChat)
        {
            var msgRequest = inputChat.ToMessageRequest();
            var result = await Api.RequestAsync(msgRequest);

            if (result == null || result.IsFailure)
                return new Chat { Role = ChatRole.Assistant, Message = "There was an error processing your request." };

            // Return the result as a new Chat instance.
            return new Chat { Role = ChatRole.Assistant, Message = result.GetResult() };
        }

        // Additional methods to manage threads and responses can be implemented here.
    }
}
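
With the classes above in place, sending a message takes a few lines. A usage sketch, assuming an async Unity context (e.g. a UniTask-based method):

using Cysharp.Threading.Tasks;
using Glitch9.Apis.OpenAI;

public static class ChatGPTUsageExample
{
    public static async UniTask DemoAsync()
    {
        // Note: the constructor fires InitializeAsync() without awaiting it;
        // in production you may want to await initialization before submitting.
        var chatGpt = new ChatGPT();
        var input = new Chat { Role = ChatRole.User, Message = "Hello, assistant!" };

        Chat reply = await chatGpt.SubmitMessageAsync(input);
        UnityEngine.Debug.Log(reply.Message);
    }
}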

Now that you have explored how Assistants work, the next step is to explore Assistant Tools, covering topics like Function calling, File Search, and Code Interpreter.

(Object-relationship diagram from OpenAI's official documentation: https://platform.openai.com/docs/assistants/how-it-works/objects)