
Voice Transcriber


The VoiceTranscriber toolkit is an integral part of the OpenAI with Unity asset, enabling the conversion of speech to text. This tool is ideal for applications requiring voice command recognition, dialogue systems, accessibility features, or any functionality where user voice input needs to be interpreted as text.

Step 1: Creating a VoiceTranscriber Instance

  1. In Unity's top menu, navigate to Assets > Create > Glitch9/OpenAI/Toolkits > Voice Transcriber.

  2. A VoiceTranscriber instance will appear in your project directory. Click on it to adjust its properties in the Inspector.

Step 2: Configuring VoiceTranscriber

  1. Audio File Path: Set the path where audio recordings will be saved.

  2. Review additional properties that may be provided for more advanced settings and preferences.

Step 3: Implementing VoiceTranscriber in Your Unity Scene

  1. Assign the VoiceTranscriber ScriptableObject to a controller script within your scene that will handle voice recording and transcription.

  2. Add a reference to the VoiceTranscriber in your script to invoke its methods for recording and transcribing audio.

```csharp
// Reference to the VoiceTranscriber ScriptableObject, assigned in the Inspector.
public VoiceTranscriber voiceTranscriber;

// Call this method to start recording voice input from the user.
public void StartRecordingVoice() {
    voiceTranscriber.StartRecording();
}
```

Step 4: Recording and Transcribing Speech

  1. Use the StartRecording method to begin capturing audio from the user's microphone.

  2. Once the desired audio has been captured, call the StopAndTranscribeRecordingAsync method to end the recording and start the transcription process.

```csharp
// Call this method to stop recording and transcribe the captured audio.
public async void StopAndTranscribe() {
    AudioFile transcriptionResult = await voiceTranscriber.StopAndTranscribeRecordingAsync();
    // Use the transcribed text from transcriptionResult.
}
```

Step 5: Handling Transcription Results

  • The transcription result is contained in the AudioFile object returned by the StopAndTranscribeRecordingAsync method, which includes both the recorded audio clip and the transcribed text.
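A minimal sketch of consuming the result, assuming the AudioFile class exposes the transcribed text and the recorded clip through properties; the member names used here (Text, Clip) are illustrative, so check the AudioFile class in your installed version for the exact names:

```csharp
public async void StopAndLogTranscription() {
    AudioFile result = await voiceTranscriber.StopAndTranscribeRecordingAsync();

    // NOTE: Text and Clip are assumed property names, not confirmed API.
    if (result != null) {
        Debug.Log($"Transcribed text: {result.Text}");

        // The recorded clip can be replayed, e.g. via an AudioSource:
        // audioSource.clip = result.Clip;
        // audioSource.Play();
    }
}
```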

Step 6: Managing Transcriptions and Recordings

  • Access previously recorded and transcribed sessions through the GetRecordings method, which returns all recordings the instance has captured along with their transcriptions.

Step 7: Customizing the Transcription Process

  • Adjust settings such as recording duration and audio frequency for optimized performance specific to your application's environment and requirements.
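If you need finer control than the toolkit's Inspector settings expose, Unity's own Microphone API illustrates what the recording-duration and audio-frequency settings map to under the hood; the values below are example choices, not the asset's defaults:

```csharp
// Capture from the default microphone with an explicit length cap and sample rate.
AudioClip clip = Microphone.Start(
    deviceName: null,   // null selects the system default microphone
    loop: false,        // stop automatically when lengthSec is reached
    lengthSec: 30,      // maximum recording duration in seconds
    frequency: 16000);  // sample rate; 16 kHz suits most speech-to-text models
```

Lower sample rates reduce upload size and transcription latency with little accuracy loss for spoken dialogue, which is why 16 kHz is a common choice.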

Step 8: Error Handling and User Feedback

  • Implement error handling to manage potential issues during recording or transcription.

  • Provide users with feedback when recording starts and ends, and inform them of any errors or transcription status updates.
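The points above can be combined into a defensive wrapper. This is a general pattern, not the asset's prescribed approach; the actual exceptions thrown during recording or transcription depend on the asset and the API provider:

```csharp
public async void StopAndTranscribeSafely() {
    try {
        AudioFile result = await voiceTranscriber.StopAndTranscribeRecordingAsync();
        if (result == null) {
            Debug.LogWarning("Transcription returned no result.");
            return; // Inform the user, e.g. by updating a status label.
        }
        // Surface the transcription to the user here (UI text, event, etc.).
    } catch (System.Exception e) {
        Debug.LogError($"Transcription failed: {e.Message}");
        // Show a user-facing error message rather than failing silently.
    }
}
```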

Best Practices

  • Test and calibrate the microphone input settings in various environments to ensure reliable voice capture.

  • Prompt users to speak clearly and consider implementing a voice activity detection system to initiate and terminate recordings efficiently.

  • Handle personal user data responsibly, especially if recording sensitive information.
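One simple way to approach the voice-activity detection mentioned above is to sample the live microphone clip and compare its RMS level against a threshold. This is a minimal sketch using Unity's Microphone and AudioClip APIs; the threshold and window size are assumptions you should tune for your environment:

```csharp
// Illustrative values: tune VoiceThreshold and SampleWindow for your environment.
private const float VoiceThreshold = 0.02f;
private const int SampleWindow = 256;

public bool IsVoiceDetected(AudioClip micClip, string device) {
    // Read the most recent window of samples from the live microphone clip.
    int micPosition = Microphone.GetPosition(device) - SampleWindow;
    if (micPosition < 0) return false; // not enough audio captured yet

    float[] samples = new float[SampleWindow];
    micClip.GetData(samples, micPosition);

    // Root-mean-square amplitude of the window.
    float sum = 0f;
    foreach (float s in samples) sum += s * s;
    float rms = Mathf.Sqrt(sum / SampleWindow);

    return rms > VoiceThreshold;
}
```

Polling this from Update lets you start recording when speech begins and stop after a short stretch of silence, avoiding long empty recordings.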