AI DevKit

Text generation


Generate text from text-only input

The simplest way to generate text using the Gemini API is to provide the model with a single text-only input, as shown in this example:

C#

using UnityEngine;
using Glitch9.AIDevKit.Google.GenerativeAI;

// Choose a model that's appropriate for your use case.
var model = new GenerativeModel(GeminiModel.Gemini15Flash);

var prompt = "Write a story about a magic backpack.";

var response = await model.GenerateContentAsync(prompt);

Debug.Log(response.Text);

Python

import os
import google.generativeai as genai

# Access your API key as an environment variable.
genai.configure(api_key=os.environ['API_KEY'])
# Choose a model that's appropriate for your use case.
model = genai.GenerativeModel('gemini-1.5-flash')

prompt = "Write a story about a magic backpack."

response = model.generate_content(prompt)

print(response.text)

In this case, the prompt ("Write a story about a magic backpack") doesn't include any output examples, system instructions, or formatting information. It's a zero-shot approach. For some use cases, a one-shot or few-shot prompt might produce output that's more aligned with user expectations. In some cases, you might also want to provide system instructions to help the model understand the task or follow specific guidelines.
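
For illustration, here is a minimal few-shot sketch that uses the same GenerativeModel API as the example above; the task and the example sentences are hypothetical:

using UnityEngine;
using Glitch9.AIDevKit.Google.GenerativeAI;

// Choose a model that's appropriate for your use case.
var model = new GenerativeModel(GeminiModel.Gemini15Flash);

// A few-shot prompt: worked examples are included before the actual
// request so the model can infer the expected output format.
var prompt = "Extract the color from each sentence.\n" +
             "Sentence: The sky is blue today. Color: blue\n" +
             "Sentence: The grass looked unusually green. Color: green\n" +
             "Sentence: The magic backpack was bright purple. Color:";

var response = await model.GenerateContentAsync(prompt);

Debug.Log(response.Text);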

Generate text from text-and-image input

The Gemini API supports multimodal inputs that combine text with media files. The following example shows how to generate text from text-and-image input:

C#

using UnityEngine;
using System.Collections.Generic;
using Glitch9.AIDevKit.Google.GenerativeAI;

// Choose a model that's appropriate for your use case.
var model = new GenerativeModel(GeminiModel.Gemini15Flash);

var image1 = new ImageResource("Assets/image1.jpg");
var image2 = new ImageResource("Assets/image2.jpg");

var prompt = "What's different between these pictures?";

var response = await model.GenerateContentAsync(
    prompt, 
    images: new List<ImageResource> { image1, image2 });
    
Debug.Log(response.GetOutputText());

Python

import os
import PIL.Image
import google.generativeai as genai

# Access your API key as an environment variable.
genai.configure(api_key=os.environ['API_KEY'])
# Choose a model that's appropriate for your use case.
model = genai.GenerativeModel('gemini-1.5-flash')

image1 = PIL.Image.open('image1.jpg')
image2 = PIL.Image.open('image2.jpg')

prompt = "What's different between these pictures?"

response = model.generate_content([prompt, image1, image2])

print(response.text)

Generate a text stream

By default, the model returns a response after completing the entire text generation process. You can achieve faster interactions by not waiting for the entire result, and instead using streaming to handle partial results. The following examples show how to implement streaming (the API's streamGenerateContent method) to generate text from a text-only input prompt.

C#

using UnityEngine;
using Glitch9.AIDevKit.Google.GenerativeAI;

// Choose a model that's appropriate for your use case.
var model = new GenerativeModel(GeminiModel.Gemini15Flash);

var prompt = "Write a story about a magic backpack.";

// Create a stream handler and subscribe to receive each partial chunk.
var streamHandler = new StreamHandler();
streamHandler.OnStream += OnStream;

var response = await model.GenerateContentAsync(
    prompt, 
    streamHandler: streamHandler);
    
return;

// Local function invoked once for each streamed chunk of the response.
void OnStream(object sender, string chunk)
{
    Debug.Log(chunk);
}

Python

import os
import google.generativeai as genai

# Access your API key as an environment variable.
genai.configure(api_key=os.environ['API_KEY'])
# Choose a model that's appropriate for your use case.
model = genai.GenerativeModel('gemini-1.5-flash')

prompt = "Write a story about a magic backpack."

response = model.generate_content(prompt, stream=True)

for chunk in response:
  print(chunk.text)
  print("_"*80)

What's next

As with text-only prompting, multimodal prompting can involve various approaches and refinements. Depending on the output from this example, you might want to add steps to the prompt or be more specific in your instructions. To learn more, see File prompting strategies.
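
For example, a more specific version of the image-comparison prompt above might look like this sketch; the prompt wording is hypothetical, and the API calls are the same ones used in the earlier example:

using UnityEngine;
using System.Collections.Generic;
using Glitch9.AIDevKit.Google.GenerativeAI;

// Choose a model that's appropriate for your use case.
var model = new GenerativeModel(GeminiModel.Gemini15Flash);

var image1 = new ImageResource("Assets/image1.jpg");
var image2 = new ImageResource("Assets/image2.jpg");

// A more constrained prompt: asks the model to work step by step and
// fixes the output format instead of leaving the comparison open-ended.
var prompt = "Compare these two pictures step by step, then list exactly " +
             "three differences as a numbered list, focusing on objects, " +
             "colors, and composition.";

var response = await model.GenerateContentAsync(
    prompt,
    images: new List<ImageResource> { image1, image2 });

Debug.Log(response.GetOutputText());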

This guide shows how to use generateContent and streamGenerateContent to generate text outputs from text-only and text-and-image inputs. To learn more about generating text using the Gemini API, see the following resources:

Prompting with media files: The Gemini API supports prompting with text, image, audio, and video data, also known as multimodal prompting.

System instructions: System instructions let you steer the behavior of the model based on your specific needs and use cases.

Safety guidance: Sometimes generative AI models produce unexpected outputs, such as outputs that are inaccurate, biased, or offensive. Post-processing and human evaluation are essential to limit the risk of harm from such outputs.

Google official document: Generate text using the Gemini API | Google for Developers