# Batch Embedding

Generate embeddings for multiple texts efficiently in a single request using `.GENEmbed()`.

## Basic Usage
```csharp
string[] texts = new[]
{
    "First document",
    "Second document",
    "Third document"
};

float[][] embeddings = await texts
    .GENEmbed()
    .ExecuteAsync();
```

## Configuration
```csharp
string[] texts = new[] { "Text 1", "Text 2", "Text 3" };

float[][] embeddings = await texts
    .GENEmbed()
    .SetModel(OpenAIModel.TextEmbedding3Small)
    .ExecuteAsync();
```

## Unity Integration Examples
* Example 1: Bulk Document Indexing
* Example 2: FAQ Database Builder
* Example 3: Content Library Processor
* Example 4: Multi-Language Indexer
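As an illustration of the first scenario, here is a minimal bulk-indexing sketch. Only `.GENEmbed()` and `.ExecuteAsync()` come from this package; the `DocumentIndexer` class name and the dictionary index are illustrative assumptions, not part of the API.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical MonoBehaviour that embeds a set of documents in one batch call
// and stores the vectors in an in-memory index keyed by document text.
public class DocumentIndexer : MonoBehaviour
{
    private readonly Dictionary<string, float[]> _index = new();

    public async void IndexDocuments(string[] documents)
    {
        // One batched request instead of one request per document.
        float[][] embeddings = await documents
            .GENEmbed()
            .ExecuteAsync();

        // The result array is parallel to the input array.
        for (int i = 0; i < documents.Length; i++)
        {
            _index[documents[i]] = embeddings[i];
        }

        Debug.Log($"Indexed {documents.Length} documents.");
    }
}
```

The other scenarios follow the same pattern; only the source of the texts and the storage target change.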
## Performance Benefits

### Sequential vs Batch

Embedding texts one at a time pays the request overhead once per text, while a batch call pays it once per request, so total time scales with the number of API calls rather than the number of texts (see the comparison table below).
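The difference is visible directly in code. A sketch using only the API shown above; the sequential variant wraps each text in a one-element array purely for comparison:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class EmbeddingComparison
{
    // Sequential: one request per text (N API calls).
    public static async Task<List<float[]>> EmbedSequentially(string[] texts)
    {
        var results = new List<float[]>(texts.Length);
        foreach (string text in texts)
        {
            float[][] single = await new[] { text }
                .GENEmbed()
                .ExecuteAsync();
            results.Add(single[0]);
        }
        return results;
    }

    // Batch: one request for all texts (1 API call).
    public static async Task<float[][]> EmbedAsBatch(string[] texts)
    {
        return await texts
            .GENEmbed()
            .ExecuteAsync();
    }
}
```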
### Batch Size Recommendations

Different providers have different per-request limits:
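To stay under those limits, large inputs can be split into fixed-size chunks and embedded chunk by chunk. A sketch, assuming the default batch size of 100 is an illustrative value rather than a package default:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ChunkedEmbedder
{
    // Embeds texts in chunks of at most batchSize per request,
    // keeping each request within the provider's limit.
    public static async Task<float[][]> EmbedInChunks(string[] texts, int batchSize = 100)
    {
        var results = new List<float[]>(texts.Length);

        for (int i = 0; i < texts.Length; i += batchSize)
        {
            int count = Math.Min(batchSize, texts.Length - i);
            string[] chunk = new string[count];
            Array.Copy(texts, i, chunk, 0, count);

            float[][] embeddings = await chunk
                .GENEmbed()
                .ExecuteAsync();

            results.AddRange(embeddings);
        }

        return results.ToArray();
    }
}
```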
## Best Practices

### ✅ Good Practices

* Batch all texts that are available at the same time into a single request.
* Keep batch sizes within your provider's per-request limit (see Provider Limits below).
* Split large datasets into fixed-size chunks and process them batch by batch.

### ❌ Bad Practices

* Calling `.GENEmbed()` once per text in a loop, turning one request into hundreds.
* Sending a batch larger than the provider's limit and letting the request fail.
## Error Handling
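A minimal sketch, assuming the builder propagates exceptions from a failed request; the exact exception type thrown by the package is not documented here, so a general catch is used:

```csharp
using System;
using UnityEngine;

public class SafeEmbedder : MonoBehaviour
{
    public async void EmbedSafely(string[] texts)
    {
        try
        {
            float[][] embeddings = await texts
                .GENEmbed()
                .ExecuteAsync();

            Debug.Log($"Embedded {embeddings.Length} texts.");
        }
        catch (Exception e)
        {
            // Oversized batches, rate limits, or network failures surface here.
            Debug.LogError($"Batch embedding failed: {e.Message}");
        }
    }
}
```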
## Provider Limits

| Provider | Max Batch Size | Notes       |
| -------- | -------------- | ----------- |
| OpenAI   | 2,048 texts    | Per request |
|          | 100 texts      | Per request |
## Performance Comparison

Example: 1000 documents

| Method      | Time   | API Calls |
| ----------- | ------ | --------- |
| Sequential  | ~1000s | 1000      |
| Batch (100) | ~10s   | 10        |
| Batch (500) | ~2s    | 2         |
## Use Cases

| Use Case       | Batch Size          |
| -------------- | ------------------- |
| Small FAQ      | All at once (10-50) |
| Medium Library | 100-200 per batch   |
| Large Dataset  | 500-1000 per batch  |
| Real-time      | Process as needed   |
## Next Steps

* Text Embedding - Single text embeddings