
Self-Hosting with Ollama

Run popular LLMs such as Llama, Gemma, and dozens of others locally, for free.

Install Ollama App

Install the Ollama application on your machine: press the Download button at ollama.com to download the app, then make sure it is running after installation.
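If you want to confirm the app is running before moving on, you can query the local Ollama server from the Terminal. This is a minimal check assuming the default address; Ollama listens on port 11434 unless you have configured a different host or port.

```bash
# A running Ollama server replies with the plain-text message
# "Ollama is running". Adjust the URL if you changed the default port.
curl http://localhost:11434
```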

Install Ollama Model

Install at least one Ollama model, either manually from the Ollama website or with the Terminal command ollama run <model_name>. For example, to install the Llama 3.2 model, use the command ollama run llama3.2.
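For example, the following Terminal session installs a model and verifies the download; llama3.2 stands in for whichever model you choose.

```bash
# Download the Llama 3.2 model and start an interactive session.
# "ollama run" pulls the model first if it is not installed yet.
ollama run llama3.2

# Alternatively, download the model without starting a session.
ollama pull llama3.2

# List the models installed locally to verify the download.
ollama list
```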

Test & Configure in Unity

Open the Ollama settings under Edit ▶ Preferences ▶ AI Dev Kit ▶ Ollama.

To verify that your server is reachable, press the Test Connection button.
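If Test Connection fails, it can help to probe the server directly from the Terminal. The sketch below assumes the default address (localhost:11434) and the llama3.2 model installed in the previous step; substitute your own model name as needed.

```bash
# List the models the local server has installed.
curl http://localhost:11434/api/tags

# Run a one-off, non-streaming generation to confirm end-to-end
# inference. Assumes the llama3.2 model is installed.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello.", "stream": false}'
```

If both requests succeed but Unity still cannot connect, double-check the host and port configured in the Ollama preferences pane.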
