# Ollama

## <mark style="color:blue;">Ollama</mark>

* **Function:** Ollama is a tool for running and managing open-source large language models locally. It bundles model weights, configuration, and a runtime together, and exposes both a command-line interface and a local REST API for downloading, running, and customizing models on your own machine.
* **Focus:** Ollama is geared towards simplicity: a single command downloads and starts a model. More advanced users can still customize models (system prompts, parameters, templates) through Modelfiles.
* **Examples:** Ollama is used by developers prototyping LLM-powered applications against a local endpoint, and by users who want to chat with models privately and offline, without sending data to a hosted API.

{% embed url="https://github.com/jmorganca/ollama" %}

Get up and running with large language models locally.

#### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

#### <mark style="color:red;">Windows</mark>

<mark style="color:red;">**Coming soon! For now, you can install Ollama on Windows via WSL2.**</mark>

(<mark style="color:blue;">WSL2, the Windows Subsystem for Linux 2, is a feature of Windows 10 and 11 that lets you run a real Linux environment directly inside Windows. You can use your favorite Linux tools and commands on a Windows machine without dual-booting or running a virtual machine.)</mark>

#### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

#### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

### Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```
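Beyond the CLI, Ollama also serves a local REST API (by default at `http://localhost:11434`), which `ollama run` uses under the hood. Below is a minimal sketch of calling the `/api/generate` endpoint from Python using only the standard library; the model name and prompt are placeholders, and a local Ollama server with the model already pulled is assumed to be running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

def build_payload(prompt, model="llama2"):
    """Build the JSON body for a non-streaming /api/generate request."""
    return {
        "model": model,    # any model pulled locally, e.g. "llama2"
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }

def generate(prompt, model="llama2"):
    """Send the request to a locally running Ollama server and return the text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running server: print(generate("Why is the sky blue?"))
```

With `"stream": False` the server responds with a single JSON object whose `response` field holds the full completion; omit it to receive a stream of JSON lines instead.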

### Model library

Ollama supports a growing list of open-source models, available at [ollama.ai/library](https://ollama.ai/library).

Here are some example open-source models that can be downloaded:

<table><thead><tr><th width="177">Model</th><th width="132">Parameters</th><th width="165">Size</th><th>Download</th></tr></thead><tbody><tr><td>Llama 2</td><td>7B</td><td>3.8GB</td><td><code>ollama run llama2</code></td></tr><tr><td>Mistral</td><td>7B</td><td>4.1GB</td><td><code>ollama run mistral</code></td></tr><tr><td>Dolphin Phi</td><td>2.7B</td><td>1.6GB</td><td><code>ollama run dolphin-phi</code></td></tr><tr><td>Phi-2</td><td>2.7B</td><td>1.7GB</td><td><code>ollama run phi</code></td></tr><tr><td>Neural Chat</td><td>7B</td><td>4.1GB</td><td><code>ollama run neural-chat</code></td></tr><tr><td>Starling</td><td>7B</td><td>4.1GB</td><td><code>ollama run starling-lm</code></td></tr><tr><td>Code Llama</td><td>7B</td><td>3.8GB</td><td><code>ollama run codellama</code></td></tr><tr><td>Llama 2 Uncensored</td><td>7B</td><td>3.8GB</td><td><code>ollama run llama2-uncensored</code></td></tr><tr><td>Llama 2 13B</td><td>13B</td><td>7.3GB</td><td><code>ollama run llama2:13b</code></td></tr><tr><td>Llama 2 70B</td><td>70B</td><td>39GB</td><td><code>ollama run llama2:70b</code></td></tr><tr><td>Orca Mini</td><td>3B</td><td>1.9GB</td><td><code>ollama run orca-mini</code></td></tr><tr><td>Vicuna</td><td>7B</td><td>3.8GB</td><td><code>ollama run vicuna</code></td></tr><tr><td>LLaVA</td><td>7B</td><td>4.5GB</td><td><code>ollama run llava</code></td></tr></tbody></table>

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
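The download sizes in the table work out to roughly 0.55 bytes per parameter (4-bit quantized weights plus overhead). As a rough rule of thumb, a hypothetical helper like the one below can estimate a model's footprint from its parameter count; the constant is an empirical fit to the table above, and actual sizes vary by quantization level and architecture:

```python
def approx_model_size_gb(params_billion, bytes_per_param=0.55):
    """Rough download/memory footprint for a 4-bit-quantized model.

    0.55 bytes/param is fitted to the table above (Llama 2 7B ~ 3.8 GB,
    13B ~ 7.3 GB, 70B ~ 39 GB); treat the result as an estimate only.
    """
    return params_billion * bytes_per_param

# Sanity check against the table: a 7B model comes out near 3.9 GB.
```

Remember that the RAM guidance above is higher than the file size: the runtime needs headroom for the KV cache and activations on top of the weights themselves.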
