
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python

Ollama has become the standard for running Large Language Models (LLMs) locally. In this tutorial, I want to show you the most important things you should know about Ollama.

Watch on YouTube: Ollama Full Tutorial (https://youtu.be/AGAETsxjg0o)

What is Ollama?

Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, configuration, and data into a single Modelfile package. Ollama offers a command-line interface (CLI), a REST API, and a Python/JavaScript SDK, allowing users to download models, run them offline, and even call user-defined functions. Running models locally gives users privacy, removes network latency, and keeps data on the user’s device.

Install Ollama

Visit the official website, https://ollama.com/, to download Ollama. It’s available for macOS, Windows, and Linux.

Linux:

curl -fsSL https://ollama.com/install.sh | sh

macOS:

brew install ollama

Windows: download the .exe installer and run it.

How to Run Ollama

Before running models, it is essential to understand quantization. Ollama typically runs models quantized to 4 bits (q4_0), which significantly reduces memory usage with minimal loss in quality: at 4 bits, a 7B model needs roughly 3.5-4 GB just for its weights, versus about 14 GB at 16-bit precision (a rough sketch of this arithmetic follows the hardware list below).

Recommended Hardware:

  • 7B Models (e.g., Llama 3, Mistral): Require ~8GB RAM (they run on most modern laptops).

  • 13B-30B Models: Require 16-32GB RAM.

  • 70B+ Models: Require 64GB+ RAM or dual GPUs.

  • GPU: An NVIDIA GPU or Apple Silicon (M1/M2/M3) is highly recommended for speed.
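
If you are curious where these numbers come from, here is a rough back-of-the-envelope sketch in Python. It only estimates the memory for the weights themselves; quantization block overhead, the KV cache, and runtime buffers add more on top, which is why ~8GB of RAM is a safer figure for a 7B model:

# Rough, illustrative estimate of memory needed just for the model weights
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    return num_params * bits_per_weight / 8 / 1e9

for label, bits in [("fp16", 16), ("q8_0", 8), ("q4_0", 4)]:
    print(f"7B model at {label}: ~{weight_memory_gb(7e9, bits):.1f} GB")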

Go to the Ollama website, click on “Models”, and select a model to test.

After that, click on the model name and copy the terminal command.

Then open a terminal window and paste the command. For example, for Llama 3 the command is ollama run llama3.

It downloads the model (on the first run) and lets you chat with it immediately.

Ollama CLI — Core Commands

Ollama’s CLI is central to model management. Common commands include:

  • ollama pull — Download a model
  • ollama run — Run a model interactively
  • ollama list or ollama ls — List downloaded models
  • ollama rm — Remove a model
  • ollama create -f — Create a custom model
  • ollama serve — Start the Ollama API server
  • ollama ps — Show running models
  • ollama stop — Stop a running model
  • ollama help — Show help

Advanced Customization: Custom model with Modelfiles

You can “fine-tune” a model’s personality and constraints using a Modelfile. This is similar to a Dockerfile.

  • Create a file named Modelfile
  • Add the following configuration:

# 1. Base the model on an existing one
FROM llama3

# 2. Set the creative temperature (0.0 = precise, 1.0 = creative)
PARAMETER temperature 0.7

# 3. Set the context window size (default is 4096 tokens)
PARAMETER num_ctx 4096

# 4. Define the System Prompt (The AI’s “brain”)
SYSTEM """
You are a Senior Python Backend Engineer.
Only answer with code snippets and brief technical explanations.
Do not be conversational.
"""

FROM defines the base model

SYSTEM sets a system prompt

PARAMETER controls inference behavior

After that, you need to build the model by using this command:

ollama create [change-to-your-custom-name] -f Modelfile

This wraps the model + prompt template together into a reusable package.

Then run it:

ollama run [change-to-your-custom-name]


Ollama Server (Local API)

Ollama can run as a local server that apps can call. To start the server use the command:

ollama serve

It listens on http://localhost:11434 by default.

Raw HTTP

import requests

r = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello Ollama"}],
        "stream": False,  # the chat endpoint streams by default; disable it so r.json() works
    },
)
print(r.json()["message"]["content"])

This lets you embed Ollama into apps or services.
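
The server exposes a few other simple endpoints as well. For example, this small sketch lists the models currently installed locally via the /api/tags endpoint (assuming the server is running on the default port):

import requests

# GET /api/tags returns the models available on this machine
r = requests.get("http://localhost:11434/api/tags")
for model in r.json()["models"]:
    print(model["name"])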

Python Integration

Use Ollama inside Python applications with the official library. Run these commands:

Create and activate a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

Install the official library:

pip install ollama

Use this simple Python code:

import ollama

# This sends a message to the model 'gemma:2b'
response = ollama.chat(model='gemma:2b', messages=[
    {
        'role': 'user',
        'content': 'Write a short poem about coding.',
    },
])

# Print the AI's reply
print(response['message']['content'])

This works over the local API automatically when Ollama is running.
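
If you want the reply to appear token by token, like in the CLI, the library also supports streaming. A minimal sketch using the same gemma:2b model:

import ollama

# stream=True returns a generator of partial responses
stream = ollama.chat(
    model='gemma:2b',
    messages=[{'role': 'user', 'content': 'Write a short poem about coding.'}],
    stream=True,
)

# Print each chunk of the reply as it arrives
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)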

You can also call the local server directly with requests, exactly as shown in the Raw HTTP example above.

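The library also mirrors most of the CLI commands, so you can script model management from Python. A short sketch (the model name is just an example):

import ollama

# Download a model up front (equivalent of `ollama pull`)
ollama.pull('gemma:2b')

# Show which models are installed locally (equivalent of `ollama list`)
print(ollama.list())

# Remove a model you no longer need (equivalent of `ollama rm`)
# ollama.delete('gemma:2b')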

Using Ollama Cloud

Ollama also supports cloud models — useful when your machine can’t run very large models.

First, create an account on https://ollama.com/cloud and sign in. Then, inside the Models page, click on the cloud link and select any model you want to test.

In the models list, you will see models with the -cloud suffix, which means they run in the Ollama cloud.

Click on it and copy the CLI command. Then, inside the terminal, use:

ollama signin

This signs you in to your Ollama account. Once you are signed in, you can run cloud models:

ollama run nemotron-3-nano:30b-cloud
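
As far as I can tell, once you are signed in, a cloud model behaves like any other model name, so the same Python library call should work with the -cloud tag. Treat this as a sketch; the model name is just the one from the example above:

import ollama

# Assumes `ollama signin` has already been run on this machine
response = ollama.chat(
    model='nemotron-3-nano:30b-cloud',
    messages=[{'role': 'user', 'content': 'Explain what Ollama Cloud is in one sentence.'}],
)
print(response['message']['content'])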

Your Own Model in the Cloud

While Ollama is local-first, Ollama Cloud allows you to push your custom models (the ones you built with Modelfiles) to the web to share with your team or use across devices.

  • Create an account at ollama.com.
  • Add your public key (found in ~/.ollama/id_ed25519.pub).
  • Push your custom model:

ollama push your-username/change-to-your-custom-model-name

Conclusion

That is the complete overview of Ollama! It is a powerful tool that gives you total control over AI. If you like this tutorial, please like it and share your feedback in the section below.

Cheers! ;)
