Introduction
Running large language models locally is becoming an increasingly practical option for developers, businesses, and IT teams that want better privacy, predictable costs, and full control over AI workloads. Ollama makes this process much easier by providing a simple way to download, manage, and run LLMs on your own infrastructure. If you want to deploy Ollama on an Ubuntu 22.04 server, this guide walks through the full setup process, from server preparation to launching your first model.
Why Install Ollama on Ubuntu 22.04
Installing Ollama on Ubuntu 22.04 is a smart choice for anyone building a self-hosted AI environment. Ubuntu offers a stable Linux foundation, wide package support, and strong compatibility with server tools. Combined with Ollama, it becomes an efficient platform for local AI inference, testing, and application development.
Key benefits of using Ollama on a server include:
- Improved data privacy by keeping prompts and responses on your own machine
- Lower long-term costs compared to repeated third-party API usage
- Greater control over model versions, updates, and system resources
- Simple model management for local LLM deployment
- Support for CPU-based environments when GPU hardware is not available
Server Prerequisites
Before you begin the Ollama installation, make sure your Ubuntu 22.04 server is ready. You should have SSH access, a user account with sudo privileges, and enough CPU, RAM, and disk space to support the models you plan to run. Larger models require more system resources, so capacity planning is important.
Basic preparation includes:
- Deploying a server with Ubuntu 22.04 installed
- Connecting to the system over SSH
- Updating all packages to current versions
- Installing curl if it is not already available
- Configuring the firewall for SSH and optional remote Ollama access
Connect to Your Ubuntu Server
Use SSH from your local machine to log in to the Ubuntu 22.04 server.
ssh username@your_server_ip
Replace username and your_server_ip with your actual account name and server IP address.
Update the Operating System
It is always best to start with the latest package metadata and security updates. This helps avoid dependency problems during installation.
sudo apt update
sudo apt upgrade -y
Install Required Dependency
Ollama installation relies on curl to retrieve the official setup script. If it is missing, install it with the following command.
sudo apt install curl -y
Configure the Firewall
For basic server security, allow SSH access through ufw before enabling the firewall; otherwise you risk locking yourself out of the session. If you want to access Ollama remotely over the network, also allow port 11434.
sudo ufw allow ssh
sudo ufw enable
sudo ufw allow 11434
This step is especially important if you plan to use Ollama from another workstation, web app, or API client.
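If only a few trusted machines need to reach Ollama, a narrower rule is safer than opening port 11434 to the whole network. A sketch of that alternative, where 192.168.1.0/24 is a placeholder for your own LAN range:

```shell
# Tighter alternative: allow port 11434 only from a trusted subnet.
# 192.168.1.0/24 is a placeholder - substitute your own LAN range.
TRUSTED_SUBNET="192.168.1.0/24"
sudo ufw delete allow 11434
sudo ufw allow from "$TRUSTED_SUBNET" to any port 11434 proto tcp
sudo ufw status numbered
```

The numbered status output makes it easy to delete a rule later by its index if your network layout changes.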
Install Ollama on Ubuntu 22.04
The easiest way to install Ollama is with the official installation script, which detects your operating system and architecture, downloads the appropriate files, and sets up Ollama as a systemd service. If your security policy requires it, download and review the script before running it instead of piping it directly to sh.
curl -fsSL https://ollama.com/install.sh | sh
Once the installation finishes, Ollama should be available on the system and configured to start automatically through systemd.
Verify the Ollama Service
After installation, confirm that the service is active and running correctly.
systemctl status ollama
If everything is working as expected, the service output should show that ollama.service is loaded and active.
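Beyond systemctl, the CLI and the HTTP API give a quick sanity check. A minimal sketch, assuming the default port 11434:

```shell
# Confirm the CLI is on the PATH and the API answers on its default port.
OLLAMA_URL="http://localhost:11434"
ollama --version                      # reports the installed version
curl -s "$OLLAMA_URL/"                # the root endpoint replies "Ollama is running"
curl -s "$OLLAMA_URL/api/version"     # JSON with the server version
```

If the curl calls fail while systemctl reports the service as active, check the logs with journalctl before going further.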
Enable Remote Access to Ollama
By default, Ollama listens only on the loopback interface (127.0.0.1), so it is reachable solely from the server itself. If you want other devices or applications on your network to connect, update the service configuration so it binds to all interfaces.
Open the service override editor:
sudo systemctl edit ollama.service
Add the following configuration to the editor:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Save the file and exit. Then reload the systemd configuration and restart Ollama so the new setting takes effect.
sudo systemctl daemon-reload
sudo systemctl restart ollama
After this change, the Ollama server accepts remote connections on port 11434, provided the firewall allows them. Keep in mind that the Ollama API has no built-in authentication, so avoid exposing it to untrusted networks.
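To confirm that remote access works, query the API from a different machine. A minimal check, assuming the default port and substituting your server's real address for the placeholder:

```shell
# Run this from another machine on the network, not on the server itself.
SERVER_IP="your_server_ip"   # placeholder - replace with the real address
curl -s "http://$SERVER_IP:11434/api/tags"   # lists the models the server has pulled
```

An empty model list in the JSON response is still a success: it means the API is reachable and no models have been downloaded yet.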
Run Your First LLM with Ollama
With the service installed and running, you can download and launch a model directly from the command line. A popular starting point is llama3.1, which offers a solid balance of capability and efficiency for local AI usage.
ollama run llama3.1
If the model is not already stored locally, Ollama will download it automatically before starting an interactive session. Depending on your internet speed and server specifications, the first pull may take some time.
Once the model is ready, you can enter prompts in the terminal and review responses in real time. To leave the interactive session, type /bye.
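The interactive session is not the only way to query a model. A sketch of two one-shot alternatives, assuming llama3.1 is already pulled and the service is on its default port:

```shell
# One-shot prompt without entering the interactive session:
ollama run llama3.1 "Explain what a systemd service is in one sentence."

# The same request over the HTTP API; "stream": false returns a single
# JSON object instead of a token-by-token stream.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "Explain what a systemd service is in one sentence.", "stream": false}'
```

The HTTP form is what you would use from a web app or API client connecting to the server remotely.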
Common Troubleshooting Tips
If the installation does not go smoothly, these checks can help resolve the most common issues.
- Package errors: If Ubuntu package installation fails, repair dependencies and retry the update process.
- Service startup problems: Review the service state and logs to identify permission issues, missing dependencies, or port conflicts.
- Firewall blocks: Confirm that port 11434 is open if remote access is required.
- Insufficient resources: Large language models can consume substantial memory, CPU, and storage. Consider using a smaller model or upgrading the server.
- Model download issues: Verify internet connectivity, available disk space, and correct model naming.
- Compatibility concerns: Keep Ollama updated and confirm that your chosen model works with the installed version.
Useful diagnostic commands include the following:
sudo apt install -f      # repair broken or interrupted package installs
journalctl -u ollama     # inspect the Ollama service logs
sudo ufw status          # confirm which firewall rules are active
Best Practices for Self-Hosted Ollama
To get the most from your Ubuntu Ollama server, follow a few operational best practices:
- Keep Ubuntu and Ollama updated for security and stability
- Monitor CPU, memory, and disk usage regularly
- Restrict network exposure if the service does not need public access
- Use smaller or optimized models when running on CPU-only systems
- Document service changes such as custom environment variables
These steps help maintain reliable local LLM performance and reduce operational risk.
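For routine monitoring, a quick capacity snapshot before pulling a new model takes only standard tools. A sketch, assuming the default model storage location under the ollama service user's home (/usr/share/ollama); adjust the path if you relocated model storage:

```shell
# Quick capacity check before pulling a new model:
free -h                                          # memory headroom
df -h /usr/share/ollama 2>/dev/null || df -h /   # disk space where models are stored
ollama list                                      # models already on disk and their sizes
```

Comparing the free disk space against the download size Ollama reports during a pull helps avoid a failed download halfway through.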
Conclusion
Installing Ollama on Ubuntu 22.04 is a practical way to build a private, self-hosted AI environment without depending entirely on external APIs. With a few setup steps, you can prepare your server, install Ollama, allow remote access if needed, and start running local large language models such as Llama 3.1. For developers and organizations focused on control, privacy, and cost efficiency, Ollama on Ubuntu is a strong foundation for local AI deployment.