ComfyUI is a powerful node-based interface for running Stable Diffusion workflows, and Ubuntu 24.04 is an excellent foundation for building a reliable AI workstation, render node, or headless image generation server. This guide explains how to install ComfyUI on Ubuntu 24.04, prepare NVIDIA GPU support, configure a Python virtual environment, and set up automatic startup with systemd. The process is well suited for dedicated AI machines as well as homelab environments that need stable, repeatable deployments.
This setup is especially useful for systems equipped with NVIDIA RTX GPUs, including cards commonly used for local inference and image generation. Whether you are deploying ComfyUI on a personal workstation or a remote Ubuntu server, the steps below will help you create a clean and maintainable installation.
Install required Ubuntu packages
Start by updating your package list and installing the base tools needed for ComfyUI, Python environments, and system management.
sudo apt update
sudo apt install git python3 python3-venv python3-dev build-essential libgl1 ssh systemd-timesyncd -y
These packages provide Git for cloning the project, Python tooling for the virtual environment, development libraries for Python packages, and essential system utilities commonly needed on Ubuntu AI servers.
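Before moving on, a quick sanity check can confirm the tools the next steps depend on are actually available. This is an optional helper, not part of the ComfyUI project itself:

```shell
# Confirm the tools used in the following steps are on PATH.
for tool in git python3; do
  command -v "$tool" >/dev/null 2>&1 && echo "$tool OK" || echo "$tool MISSING"
done
# python3-venv: the venv module prints usage text even before any environment exists.
python3 -m venv --help >/dev/null 2>&1 && echo "venv module OK"
```

If any line reports MISSING, re-run the apt install command above before continuing.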
Install NVIDIA drivers on Ubuntu 24.04
For the best ComfyUI performance, your NVIDIA drivers should be installed correctly before setting up PyTorch or testing GPU acceleration. First, check which GPU driver packages are available for your system.
sudo ubuntu-drivers list --gpgpu
Then install the recommended server driver version and supporting NVIDIA utilities. Replace the driver version shown below if your hardware requires a different package.
sudo apt update
sudo ubuntu-drivers install --gpgpu nvidia:580-server
sudo apt install nvidia-utils-580-server
sudo apt install nvidia-cuda-toolkit
After the driver installation completes, reboot the machine so the new kernel modules load properly.
Once the system is back online, confirm that Ubuntu can see the GPU.
sudo nvidia-smi
If the command returns your GPU model, driver version, and VRAM details, the NVIDIA installation is working correctly. For more advanced driver configurations, consult the official Ubuntu documentation at https://documentation.ubuntu.com/server/how-to/graphics/install-nvidia-drivers/index.html.
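For scripted setups, the same check can be wrapped so it degrades gracefully when the driver is not yet loaded. This is an optional sketch; the `--query-gpu` fields shown are standard nvidia-smi options:

```shell
# Print GPU name, driver version, and total VRAM if the driver is loaded,
# otherwise explain what is still missing.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found - install the NVIDIA driver and reboot first"
fi
```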
Clone the ComfyUI repository
A consistent directory structure makes maintenance easier, especially if you manage multiple machines. In this example, ComfyUI is installed under ~/src, but you can choose another location if you prefer.
git clone https://github.com/comfyanonymous/ComfyUI.git ~/src/comfyanonymous/ComfyUI
This folder will act as the main application directory for your ComfyUI installation.
Create the Python virtual environment
Using a virtual environment keeps ComfyUI dependencies isolated from the rest of the operating system. That makes upgrades, troubleshooting, and package management much simpler.
cd ~/src/comfyanonymous/ComfyUI
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip wheel
pip install -r requirements.txt
At this point, the core Python dependencies for ComfyUI should be installed in the virtual environment.
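To confirm the virtual environment is actually the one in use (rather than the system Python), you can ask the interpreter directly. Inside a venv, `sys.prefix` differs from `sys.base_prefix`:

```shell
# Assumes the .venv from the previous step has been activated.
python3 - <<'EOF'
import sys
print("venv active" if sys.prefix != sys.base_prefix else "venv NOT active")
EOF
```

If this prints "venv NOT active", re-run the `source .venv/bin/activate` step before installing anything further.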
Install PyTorch with GPU support
ComfyUI's requirements.txt normally pulls in PyTorch during the previous step, but it is still worth verifying that a GPU-enabled (CUDA) build is present rather than a CPU-only one, since a CPU fallback makes most ComfyUI workflows dramatically slower.
pip install torch torchvision
If you need a PyTorch build matched to a specific CUDA version, install it from the corresponding PyTorch package index, for example by appending --index-url https://download.pytorch.org/whl/cu121 (adjust the cu version to your driver) to the pip command.
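A quick way to check which build is installed and whether it can reach the GPU (run with the venv active):

```shell
# Report the installed torch version and whether CUDA is usable.
python3 - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "- CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
EOF
```

"CUDA available: False" with a working `nvidia-smi` usually means a CPU-only wheel was installed and should be replaced with a CUDA build.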
Prepare folders for models, outputs, and shared workflows
Many users prefer to separate generated content and reusable assets from the application code. This approach also works well if you sync data between multiple ComfyUI nodes using tools such as Syncthing.
mkdir ~/ComfyUI
ln -s ~/src/comfyanonymous/ComfyUI/models ~/ComfyUI/models
ln -s ~/src/comfyanonymous/ComfyUI/output ~/ComfyUI/output
ln -s ~/src/comfyanonymous/ComfyUI/custom_nodes ~/ComfyUI/custom_nodes
ln -s ~/src/comfyanonymous/ComfyUI/user/default/workflows ~/ComfyUI/workflows
This creates a single top-level folder that can be easier to back up, sync, or manage across several Ubuntu systems.
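An optional check can confirm each symlink resolves to a real directory (the workflows directory may not exist until ComfyUI has saved a workflow at least once):

```shell
# Verify the symlinks under ~/ComfyUI point at existing targets.
for d in models output custom_nodes workflows; do
  if [ -e "$HOME/ComfyUI/$d" ]; then
    echo "$d -> $(readlink -f "$HOME/ComfyUI/$d")"
  else
    echo "$d is missing or points at a nonexistent target"
  fi
done
```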
Run ComfyUI manually for the first test
Before configuring auto-start, launch ComfyUI manually to verify that the installation works as expected.
cd ~/src/comfyanonymous/ComfyUI
source .venv/bin/activate
python3 main.py --listen 0.0.0.0
The --listen 0.0.0.0 option allows other devices on your network to access the ComfyUI web interface. If your Ubuntu machine has the IP address 192.168.1.123, you can open ComfyUI in a browser at http://192.168.1.123:8188.
If you are running ComfyUI locally on the same machine and do not need network access, you can omit the listen flag and use http://127.0.0.1:8188 instead.
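You can also probe the web interface from the shell, which is handy on headless servers. This sketch assumes ComfyUI is still running and listening on the default port 8188:

```shell
# Probe the ComfyUI web interface and report the HTTP status.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 http://127.0.0.1:8188 2>/dev/null || true)
if [ "$code" = "200" ]; then
  echo "ComfyUI is up (HTTP $code)"
else
  echo "No response on port 8188 (got '$code') - is ComfyUI running?"
fi
```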
Create a dedicated user for the ComfyUI service
For improved isolation and cleaner permissions, you can run ComfyUI under a separate Linux user. This is a good choice for shared systems or dedicated AI servers. If the machine is only used by one person and simplicity is your priority, you may choose to run the service under your own account instead.
sudo groupadd comfyui
sudo useradd -m -g comfyui comfyui
Next, add your current user to the same group so you can still manage the files. Note that the new group membership only takes effect after you log out and back in.
sudo usermod -a -G comfyui $(whoami)
Then update ownership and permissions for the ComfyUI installation directory.
sudo chown -R comfyui: ~/src/comfyanonymous/ComfyUI
sudo chmod -R 775 ~/src/comfyanonymous/ComfyUI
Set up ComfyUI to start automatically with systemd
Using systemd ensures ComfyUI launches when Ubuntu boots and restarts automatically if the process stops unexpectedly. This is ideal for remote servers, render nodes, and unattended homelab systems.
Create the service definition file:
sudo nano /etc/systemd/system/comfyui.service
Add the following configuration, replacing /home/marc with the correct absolute path for your environment. Do not use ~ in a systemd service file.
The service should include these settings:
- Description set to ComfyUI Service
- Network dependency on network-online.target
- User set to comfyui
- Working directory pointing to your ComfyUI installation
- ExecStart using the virtual environment's Python binary and main.py --listen 0.0.0.0
- Restart policy set to always
- WantedBy set to multi-user.target
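Putting those settings together, a complete unit file might look like the following. The paths assume the ~/src layout used in this guide and a home directory of /home/marc; adjust both to match your system.

```ini
[Unit]
Description=ComfyUI Service
After=network-online.target
Wants=network-online.target

[Service]
User=comfyui
WorkingDirectory=/home/marc/src/comfyanonymous/ComfyUI
ExecStart=/home/marc/src/comfyanonymous/ComfyUI/.venv/bin/python main.py --listen 0.0.0.0
Restart=always

[Install]
WantedBy=multi-user.target
```

Pairing Wants=network-online.target with After= ensures the service both requests and waits for network availability before ComfyUI starts listening.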
After saving the service file, reload systemd and enable the service.
sudo systemctl daemon-reload
sudo systemctl enable comfyui
sudo systemctl start comfyui
To verify the service is active, run:
sudo systemctl status comfyui
If the status output shows the service as active (running), ComfyUI should now be available through the same browser URL used during manual testing. If it failed to start, inspect the logs with sudo journalctl -u comfyui.
Why this setup works well for AI servers and homelabs
This ComfyUI installation method is practical for users who want a dependable Ubuntu 24.04 deployment with strong GPU compatibility and easy maintenance. Key advantages include:
- Clean separation between system packages and Python dependencies
- Reliable NVIDIA GPU support for AI image generation workflows
- Automatic startup with systemd after reboot
- A simple directory layout for syncing models, outputs, and custom nodes
- Good suitability for headless workstations, render nodes, and remote access setups
Conclusion
Installing ComfyUI on Ubuntu 24.04 is straightforward when you approach it in stages: prepare the operating system, install NVIDIA drivers, configure Python, test the application manually, and finally enable automatic startup with systemd. The result is a stable and scalable AI image generation environment that works well on local machines and network-accessible servers alike.
If you plan to use ComfyUI regularly for Stable Diffusion workflows, custom nodes, or distributed rendering, this Ubuntu-based setup provides a strong foundation for performance, uptime, and long-term manageability.