Introduction
ComfyUI is a node-based Stable Diffusion graphical user interface (GUI) that lets you design and execute advanced pipelines using a flowchart-style interface. The interface includes optimizations such as re-executing only the parts of a workflow that change between runs, supports loading model checkpoints, saves workflows as JSON files, and can rebuild full workflows from generated PNG files.
This guide explains how to deploy ComfyUI on a Rcs Cloud GPU server. You will set up the server with all necessary dependencies, install the ComfyUI Manager for further optimizations, and run the ComfyUI application as a system service for deployment in a production environment.
Prerequisites
Before you begin:
- Deploy a fresh Ubuntu 22.04 A100 Rcs GPU Stack server using the Rcs marketplace application with at least 40 GB of GPU RAM.
- Set up a new domain A record that points to the server IP address. For example, comfyui.example.com.
- Access the server using SSH as a non-root sudo user.
- Update the server (example commands are shown below).
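To update the server, you can run the standard Ubuntu package commands (a minimal example that assumes the default apt configuration on the Rcs GPU Stack image):

```console
$ sudo apt update
$ sudo apt upgrade -y
```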
Install ComfyUI
Switch to your user home directory.
```console
$ cd
```
Clone the ComfyUI repository using Git.
```console
$ git clone https://github.com/comfyanonymous/ComfyUI.git
```
Switch to the ComfyUI directory.
```console
$ cd ComfyUI
```
Install the required PyTorch and `xformers` dependency packages.

```console
$ pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121 xformers
```
The above command installs the following packages:
- torch: A machine learning library that provides a flexible and dynamic computational graph for deep learning tasks.
- torchvision: A PyTorch library designed for computer vision tasks. It includes image and video datasets, model architectures, and transformations for tasks such as image classification and object detection.
- torchaudio: A PyTorch library focused on audio processing tasks. It provides audio loading, transformations, and common audio datasets for deep learning applications.
- xformers: A Python library that provides optimized, memory-efficient attention operators and other transformer building blocks. It's built on top of PyTorch and speeds up attention-heavy workloads such as Stable Diffusion image generation.
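Before continuing, you can optionally confirm that the GPU-enabled PyTorch build installed correctly (a quick verification sketch; the exact version string in the output may differ on your server):

```console
$ python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

The command should print the installed PyTorch version followed by `True`, which confirms that PyTorch can access the GPU.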
Install additional dependencies using the `requirements.txt` file.

```console
$ pip install -r requirements.txt
```
Switch to the `checkpoints` directory.

```console
$ cd models/checkpoints
```
Edit the default `put_checkpoints_here` file using a text editor such as Nano.

```console
$ nano put_checkpoints_here
```
Add the following content to the file.
```bash
# Checkpoints

### SDXL
wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors -P ./models/checkpoints/
#wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors -P ./models/checkpoints/

# SD1.5
wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/

# SD2
#wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/
#wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/
```
Save and close the file.
The above configuration downloads the Stable Diffusion XL and Stable Diffusion 1.5 models to your project using the `wget` utility. The download paths passed to `-P` are relative to the main `ComfyUI` directory. To enable additional models such as a VAE, a LoRA, or other fine-tuned models, navigate to Hugging Face, copy the model checkpoint file URL, and add a matching `wget` command to the `put_checkpoints_here` file (an illustrative example is shown below).

Switch back to your main project directory `ComfyUI`.

```console
$ cd /home/example_user/ComfyUI
```

Run the file using `bash` to download the specified models.

```console
$ bash models/checkpoints/put_checkpoints_here
```
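ComfyUI looks for supporting models in dedicated subdirectories under `models/`. For example, to add a VAE or a LoRA, you could append commands like the following to `put_checkpoints_here` and re-run it from the `ComfyUI` directory (an illustrative sketch; replace the placeholder repository and file names with the actual Hugging Face URLs of the models you want):

```bash
# Hypothetical examples -- substitute real Hugging Face download URLs
wget -c https://huggingface.co/<repository>/resolve/main/<vae-file>.safetensors -P ./models/vae/
wget -c https://huggingface.co/<repository>/resolve/main/<lora-file>.safetensors -P ./models/loras/
```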
Set Up ComfyUI as a System Service
Create a new ComfyUI service file.
```console
$ sudo nano /etc/systemd/system/comfyui.service
```
Add the following configurations to the file. Replace `/home/example_user/ComfyUI` with your ComfyUI path, and `example_user` with your actual system username.

```ini
[Unit]
Description=ComfyUI Daemon
After=network.target

[Service]
User=example_user
Group=example_user
WorkingDirectory=/home/example_user/ComfyUI
ExecStart=python3 main.py

[Install]
WantedBy=multi-user.target
```
Save and close the file.
The above configuration creates a new `comfyui` system service that manages the ComfyUI application runtime processes.

Enable the `comfyui` system service to start automatically at boot.

```console
$ sudo systemctl enable comfyui
```
Reload the systemd daemon to apply the configuration changes.

```console
$ sudo systemctl daemon-reload
```
Start the ComfyUI service.
```console
$ sudo systemctl start comfyui
```
View the ComfyUI service status and verify that it's active and running.

```console
$ sudo systemctl status comfyui
```
Your output should look like the one below:
```
● comfyui.service - ComfyUI Daemon
     Loaded: loaded (/etc/systemd/system/comfyui.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-12-04 20:09:27 UTC; 34s ago
   Main PID: 3306 (python)
      Tasks: 6 (limit: 17835)
     Memory: 303.3M
        CPU: 4.039s
     CGroup: /system.slice/comfyui.service
             └─3306 /root/ComfyUI/venv/bin/python main.py

Dec 04 20:09:30 vultr python[3306]: Set vram state to: NORMAL_VRAM
Dec 04 20:09:30 vultr python[3306]: Device: cuda:0 GRID A100D-1-10C MIG 1g.9gb : cudaMallocAsync....
```
Test access to the default ComfyUI port `8188` using the `curl` utility.

```console
$ curl 127.0.0.1:8188
```
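When the service is running, the above command returns the HTML of the ComfyUI web interface. To check only the HTTP status code instead (an optional verification sketch that assumes the default port 8188):

```console
$ curl -s -o /dev/null -w "%{http_code}\n" 127.0.0.1:8188
```

A `200` response confirms that ComfyUI answers requests locally.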
Set Up Nginx as a Reverse Proxy to Securely Expose ComfyUI
To securely expose ComfyUI in production environments, set up the Nginx web server as a reverse proxy that forwards incoming connection requests on the HTTP port 80 to the backend ComfyUI port 8188. This allows you to mask the ComfyUI port and handle all connections securely. Follow the steps below to set up a new Nginx virtual host configuration that forwards connection requests to ComfyUI.
Install Nginx.
```console
$ sudo apt install nginx -y
```
Verify that the Nginx web server is available, active, and running.
```console
$ sudo systemctl status nginx
```
Create a new Nginx virtual host configuration file in the `sites-available` directory.

```console
$ sudo nano /etc/nginx/sites-available/comfyui.conf
```
Add the following configurations to the file. Replace `comfyui.example.com` with your actual domain.

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name comfyui.example.com;

    location / {
        proxy_pass http://127.0.0.1:8188;
    }
}
```
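ComfyUI pushes queue and generation progress updates to the browser over a WebSocket connection. If the interface later loads but never shows progress, you may need to forward the WebSocket upgrade headers as well (a hedged addition based on standard Nginx WebSocket proxying; it is not part of the original configuration):

```nginx
    location / {
        proxy_pass http://127.0.0.1:8188;
        # Forward WebSocket upgrade headers so the ComfyUI interface receives live updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
```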
Save and close the file.
Link the configuration file to the `sites-enabled` directory to activate the new ComfyUI virtual host profile.

```console
$ sudo ln -s /etc/nginx/sites-available/comfyui.conf /etc/nginx/sites-enabled/
```
Test the Nginx configuration for errors.
```console
$ sudo nginx -t
```
Output:
```shell
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
Restart Nginx to apply changes.
```console
$ sudo systemctl restart nginx
```
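To confirm that Nginx now forwards requests to ComfyUI, you can test the proxy locally (an optional check; replace comfyui.example.com with the domain you set in the configuration above):

```console
$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: comfyui.example.com" http://127.0.0.1
```

The command should print `200` when the request reaches the backend ComfyUI service.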
Security
By default, Uncomplicated Firewall (UFW) is active on Rcs Ubuntu servers. To enable connections to the HTTP port 80 and the HTTPS port 443, allow both ports through your firewall configuration so that you can access the ComfyUI interface, as described in the steps below.
View the UFW firewall table to verify it's active on your server.
```console
$ sudo ufw status
```
When the UFW status is `inactive`, run the following command to enable the firewall.

```console
$ sudo ufw enable
```
Allow the HTTP port `80` through the firewall.

```console
$ sudo ufw allow 80/tcp
```
Allow the HTTPS port `443`.

```console
$ sudo ufw allow 443/tcp
```
Reload the firewall rules to save changes.
```console
$ sudo ufw reload
```
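To confirm the new rules, view the firewall table again and verify that `80/tcp` and `443/tcp` are listed with the `ALLOW` action.

```console
$ sudo ufw status
```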
Secure ComfyUI with Valid Let's Encrypt SSL Certificates
SSL Certificates encrypt the connection between users and the backend ComfyUI server. To secure ComfyUI in a production environment, generate valid SSL certificates using a trusted authority such as Let's Encrypt. Follow the steps in this section to install the Certbot Let's Encrypt client to request SSL certificates using your ComfyUI domain name.
Install the Certbot Let's Encrypt client using Snap.
```console
$ sudo snap install --classic certbot
```
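If the `certbot` command is not found after installation, link the Snap binary into your system path (an optional step from the standard Certbot Snap instructions):

```console
$ sudo ln -s /snap/bin/certbot /usr/bin/certbot
```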
Generate a new SSL certificate using your domain. Replace `comfyui.example.com` with your actual domain name, and `user@example.com` with your active email address.

```console
$ sudo certbot --nginx -d comfyui.example.com -m user@example.com --agree-tos
```
Verify that Certbot auto-renews the SSL certificate upon expiry.
```console
$ sudo certbot renew --dry-run
```
If the above command completes successfully, Certbot automatically renews your SSL certificate before it expires every 90 days.
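You can optionally verify that HTTPS requests reach ComfyUI through Nginx (an optional check; replace comfyui.example.com with your actual domain):

```console
$ curl -I https://comfyui.example.com
```

The response headers should report an HTTP `200` status over a valid TLS connection.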
Generate Images using ComfyUI
Visit your ComfyUI domain name using a web browser such as Chrome to access the application interface.
https://comfyui.example.com
By default, ComfyUI uses the text-to-image workflow. If the workflow has changed, click Load Default in the floating right panel to switch back to the default workflow.
To generate images, click the Load Checkpoint node drop-down and select your target model. For example, select a Stable Diffusion checkpoint such as `sd_xl_base_1.0.safetensors`. If the node does not list any models, verify that you correctly added your model URLs to the `put_checkpoints_here` file and that the downloads completed.

Navigate to the prompt nodes, and enter a main prompt and a negative prompt to influence your generated image.
Click Queue Prompt in the right bottom floating bar to start the image generation process.
Wait for a few seconds for the image generation process to complete. When ready, view your generated image in the Save Image node.
Enable the ComfyUI Manager
ComfyUI Manager is a custom node that provides a user-friendly interface for managing other custom nodes. It allows you to install, update, remove, enable, and disable custom nodes without managing them manually on your server. This saves time and development effort when working with different custom nodes. Follow the steps below to integrate the ComfyUI Manager into your application.
In your server terminal session, switch to the ComfyUI application directory.
```console
$ cd /home/example_user/ComfyUI
```
Stop the ComfyUI system service.
```console
$ sudo systemctl stop comfyui
```
Switch to the `custom_nodes` directory.

```console
$ cd custom_nodes
```
Clone the ComfyUI manager repository using Git.
```console
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```
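The ComfyUI-Manager repository ships its own `requirements.txt`. If the Manager button does not appear after you restart the service, install those dependencies as well (a hedged troubleshooting step; they may already be present on your server):

```console
$ pip install -r ComfyUI-Manager/requirements.txt
```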
Start the ComfyUI system service.
```console
$ sudo systemctl start comfyui
```
Access your ComfyUI application in your web browser
https://comfyui.example.com
Click Manager in the floating right bottom bar.

Verify that the ComfyUI Manager interface loads correctly.
Click Install Custom Nodes or Install Models to access the installer dialog. Then, install any additional nodes or models you'd like to enable in your ComfyUI application.
Common Nodes
ComfyUI workflows are built from nodes, modular units that each perform a specific image generation function. All nodes connect through links to form a workflow. Each node processes its input data and passes the output to the next node for further processing. This modular approach empowers you to create diverse image generation workflows and experiment with multiple settings to generate high-quality results. Below are the most common ComfyUI node types you can configure in your application.
- Input nodes: Provide data to other nodes, such as images, text prompts, and random numbers.
- Processing nodes: Manipulate the data provided by input nodes, such as resizing images, applying filters, and generating prompts.
- Output nodes: Save results from the image generation process, such as saving images to disk or displaying them in a preview window.
- Load Checkpoint: Loads a pre-trained Stable Diffusion model into ComfyUI.
- CLIP Encode: Encodes text prompts into CLIP embeddings.
- Prompt Node: Provides text prompts to the model.
- Negative Prompt Node: Provides negative prompts to the model to guide the image generation process.
- VAE Encode: Converts images to latent representations.
- VAE Decode: Converts latent representations to images.
- Random Seed Node: Generates random seeds for the image generation samplers.
- Empty Latent Image Node: Creates a blank latent image of a specified size for use in other image generation nodes.
- Scale Node: Scales images to a desired size.
- Save Image Node: Saves generated images to disk.
- Preview Node: Displays generated images in a preview window.
Conclusion
In this guide, you have set up a ComfyUI server, installed all necessary dependencies, and accessed ComfyUI to generate images in a production environment. Additionally, you installed the ComfyUI Manager to extend the application nodes and models. For more information and configuration options, visit the ComfyUI community documentation.