How to Deploy ComfyUI on Cloud GPU

Introduction

ComfyUI is a node-based graphical user interface (GUI) for Stable Diffusion that lets you design and execute advanced image generation pipelines as a flowchart. The interface offers optimizations such as re-executing only the parts of a workflow that change between runs, supports loading model checkpoints, saves workflows as JSON files, and can rebuild a full workflow from a generated PNG file.

This guide explains how to deploy ComfyUI on a Rcs Cloud GPU server. You will set up the server with all necessary dependencies, install the ComfyUI Manager for further optimizations, and run the ComfyUI application as a system service for deployment in a production environment.

Prerequisites

Before you begin:

  • Deploy a fresh Ubuntu 22.04 A100 Rcs GPU Stack server using the Rcs marketplace application with at least 40 GB of GPU RAM.
  • Set up a new domain A record that points to the server IP address. For example, comfyui.example.com.
  • Access the server using SSH as a non-root sudo user.
  • Update the server.

Install ComfyUI

  1. Switch to your user home directory.

    console
    $ cd
    
  2. Clone the ComfyUI repository using Git.

    console
    $ git clone https://github.com/comfyanonymous/ComfyUI.git
    
  3. Switch to the ComfyUI directory.

    console
    $ cd ComfyUI
    
  4. Install the required PyTorch and xformers dependency packages.

    console
    $ pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121 xformers
    

    The above command installs the following packages:

    • torch: A machine learning library that provides a flexible and dynamic computational graph for deep learning tasks.
    • torchvision: A PyTorch library designed for computer vision tasks. It includes utilities for image and video datasets designed to handle image classification and object detection.
    • torchaudio: A PyTorch library focused on audio processing tasks. It provides audio loading, transformations, and common audio datasets for deep learning applications.
    • xformers: A Python library that implements various transformer-based models and attention mechanisms. It's built on top of PyTorch and offers a convenient way to work and experiment with transformer architectures.
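    Before continuing, you can optionally confirm that the installation succeeded. This quick check is a sketch that is not part of the original guide; it reports the installed torch version and whether PyTorch can see the GPU:

    ```shell
    # Optional sanity check: print the torch version and CUDA availability,
    # or a clear message if the install did not complete.
    python3 - <<'EOF'
    try:
        import torch
        print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("torch is not installed; re-run the pip install step")
    EOF
    ```

    If `CUDA available: False` is reported on a GPU server, verify that the NVIDIA driver is loaded before continuing.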
  5. Install additional dependencies using the requirements.txt file.

    console
    $ pip install -r requirements.txt
    
  6. Switch to the checkpoints directory.

    console
    $ cd models/checkpoints
    
  7. Edit the default put_checkpoints_here file using a text editor such as Nano.

    console
    $ nano put_checkpoints_here
    
  8. Add the following content to the file.

    bash
    # Checkpoints
    
    ### SDXL
    
    wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors -P ./models/checkpoints/
    #wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors -P ./models/checkpoints/
    
    # SD1.5
    
    wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/
    
    # SD2
    
    #wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/
    #wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/
    

    Save and close the file.

    The above configuration downloads the Stable Diffusion XL and Stable Diffusion 1.5 models to your project using the wget utility. To enable additional models such as VAE, LoRA, or other fine-tuned models, navigate to Hugging Face and copy the model checkpoint file URL. Then, add the full URL to the put_checkpoints_here file.
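    For example, to also download a commonly used standalone VAE, you could append lines such as the ones below to the file. The Hugging Face URL is an assumption based on the published location of the sd-vae-ft-mse release, so verify it before use. Note that VAE files belong in models/vae rather than models/checkpoints, and that the script should run from the ComfyUI root so the relative -P path resolves.

    ```bash
    ### VAE (optional; verify the URL on Hugging Face before use)

    wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/
    ```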

  9. Switch back to the main ComfyUI directory, then run the file using bash to download the specified models. Running the script from the project root matters because the wget commands save files to the relative ./models/checkpoints/ path.

    console
    $ cd ../..
    $ bash models/checkpoints/put_checkpoints_here
    
  10. Switch to your main project directory ComfyUI.

    console
    $ cd /home/example_user/ComfyUI
    

Set Up ComfyUI as a System Service

  1. Create a new ComfyUI service file.

    console
    $ sudo nano /etc/systemd/system/comfyui.service
    
  2. Add the following configurations to the file. Replace /home/example_user/ComfyUI with your ComfyUI path, and example_user with your actual system username.

    ini
    [Unit]
    Description=ComfyUI Daemon
    After=network.target
    
    [Service]
    User=example_user
    Group=example_user
    WorkingDirectory=/home/example_user/ComfyUI
    ExecStart=/usr/bin/python3 main.py
    
    [Install]
    WantedBy=multi-user.target
    

    Save and close the file.

    The above configuration creates a new comfyui system service that manages the ComfyUI application runtime processes.
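    If the ComfyUI process ever exits unexpectedly, the unit above will not restart it. As an optional hardening sketch that is not part of the original guide, you could extend the [Service] section with automatic restarts:

    ```ini
    [Service]
    Restart=on-failure
    RestartSec=5
    ```

    After editing the unit file, run sudo systemctl daemon-reload before restarting the service so systemd picks up the change.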

  3. Reload the systemd daemon to apply the new service file.

    console
    $ sudo systemctl daemon-reload
    
  4. Enable the comfyui system service to start automatically at boot.

    console
    $ sudo systemctl enable comfyui
    
  5. Start the ComfyUI service.

    console
    $ sudo systemctl start comfyui
    
  6. View the ComfyUI service status and verify that it's active and running.

    console
    $ sudo systemctl status comfyui
    

    Your output should look like the one below:

    ● comfyui.service - ComfyUI Daemon
    
         Loaded: loaded (/etc/systemd/system/comfyui.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2023-12-04 20:09:27 UTC; 34s ago
       Main PID: 3306 (python)
          Tasks: 6 (limit: 17835)
         Memory: 303.3M
            CPU: 4.039s
         CGroup: /system.slice/comfyui.service
                 └─3306 /root/ComfyUI/venv/bin/python main.py
    
    Dec 04 20:09:30 vultr python[3306]: Set vram state to: NORMAL_VRAM
    Dec 04 20:09:30 vultr python[3306]: Device: cuda:0 GRID A100D-1-10C MIG 1g.9gb : cudaMallocAsync....
  7. Test access to the default ComfyUI port 8188 using the curl utility.

    console
    $ curl 127.0.0.1:8188
    
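    If you script this check, for example in a deployment pipeline, curl's -w '%{http_code}' option makes it easy to assert on the status code. The helper below is a hypothetical sketch, not part of the original guide:

    ```shell
    # Hypothetical helper: succeed only when the URL answers with HTTP 200.
    check_http() {
        code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
        [ "$code" = "200" ]
    }

    # Usage against the local ComfyUI port:
    #   check_http http://127.0.0.1:8188/ && echo "ComfyUI is up"
    ```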

Set Up Nginx as a Reverse Proxy to Securely Expose ComfyUI

To securely expose ComfyUI in production environments, set up the Nginx web server as a reverse proxy to forward incoming connection requests on the HTTP port 80 to the backend ComfyUI port 8188. This allows you to mask your ComfyUI ports and securely handle all connections. Follow the steps below to set up a new Nginx virtual host configuration to forward connection requests to ComfyUI.

  1. Install Nginx.

    console
    $ sudo apt install nginx -y
    
  2. Verify that the Nginx web server is available, active, and running.

    console
    $ sudo systemctl status nginx
    
  3. Create a new Nginx virtual host configuration file in the sites-available directory.

    console
    $ sudo nano /etc/nginx/sites-available/comfyui.conf
    
  4. Add the following configurations to the file. Replace comfyui.example.com with your actual domain.

    nginx
    server {
        listen 80;
        listen [::]:80;
    
        server_name comfyui.example.com;
    
        location / {
            proxy_pass http://127.0.0.1:8188;
        }
    }
    

    Save and close the file.
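    Note that the ComfyUI frontend relies on a WebSocket connection for live progress updates. If the interface loads through the proxy but never shows generation progress, extend the location block with the standard WebSocket upgrade headers, a common pattern for proxying WebSocket backends:

    ```nginx
    location / {
        proxy_pass http://127.0.0.1:8188;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
    ```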

  5. Link the configuration file to the sites-enabled directory to activate the new ComfyUI virtual host profile.

    console
    $ sudo ln -s /etc/nginx/sites-available/comfyui.conf /etc/nginx/sites-enabled/
    
  6. Test the Nginx configuration for errors.

    console
    $ sudo nginx -t
    

    Output:

    shell
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    
  7. Restart Nginx to apply changes.

    console
    $ sudo systemctl restart nginx
    

Security

By default, Uncomplicated Firewall (UFW) is active on Rcs Ubuntu servers. To access the ComfyUI interface, allow connections to the HTTP port 80 and the HTTPS port 443 through your firewall configuration as described in the steps below.

  1. View the UFW firewall table to verify it's active on your server.

    console
    $ sudo ufw status
    

    When the UFW status is inactive, run the following command to enable the firewall.

    console
    $ sudo ufw enable
    
  2. Allow the HTTP port 80 through the firewall.

    console
    $ sudo ufw allow 80/tcp
    
  3. Allow the HTTPS port 443.

    console
    $ sudo ufw allow 443/tcp
    
  4. Reload the firewall rules to save changes.

    console
    $ sudo ufw reload
    

Secure ComfyUI with Valid Let's Encrypt SSL Certificates

SSL Certificates encrypt the connection between users and the backend ComfyUI server. To secure ComfyUI in a production environment, generate valid SSL certificates using a trusted authority such as Let's Encrypt. Follow the steps in this section to install the Certbot Let's Encrypt client to request SSL certificates using your ComfyUI domain name.

  1. Install the Certbot Let's Encrypt client using Snap.

    console
    $ sudo snap install --classic certbot
    
  2. Generate a new SSL certificate using your domain. Replace comfyui.example.com with your actual domain name, and user@example.com with your active email address.

    console
    $ sudo certbot --nginx -d comfyui.example.com -m user@example.com --agree-tos
    
  3. Verify that Certbot auto-renews the SSL certificate upon expiry.

    console
    $ sudo certbot renew --dry-run
    

    If the above command completes successfully, Certbot automatically renews your SSL certificate before it expires every 90 days.

Generate Images using ComfyUI

  1. Visit your ComfyUI domain name using a web browser such as Chrome to access the application interface.

    https://comfyui.example.com

    Image of ComfyUI default view

    By default, ComfyUI loads the text-to-image workflow. If the workflow has been changed, click Load Default in the floating right panel to restore the default workflow.

  2. To generate images, click the Load Checkpoint node drop-down and select your target model. For example, select the Stable Diffusion Checkpoint model.

    Image of selecting a model

    If your node does not contain any models, verify that you correctly applied your model URLs in your put_checkpoints_here file.

  3. Navigate to the Prompt nodes, and enter a main prompt and a negative prompt to influence your generated image.

    Image of prompt and negative prompt node

  4. Click Queue Prompt in the bottom-right floating bar to start the image generation process.

  5. Wait for a few seconds for the image generation process to complete. When ready, view your generated image in the Save Image node.

    Image of queue section and generate image node

Enable the ComfyUI Manager

ComfyUI Manager is a custom node that provides a user-friendly interface for managing other custom nodes. It allows you to install, update, remove, enable, and disable custom nodes without managing them manually on your server. This saves time and development effort when working with different custom nodes. Follow the steps below to integrate the ComfyUI Manager into your application.

  1. In your server terminal session, switch to the ComfyUI application directory.

    console
    $ cd /home/example_user/ComfyUI
    
  2. Stop the ComfyUI system service.

    console
    $ sudo systemctl stop comfyui
    
  3. Switch to the custom_nodes directory.

    console
    $ cd custom_nodes
    
  4. Clone the ComfyUI Manager repository using Git.

    console
    $ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
    
  5. Start the ComfyUI system service.

    console
    $ sudo systemctl start comfyui
    
  6. Access your ComfyUI application in your web browser.

    https://comfyui.example.com
  7. Click Manager in the bottom-right floating bar.

    Image of ComfyUI Manager option

  8. Verify that the ComfyUI Manager interface loads correctly.

    Image of ComfyUI Manager view

  9. Click Install Custom Nodes or Install Models to access the installer dialog. Then, install any additional models you'd like to enable in your ComfyUI application.

    Image of Install node

Common Nodes

ComfyUI contains nodes that are modular units designed to execute specific image generation functions. All nodes connect through links to form a workflow. Each node processes the input data and passes the output to the next node for further processing. This modular approach empowers you to create diverse image generation workflows and experiment with multiple settings to generate high-quality results. Below are the most common ComfyUI node types you can configure in your application.

  • Input nodes: Export data to other nodes, such as images, text prompts, and random numbers.
  • Processing nodes: Manipulate the data provided by input nodes, such as resizing images, applying filters, and generating prompts.
  • Output nodes: Save results from the image generation process, such as saving images to disk or displaying them in a preview window.
  • Load Checkpoint: Loads a pre-trained Stable Diffusion model into ComfyUI.
  • CLIP Encode: Encodes text prompts into CLIP embeddings.
  • Prompt Node: Provides text prompts to the model.
  • Negative Prompt Node: Provides negative prompts to the model to guide the image generation process.
  • VAE Encode: Converts images to latent representations.
  • VAE Decode: Converts latent representations to images.
  • Random Seed Node: Generates random seeds for the image generation samplers.
  • Empty Latent Image Node: Creates a blank latent image of a specified size for use in other image generation nodes.
  • Scale Node: Scales images to a desired size.
  • Save Image Node: Saves generated images to disk.
  • Preview Node: Displays generated images in a preview window.
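Beyond the graphical interface, ComfyUI also exposes an HTTP API: the same node graph can be expressed as JSON, where each key is a node id, each value names the node's class_type, and inputs reference other nodes as [node_id, output_index] pairs. The sketch below writes such a graph to a file; the node ids, prompt text, and sampler settings are illustrative, and the ckpt_name must match a checkpoint you actually downloaded.

```shell
# Write a minimal text-to-image graph in ComfyUI's API format.
# Connections between nodes are ["source_node_id", output_index] pairs.
cat > workflow_api.json <<'EOF'
{
  "prompt": {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a lighthouse at sunset"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "api_example"}}
  }
}
EOF

# Queue it on a running server (uncomment once ComfyUI is up):
# curl -s -X POST -H "Content-Type: application/json" \
#      -d @workflow_api.json http://127.0.0.1:8188/prompt
```

This mirrors the GUI workflow above: the checkpoint loader feeds the CLIP prompt encoders and the sampler, and the decoded latent is saved by the Save Image node.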

Conclusion

In this guide, you have set up a ComfyUI server, installed all necessary dependencies, and accessed ComfyUI to generate images in a production environment. Additionally, you installed the ComfyUI Manager to extend the application nodes and models. For more information and configuration options, visit the ComfyUI community documentation.

