Introduction
Large language models (LLMs) are artificial intelligence models that process natural language inputs and generate human-like outputs. They power applications such as chatbots, virtual assistants, and customer service tools. Rust is a statically and strongly typed programming language that focuses on performance and safety, which makes it a good choice for developing chatbots and other LLM-powered applications.
This article explains how to build a chatbot in Rust with large language models on a Rcs Cloud GPU server. You will use the Leptos web framework to build a web application in Rust, then integrate a large language model to generate the chatbot's responses.
Prerequisites
Before you begin:
- Deploy a One-Click Docker instance with at least 8 GB of RAM to use as your development server
- Deploy an Ubuntu Rcs Cloud GPU NVIDIA A100 server with 10 GB of GPU RAM to use as the production server
- Use SSH to access the development server as a non-root user with sudo privileges
Add the new user to the Docker group:
```console
# usermod -aG docker example_user
```
Switch to the non-root user account:
```console
# su example_user
```
Set Up the Development Server
Update the server.
```console
$ sudo apt-get update
```
Install the `build-essential` and `libssl-dev` dependency packages.

```console
$ sudo apt-get install -y build-essential libssl-dev
```
Install the latest Rust toolchain and Cargo packages.
```console
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
When prompted for your desired installation type, enter `1` to select the `Proceed with installation (default)` option.

Activate Rust in your server session.
```console
$ source "$HOME/.cargo/env"
```
View the installed Rust and Cargo versions.
```console
$ rustc --version && cargo --version
```
Install the `cargo-leptos` package using `cargo`.

```console
$ cargo install cargo-leptos@0.2.4
```
Create a new project `rust-chatbot` using Cargo and the `start-axum-workspace` template.

```console
$ cargo leptos new --git https://github.com/quanhua92/start-axum-workspace --name rust-chatbot
```
Switch to the `rust-chatbot` directory.

```console
$ cd rust-chatbot
```
Install the `wasm32` target using `rustup`.

```console
$ rustup target add wasm32-unknown-unknown
```
Build the Rust Chatbot Application
Edit the `Cargo.toml` file in the `app` directory using a text editor such as Nano.

```console
$ nano app/Cargo.toml
```
Add the `serde` crate to the `dependencies` section within the file.

```toml
[dependencies]
serde = { version = "1.0.188", features = ["derive"] }
```
Create a `models` sub-directory in the `app/src` directory.

```console
$ mkdir -p app/src/models
```
Create a new file `conversation.rs` in the `app/src/models` directory to implement the conversation data structures.

```console
$ nano app/src/models/conversation.rs
```
Add the following code to the file.
```rust
use serde::Deserialize;
use serde::Serialize;

#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct Message {
    pub text: String,
    pub sender: String,
}

#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct Conversation {
    pub messages: Vec<Message>,
}

impl Conversation {
    pub fn new() -> Self {
        Self {
            messages: Vec::new(),
        }
    }
}
```
Save and close the file.
Create a new `mod.rs` file in the `app/src/models` directory.

```console
$ nano app/src/models/mod.rs
```
Add the following code to the file.
```rust
pub mod conversation;
pub use conversation::{Conversation, Message};
```
The above code exposes the `Conversation` and `Message` structs that manage the chat data.
Create the Conversation Area
Create a `components` sub-directory in the `app/src` directory.

```console
$ mkdir -p app/src/components
```
Create a new file `conversation_area.rs` in the `app/src/components` directory.

```console
$ nano app/src/components/conversation_area.rs
```
Add the following code to the file.
```rust
use crate::models::Conversation;
use leptos::html::Div;
use leptos::logging::log;
use leptos::*;

#[component]
pub fn ConversationArea(conversation: ReadSignal<Conversation>) -> impl IntoView {
    let div_ref = create_node_ref::<Div>();

    create_effect(move |_| {
        let c = conversation.get();
        log!("ConversationArea: {:?}", c);
        if let Some(div) = div_ref.get() {
            request_animation_frame(move || {
                div.set_scroll_top(div.scroll_height());
            });
        }
    });

    view! {
        <div class="conversation-area" node_ref=div_ref>
            {move || conversation.get().messages.iter().map(move |message| {
                view! {
                    <div class="message">
                        <span class="message-sender">{message.sender.clone()}</span>
                        <p class="message-text">{message.text.clone()}</p>
                    </div>
                }
            })
            .collect::<Vec<_>>()}
        </div>
    }
}
```
Save and close the file.
This creates a component named `ConversationArea` that displays all messages in the conversation and scrolls to the latest one.
Create the Application Input Area
Create a new file `input_area.rs` in the `app/src/components` directory.

```console
$ nano app/src/components/input_area.rs
```
Add the following code to the file.
```rust
use crate::models::Conversation;
use leptos::html::Input;
use leptos::*;

#[component]
pub fn InputArea(submit: Action<String, Result<Conversation, ServerFnError>>) -> impl IntoView {
    let text_ref = create_node_ref::<Input>();

    view! {
        <form class="input-area" on:submit=move |ev| {
            ev.prevent_default();
            let input = text_ref.get().expect("input exists");
            let user_input = input.value();
            let user_input = user_input.trim();
            if !user_input.is_empty() {
                submit.dispatch(user_input.to_string());
                input.set_value("");
            }
        }>
            <input type="text" class="input-area-text" placeholder="Enter a prompt here" node_ref=text_ref/>
            <input type="submit" class="input-area-button" value="Send"/>
        </form>
    }
}
```
Save and close the file.
The above code creates a new component `InputArea` that displays a form with a text input field and a submit button to send a new prompt. The form's submit handler dispatches the new prompt message to the parent component through the `submit` action.

Create a new `mod.rs` file in the `app/src/components` directory to export the components.

```console
$ nano app/src/components/mod.rs
```
Add the following code to the file.
```rust
pub mod conversation_area;
pub mod input_area;

pub use conversation_area::ConversationArea;
pub use input_area::InputArea;
```
Save and close the file.
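The `on:submit` handler in `InputArea` trims the prompt and ignores empty submissions before dispatching. That validation logic, pulled out into a plain function for illustration (the `validate_prompt` name is my own, not part of the project), can be sketched as:

```rust
// Hypothetical helper mirroring the validation in InputArea's on:submit
// handler: trim the raw input and reject empty prompts.
fn validate_prompt(raw: &str) -> Option<String> {
    let trimmed = raw.trim();
    if trimmed.is_empty() {
        None // nothing to dispatch; leave the input untouched
    } else {
        Some(trimmed.to_string()) // dispatch this and clear the input field
    }
}

fn main() {
    assert_eq!(validate_prompt("  hello  "), Some("hello".to_string()));
    assert_eq!(validate_prompt("   "), None);
    println!("validation ok");
}
```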
Apply CSS Styling to the Application Interface
Back up the original `style/main.scss` file.

```console
$ mv style/main.scss style/main.scss.ORIG
```
Create a new `style/main.scss` file to include the application style elements.

```console
$ nano style/main.scss
```
Add the following CSS code to the file.
```css
body {
    font-family: sans-serif;
    text-align: center;
    margin: 0;
    padding: 0;
}

.chat-area {
    display: flex;
    flex-direction: column;
    height: 100vh;
    justify-content: space-between;
}

.conversation-area {
    overflow: auto;
    display: flex;
    flex-direction: column;
    padding: 0.25rem;
}

.conversation-area > .message {
    display: flex;
    align-items: center;
    gap: 0.5rem;
    border-bottom: 1px solid hsla(0, 0%, 0%, 0.1);
}

.conversation-area > .message > .message-sender {
    min-width: 40px;
    height: 40px;
    border-radius: 20px;
    background-color: hsla(0, 0%, 0%, 0.1);
    display: flex;
    align-items: center;
    justify-content: center;
    font-size: 0.7em;
}

.input-area {
    display: flex;
    justify-content: space-between;
    gap: 0.5rem;
    padding: 0.25rem;
}

.input-area-text {
    flex-grow: 1;
    min-height: 2em;
}
```
Save and close the file.
Create the Application Server Function
In this section, create a server function to handle the chat conversation. Leptos server functions run on the backend but can be called from frontend code as if they were local functions: when the frontend calls one, Leptos sends a fetch request to the server, serializing the arguments and deserializing the return value from the response.
Create a new file `api.rs` in the `app/src` directory.

```console
$ nano app/src/api.rs
```
Add the following code to the file.
```rust
use crate::models::Conversation;
use crate::models::Message;
use leptos::logging::log;
use leptos::*;

#[server(ProcessConversation, "/api")]
pub async fn process_conversation(
    conversation: Conversation,
) -> Result<Conversation, ServerFnError> {
    log!("process_conversation {:?}", conversation);
    let mut conversation = conversation;
    conversation.messages.push(Message {
        text: "Response from AI".to_string(),
        sender: "AI".to_string(),
    });
    Ok(conversation)
}
```
Save and close the file.
The above code creates a server function `process_conversation` that appends the placeholder text `Response from AI` to the conversation, which then appears in the application conversation area.

Back up the original `lib.rs` file in the `app/src` directory.

```console
$ mv app/src/lib.rs app/src/lib.ORIG
```
Create a new `lib.rs` file.

```console
$ nano app/src/lib.rs
```
Add the following contents to the file.
```rust
use leptos::*;
use leptos_meta::*;
use leptos_router::*;

pub mod api;
pub mod components;
pub mod error_template;
pub mod models;

use crate::api::process_conversation;
use crate::components::{ConversationArea, InputArea};
use crate::models::{Conversation, Message};

#[component]
pub fn App() -> impl IntoView {
    // Provides context that manages stylesheets, titles, meta tags, etc.
    provide_meta_context();

    view! {
        // injects a stylesheet into the document <head>
        // id=leptos means cargo-leptos will hot-reload this stylesheet
        <Stylesheet id="leptos" href="/pkg/start-axum-workspace.css"/>

        // sets the document title
        <Title text="Welcome to Rust Chatbot"/>

        // content for this welcome page
        <Router>
            <main>
                <Routes>
                    <Route path="" view=|| view! { <HomePage/> }/>
                </Routes>
            </main>
        </Router>
    }
}

/// Renders the home page of your application.
#[component]
fn HomePage() -> impl IntoView {
    // Holds the conversation state as a reactive signal
    let (conversation, set_conversation) = create_signal(Conversation::new());

    // Pushes the user's message, then calls the server function
    let send_message = create_action(move |input: &String| {
        let message = Message {
            text: input.clone(),
            sender: "User".to_string(),
        };
        set_conversation.update(move |c| {
            c.messages.push(message);
        });
        process_conversation(conversation.get())
    });

    // Shows a "..." placeholder while the server call is in flight
    create_effect(move |_| {
        if let Some(_) = send_message.input().get() {
            set_conversation.update(move |c| {
                c.messages.push(Message {
                    text: "...".to_string(),
                    sender: "AI".to_string(),
                });
            });
        }
    });

    // Replaces the conversation with the server's response
    create_effect(move |_| {
        if let Some(Ok(response)) = send_message.value().get() {
            set_conversation.set(response);
        }
    });

    view! {
        <div class="chat-area">
            <ConversationArea conversation/>
            <InputArea submit=send_message/>
        </div>
    }
}
```
Save and close the file.
In the above code, the web application renders the `HomePage` component, which contains the `ConversationArea` and `InputArea`. The `send_message` action calls the `process_conversation` server function to process the conversation.

Build the application using `cargo`.

```console
$ cargo leptos build
```
Verify that the build process completes without errors.
By default, UFW is active on Rcs Ubuntu servers. To enable access to the application interface, allow the HTTP server port `3000` through the firewall.

```console
$ sudo ufw allow 3000
```
Allow the web socket port `3001`.

```console
$ sudo ufw allow 3001
```
Reload the firewall rules to apply changes.

```console
$ sudo ufw reload
```
Run the application using `cargo` to test the application interface.

```console
$ LEPTOS_SITE_ADDR=0.0.0.0:3000 cargo leptos watch
```
Visit your server IP on port `3000` to access the application.

```
http://<SERVER-IP>:3000
```
In your server terminal session, press Ctrl + C on your keyboard to stop the running application process.
Add a Language Model to the Application
To enable all application processes, integrate a pre-trained Large Language Model (LLM) that generates a response within the application. The model loads o