Building a Streamlit Chatbot with LangChain and Llama 3.1: Exploring LLMs — 3

If you are new to this series, you can find the previous related blogs for more context here.

The entire code for this blog can be found here.

Quick Recap

Our initial chatbot script used the LangChain library, which allows seamless interaction with LLMs (Large Language Models) like Llama 3.1, and integrated it with a PostgreSQL database. The idea was simple: when the user asks a question, the LLM generates an SQL query, runs it against the database, and returns the result in human-readable form.

While this works well in a command-line interface, our next goal is to make it more user-friendly by building a chatbot interface.

Creating a chatbot with the latest language models has never been easier, thanks to platforms like Streamlit, which make it simple to turn Python scripts into interactive web applications. In this article, we'll walk through how we transformed our code from the previous blog into a modular chatbot and integrated it into a Streamlit app.

Why Streamlit?

Streamlit is a popular Python library that turns Python scripts into shareable web apps in just a few minutes. Its simplicity, ease of use, and ability to handle real-time interaction make it perfect for projects like this chatbot. With Streamlit, you can create highly interactive interfaces while keeping the focus on your backend logic, without needing to worry about frontend complexities.
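To get a feel for how little code a working app takes, here's a minimal hello-world example (hypothetical, not part of our chatbot). Save it as app.py and run streamlit run app.py:

import streamlit as st

st.title("Hello, Streamlit!")
name = st.text_input("Your name")
if name:
    st.write(f"Nice to meet you, {name}!")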

Step 1: Refactoring the Script into a Module

To integrate our LLM script into a Streamlit app, we’ll first refactor it into a Python module. The key task here is to encapsulate the functionality of querying the LLM and the database in a class, and to expose a reusable method that accepts a user's question and returns a response.

Here’s how we structure the ChatLLM class in a new file, chat_llm_module.py:

import re
from langchain_community.llms import Ollama
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain
from langchain_core.prompts import PromptTemplate
from langchain.schema.runnable import RunnableLambda
from langchain_core.output_parsers import StrOutputParser

class ChatLLM:
    def __init__(self, model_name="llama3.1", db_uri="postgresql://abouzuhayr:@localhost:5432/postgres"):
        # Initialize LLM and database
        self.llm = Ollama(model=model_name)
        self.db = SQLDatabase.from_uri(db_uri)

        # Create the SQL query chain
        self.write_query = create_sql_query_chain(llm=self.llm, db=self.db)

        # Prompt template for answering the question
        self.answer_prompt = PromptTemplate.from_template(
            """Given the following user question, corresponding SQL query, and SQL result, answer the user question.

            Question: {question}
            SQL Query: {query}
            SQL Result: {result}
            Answer: """
        )

        # Create the LLM chain
        self.chain = self._create_chain()

    def _create_chain(self):
        # Wrap the SQL query generation and execution logic
        def write_query_with_question(inputs):
            response = self.write_query.invoke(inputs)
            return {'response': response, 'question': inputs['question']}

        write_query_runnable = RunnableLambda(write_query_with_question)

        # Function to extract and execute the SQL query
        def extract_and_execute_sql(inputs):
            response = inputs.get('response', '')
            question = inputs.get('question', '')

            # Regex to find the SQL query
            pattern = re.compile(r'SQLQuery:\s*(.*)')
            match = pattern.search(response)

            if match:
                sql_query = match.group(1).strip()
                result = self.db.run(sql_query)
                return {
                    "question": question,
                    "query": sql_query,
                    "result": result
                }
            else:
                return {
                    "question": question,
                    "query": None,
                    "result": "No SQL query found in the response."
                }

        extract_and_execute = RunnableLambda(extract_and_execute_sql)

        # Combine everything into a chain
        chain = (
            write_query_runnable
            | extract_and_execute
            | self.answer_prompt
            | self.llm
            | StrOutputParser()
        )
        return chain

    def get_response(self, question):
        # Invoke the chain with the user's question and log the exchange
        print("Question: " + question)
        response = self.chain.invoke({"question": question})
        print("Answer: " + response)
        return response

# Now you can call this class in your Streamlit app

We have now simplified the interaction with the LLM by encapsulating all the necessary logic (query creation, execution, and response generation) into a single method, get_response. The module is ready to plug into any interface, such as our Streamlit app. We won’t go into much detail about the inner workings of this class, since it is similar to what we discussed in our last blog post.
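Before wiring it into Streamlit, you can sanity-check the module from a plain Python session. A minimal sketch, assuming Ollama is running locally with the llama3.1 model pulled and the PostgreSQL database from the previous post is reachable (the question itself is just an illustrative placeholder):

from chat_llm_module import ChatLLM

chat = ChatLLM()

# Illustrative question; adjust it to match the tables in your own database
print(chat.get_response("How many employees are there in total?"))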

Step 2: Building the Streamlit App

Streamlit is extremely straightforward to work with. Since we have our module ready, integrating it into a Streamlit app is the next step. The idea is to create a chat interface where users can input their questions, and the app responds using our LLM module.

We’ll use custom CSS to style the chat bubbles. Here’s how we set up the basic structure:

  1. App Layout: We’ll create a text input field for users to type their questions and a simple form to submit it.

  2. Displaying Conversations: We’ll store the conversation in st.session_state so it persists across reruns (see the short illustration after this list). Each message (both user and bot) will be displayed as a chat bubble, styled using custom CSS.
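Why st.session_state? Streamlit re-executes the entire script from top to bottom on every interaction, so ordinary variables are wiped on each rerun. Here's a tiny, self-contained counter app (hypothetical, separate from our chatbot) that demonstrates the difference:

import streamlit as st

# Without session_state, this counter would reset to 0 on every click,
# because Streamlit reruns the whole script on each interaction.
if "count" not in st.session_state:
    st.session_state.count = 0

if st.button("Increment"):
    st.session_state.count += 1

st.write(f"Button pressed {st.session_state.count} times")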

Here’s a streamlined version of the chat.py script:

import streamlit as st
from chat_llm_module import ChatLLM

# Initialize the chat module once when the app starts
chat_llm = ChatLLM()

def set_custom_css():
    st.markdown("""
    <style>
    .user_message { ... }
    .bot_message { ... }
    </style>
    """, unsafe_allow_html=True)

def main():
    st.title("AI-Powered Chat with LLM")
    set_custom_css()

    if "conversation" not in st.session_state:
        st.session_state.conversation = []

    with st.form("chat_form", clear_on_submit=True):
        user_input = st.text_input("You:", placeholder="Enter your question here...")
        submitted = st.form_submit_button("Send")

        if submitted and user_input:
            response = chat_llm.get_response(user_input)
            st.session_state.conversation.append(("You", user_input))
            st.session_state.conversation.append(("Bot", response))

    # Render the conversation as styled chat bubbles
    for speaker, message in st.session_state.conversation:
        if speaker == "You":
            st.markdown(f'<div class="user_message">{message}</div>', unsafe_allow_html=True)
        else:
            st.markdown(f'<div class="bot_message">{message}</div>', unsafe_allow_html=True)

if __name__ == "__main__":
    main()

Step 3: Enhancing the User Experience with Custom CSS

To make the chatbot visually appealing, we add some basic CSS for the chat bubbles. We create two classes: one for the user’s messages and one for the bot’s responses. Each message appears as a colored bubble, aligned to the right for user messages and to the left for bot responses:

.user_message {
    background-color: #A8DADC;
    color: #1D3557;
    padding: 10px;
    border-radius: 20px;
    float: right;
}

.bot_message {
    background-color: #F1FAEE;
    color: #457B9D;
    padding: 10px;
    border-radius: 20px;
    float: left;
}

Step 4: Testing the App

With the app structure and LLM module in place, we're ready to run the app using the following command:

streamlit run chat.py

And that’s it! Our chatbot is live (by default at http://localhost:8501) and working as intended. It allows users to enter questions, processes them with the LLM module, and returns responses in real time, all within an intuitive chat interface.

Conclusion

By integrating the LLM module into a Streamlit app, we’ve successfully created an interactive chatbot that processes user queries, generates SQL commands, and returns results in a conversational format. Refactoring the original LLM script into a modular class made the integration with Streamlit seamless, allowing for reusable and maintainable code. The flexibility of Streamlit, combined with custom CSS for styling, enabled us to create a simple yet intuitive user interface for the chatbot.

Key Takeaways

  • Streamlit: Simplifies the process of turning Python scripts into interactive web apps with minimal effort.

  • Modularization: Encapsulating logic in a class (like ChatLLM) promotes reusable, testable, and scalable code.

  • Custom CSS: Allows for a more user-friendly and visually appealing chat interface.

By chaining together the different components (LLM, SQL, and Streamlit), we can quickly prototype powerful AI applications that are both interactive and functional.

In our next blog post, we’ll dive into enhancing the chatbot with more advanced features like memory, allowing it to retain the context of past conversations for even more personalized responses.

Till then, happy coding!

🚨 Attention, fellow code ninjas! 🚨
If this article made you feel smarter than a chatbot on caffeine, smash that like button like you're debugging a Friday night production issue! 💥

But wait! Before you run off to code your own bot, I need your feedback! Yes, you—the one still reading this. Did I nail it or fail it? Drop a comment, ask a question, or just throw in some emojis. 🙌

Your thoughts are the SQL queries to my database—without them, I’m just sitting here with an empty result set. So don’t ghost me like that 404 Not Found error. Let’s chat, laugh, and code together! 💬👩‍💻👨‍💻

Now go ahead—like, comment, and keep that feedback coming, coding legends! 🦸‍♂️🦸‍♀️