Query Your PostgreSQL Database with LangChain and Llama 3.1 : Exploring LLMs - 2
Generating responses in natural language
In our previous blog post, we discussed how to generate and execute database queries from natural language input. In today's blog post, we'll take the output of the query and pass it back through the LLM, so that the final answer is also in natural language.
If you are new to this series, consider going through Query Your PostgreSQL Database with LangChain and Llama 3.1 : Exploring LLMs - 1 for more context.
Let’s code!
Step 0 : Previous Code
We’ll be building this feature on top of our code from the previous blog post, so the following part remains relatively similar.
from langchain_community.llms import Ollama
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

# Initialize the LLM
llm = Ollama(model="llama3.1")

# Connect to the database
db = SQLDatabase.from_uri("postgresql://abouzuhayr:@localhost:5432/postgres")

# Chain that generates a SQL query from a natural language question
write_query = create_sql_query_chain(llm=llm, db=db)
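Before moving on, it can be worth sanity-checking the connection. SQLDatabase exposes a couple of inspection helpers we can use here (the exact table names will of course depend on your own schema):

# Optional: verify the connection and inspect the schema
print(db.dialect)  # e.g. 'postgresql'
print(db.get_usable_table_names())  # e.g. ['employees']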
Step 1 : Define the Answer Prompt Template
We'll define a prompt template that will be used to generate the final answer to the user.
from langchain_core.prompts import PromptTemplate
answer_prompt = PromptTemplate.from_template(
"""Given the following user question, corresponding SQL query, and SQL result, answer the user question.
Question: {question}
SQL Query: {query}
SQL Result: {result}
Answer: """
)
A PromptTemplate is a way to structure the input that we'll provide to the LLM. It allows us to define a template with placeholders (like {question}, {query}, and {result}) that will be filled in with actual values during execution. This helps the LLM understand the context and generate accurate responses.
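To see what the template produces, we can fill the placeholders by hand. This is just an illustration with made-up values, not part of the final chain:

# Illustration only: fill the template manually
print(answer_prompt.format(
    question="How many employees are there in total?",
    query="SELECT COUNT(*) FROM employees;",
    result="5"
))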
Step 2 : Creating a Runnable for write_query_with_question
Next, we need to create a component that generates the SQL query from the user's question and passes along the question for later use. We'll define a function and wrap it with RunnableLambda:
from langchain.schema.runnable import RunnableLambda
# Create a Runnable that wraps write_query and passes along the question
def write_query_with_question(inputs):
    response = write_query.invoke(inputs)
    return {'response': response, 'question': inputs['question']}

write_query_runnable = RunnableLambda(write_query_with_question)
Understanding RunnableLambda
RunnableLambda allows us to wrap a simple Python function so it can be used as a component in our LangChain chain. This makes it compatible with other chain elements and ensures the data flows smoothly between them.
In this case, write_query_with_question:
Invokes the write_query chain (which generates the SQL query from the question).
Returns a dictionary containing the response from write_query and the original question.
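You can also try the runnable on its own before wiring up the rest of the chain. The exact SQL in the response will vary with your model and schema:

# Illustration only: invoke the runnable by itself
intermediate = write_query_runnable.invoke({"question": "How many employees are there in total?"})
print(intermediate['question'])  # the original question, passed through unchanged
print(intermediate['response'])  # the raw LLM output that should contain the SQL query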
Step 3 : Extracting and Executing the SQL Query
Now, we'll define a function to extract the SQL query from the LLM's response and execute it against our database:
import re

# Define your function to extract and execute the SQL query
def extract_and_execute_sql(inputs):
    # Extract the response text and question
    response = inputs.get('response', '')
    question = inputs.get('question', '')

    # Define the regular expression pattern to match the SQL query
    pattern = re.compile(r'SQLQuery:\s*(.*)')

    # Search for the pattern in the response
    match = pattern.search(response)

    if match:
        # Extract the matched SQL query
        sql_query = match.group(1).strip()

        # Execute the query against the database
        result = db.run(sql_query)

        # Return the necessary information for the next steps in the chain
        return {
            "question": question,
            "query": sql_query,
            "result": result
        }
    else:
        return {
            "question": question,
            "query": None,
            "result": "No SQL query found in the response."
        }
This function:
Extracts the SQL query from the LLM's response using a regular expression.
Executes the SQL query against the database.
Returns the question, the extracted SQL query, and the query result.
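To make the extraction concrete, here is the same pattern applied to a made-up response string (real model output may be formatted differently, which is exactly why we match on the SQLQuery: prefix):

# Illustration only: how the pattern pulls the SQL out of a response
sample_response = "Some reasoning from the model...\nSQLQuery: SELECT COUNT(*) FROM employees;"
match = re.compile(r'SQLQuery:\s*(.*)').search(sample_response)
print(match.group(1).strip())  # SELECT COUNT(*) FROM employees;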
Wrapping extract_and_execute_sql with RunnableLambda
We wrap the extract_and_execute_sql function with RunnableLambda so it can be included in our chain:
# Wrap your function with RunnableLambda
extract_and_execute = RunnableLambda(extract_and_execute_sql)
Again, RunnableLambda makes our custom function compatible with the chain by treating it as a runnable component.
Building the Chain
Now, we assemble the components into a chain:
from langchain_core.output_parsers import StrOutputParser
# Create the chain
chain = (
    write_query_runnable
    | extract_and_execute
    | answer_prompt
    | llm
    | StrOutputParser()
)
How Does the Chain Work?
The chain consists of several components linked together using the | operator, which represents the flow of data from one component to the next.
write_query_runnable: Generates the SQL query from the user's question and passes along the question.
extract_and_execute: Extracts the SQL query from the response and executes it against the database.
answer_prompt: Formats the inputs (question, SQL query, and result) into a prompt for the LLM.
llm: The language model generates a natural language answer based on the prompt.
StrOutputParser(): Parses the LLM's output into a string for final presentation.
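If the | operator feels opaque, the chain is roughly equivalent to calling each component in sequence. This is a simplified sketch for intuition; the real chain handles this wiring for you:

# Roughly what the chain does under the hood (sketch for intuition)
inputs = {"question": "How many employees are there in total?"}
step1 = write_query_runnable.invoke(inputs)  # {'response': ..., 'question': ...}
step2 = extract_and_execute.invoke(step1)    # {'question': ..., 'query': ..., 'result': ...}
step3 = answer_prompt.invoke(step2)          # prompt with placeholders filled in
step4 = llm.invoke(step3)                    # natural language answer from the model
final = StrOutputParser().invoke(step4)      # plain string
print(final)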
Visualizing the Data Flow
Input:
{"question": "How many employees are there in total?"}
After write_query_runnable: Generates the SQL query.
Output: {'response': '...SQLQuery: SELECT COUNT(*) FROM employees;', 'question': 'How many employees are there in total?'}
After extract_and_execute: Extracts and executes the SQL query.
Output: {'question': 'How many employees are there in total?', 'query': 'SELECT COUNT(*) FROM employees;', 'result': '5'}
After answer_prompt: Formats the prompt for the LLM.
Output: Formatted string with placeholders filled.
After llm: Generates the natural language answer.
Output: 'There are 5 employees in total.'
After StrOutputParser(): Parses the output to a string.
Final Output: 'There are 5 employees in total.'
Invoking the Chain
We can now use the chain to process a question:
# Invoke the chain with your question
response = chain.invoke({"question": "How many employees are there in total?"})
# Print the final answer
print(response)
This will output:
There are 5 employees in total.
Conclusion
By leveraging LangChain's chaining capabilities, we can build a modular and extensible application that processes natural language questions, generates and executes SQL queries, and returns the results in a user-friendly format. The use of PromptTemplate and RunnableLambda allows us to create reusable components that can be easily integrated into a chain.
Key Takeaways
PromptTemplate: Helps structure inputs for the LLM by defining templates with placeholders.
RunnableLambda: Wraps custom functions to make them compatible with LangChain's chain, treating them as runnable components.
Chaining: By linking components together, we can create complex workflows that process data step by step.
In our next blog post, we’ll explore how to add memory to our chatbot such that it can retain context while answering questions, making it more fun to use!
Till then, happy coding!
Wait, You're Leaving Without Sharing Your Wisdom? 🤔
Hold up! Before you dive back into the code abyss or start chaining together more functions than a magician's handkerchief, let's have a little chat! If this post made you chuckle, think, or question my sanity (all valid responses), why not hit that like button? 👍
Remember, every time you leave a comment:
A bug gets its wings 🐛➡️🦋
A programmer's coffee turns into code ☕➡️💻
An infinite loop finds its end 🔄➡️🏁
Also, let's face it, my code could use all the help it can get. 😅