How To Build a Flask ChatGPT ChatBot With Custom Knowledge Base using OpenAI API, Llama-Index, Langchain
Introduction
In today's fast-paced world, businesses and organizations are always looking for ways to provide quick and efficient customer service. One approach that has gained immense popularity in recent years is the chatbot. These AI-powered conversational agents can simulate human conversation and provide personalized assistance to users, making them an invaluable tool for organizations looking to enhance customer experience. In this tutorial, we will walk through building your own AI-powered chatbot that answers questions from your own PDF data using GPT-3, Llama Index, Langchain, Flask, and the OpenAI API. So, let's dive right in!
Prerequisites
Before we get started, you'll need to have the following installed:
- Python 3
- Pip (Python Package Installer)
- An OpenAI API key, which you can get by signing up for the OpenAI API on the OpenAI website.
Libraries Used
Before we dive into the code, let's briefly discuss the libraries we'll be using:
- Llama Index
Llama Index is a library that allows you to easily build and search text indexes. In our chatbot, we'll be using Llama Index to index the contents of a PDF file and search for relevant information based on user input.
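To give a feel for what Llama Index does before we wire it into Flask, here is a minimal sketch. It assumes the same llama_index API used throughout this tutorial (newer releases have renamed some of these classes), an OPENAI_API_KEY already set in the environment, and a placeholder ./docs folder of files to index:

from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex

# Load every document in a local folder (placeholder path)
documents = SimpleDirectoryReader("./docs").load_data()

# Build a simple vector index over the documents
index = GPTSimpleVectorIndex.from_documents(documents)

# Ask a natural-language question; the answer is grounded in the indexed documents
print(index.query("What topics do these documents cover?"))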
- Langchain
Langchain is a library that provides a unified interface to a variety of language models for natural language processing. In our chatbot, we'll be using Langchain to interface with GPT-3, a powerful language model developed by OpenAI.
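As a rough illustration of Langchain's role here (assuming the older langchain API that this tutorial's imports rely on, and an OPENAI_API_KEY set in the environment), the OpenAI wrapper can be called directly with a prompt:

from langchain import OpenAI

# Wrap OpenAI's text-davinci-003 completion model; temperature=0 keeps answers deterministic
llm = OpenAI(temperature=0, model_name="text-davinci-003")

# The wrapper is callable: pass in a prompt string and get the completion back
print(llm("Explain in one sentence what a chatbot is."))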
- Flask
Flask is a lightweight web framework for Python. In our chatbot, we'll be using Flask to create a web application that will serve as the interface for our chatbot.
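If you haven't used Flask before, the smallest possible app looks like this (a standalone sketch, separate from the chatbot code we build below):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # Return a plain-text response for the root URL
    return "Hello from Flask!"

if __name__ == '__main__':
    app.run(debug=True)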
- OpenAI
OpenAI is an AI research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. In our chatbot, we'll be using OpenAI's GPT-3 language model to generate responses to user input.

Now that we've covered the libraries we'll be using, let's move on to building our chatbot.

Step 1: Installing the Required Libraries
First, let's install the required libraries using pip:
pip install llama_index langchain flask openai

Step 2: Creating the Flask Application
Next, let's create a Flask application in a file named app.py:
import os
from pathlib import Path

from dotenv import load_dotenv
from flask import Flask, render_template, request
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext, download_loader
from langchain import OpenAI

API_KEY = 'YOUR_API_KEY_HERE'
os.environ["OPENAI_API_KEY"] = API_KEY

app = Flask(__name__)
load_dotenv()

FILES = "./files"
PDF_FILE = "file.pdf"

In this code, we're importing the necessary libraries and setting up our Flask application. We're also defining the path of the PDF file we'll be using to customize GPT-3.
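One note on the API key: since the code already calls load_dotenv(), you may prefer to keep the key out of the source file entirely. A possible variation (assuming a .env file sitting next to app.py that contains a line such as OPENAI_API_KEY=sk-your-key-here):

import os
from dotenv import load_dotenv

# Read variables from a local .env file into the environment
load_dotenv()

# Pick the key up from the environment instead of hardcoding it;
# os.getenv returns None if the variable is missing, so fail early in that case
API_KEY = os.getenv("OPENAI_API_KEY")
if API_KEY is None:
    raise RuntimeError("OPENAI_API_KEY is not set")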
Step 3: Defining Functions

Next, we'll define two functions: init() and load_pdf_file().

def init():
    if not os.path.exists(FILES):
        os.mkdir(FILES)

def load_pdf_file():
    PDFReader = download_loader("PDFReader")
    loader = PDFReader()
    return loader.load_data(file=Path(PDF_FILE))

The init() function creates a directory for our PDF file if it doesn't already exist. The load_pdf_file() function loads the contents of the PDF file using Llama Index's PDFReader loader.
Step 4: Creating the Index

To create the index, we first load the PDF file using the PDFReader loader from the llama_index library. We then create an instance of the LLMPredictor class from llama_index, which wraps the langchain OpenAI wrapper and is responsible for generating predictions with OpenAI's GPT-3 language model. Next, we create an instance of the ServiceContext class from the llama_index library. This class configures the llama_index services, including the LLMPredictor and the chunk size used when splitting documents; here we set chunk_size_limit to 1024. Finally, we create an instance of the GPTSimpleVectorIndex class from the llama_index library. This class indexes documents and lets us query them with the GPT-3 language model. We pass the documents loaded from the PDF file, along with the service context we created, to the from_documents() constructor of the GPTSimpleVectorIndex class.

Here is the code:

def get_index():
    # Load the PDF file using the PDFReader loader
    PDFReader = download_loader("PDFReader")
    loader = PDFReader()
    documents = loader.load_data(file=Path(PDF_FILE))

    # Create an instance of the LLMPredictor class
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))

    # Create an instance of the ServiceContext class
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=1024)

    # Create an instance of the GPTSimpleVectorIndex class
    index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
    return index

That's it for step 4. We have now created an index over the PDF data that GPT-3 can answer questions from. In the next step, we will create a Flask web application that will allow users to interact with the chatbot.

Step 5: Creating a Flask Web Application
To create a web application that will allow users to interact with the chatbot, we will use the Flask framework. Flask is a lightweight web framework that allows developers to quickly and easily create web applications using Python.
First, we will import the necessary modules and set up the Flask application. We will also load the PDF file and create the index using the functions we defined in the previous steps.
We will then define a route for the home page of the web application. This route will handle both GET and POST requests. When a GET request is made, the server will render the home.html template, which contains a form for the user to input their query. When a POST request is made, the server will retrieve the query from the form, use the index to generate a response, and render the home.html template with the response displayed to the user.
We will also define a route for the exit page, which will be displayed when the user types "exit" into the query form.
Here's the code for step 5:
import os
from pathlib import Path

from dotenv import load_dotenv
from flask import Flask, render_template, request
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext, download_loader
from langchain import OpenAI

API_KEY = 'OpenAI-API-KEY'
os.environ["OPENAI_API_KEY"] = API_KEY

app = Flask(__name__)
load_dotenv()

FILES = "./files"
PDF_FILE = "file.pdf"

def init():
    if not os.path.exists(FILES):
        os.mkdir(FILES)

def load_pdf_file():
    PDFReader = download_loader("PDFReader")
    loader = PDFReader()
    return loader.load_data(file=Path(PDF_FILE))

def get_index():
    # Load the PDF file using the PDFReader loader
    documents = load_pdf_file()

    # Create an instance of the LLMPredictor class
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))

    # Create an instance of the ServiceContext class
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=1024)

    # Create an instance of the GPTSimpleVectorIndex class
    index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
    return index

@app.route('/', methods=['GET', 'POST'])
def home():
    init()
    index = get_index()
    if request.method == 'POST':
        prompt = request.form.get('prompt')
        if prompt == "exit":
            return render_template('exit.html')
        response = index.query(prompt)
        response = str(response)
        if response.startswith("\n"):
            response = response[1:]
        return render_template('home.html', response=response)
    return render_template('home.html')

@app.route('/exit')
def exit():
    return render_template('exit.html')

if __name__ == '__main__':
    app.run(debug=True)
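One thing worth noting about the code above: the home() route calls get_index() on every request, which re-reads the PDF and re-embeds its contents each time (and each embedding call costs OpenAI credits). A possible optimization, assuming the save_to_disk() and load_from_disk() methods that GPTSimpleVectorIndex exposes in the llama_index version used here, is to build the index once and cache it on disk:

INDEX_FILE = "index.json"  # hypothetical cache file name

def get_or_create_index():
    # Reuse the saved index if it exists; otherwise build it once and cache it
    if os.path.exists(INDEX_FILE):
        return GPTSimpleVectorIndex.load_from_disk(INDEX_FILE)
    index = get_index()
    index.save_to_disk(INDEX_FILE)
    return index

You could then call get_or_create_index() in home() instead of get_index().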
That's it for step 5. We have now created a Flask web application that allows users to interact with the chatbot using a simple form. In the next step, we will create the HTML templates that will be used by the web application.
Step 6: Creating the Web Interface
1: Create a directory called templates inside the main app directory.
2: Create a new file named base.html inside the templates directory.
Add the following HTML code to the file:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{% block title %}{% endblock %}</title>
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+ZmGmiLwR12a8M/h9UJhs7XpVoZB1M7" crossorigin="anonymous">
</head>
<body>
    <nav class="navbar navbar-expand-lg navbar-dark bg-primary">
        <a class="navbar-brand" href="#">IUEA AI-Powered ChatBot</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNav" aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">
            <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarNav">
            <ul class="navbar-nav">
                <li class="nav-item active">
                    <a class="nav-link" href="#">Home</a>
                </li>
                <li class="nav-item">
                    <a class="nav-link" href="#">Exit</a>
                </li>
            </ul>
        </div>
    </nav>
    <div class="container">
        {% block content %}{% endblock %}
    </div>
    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.9.3/dist/umd/popper.min.js"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+ZmGmiLwR12a8M/h9UJhs7XpVoZB1M7" crossorigin="anonymous"></script>
</body>
</html>
3: Create a new file called home.html in the templates directory.
Add the following HTML code to the file:
{% extends 'base.html' %}

{% block content %}
<div class="container">
    <h1>IUEA AI-Powered ChatBot</h1>
    <p>Welcome to the IUEA AI-Powered ChatBot! This chatbot has been designed to provide quick and efficient assistance to students, staff members, researchers, investors and other people.</p>
    <p>To use the chatbot, simply type in your question or prompt in the text field below and click on the "Ask" button. The chatbot will then generate a response based on the input provided.</p>
    <form action="/" method="post">
        <div class="form-group">
            <label for="prompt">Ask a Question:</label>
            <input type="text" class="form-control" id="prompt" name="prompt" required>
        </div>
        <button type="submit" class="btn btn-primary">Ask</button>
    </form>
    <br>
    <h3>Chatbot Response:</h3>
    <p>{{ response }}</p>
</div>
{% endblock %}
Save the file.
4: Creating the exit.html file
Create a new file called exit.html in the templates directory.
Add the following HTML code to the file:
{% extends 'base.html' %}

{% block content %}
<div class="container">
    <h1>Thank you for using the IUEA AI-Powered ChatBot!</h1>
    <p>We hope that you found our chatbot helpful. If you have any further questions or concerns, please feel free to reach out to us at info@iuea.ac.ug.</p>
</div>
{% endblock %}
5: Save all of the files.
Step 7: Run the App
Make sure you have a PDF file containing the data you want the chatbot to answer questions about. Save it as file.pdf in the same folder as app.py, so that it matches the PDF_FILE path defined in the code.
Open a command prompt or terminal window and navigate to the folder containing the code.
Type the following command to run the app:
python app.py
Once the app is running, open a web browser and navigate to http://localhost:5000/ to access the chatbot.
To use the chatbot, simply enter a prompt in the input field and hit the submit button. The chatbot will generate a response based on the indexed PDF data and display it on the page.
If you want to exit the chatbot, simply enter "exit" in the prompt field and hit submit. This will take you to a page where you can close the chatbot.
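If you prefer to check the endpoint from a script rather than the browser, a quick sanity test might look like this (a sketch using the third-party requests library, assuming the app is running locally on port 5000):

import requests

# Post a prompt to the chatbot, exactly like the HTML form on home.html does
resp = requests.post(
    "http://localhost:5000/",
    data={"prompt": "What is this document about?"},
)

# The server returns the rendered home.html page containing the chatbot's answer
print(resp.status_code)
print(resp.text[:500])  # first part of the returned HTML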
Conclusion
In conclusion, building a chatbot using GPT-3 and Python libraries like llama_index, Langchain, Flask, and openai can be a fun and rewarding project. With these tools, you can create a conversational AI that helps users find the information they need quickly and easily. In this post, we covered the step-by-step process of building a chatbot from scratch: installing the necessary libraries, indexing your own PDF data, and creating a web interface for querying it. By following these steps, you can create an AI-powered chatbot tailored to your own knowledge base.
However, it is important to note that building a chatbot is not a one-time process. As with any AI system, it requires continuous maintenance, updating, and improvement. It is important to regularly evaluate the performance of your chatbot and adjust it accordingly to ensure that it continues to meet the needs of its users. Overall, building a chatbot can be a rewarding and exciting experience that allows you to leverage the power of AI to create a useful tool for yourself or your organization. With the right tools, resources, and approach, anyone can create a chatbot that is efficient, effective, and engaging.