Abstract visualisation of a custom-based content bot

How to create a Custom-Based Content Bot that integrates with a Chat SDK

Italo Orihuela
Aug 15, 2023

Welcome to a step-by-step journey through the creation of a Custom-Based Content Bot that seamlessly integrates with a Chat SDK.

In this guide, we will walk through the use of LlamaIndex, the GitHub Repository Loader, and LangChain, enabling you to craft a bot that enhances user interactions with tailored content. But first, let's get into the details of building our Custom-Based Content Bot!


Creating the Project and Installing Dependencies

For my use case, I am using VS Code with Anaconda, so I have to create an environment with Python 3.11 to run my project.
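A minimal sketch of the environment setup, assuming Anaconda's `conda` is available on your PATH (the environment name `py311` is just my choice):

```shell
# Create and activate a Python 3.11 environment named "py311"
conda create -n py311 python=3.11 -y
conda activate py311
```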

I use the command above because I installed Python 3.11 in an environment named py311.

After that, I pip install Poetry, the dependency management tool I use to handle the libraries required for running LlamaIndex.
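For example, installing Poetry into the active environment and initializing a project could look like this (the non-interactive init flag is just one way to do it):

```shell
# Install Poetry and create a pyproject.toml without prompts
pip install poetry
poetry init --no-interaction
```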

Once all of this is set, let's make sure that LlamaIndex, LlamaHub, and LangChain are installed.
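With Poetry in place, the dependencies can be added like this (package names as published on PyPI; `python-dotenv` is my addition for reading the .env file used later):

```shell
# Add the libraries the project relies on
poetry add llama-index llama-hub langchain python-dotenv
```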

After that, we can proceed with our project :)

LlamaIndex GitHub Loader

After having the dependencies ready, we can build the ingestion part, using the GitHub Loader to import all the information we want from the selected GitHub repository. As mentioned before, you will need your GitHub personal access token and an OpenAI API key. Please store them in a .env file so that you can access them quickly and securely from the different files of your project.

LlamaIndex using LangChain for Index Construction and Storage

In this case I like keeping files separated according to the process, so that later I can change them according to the type of function they execute. In this part we load the pickle file we created before, then use a LangChain LLM and embedding model to define how the embeddings are produced; these will later be stored as vectors using LlamaIndex's GPTVectorStoreIndex feature.

As you can see, we are using “text-embedding-ada-002” for the embeddings, as it is one of the best-performing and most cost-efficient embedding models. Once the index is created, here held in the variable “index”, we store it in your directory under whatever name you choose.

Adding LangChain Prompt Template

Now we move to the query engine part. As mentioned at the beginning, we will use a LangChain Prompt Template to customize the type of response we want to receive from our bot. Remember that, in this case, the index is built in another file, so please import the index function from the file you created.

Querying the Index with Personalised Prompt

Finally, we get to the query part! After having the Prompt Template set, we can add the query engine details as follows:

Once you run this, you will get a customized response considering your GitHub repository information and the personalised type of prompt template you have designed.

Integrating with a Chat SDK

For this last part, we recommend using a provider that offers webhook events. With the Amity Chat SDK, for instance, you can receive real-time events through webhooks, write a script that detects the particular event on your bot's channel, and trigger your bot's logic from it. For example, you can wrap the previous scripts in an API that recognizes that event and provides a response.
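One way to sketch that API is a small Flask endpoint. The channel name, event field names (`channelId`, `type`, `text`), and the `answer_question` placeholder are all assumptions here; check your provider's webhook payload documentation and wire the placeholder to the query function from the previous step:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
BOT_CHANNEL = "my-bot-channel"  # hypothetical: the channel your bot listens on


def answer_question(text):
    """Placeholder: replace with a call to the query engine built earlier."""
    return f"You asked: {text}"


@app.route("/webhook", methods=["POST"])
def handle_webhook():
    event = request.get_json(force=True) or {}
    # Field names below are assumptions about the webhook payload shape
    if event.get("channelId") == BOT_CHANNEL and event.get("type") == "message.created":
        reply = answer_question(event.get("text", ""))
        # Here you would send `reply` back through the Chat SDK's send-message API
        return jsonify({"reply": reply})
    return jsonify({"status": "ignored"})
```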

Also view: Add a Vector Database to your GPT Custom Content Bot