How to Build a Locally Hosted Chatbot w/ Bedrock and More!

January 29, 2024

Introduction

Generative AI exploded in popularity over the last year, thanks largely to OpenAI's ChatGPT. ChatGPT is a large language model (LLM) trained on vast amounts of general-purpose data, and humans can interact with it in a conversational way.

Think of a GenAI chatbot as a virtual friend whose brain has been implanted with huge amounts of data. The virtual friend uses that data to “think” and “speak” like an actual human being.

Of course, GenAI is a lot more complicated than that and a chatbot is just one of the many use cases. Check out this holiday chatbot (hurry before Santa drives away 😂) built by Serverless Guru and the webinar where we discussed how it was built.

I had wanted to build a chatbot for a while, but I was lost among all the new technology stacks out there and did not know where or how to get started.

Thankfully, the community has contributed software aimed at helping us develop applications that incorporate GenAI more easily and conveniently.

The primary purpose of this post is to share what I learned as I dabbled with shiny new tech! Hopefully, this is helpful. 😊

We will walk through how to set up your local development environment so you can run the code on your machine, how to grant permissions to invoke Bedrock models from your local app, and finally a code walkthrough to explain the basics of the technology stack.

Tech Stack

We will use the foundation models (FMs) available in Amazon Bedrock.

  • Amazon Bedrock provides access to foundation models from top AI companies and Amazon. The Bedrock console includes a playground environment that lets you experiment with the models. Bedrock is serverless and fully managed, which means there is no infrastructure to manage!
  • Don’t forget that there is a cost associated with using Bedrock!

Serverless Guru recently hosted a webinar on Building Your First Serverless API with LangChain, which motivated me to check out LangChain.

  • LangChain is a framework that helps standardize and streamline the process of integrating language models into applications. LangChain acts as the abstraction layer to simplify interactions with different models.
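
For a taste of what that abstraction looks like, here is a minimal sketch (import paths vary across LangChain versions; this assumes the classic 'langchain' package layout):

  
from langchain.chat_models import BedrockChat

# The calling convention stays the same regardless of the underlying model;
# switching providers is largely a matter of changing the model_id.
chat = BedrockChat(model_id="anthropic.claude-v2")
print(chat.invoke("Say hello in one sentence.").content)
  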

I do not know UI development, which is how I stumbled upon Streamlit.

  • Streamlit lets you build web applications easily using Python code. You can create pages, buttons, charts, and more with simple Python commands.
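
As an example, this tiny (hypothetical) app renders a header, a text input, and a button in just a few lines. Save it as a '.py' file and launch it with 'streamlit run':

  
import streamlit as st

st.header("Hello, Streamlit! 👋")
name = st.text_input("What's your name?")
if st.button("Greet me"):
    # Write the greeting back to the page
    st.write(f"Hello, {name}!")
  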

ChittyBot

This will be a very simple chatbot named ChittyBot that can carry a conversation with a human.

  • ChittyBot will run locally and will only reach out to the internet when using Bedrock.
  • Your local environment should have a valid AWS connection established.

The following diagram illustrates how the different tools used to build ChittyBot interact with one another.

User interacts with a chat bot in a browser. Python code processes request and returns LLM response.
ChittyBot Sequence Diagram


  1. Developer runs Python code via Streamlit
  2. Browser opens and goes to: http://localhost:8501/
  3. User sees the chat interface
  4. User submits a message through the web UI
  5. Python code executes and processes user message
  6. Bedrock model is invoked
  7. Bedrock response is streamed back
  8. Python code updates the UI with LLM response

Setup

Dev Environment

         1. I used a Python 3.11 environment. You can use a virtual environment tool like 'conda' or 'venv'. Install the following dependencies in your Python environment:

         •   'pip install streamlit'

         •   'pip install langchain'

         •   'pip install python-dotenv'

         •   'pip install "boto3>=1.33.13"' (quote the version specifier so your shell does not treat '>' as output redirection)

         2. In your project folder, create a '.env' file and add the following key/value pairs. Ensure the values provided match your AWS credentials.

  
AWS_DEFAULT_PROFILE=
AWS_DEFAULT_REGION=
  
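
For illustration, a filled-in '.env' might look like this (the profile name is hypothetical; use your own profile and region):

  
AWS_DEFAULT_PROFILE=bedrock-role
AWS_DEFAULT_REGION=us-east-1
  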

         3. If you are using VS Code and need to debug or step through code, create a debug configuration with the following details. The "program" value should point to the streamlit executable in your Python environment (with the environment activated, 'which streamlit' prints the path), and the "args" entry should name your app's '.py' file:

  
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug Streamlit App",
      "type": "python",
      "request": "launch",
      "program": "/path/to/python/env/py3.11/bin/streamlit",
      "args": [
        "run",
        ".py"
      ],
      "console": "integratedTerminal",
      "justMyCode": true
    }
  ]
}
  

AWS Credentials

The following are required before we can successfully invoke a Bedrock model:

  • Access to the models has been requested (and granted) in the Bedrock console
  • An IAM role with permissions to invoke the specific Bedrock models
  • A trust policy on the role that allows our IAM user to assume it

         1. If you do not have an IAM role yet, create one and attach the following policy. We will assume this role with our AWS identity to invoke Bedrock models. Model IDs can be found in the AWS docs. Note that '<region>' below is a placeholder for the region you are using (e.g., 'us-east-1').

  
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockInvokeModel",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>::foundation-model/anthropic.claude-v2",
                "arn:aws:bedrock:<region>::foundation-model/amazon.titan-embed-text-v1"
            ]
        },
        {
            "Sid": "XRay",
            "Effect": "Allow",
            "Action": [
                "bedrock:ListFoundationModels",
                "xray:PutTelemetryRecords",
                "xray:PutTraceSegments"
            ],
            "Resource": "*"
        }
    ]
}
  

         2. Add a trust policy to the role you just created to allow your user to assume that role.

  
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com",
                "AWS": "arn:aws:iam::<account-id>:user/<user-name>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  

         3. Add an entry for the IAM role to your '~/.aws/config' file. LangChain uses 'boto3' to connect to AWS and will pick up this configuration to assume the correct IAM role.

  
[profile <profile-name>]
role_arn = arn:aws:iam::<account-id>:role/<role-name>
# source_profile must be a valid AWS user profile from ~/.aws/credentials;
# it is the user profile you will use to assume the IAM role
source_profile = <user-profile>
  

Skipping this step will result in the following IAM error:

  
botocore.errorfactory.AccessDeniedException: An error occurred (AccessDeniedException) when calling the InvokeModelWithResponseStream operation: You don't have access to the model with the specified model ID.
  
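
Before wiring this into the app, you can sanity-check that the profile resolves to the role with a few lines of boto3 (the profile name below is hypothetical; use the one you added to '~/.aws/config'):

  
import boto3

# Confirm that the profile from ~/.aws/config actually assumes the IAM role.
# "bedrock-role" is a hypothetical profile name.
session = boto3.Session(profile_name="bedrock-role")
print(session.client("sts").get_caller_identity()["Arn"])
# Expect an assumed-role ARN: arn:aws:sts::<account-id>:assumed-role/<role-name>/...
  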

Code Walkthrough

ChittyBot will be a context-aware chatbot, which means that it will remember previous messages within the chat session.

The complete code can be found in this GitHub repository.

Setup AWS Environment Variables

The following line loads the AWS environment variables from the '.env' file, which lets us specify the credentials or profile to use for authentication.

  
import dotenv

# load AWS_DEFAULT_PROFILE and AWS_DEFAULT_REGION from the .env file
dotenv.load_dotenv()
  

Page Header

The following Streamlit code displays the provided text with header formatting. And yes, it supports emoji icons!!

  
st.header("ChittyBot: AWS Bedrock 🌩️ + LangChain 🦜️🔗 + Streamlit 👑")
  

The result in the browser is:

Banner for ChittyBot with AWS Bedrock, LangChain, and Streamlit logos.
ChittyBot: Cloudy with a chance of smart replies!

Streamlit Session

Think of a Streamlit session as a chat session. The context and conversation are preserved within a chat session.

Streamlit allows us to access the 'session_state' object which persists throughout a chat session.

In the following lines of code, we initialize the following objects when a chat session starts:

  • a 'messages' list to store the chat messages so context/memory is preserved
  • a 'BedrockChat' object from LangChain, which we use to invoke Bedrock models
  
# initialize list object and add as property to st.session_state 
if "messages" not in st.session_state:
    st.session_state.messages = []

# initialize BedrockChat object and add it as property to st.session_state
# we do this so we only initialize once
if "bedrock_chat_object" not in st.session_state:
    st.session_state.bedrock_chat_object = BedrockChat(
        model_id="anthropic.claude-v2", # AWS Bedrock Model ID
        model_kwargs={"temperature": 0.7, "max_tokens_to_sample": 1024}, # AWS Bedrock Model arguments
        streaming=True, # enable response streaming
        callbacks=[StreamingStdOutCallbackHandler()], # response streaming handler
        verbose=False
    )
  
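
For reference, the snippets in this walkthrough assume imports along these lines (exact module paths vary across LangChain versions; newer releases move these classes into 'langchain_community' and 'langchain_core'):

  
import streamlit as st
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import BedrockChat
from langchain.schema import AIMessage, HumanMessage
  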

Streamlit UI

Whenever we interact with the web interface, such as submitting a message or writing a response back, the entire page reloads and the Python code reruns from top to bottom.

In order to display the complete chat conversation in the browser, we need to loop through the messages list (chat history) that’s stored in our session.

  
for message in st.session_state.messages:
    # The "role" key in the additional_kwargs dict is used to determine the role icon (user or assistant) to use
    with st.chat_message(message.additional_kwargs['role']):
        st.markdown(message.content)
  
  • A chat session consists of two main roles: user and assistant. The role determines how the message container will look (i.e., which default icon is displayed alongside the message)
  • with st.chat_message(["user" or "assistant"]) creates a message container and displays the appropriate default icon based on who wrote the message (user or AI)
Screenshot of a chat interface with user and assistant messages.
Dialogues with AI: The New Casual Chit-Chat
  • st.markdown “flushes” the output to the web interface, making it visible in the browser

User Input

  
if prompt := st.chat_input("Hello!"):
    print(f"\nHuman: {prompt}")
    print("\nAI: ")

    # Display user message in chat message container
    with st.chat_message("user"):
        st.markdown(prompt)

    # Add user message to session chat history
    human_input = HumanMessage(content=prompt, additional_kwargs={"role": "user"})
    st.session_state.messages.append(human_input)
  
Input field with a greeting, ready to send.
Say 'Hello' and dive into the chat abyss!
  • Notice the use of the walrus operator :=. It assigns the input value to the variable 'prompt' and, at the same time, ensures that a value exists (is not None).
  • We create a 'HumanMessage' object, which serves as the container for the human’s message. The 'content' property stores the message, and we add a 'role' key (set to “user”) to 'additional_kwargs'. This is important when looping through the chat session history so the correct icon is displayed in the message container.
  • We add the 'HumanMessage' object to the 'messages' list, which persists throughout our chat session

Invoke Bedrock Model with Streaming

Now that the user has submitted a message, we are ready to invoke a model.

  
stream_iterator = st.session_state.bedrock_chat_object.stream(st.session_state.messages)
  
  • Calling the 'stream' method of the 'BedrockChat' object sends the entire chat history (which already includes the user’s new message) as input to the Bedrock model
  • Because ChittyBot is a conversational, context-aware chatbot, it needs a “memory” of what it has talked about with the human user. The list of messages we pass as input to the Bedrock model serves as that memory (see the sketch after this list).
  • We enabled streaming when we initialized the 'BedrockChat' object, so the response will be a stream
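
To make that “memory” concrete, just before the second model call the list being sent to Bedrock looks roughly like this (message contents are hypothetical):

  
# st.session_state.messages just before the second model call (illustrative):
[
    HumanMessage(content="Hi there!", additional_kwargs={"role": "user"}),
    AIMessage(content="Hello! How can I help you today?", additional_kwargs={"role": "assistant"}),
    HumanMessage(content="What is Amazon Bedrock?", additional_kwargs={"role": "user"}),
]
  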

Streaming Bedrock Response

While Bedrock is streaming the response back to our code, we want to also stream that response out to the web interface to provide an interactive user experience.

  
with st.chat_message("assistant"):
    # Start with an empty message container for the assistant
    message_placeholder = st.empty()

    # Stream the Bedrock response in chunks and display them in the chat
    # message container for a real-time chat experience
    full_response = ""
    for chunk in stream_iterator:
        full_response += chunk.content + " "
        # Add a blinking cursor to simulate typing
        message_placeholder.markdown(full_response + "▌")

    # Add the final response to the chat history
    ai_response = AIMessage(content=full_response, additional_kwargs={"role": "assistant"})
    st.session_state.messages.append(ai_response)

    # Display the final response in the chat message container
    message_placeholder.markdown(full_response)
  
  • We create a chat message container for the assistant/AI.
  • 'st.empty()' simply starts an empty message container for the assistant:
Icon of a chatbot's face on a dark background, representing the AI
Streamlit default AI icon
  • We iterate through the stream and display chunks from the LLM response as they roll in. Instead of waiting for the full response to be returned, we display it as it streams back to give that interactive user experience.
  • We need to add the LLM’s response message to our chat history, so we create an 'AIMessage' object. The response is stored in the 'content' property, and we add a 'role' key to 'additional_kwargs', which is needed for displaying the correct icon when rendering the conversation on page load.
  • Finally, we simply refresh the assistant message container with the complete LLM response to signal that it is finished writing.
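
If you do not need token-by-token output, a non-streaming variant is even shorter. This is just a sketch, assuming the same session objects as above:

  
with st.chat_message("assistant"):
    # Invoke without streaming: blocks until the full response is ready
    response = st.session_state.bedrock_chat_object.invoke(st.session_state.messages)
    st.markdown(response.content)
    st.session_state.messages.append(
        AIMessage(content=response.content, additional_kwargs={"role": "assistant"})
    )
  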

Running the Chat Bot Application Locally

Now that we have the code, we can execute it locally.

To run the application, execute the following command:

  
streamlit run <your-app>.py
  

You will see the following message in your terminal when the app starts. Log or print messages will also be visible in the terminal as you interact with the web/chat interface. The command takes control of your terminal prompt.

To stop the application, simply press 'CTRL + C'.

Notice that a Network URL is also provided. This allows other devices within your local network to access the chat application as well.

  
You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8502
  Network URL: http://192.123.1.2:8502

  For better performance, install the Watchdog module:

  $ xcode-select --install
  $ pip install watchdog
  

A browser window/tab will open and display the web interface.

If a browser page isn’t opened automatically, you can navigate to the Local URL (http://localhost:8502) manually.

User interface of ChittyBot with AWS Bedrock, LangChain, and Streamlit.
ChittyBot: Your friendly neighborhood chit-chatter!

Conclusion

In this blog post, we covered the basics of building a context-aware, conversational chatbot with memory using LangChain, Streamlit, and Amazon Bedrock.

We discussed how to set up your local development environment to allow coding and testing locally. Developing locally eliminates the noise of dealing with cloud infrastructure and deployment so you can focus on learning the GenAI tools and frameworks. It can also provide some cost savings: the only thing that incurs cost in this project is the usage of Bedrock LLMs.

We also talked about how to set up IAM permissions and how to configure credentials locally to allow us to use Bedrock LLM models.

We ended with a deep dive into the code to understand how the different components fit together to build ChittyBot.

I hope this post gives you the starter knowledge to use tools like Amazon Bedrock, LangChain, and Streamlit to incorporate GenAI capabilities into your applications.

Need to Contact Serverless Guru?

Thank you so much for hanging out. We hope this simple blog post helps you get started working with GenAI.

We would love to hear your thoughts, comments, and feedback. Let us know if you wish to see similar content or if you’d like to see us take ChittyBot to the next level (e.g., using DynamoDB as its memory for longer conversations, deploying it to the AWS ecosystem, or implementing RAG).

Connect with us using the contact form at the bottom of this page or drop us an email at contact@serverlessguru.com.

Join our Discord Server to collaborate and engage in interesting and fun conversations! 😃

We strive to respond promptly and look forward to engaging in meaningful conversations with you!
