Transform one-on-one customer interactions: Build speech-capable order processing agents with AWS and generative AI

In today’s landscape of one-on-one customer interactions for placing orders, the prevailing practice still relies on human attendants, even in settings like drive-thru coffee shops and fast-food establishments. This traditional approach poses several challenges: it depends heavily on manual processes, struggles to scale efficiently with increasing customer demand, introduces the potential for human error, and operates within specific hours of availability. Additionally, in competitive markets, businesses that adhere solely to manual processes might find it challenging to deliver efficient and competitive service. Despite technological advancements, the human-centric model remains deeply ingrained in order processing, leading to these limitations.

The prospect of using technology for one-on-one order processing assistance has been available for some time. However, existing solutions often fall into two categories: rule-based systems that demand substantial time and effort for setup and upkeep, or rigid systems that lack the flexibility required for human-like interactions with customers. As a result, businesses and organizations face challenges in implementing such solutions swiftly and efficiently. Fortunately, with the advent of generative AI and large language models (LLMs), it’s now possible to create automated systems that can handle natural language efficiently, and with an accelerated on-ramping timeline.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. In addition to Amazon Bedrock, you can use other AWS services like Amazon SageMaker JumpStart and Amazon Lex to create fully automated and easily adaptable generative AI order processing agents.
In this post, we show you how to build a speech-capable order processing agent using Amazon Lex, Amazon Bedrock, and AWS Lambda.

Solution overview

The following diagram illustrates our solution architecture.

The workflow consists of the following steps:

1. A customer places the order using Amazon Lex.
2. The Amazon Lex bot interprets the customer’s intents and triggers a DialogCodeHook.
3. A Lambda function pulls the appropriate prompt template from the Lambda layer and formats model prompts by adding the customer input to the associated prompt template.
4. The RequestValidation prompt verifies the order against the menu items and, via Amazon Lex, lets the customer know if they want to order something that isn’t part of the menu, providing recommendations. The prompt also performs a preliminary validation for order completeness.
5. The ObjectCreator prompt converts the natural language request into a data structure (JSON format).
6. The customer validator Lambda function verifies the required attributes for the order and confirms whether all necessary information is present to process the order.
7. A customer Lambda function takes the data structure as input for processing the order and passes the order total back to the orchestrating Lambda function.
8. The orchestrating Lambda function calls the Amazon Bedrock LLM endpoint to generate a final order summary, including the order total from the customer database system (for example, Amazon DynamoDB).
9. The order summary is communicated back to the customer via Amazon Lex. After the customer confirms the order, the order is processed.

Prerequisites

This post assumes that you have an active AWS account and familiarity with the following concepts and services:

Also, to access Amazon Bedrock from the Lambda functions, you need to make sure the Lambda runtime has the following libraries:

- boto3>=1.28.57
- awscli>=1.29.57
- botocore>=1.31.57

This can be done with a Lambda layer or by using a specific AMI with the required libraries.
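The orchestration in steps 2–9 can be sketched as a minimal Lex V2 code hook. This is an illustrative skeleton, not the post’s actual implementation: the helper name `build_close_response` and the placeholder summary are assumptions, and the Bedrock call is elided.

```python
def build_close_response(intent_name: str, message: str) -> dict:
    """Build a Lex V2 'Close' dialog action that ends the turn with a message."""
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }

def lambda_handler(event, context):
    # Lex V2 passes the interpreted intent and the raw utterance in the event.
    intent_name = event["sessionState"]["intent"]["name"]
    user_input = event.get("inputTranscript", "")
    # ... here the function would format the prompt template with user_input,
    # call the Amazon Bedrock endpoint, and validate the order ...
    summary = f"Received: {user_input}"  # placeholder for the LLM-generated summary
    return build_close_response(intent_name, summary)
```

The same response-building pattern applies whether the turn ends the dialog (Close) or asks the customer for more information (ElicitSlot).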
Furthermore, these libraries are required when calling the Amazon Bedrock API from Amazon SageMaker Studio. This can be done by running a cell with the following code:

```
%pip install --no-build-isolation --force-reinstall \
    "boto3>=1.28.57" \
    "awscli>=1.29.57" \
    "botocore>=1.31.57"
```

Finally, you create the following policy and later attach it to any role accessing Amazon Bedrock:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": "bedrock:*",
            "Resource": "*"
        }
    ]
}
```

Create a DynamoDB table

In our specific scenario, we’ve created a DynamoDB table as our customer database system, but you could also use Amazon Relational Database Service (Amazon RDS). Complete the following steps to provision your DynamoDB table (or customize the settings as needed for your use case):

1. On the DynamoDB console, choose Tables in the navigation pane.
2. Choose Create table.
3. For Table name, enter a name (for example, ItemDetails).
4. For Partition key, enter a key (for this post, we use Item).
5. For Sort key, enter a key (for this post, we use Size).
6. Choose Create table.

Now you can load the data into the DynamoDB table. For this post, we use a CSV file. You can load the data into the DynamoDB table using Python code in a SageMaker notebook.

First, we need to set up a profile named dev. Open a new terminal in SageMaker Studio and run the following command:

```
aws configure --profile dev
```

This command will prompt you to enter your AWS access key ID, secret access key, default AWS Region, and output format.

Return to the SageMaker notebook and write Python code to set up a connection to DynamoDB using the Boto3 library. This code snippet creates a session using a specific AWS profile named dev and then creates a DynamoDB client using that session.
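As an alternative to the console steps, the table can also be provisioned programmatically. The following sketch builds the `create_table` parameters matching the keys used in this post (Item as partition key, Size as sort key); the string key types and on-demand billing mode are assumptions, so adjust them for your use case.

```python
def item_details_table_spec(table_name: str = "ItemDetails") -> dict:
    """Build the create_table parameters for the customer database table."""
    return {
        "TableName": table_name,
        "KeySchema": [
            {"AttributeName": "Item", "KeyType": "HASH"},   # partition key
            {"AttributeName": "Size", "KeyType": "RANGE"},  # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "Item", "AttributeType": "S"},
            {"AttributeName": "Size", "AttributeType": "S"},
        ],
        "BillingMode": "PAY_PER_REQUEST",  # assumed; provisioned capacity also works
    }

# To actually create the table (requires AWS credentials):
# import boto3
# boto3.client("dynamodb").create_table(**item_details_table_spec())
```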
The following is the code sample to load the data:

```python
%pip install boto3
import boto3
import csv

# Create a session using a profile named 'dev'
session = boto3.Session(profile_name="dev")

# Create a DynamoDB resource using the session
dynamodb = session.resource("dynamodb")

# Specify your DynamoDB table name
table_name = "your_table_name"
table = dynamodb.Table(table_name)

# Specify the path to your CSV file
csv_file_path = "path/to/your/file.csv"

# Read the CSV file and put items into DynamoDB
with open(csv_file_path, "r", encoding="utf-8-sig") as csvfile:
    csvreader = csv.reader(csvfile)
    # Skip the header row
    next(csvreader, None)
    for row in csvreader:
        # Extract values from the CSV row
        item = {
            "Item": row[0],  # Adjust the index based on your CSV structure
            "Size": row[1],
            "Price": row[2],
        }
        # Put the item into DynamoDB
        response = table.put_item(Item=item)
        print(f"Item added: {response}")

print(f"CSV data has been loaded into the DynamoDB table: {table_name}")
```

Alternatively, you can use NoSQL Workbench or other tools to quickly load the data into your DynamoDB table. The following is a screenshot after the sample data is inserted into the table.

Create templates in a SageMaker notebook using the Amazon Bedrock invocation API

To create our prompt template for this use case, we use Amazon Bedrock. You can access Amazon Bedrock from the AWS Management Console and via API invocations. In our case, we access Amazon Bedrock via API from the convenience of a SageMaker Studio notebook to create not only our prompt template, but our complete API invocation code that we can later use in our Lambda function.

1. On the SageMaker console, access an existing SageMaker Studio domain or create a new one to access Amazon Bedrock from a SageMaker notebook.
2. After you create the SageMaker domain and user, choose the user and choose Launch and Studio. This opens a JupyterLab environment.
3. When the JupyterLab environment is ready, open a new notebook and begin importing the necessary libraries.
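For larger menus, the row-by-row `put_item` loop above can be separated into a pure parsing step and a batched write. The helper below is an illustrative sketch (the function name and three-column layout are assumptions matching this post’s CSV); the batched write, commented out because it needs AWS credentials, uses Boto3’s `batch_writer` to reduce round trips.

```python
import csv
import io

def rows_to_items(csv_text: str) -> list:
    """Parse CSV text (header row included) into DynamoDB item dicts."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader, None)  # skip the header row
    return [{"Item": row[0], "Size": row[1], "Price": row[2]} for row in reader]

# With a real table handle, batch the writes instead of calling put_item per row:
# with table.batch_writer() as batch:
#     for item in rows_to_items(csv_text):
#         batch.put_item(Item=item)
```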
There are many FMs available via the Amazon Bedrock Python SDK. In this case, we use Claude V2, a powerful foundation model developed by Anthropic.

The order processing agent needs a few different templates. This can change depending on the use case, but we have designed a general workflow that can apply to multiple settings. For this use case, the Amazon Bedrock LLM templates accomplish the following:

- Validate the customer intent
- Validate the request
- Create the order data structure
- Pass a summary of the order to the customer

To invoke the model, create a bedrock-runtime object from Boto3:

```python
import boto3
import json

# Model API request parameters
modelId = "anthropic.claude-v2"  # change this to use a different version from the model provider
accept = "application/json"
contentType = "application/json"

bedrock = boto3.client(service_name="bedrock-runtime")
```

Let’s start by working on the intent validator prompt template. This is an iterative process, but thanks to Anthropic’s prompt engineering guide, you can quickly create a prompt that can accomplish the task.

Create the first prompt template along with a utility function that will help prepare the body for the API invocations. The following is the code for prompt_template_intent_validator.txt:

"{\"prompt\": \"Human: I will give you some instructions to complete my request.\\n<instructions>Given the Conversation between Human and Assistant, you need to identify the intent that the human wants…
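A utility function of the kind described above can be sketched as follows. This is an assumed implementation, not the post’s actual helper: it builds the Claude V2 text-completion request body, which wraps the prompt in the Human:/Assistant: turn format that the model expects; the function name and default parameters are illustrative.

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 500) -> str:
    """Serialize a Claude V2 text-completion request body for invoke_model."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.0,  # deterministic output suits validation prompts
    })

# Invocation against Amazon Bedrock (requires AWS credentials and model access):
# response = bedrock.invoke_model(
#     modelId=modelId,
#     body=build_claude_body("Identify the intent in: I want two lattes"),
#     accept=accept,
#     contentType=contentType,
# )
# completion = json.loads(response["body"].read())["completion"]
```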
