Unlocking the Power of Gen AI: A Guide to Automating Information Extraction with Vertex AI

umangak
Staff


In today's fast-paced and competitive business landscape, quick, clear, and dependable customer service is essential. Modern businesses continually strive to enhance the customer experience by providing fast responses to customer inquiries. 

Think of a tool that can respond immediately to customer questions such as, "What is the size range for product X?" Automating these tasks can create significant optimizations - freeing up staff for more strategic or complex tasks and boosting overall productivity.

Now imagine achieving this within minutes - utilizing the power of generative AI to make information retrieval quicker, simpler, and automatic with Vertex AI.

What's more, these applications aren't just limited to straightforward question-and-answer interactions. They're capable of managing bigger and more intricate tasks, making it possible to scale up services and improve efficiency when handling a high number of user inquiries.

This article dives into how Vertex AI-powered chatbots can automate information extraction from diverse user inputs.

It is a practical guide for anyone who wants to sharpen their tuning skills with Vertex AI, automate the retrieval or extraction of information from user input, and build a chatbot.

Background context: Prompt tuning

The key concept here is “prompt tuning.” So what exactly is prompt tuning?

Prompt tuning is an efficient, cost-effective way to adapt an AI model to new tasks without retraining the entire model. Instead, the model is guided to behave a certain way using a small set of example inputs and outputs. This doesn't change the model's underlying weights; it steers the model's behavior based on the input-output examples provided.

In simpler terms, we use “prompts” to specify the model's behavior. This approach lets you customize a large language model for specific tasks, even with a limited amount of data.

For example, we can apply this concept to the “text-bison” model to guide its behavior. We can instruct it on the kind of responses it should provide to different user inputs. This is achieved with the help of “prompts” - a set of inputs and outputs. One of the advantageous features of prompt tuning is its adaptability - you can easily modify the prompts to alter the model's responses.

This concept can streamline and automate the process of information extraction from a given set of inputs. The model is fed with a series of prompts, which spell out possible user inputs and guide it on how to extract relevant information from these in return. With suitable tuning and high-quality prompts, the model can completely automate the information extraction process. This allows you to input as many user prompts as you wish and receive the extracted information without a hitch. Let’s look at an example of one such scenario.

Get started with prompt tuning in Vertex AI

Let's apply what we've discussed so far. Head over to the Vertex AI console, select "Language" under Generative AI Studio, and open "Tuning." Here, you provide a series of instructions, or "prompts," that tell the model how to respond.

Consider the task of designing a chatbot for a healthcare provider. When a patient asks, "What are the operating hours of Dr. Smith?", the chatbot should respond appropriately: "Dr. Smith's working hours are from 9 AM to 5 PM, Monday to Friday." Similarly, if a user requests, "I need to schedule a check-up with Nurse Ana," the chatbot's response should facilitate this: "I can assist in scheduling your check-up with Nurse Ana."

To do this, the first step is identifying the specific doctor or nurse about whom the user requires information. Once that is known, the relevant details can be fetched from the data source and provided to the user. So we begin by extracting the necessary entity from user inputs. Here is one such example of possible prompts:

```
Our chatbot is designed for a healthcare provider where users can inquire about the hospital, clinic, doctors, etc. Please consider these prompts while responding to queries:

Input: What are the working hours of Dr. Smith?

Output: Dr. Smith

Input: I want to speak to Dr. Akanksha. Is she available?

Output: Dr. Akanksha 
```

With these prompts, the model learns to pull out the doctor's name (Dr. Smith or Dr. Akanksha) from such queries. The chatbot can then access their calendars or databases, retrieve the required information, and present it to the user.
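Once the entity is extracted, answering the user is a plain data lookup. Here is a minimal sketch of that second step, assuming a simple in-memory schedule (the names and hours are illustrative):

```python
# Hypothetical data source mapping each extracted entity to its details.
schedules = {
    "Dr. Smith": "9 AM to 5 PM, Monday to Friday",
    "Dr. Akanksha": "10 AM to 6 PM, Tuesday to Saturday",
}

def answer_hours(entity: str) -> str:
    """Build a reply for the user from the entity the model extracted."""
    hours = schedules.get(entity)
    if hours is None:
        return f"Sorry, I could not find information for {entity}."
    return f"{entity}'s working hours are {hours}."

print(answer_hours("Dr. Smith"))
# -> Dr. Smith's working hours are 9 AM to 5 PM, Monday to Friday.
```

In a real deployment, the dictionary would be replaced by a calendar API or database query, but the shape of the step stays the same: the model's extracted entity becomes the lookup key.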

Additional use case scenarios

Apart from healthcare, AI chatbots can have a significant impact across a range of industries. Here are a few more examples:

  1. Automotive industry: We could design a chatbot for an automotive company to extract specific information from customer inquiries, such as "Can you give me some details about the ABC car model?" or "Which type of fuel should I use for car XYZ?" Once the bot identifies which car model or item the user is asking about, it can source specific information about it from its database to respond to the customer's query efficiently.

  2. Movie ticket booking service: Suppose a customer asks, "Could you tell me the showtimes for movie Y at location Z?" The chatbot can provide the showtimes accurately: "Movie Y at location Z has showtimes at 3 PM, 6 PM, and 9 PM."

  3. Banking institution: Imagine a scenario where a customer queries, "Could you tell me my account balance?" Provided it's programmed to comply with necessary security protocols, the chatbot can provide the required information.

Reuse code for various applications 

Here are some broader applications where the discussed code and method can be reused:

  • Entity extraction: If your objective is to consistently extract pertinent entities from a conversation, prompt tuning with Vertex AI offers an effective solution.
  • Request-response model: This method is ideal for creating a request-response model when you need a bot to generate responses based on given inputs.
  • Part of larger tasks: The method can also be a component in larger tasks that need specific actions or reactions based on entities identified in user inputs.
  • Sentiment analysis: The technique can even be used for sentiment analysis in conversational data, where the tone or sentiment of user inputs can guide the bot's responses or actions.
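For instance, the sentiment-analysis variant uses exactly the same few-shot pattern as entity extraction; only the example outputs change to sentiment labels. A sketch of such a prompt builder, with illustrative labels and messages:

```python
# Few-shot prompt for sentiment classification; the pattern is identical
# to entity extraction, only the example outputs are sentiment labels.
sentiment_template = """Classify the sentiment of each message as positive, negative, or neutral.

input: The delivery was late and the box was damaged.
output: negative

input: Thanks, the support team resolved my issue quickly!
output: positive

input: {message}
output: """

def build_sentiment_prompt(message: str) -> str:
    """Slot the user's message into the few-shot template."""
    return sentiment_template.format(message=message)

print(build_sentiment_prompt("The app works fine."))
```

The resulting string can be passed to the same text generation model as in the entity-extraction case; the model completes the final `output:` with a label.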

These examples highlight the versatility and broad applicability of this method, addressing a variety of needs in multiple scenarios.

Sample code for information extraction model

Below is a code sample for an information extraction model that you can easily reuse. Simply replace the placeholder prompts with your own, and run it in a Colab environment.

To start, we'll set up the Vertex AI SDK for Python and install the required dependencies, so the code runs smoothly and takes full advantage of Vertex AI.

Follow the steps below to run the code snippets and create your own information extraction model.

Code

```
# Install requirements
!pip install "shapely<2.0.0"
!pip install google-cloud-aiplatform --upgrade

# Authenticate (when running in Colab)
from google.colab import auth
auth.authenticate_user()

# Initialize Vertex AI; replace the placeholders with your
# Google Cloud project ID and region
import vertexai
vertexai.init(project="<project>", location="<location>")

# Fetch the text generation model
from vertexai.language_models import TextGenerationModel
model = TextGenerationModel.from_pretrained("text-bison@001")

# Model parameters
parameters = {
    "candidate_count": 1,      # number of responses to generate
    "max_output_tokens": 256,  # maximum length of the response
    "temperature": 0.2,        # lower values give more deterministic output
    "top_p": 0.8,
    "top_k": 40,
}

# Fetch a response with few-shot learning
user_input = input("Enter your prompt here:")

# Few-shot template: replace the instruction and the
# input/output examples with your own
prompts = """
Enter your instruction here

input: input1
output: output1

input: input2
output: output2
"""

# Append the user's query as the final input so the model
# completes the matching output
response = model.predict(
    prompts + f"input: {user_input}\noutput: ",
    **parameters
)

# Extract the relevant information from the response
extracted_entity = response.text.split("output:")[-1].strip()

print(f"Response from Model: {extracted_entity}")
```
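To make the template concrete, here is one way the placeholder instruction and examples could be filled in for the healthcare scenario from earlier (the instruction wording is illustrative):

```python
# The healthcare few-shot examples from earlier, slotted into the template.
instruction = ("Our chatbot is designed for a healthcare provider. "
               "Extract the name of the doctor or nurse the user "
               "is asking about.\n\n")

few_shot = """input: What are the working hours of Dr. Smith?
output: Dr. Smith

input: I want to speak to Dr. Akanksha. Is she available?
output: Dr. Akanksha

input: I need to schedule a check-up with Nurse Ana.
output: """

full_prompt = instruction + few_shot
print(full_prompt)
```

Passed to `model.predict`, this prompt should lead the model to complete the final output with "Nurse Ana", which your code can then use as the lookup key.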

 

Conclusion

In conclusion, this article has explored the concept of prompt tuning with Vertex AI and demonstrated its potential in creating powerful information extraction models.

By leveraging the capabilities of prompt tuning, we can design AI chatbots that accurately extract relevant information, making them invaluable tools in various industries such as healthcare, automotive, ticket booking, banking, and more. With the code snippets provided and the step-by-step instructions, you can easily implement and customize your own information extraction model.

If you found this discussion on prompt tuning with Vertex AI insightful, feel free to pass the article along to anyone who might find its practical applications useful.

And if you have any questions, please leave a comment below. Thanks for reading!
