Set your hybrid chatbot up for success by leveraging best practices.
March 26, 2025 | By Clinton Hughes & Sowrabh Sanath Kumar
In today's breakneck business environment, technology is essential to keeping processes running smoothly across an enterprise. Hybrid conversational agents can assist with more than customer service—retrieving information and even optimizing tasks, freeing up human agents for more complex problem-solving.
Google Cloud’s Dialogflow CX enables users to develop sophisticated hybrid chatbots that can handle both structured and unstructured conversational flows. Because of its versatility in handling varying types of interactions, the tool enables a hybrid agent approach that has grown increasingly common in enterprise settings. But what do leaders need to consider before implementing hybrid conversational agents in their organizations?
In the previous articles of this series, we laid out the capabilities of hybrid conversational agents and looked at the many features of Google Cloud’s Dialogflow CX platform. Now we’ll outline some best practices for implementing hybrid chatbots in your enterprise.
General Best Practices for Developing Hybrid Agents
As you begin to build hybrid conversational agents, start as simple as possible. Begin with a basic implementation and gradually add complexity rather than trying to build everything at once.
Regularly test your AI conversational agent throughout the development process to verify that it works as expected, and make sure to analyze its performance and adjust as needed.
Pay close attention to the user experience. Make sure the agent is easy to use, provides clear responses and handles errors gracefully.
Maintain documentation for your code and overall process and implement all best practices related to security of user data and infrastructure. Log and monitor all agent responses, track conversation performance, and identify areas for improvement.
Generative AI Integration Best Practices
The Generator feature in Dialogflow CX uses large language models (LLMs) and user-provided prompts to generate agent responses. Generators can be incredibly helpful for tasks like summarization and data transformation. Best practices include:
- Designing prompts that leverage the current conversation context and avoiding generic prompts that could lead to irrelevant responses
- Limiting the scope and using generators for specific scenarios
- Carefully designing the prompt to receive concise, clear and relevant responses
- Generating responses based on templates to ensure consistency
- Regularly reviewing generator responses and refining prompts as needed
- Implementing mechanisms to filter unsuitable responses generated by LLMs
- Planning for cases where the generator fails or produces unsatisfactory outputs
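Two of these practices—building a context-grounded, scoped prompt and filtering unsuitable responses—can be sketched in plain Python. This is a hypothetical illustration, not the Dialogflow API; the prompt wording and blocked-terms list are assumptions for demonstration.

```python
def build_generator_prompt(task: str, conversation_context: list) -> str:
    """Build a scoped prompt that grounds the LLM in the current conversation."""
    context = "\n".join(conversation_context[-5:])  # keep only recent turns
    return (
        "You are a concise customer service assistant.\n"
        f"Task: {task}\n"
        "Use ONLY the conversation below; if the answer is not there, say so.\n"
        f"Conversation:\n{context}\n"
        "Respond in no more than two sentences."
    )

# Illustrative terms the bot should never use on its own (assumed list)
BLOCKED_TERMS = {"guarantee", "legal advice"}

def filter_response(response: str) -> str:
    """Replace responses containing unsuitable terms with a safe handoff reply."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Let me connect you with a specialist who can help with that."
    return response
```

In production you would pair a filter like this with the model provider's own safety settings rather than rely on keyword matching alone.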
Dialogflow’s Generative Fallback feature is designed to generate helpful responses when the user’s input doesn’t match an intent in the conversational flow. It’s a way to handle errors and redirect users back to established responses and solutions if they go off script.
Tips for Using Generative Fallback:
- Use fallback messages to guide the user back to the main conversation flow. Clearly explain that the agent misunderstood and provide options for moving forward.
- If the system frequently reaches the fallback option, reevaluate your intents.
- Rather than providing a blunt “I don’t understand,” consider a layered approach. For example: “I’m having a bit of trouble. Could you please rephrase?”
- Use Generative Fallback sparingly to reduce the probability of unpredicted responses.
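The layered approach described above can be sketched as a simple escalation function. The messages and thresholds here are illustrative assumptions; in Dialogflow CX you would wire similar logic through no-match event handlers.

```python
def fallback_message(miss_count: int) -> str:
    """Return an escalating fallback reply based on consecutive no-matches."""
    if miss_count == 1:
        # First miss: gentle request to rephrase
        return "I'm having a bit of trouble. Could you please rephrase?"
    if miss_count == 2:
        # Second miss: guide the user back with concrete options
        return ("I still didn't catch that. I can help with orders, returns "
                "or billing. Which would you like?")
    # Repeated misses: hand off rather than loop forever
    return "Let me connect you with a human agent who can help."
```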
Data Store Best Practices
Data stores are the locations from which your hybrid agent pulls information to answer user queries. This can include cloud storage and website content. Upload only relevant, up-to-date content to your agent’s data stores, and clean and preprocess data before importing it.
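As a sketch of that cleaning step, the hypothetical function below normalizes whitespace and drops common web boilerplate before a document is imported. The boilerplate markers are assumptions for illustration, not a Dialogflow feature.

```python
import re

def preprocess_document(text: str) -> str:
    """Clean a document before importing it into a data store:
    collapse whitespace, skip boilerplate lines, drop empty lines."""
    lines = []
    for line in text.splitlines():
        line = re.sub(r"\s+", " ", line).strip()  # normalize whitespace
        # Skip common web boilerplate (assumed markers for this sketch)
        if not line or line.lower().startswith(("cookie policy", "subscribe")):
            continue
        lines.append(line)
    return "\n".join(lines)
```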
Playbook Best Practices
A playbook provides information to help the agent answer questions and execute tasks. When creating playbooks, make sure to do the following:
- Define specific tasks that your playbooks will handle. Limit the scope to make a task more manageable.
- Have the playbook perform input validation to ensure it’s processing the correct data.
- Implement error handling to manage errors like service outage and data loss.
- Keep records of each playbook execution and log errors for troubleshooting.
- Test the playbook with different user scenarios.
- Ensure proper authorization and access controls to the playbook.
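Several of these points—input validation, error handling and logging—can be sketched in plain Python. The refund playbook below is entirely hypothetical; the `ORD-` ID format, the $500 cap and the stubbed service call are assumptions for illustration.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("playbook")

class PlaybookError(Exception):
    """Raised when a playbook step cannot proceed."""

def run_refund_playbook(params: dict) -> str:
    """Validate inputs, then perform the (stubbed) refund step."""
    order_id = str(params.get("order_id", ""))
    amount = params.get("amount")
    # Input validation: make sure we're processing the correct data
    if not re.fullmatch(r"ORD-\d{6}", order_id):
        raise PlaybookError(f"Invalid order_id: {order_id!r}")
    if not isinstance(amount, (int, float)) or not 0 < amount <= 500:
        raise PlaybookError(f"Amount out of range: {amount!r}")
    # A real implementation would call the refund service here, wrapping
    # the call in try/except to handle outages gracefully
    log.info("Refund queued: %s", order_id)  # record each execution
    return f"Refund of ${amount:.2f} queued for {order_id}"
```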
Prompt Engineering Best Practices
Precise prompts guide the model to focus on the specific information needed, leading to more accurate and relevant answers. Clearly define what you want the agent to do, and tell the agent what role it should assume, such as a customer service agent. Make sure the model has all the context it needs to understand a task.
- Specify the desired format of the output, as well as the length and any data types you require.
- Include constraints to prevent undesirable responses and define acceptable ranges for any numerical inputs.
- Provide specific instructions for how the agent should respond if it does not know the answer to a question.
As you’re developing prompts for your hybrid conversational AI agent, experiment with different phrasing, contexts and formats. Use A/B testing to determine what yields the best results.
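The elements above—role, task, output format, constraints and unknown-answer behavior—can be assembled into a reusable prompt template. This is a sketch; the field names and wording are illustrative assumptions.

```python
def build_prompt(role: str, task: str, output_format: str,
                 constraints: list, fallback: str) -> str:
    """Assemble a structured prompt from the pieces a precise prompt needs."""
    lines = [
        f"You are {role}.",                      # role the agent assumes
        f"Task: {task}",                         # what you want it to do
        f"Output format: {output_format}",       # desired format and length
        "Constraints:",
        *[f"- {c}" for c in constraints],        # guard against bad responses
        f"If you do not know the answer, reply exactly: {fallback!r}",
    ]
    return "\n".join(lines)
```

A template like this also makes A/B testing easier: you can vary one field at a time and compare results.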
Security and Bias Mitigation
- Avoid biased prompts: Take steps to ensure that prompts do not unintentionally promote biased responses or reinforce stereotypes.
- Sanitize: If prompts incorporate external user input, sanitize that input before sending it to the model to prevent prompt injection attacks or malicious behavior.
- Comply with content policy: Be mindful of the content policies of the LLM service you are using. Avoid generating responses that violate these policies.
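A minimal sanitization sketch follows; the patterns and length cap are assumptions, and real deployments would layer checks like this with the model provider's safety tooling.

```python
import re

# Illustrative injection phrases to neutralize (assumed list)
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"system prompt",
]

def sanitize_user_input(text: str, max_len: int = 500) -> str:
    """Basic sanitization before interpolating user text into a prompt."""
    text = text[:max_len]           # cap length to limit abuse
    text = text.replace("```", "")  # strip code-fence markers
    for pat in INJECTION_PATTERNS:
        text = re.sub(pat, "[removed]", text, flags=re.IGNORECASE)
    return text.strip()
```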
Human-in-the-Loop (HITL) Best Practices
While AI offers untold potential, it can also lead to costly errors if left unchecked. The human-in-the-loop (HITL) approach establishes a constant feedback loop to enhance the accuracy and robustness of AI models. When establishing your hybrid agent, it’s important to put HITL practices in place to ensure accountability.
- Define clear criteria for when to escalate to a human. This can be based on intent confidence, user sentiment or the complexity of the request.
- When escalating, ensure that the human agent receives the full conversation history and relevant user parameters.
- Inform the user that they will be transferred to a human agent and provide an estimated wait time.
- Ensure the transition from bot to human is seamless. Avoid making the user repeat information.
- Train human agents on how to handle escalated conversations.
- Enable monitoring of conversations during HITL to understand the root cause of escalation and refine the bot’s behavior.
- Use the interaction with the human agent to collect feedback to improve the bot.
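The escalation criteria in the first bullet can be sketched as a simple predicate. The thresholds below are illustrative assumptions you would tune from your own conversation logs.

```python
def should_escalate(intent_confidence: float, sentiment: float,
                    fallback_count: int) -> bool:
    """Decide whether to hand off to a human agent.

    intent_confidence: 0.0-1.0 from the NLU match.
    sentiment: -1.0 (very negative) to 1.0 (very positive).
    fallback_count: consecutive no-match turns.
    All thresholds here are illustrative, not recommended values.
    """
    return (intent_confidence < 0.4   # bot is unsure what the user wants
            or sentiment < -0.5       # user is getting frustrated
            or fallback_count >= 3)   # bot has missed repeatedly
```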
Embrace the Power of Hybrid Conversational AI Agents
Hybrid deterministic and probabilistic conversational agents offer a powerful solution for enterprises seeking to balance efficiency with adaptability in their customer interactions.
By carefully considering the use case, required functionality and available resources, organizations can implement hybrid chatbots that significantly enhance their customer experience and operational efficiency.

Clinton Hughes
Customer Solutions Manager, Google Cloud
Clinton Hughes provides thought leadership and actionable direction for TEKsystems’ Google Cloud practice as a customer solution manager. A seasoned professional in cloud technology with over 15 years of business intelligence, financial planning and cloud transformation experience, Clinton has diverse roles in engineering, program management and solution management that showcase a well-rounded perspective and position him to lead high-performing teams in their pursuit to deliver quantifiable and repeatable value.

Sowrabh Sanath Kumar
Solution Architect
Sowrabh Sanath Kumar, a solution architect in TEKsystems’ AI practice, drives solution development and innovation. With 15 years of experience, he leverages his expertise in CCAI, Gen AI, and data analytics on Google Cloud Platform to craft compelling proposals, provide thought leadership and enhance TEKsystems’ service offerings.
Your Google Cloud Partner
Transformational technologies demand equally transformative partnerships. The world’s leading technology brands, including Google Cloud, work with us because of our scale, speed and quality—building upon their foundation to foster and share ideas that help our customers grow. With TEKsystems by your side, you can reap the benefits of best-in-class implementation, integration and support—making the most of your technology investments and powering next-gen innovation.
