The Communicator’s Guide to Responsible AI Chatbot Deployment: 7 Key Considerations
By Tanvi Singhal
Businesses are increasingly experimenting with AI: over 88% regularly use it in at least one business function. Salesforce estimates that 30% of customer service cases today are handled by AI and predicts this share will rise above 50% by 2027.
In the near future, AI chatbots will likely become a necessity rather than a novelty. For many customers, the chatbot will be the first point of interaction with the brand, serving as the company’s digital ambassador. It is therefore essential to recognize the critical role of marketing and communications in a successful deployment and to involve them as core team members from the very beginning.
Here are 7 things marketing and communications professionals need to pay attention to in order to deploy AI chatbots effectively and responsibly while mitigating reputational risk.
1. Design a Detailed Personality
Just as with other brand assets, chatbots should reflect the brand’s colours, voice, and tone to maintain coherence and build trust. Build the chatbot’s personality with detailed guidelines specifying the tone of its responses (such as serious, witty, creative, or precise), the words to use and those to filter out, the length of responses, and the use of emojis and slang; this ensures its responses align with the brand’s values and business. Skipping this step risks misalignment and even damage to the brand. For example, a cheerful and enthusiastic chatbot could work well for a vacation-planning agency but would be a disaster for a hospital’s appointment-booking chatbot. Financial businesses would want to avoid slang and emojis and keep responses precise.
2. Mitigate Anthropomorphism Risk
While we speak about assigning a personality to the bot for brand consistency, it is crucial to remember that the purpose of the chatbot is not to make users believe it is human, but to enable customers to use more natural, human language while seeking the information they need.
Humans naturally tend to anthropomorphize, that is, to assign human traits to non-human entities such as animals and even computers. It is therefore not enough for the business simply to avoid presenting the chatbot as human; proactive steps should be taken to mitigate the risk that users will perceive it as one, as this could raise serious ethical concerns and breach trust. These steps include:
- Avoid giving the chatbot a realistic human name or mascot that could mislead users. Depending on the purpose, some organizations might give their chatbots specific names while others might use clear identifiers like “Virtual Assist,” “Guide,” or “Support Bot.”
- Whenever a user starts a conversation, display a clear and conspicuous message that they are interacting with an AI-powered virtual assistant, so no one is misled into thinking they are speaking with a human.
- Some policy initiatives also recommend periodic reminders that the chatbot is non-human. These are a good idea when prolonged conversations stretching to several hours can be expected, especially conversations involving emotional topics or consequential decisions, or when the user is known to be a minor.
3. Rely on a Controlled Database
It is recommended to use Retrieval-Augmented Generation (RAG), in which the chatbot grounds its responses in content provided by the enterprise, such as the company’s own documentation, giving the business greater control over what the bot says. RAG significantly reduces “hallucinations”: the bot is less likely to fabricate information and can be configured to answer “I don’t know” when the required information is not available in the source database.
RAG is a safe framework for AI chatbots, but it requires consistent, regular updates to the source content. A practical approach could be to use the website as the source content, avoiding the need to update multiple sources and documents. Outdated information is worse than no information. Sharing outdated pricing, old policies, or discontinued services is a customer-service nightmare. Assign clear ownership for content maintenance and set regular review schedules.
4. Monitor and Optimize Continuously
AI chatbots need constant monitoring and refinement. Regularly evaluate the most frequently asked questions and make them more accessible, for instance by placing them as ready prompts in the chatbox. Feed custom responses for commonly asked questions that were not anticipated earlier into the ‘brain’ of the bot to increase its accuracy. Over time, the chatbot should become smarter and more helpful, but human supervision cannot be dispensed with.
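As one concrete illustration of this monitoring loop, the hypothetical sketch below mines chat logs for the most frequent questions the bot could not answer, so they can be promoted to ready prompts or added to its knowledge base. The log format and field names are assumptions, not a specific platform’s API.

```python
# Hypothetical monitoring sketch: surface the most common unanswered
# questions from chat logs for human review. Log structure is assumed.
from collections import Counter

chat_logs = [
    {"question": "what are your holiday hours", "answered": False},
    {"question": "what are your holiday hours", "answered": False},
    {"question": "how do i reset my password", "answered": True},
    {"question": "do you ship to canada", "answered": False},
]

def top_unanswered(logs, n=3):
    """Count questions the bot failed to answer, most frequent first."""
    counts = Counter(entry["question"] for entry in logs if not entry["answered"])
    return counts.most_common(n)

for question, count in top_unanswered(chat_logs):
    print(f"{count}x  {question}")
```

A review like this, run on a regular schedule, turns raw conversation logs into a prioritized to-do list for the content owners named in the previous section.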
5. Maintain Transparency and Trust
Disclose whether the conversation is being recorded and, if so, how the data will be used, for example, for training and improvement or for greater personalization. Give users the option to opt in or out of data collection and make those preferences easy to review and change. Work with legal and cybersecurity teams to ensure the chatbot’s data practices, such as how long conversations are stored, what data is collected, and who can access chat logs, comply with applicable regional laws and regulations and align with your organization’s internal policies, while avoiding the collection of unnecessary or sensitive information.
6. Build in Safeguards for Sensitive Industries
If you’re deploying a chatbot for an organization handling sensitive information, such as a bank or a healthcare provider, explicitly warn users against sharing confidential data like account details, Social Insurance Numbers, or passwords in the chat. By training users not to share sensitive information in chat interfaces, you also create awareness that benefits your customers even beyond your platform.
7. Respect the Human Preference
Businesses should acknowledge that many users still prefer or need to speak with a human agent and respect their preference. Offer the option to connect with a human representative clearly and up front. Making users fight their way through multiple bot interactions before they can speak to a person will only lead to frustration and annoyance. Forcing everyone through the bot first might seem efficient, but it can alienate users and impact customer satisfaction.
The bottom line
A customer-facing chatbot represents the organization in thousands of daily interactions. Therefore, deploying an AI chatbot is a decision that requires heavy communications involvement to protect the brand image and reputation.
The technology will evolve, but organizations must not lose sight of the essence of an AI chatbot, which is, to make information retrieval faster and more accessible, not to replace human agents. Keep humans in the loop, maintain transparency, respect privacy, and trust user judgment when they say they need to “speak to a person”.
Tanvi Singhal is a communications and brand strategist with experience spanning the public sector, energy, infrastructure, cultural, and education domains. Her current focus is the responsible and effective use of AR, VR, and AI to drive engagement and innovation. A lifelong learner with an MBA from MICA, India, and a Master of Digital Media from Toronto Metropolitan University, she is driven by curiosity and creativity.