Technology truly has the power to move society forward, and we're seeing it adopted for good across the world. When it comes to healthcare, for example, AI is being used to process medical research in a way we've never seen before. In recent weeks, we've seen reports from UCLA that a new AI system can detect prostate cancer with the same level of accuracy as experienced radiologists.
We're also seeing AI transform the way that industries communicate. Staffordshire University has just launched Beacon for the education sector, a digital assistant which is on hand 24/7 to support students. The chatbot can respond to a wide range of requests, from timetable enquiries through to advice on council tax exemption. This is a critical move in ensuring support for students, and in turn it frees teachers to focus on education.
But for all of the good around the progress of digital communications, there is also the bad. In 2016, Microsoft launched Tay, a Twitter chatbot that the company described as an experiment in 'conversational understanding'. The project turned sour when people started tweeting the bot with misogynistic and racist remarks. Tay repeated those remarks back to users, in a public forum.
Going the right way about digital engagement is an ongoing challenge. For example, gender diversity in virtual assistant identities is poor, with most defined as female through name and voice. In fact, an analysis of over 300 AI assistants, both real and fictional, by AI software developer Integrate found that 67% were female. These assistants are infiltrating our lives at a significant pace, whether we're asking for an update on the weather, checking the exchange rate or asking for a song to be played. And by creating AI assistants as women, we risk reinforcing dangerous and outdated stereotypes.
We are, however, seeing seeds of change. Earlier this year, we saw the launch of 'Q', the first genderless voice assistant. The voice was created using male, female and non-binary voice recordings, which were then modulated. Launched at SXSW, Q is an awareness-building creative initiative, produced by Virtue Nordic, an outpost of Vice Media's global ad agency, in collaboration with Copenhagen Pride. The project calls out Apple, Amazon, Google, Microsoft and other big tech players, demanding that 'technology should recognise us all.'
So, with a shift towards machine learning and AI, how can today’s brands ensure they’re adopting an inclusive approach—and avoid making some of the mistakes we’ve already seen in the market?
Create a policy from the inside out
Brands have already started to do a great job of implementing diversity and inclusion policies for their own employees, and AI is playing a significant role in this space. HR departments are adopting machine learning algorithms to choose interviewer panels that reduce individual bias. AI-enabled tools can suggest a fair salary range to prevent gender and ethnic inequalities across employees. Smart decision-making powered by AI has the power to remove bias from decisions and transform the cultures of organisations. But when it comes to outward communications and speaking to customers, brands tend to fall down.
Digital assistants and chatbots are on the rise, and we can often find ourselves talking to an IVR (Interactive Voice Response) system when communicating by phone with a retailer, telecoms provider or utility company. There is a clear need for improvement here: brands should ensure they are representing the full range of demographics they serve, and the technology itself needs to recognise and understand individuals from different ethnic minorities and social backgrounds. It's important that brands keep the conversation going across the business and don't let unconscious bias slip in.
Link technology departments with policy drivers
As a technology provider for customer engagement solutions, we often find ourselves talking to data analysts within our customers’ businesses. They are typically focused on number crunching and making sure that a technology is implemented—basically, the task in hand—rather than whether that technology is diverse, inclusive and ultimately doing the right thing. By simply connecting the technology department with the operations teams, brands can make sure that policies are acknowledged, understood and adhered to. Technology vendors have a role to play—and can support here, by encouraging this link.
Don’t forget the human touch
Data is crucial for understanding your customers. The level of knowledge you can unearth via digital assistants is vast, with quantitative data playing an important role. But quantitative data isn't enough. Qualitative data is also crucial and should be gathered from person to person. The exceptional examples of high-quality customer service involve businesses reaching out to customers directly and having conversations. Given the level of sophistication that widely adopted AI and machine learning have reached today, those direct customer conversations will need to continue for many years to come.
Test, test and test again
Run a proof of concept, trial the tech, test, and test again. Even after launch, there remains a need to test continually. Implementing the technology and then forgetting about it is a fast route to disaster. Some of the most damaging examples of bias in AI could have been avoided with a more stringent testing process, and uncovered simply by looking closely at results rather than trusting that the job is being done right.
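To make this concrete: one simple post-launch check is to compare outcome rates across demographic groups in the system's own logs. The sketch below is illustrative only; the log format, group labels and 0.1 threshold are hypothetical assumptions, not a prescription.

```python
from collections import defaultdict

def outcome_rates(records):
    """Compute the positive-outcome rate per group from (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = outcome_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (demographic group, 1 = positive outcome)
log = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
       ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

gap = parity_gap(log)
print(f"parity gap: {gap:.2f}")
if gap > 0.1:  # threshold is a hypothetical choice for this sketch
    print("flag for manual review")
```

A recurring check like this is exactly the kind of "looking at results more closely" that would have caught several well-known failures before customers did.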
It's clear that AI has tremendous potential to improve our lives and transform businesses, but it is vital that we use it responsibly. Businesses today need to stop thinking about communication as a channel and start treating it as a journey. By understanding an individual's needs and making sure that their approach is always inclusive, and consistently driven through the customer experience, brands will be on track for success.
Find out more about Digital Marketing World Forum (#DMWF) Europe, London, North America, and Singapore.