Want to Compete and Innovate in Business? Hire a Receptionist

With major recent advances in all sorts of analytical and cognitive technologies, business seems to be moving decisively in the direction of automation. However, this list to starboard, which has been developing for a number of years, has left businesses profoundly disconnected from their audiences. It may be that the only solution to this problem is to return to a more human-centered approach to business communications.

Nearly anyone who contacts a business has a problem. Or, to state it another way, the customer needs help. Customers want to engage on a human level, on a social level, in the context of human relationships. Increasingly, though, what they hear and experience in first-tier communication is a bewildering and unwieldy interface: an aggravating series of menu options, machines that talk far more slowly than a person would in conversation, unclear directories and unclear choices. Some of the worst systems also have poor comprehension, so callers are forced to hear the same error prompts restated again and again.

All of this is decisively negative for the customer experience. There is a reason executives and others have been pounding the drum about customer experience and suggesting that automation will soon innovate well beyond what is currently offered: customer experience is key to business, and a lot of people seem to understand that.

What some don’t understand is that even though artificial intelligence and machine learning are progressing rapidly, these technologies are still more or less in their infancy and have specific limitations in their capabilities. These limitations apply, in different ways, to self-driving vehicles, generative and discriminative engines in machine learning, chatbots and other artificial intelligence agents, and, last but not least, interactive voice response (IVR) systems.

Ask a human receptionist why their IVR is so bad, and you’re not likely to be understood. They may not know the acronym or even the term, or they might play dumb. Part of the irony inherent in these business systems is that the humans at the very end of the automation chain don’t understand how aggravating that chain is for customers. This compounds the problem.

Take the example of mental health services. In corporate mental health services systems, providers will often put their scheduling and appointment setting functionality into an automated IVR system. The problem is that when a customer needs assistance from a mental health provider, he or she is unlikely to be in the frame of mind to navigate one of these aggravating and unwieldy systems. In other words, the automation does not serve the customer.

This is particularly salient in the example of mental health services, because what should be a human-based communications model for a humane service setup has been largely replaced by a corporate, automated model that is inherently incapable of handling the demands placed on it. But the problem isn’t restricted to mental health services. It applies just as readily to the customer who has purchased a sweater with holes in it, the individual whose vehicle has broken down on the freeway, or the business buyer who needs marketing services. In any of these cases, the likelihood is that the customer experience will be degraded and the customer poorly served by today’s automated technologies.

Again, this is not a reflection on the rapid progress of the technologies themselves. Deep Blue beat Kasparov, Watson beat human Jeopardy champions, and some chatbots have reportedly passed limited versions of the Turing test, but none of this solves the customer’s problem: he or she needs to be served in the context of a social interaction.

This brings us to a somewhat more technical analysis of the major shortcomings in current machine learning and artificial intelligence models.

Although engineers have learned to simulate aspects of the human brain to an impressive extent, deficiencies remain in the specific classes of functions that make up human behavior and activity. Specifically, although these technologies can turn probabilistic inputs into complex results, they are not especially adept at the sorts of contextual transactions that make up our everyday lives. As a concrete example, a machine learning program may be good at predicting whether a human actor will take a step, direct eye movement in a specific way, move a hand, or choose a specific button from an array of controls. What the technology is not good at is understanding why someone makes these or any other choices.
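To make this concrete, here is a minimal Python sketch, with invented feature names and synthetic data, of a model that learns to predict what a person will do from observable signals. Nothing in it represents why they do it.

```python
# Minimal sketch: a model can learn to predict *what* a person will do
# from observable features, but nothing in it represents *why*.
# Feature names and data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Observable signals: gaze angle toward a button, hand distance to it,
# and time spent hovering. Label: did the person press the button?
n = 500
gaze_angle = rng.uniform(0, 90, n)       # degrees away from the button
hand_distance = rng.uniform(0, 50, n)    # centimeters
hover_time = rng.uniform(0, 3, n)        # seconds

# Synthetic ground truth: presses correlate with looking at and reaching for it.
pressed = ((gaze_angle < 30) & (hand_distance < 20) & (hover_time > 1)).astype(int)

X = np.column_stack([gaze_angle, hand_distance, hover_time])
model = LogisticRegression().fit(X, pressed)

# The model answers "will this person press the button?" reasonably well...
print(model.predict([[10.0, 5.0, 2.0]]))  # likely [1]

# ...but it has no concept of intent: whether the press is a purchase,
# a mistake, or a cry for help is simply not in the feature space.
```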

Another limitation has to do with what some experts might call the “politeness principle,” which arises from a disequilibrium in rational actors’ choices. In game theory, most social games have an applicable Nash equilibrium that can be modeled fairly easily. However, some games are structured so that a Nash equilibrium is not practical to reach in play, or, more specifically, so that the equilibrium only holds in a fixed set of game scenarios.

In a lecture on game theory, Professor Padraic Bartlett explains this in terms of a “social game” in which two individuals walk down a hallway toward each other (in a hallway that offers only two path options, left or right). He identifies (left, left) and (right, right) as the two acceptable Nash equilibria, stating:

“These are the only two equilibria: if we were in either of the mixed states, both players would want to switch, (thus leading to yet another conflict, and the resulting awkwardness).”
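To make the hallway game concrete, here is a small Python sketch, with illustrative payoffs of my own choosing, that enumerates the pure-strategy Nash equilibria of this two-player coordination game and recovers exactly the (left, left) and (right, right) profiles Bartlett names.

```python
# Minimal sketch of the hallway "social game" as a 2x2 coordination game.
# Payoffs are illustrative: 1 if the walkers pass cleanly, 0 if they collide.
ACTIONS = ["left", "right"]

# payoff[(a, b)] = (payoff to player 1, payoff to player 2)
payoff = {
    ("left", "left"):   (1, 1),
    ("right", "right"): (1, 1),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
}

def is_nash(a, b):
    """A profile (a, b) is a pure Nash equilibrium if neither player can
    do strictly better by unilaterally switching their own action."""
    p1, p2 = payoff[(a, b)]
    best_for_1 = all(payoff[(alt, b)][0] <= p1 for alt in ACTIONS)
    best_for_2 = all(payoff[(a, alt)][1] <= p2 for alt in ACTIONS)
    return best_for_1 and best_for_2

equilibria = [(a, b) for a in ACTIONS for b in ACTIONS if is_nash(a, b)]
print(equilibria)  # [('left', 'left'), ('right', 'right')]
```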

Here we see the challenge of applying a Nash equilibrium amid complex social factors. The rational actors have de facto choices, and when those choices are made clearly enough, the equilibrium results. Each player knows what is best. But when certain outlier events create uncertainty (maybe one person steps hesitantly, or the other misreads a visual cue), the rule fails, and the resulting social “program” is thrown into an infinite loop.

It’s easy to confuse these kinds of “glitches” with scenarios in which the equilibrium itself is in dispute, such as the prisoner’s dilemma, where each player’s individually rational move is to defect even though mutual cooperation would produce the better joint outcome. In reality, as we can see with the politeness principle, a Nash equilibrium does exist and can be implemented. It’s only in the glitchy application of the rule that the equilibrium proves insufficient. (In the established lexicon, this is the “trembling hand” problem.)

In other words, if two rational actors land on mismatched binary choices (or “uncomplementary” choices, as it were), they are likely to experience the kind of recursive decision-making that throws a program into an infinite loop without exterior human guidance. Unlike two individuals walking toward each other in a hallway, these technologies do not have the social ability to simply make a choice and move on, and to a great extent they will not succeed in navigating the problem on their own. Here, politeness is a learned skill that is largely unquantifiable, and it presents machine learning with a significant modeling problem.
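As a toy illustration of that failure mode, the following sketch assumes both agents follow the same deterministic “switch sides when a collision looks likely” rule. The rule and values are invented; the non-convergence it produces is the point.

```python
# Toy sketch of the "infinite loop" failure mode: two agents follow the same
# deterministic rule ("if we are about to collide, switch sides"), so every
# conflict produces another conflict. Purely illustrative.
def step(side_a, side_b):
    """Each agent switches sides whenever the two sides conflict."""
    if side_a != side_b:            # mismatched sides mean a collision course
        side_a = "left" if side_a == "right" else "right"
        side_b = "left" if side_b == "right" else "right"
    return side_a, side_b

a, b = "left", "right"              # a misread cue leaves them mismatched
for tick in range(6):
    print(tick, a, b)
    if a == b:
        print("passed cleanly")
        break
    a, b = step(a, b)
else:
    print("still dodging back and forth; no convergence without a tie-breaker")
```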

Yet another specific limitation relates to the use of highly fitted, or possibly overfitted, engines that actually approach some of the human qualities that produce indecision. In other words, machines that adopt some of our behaviors may also inherit difficulties related to some of our other behaviors. An article posted on KDnuggets last year discusses deep stubborn networks and how they have been engineered with greater complexity: a generative engine and a discriminative engine work at odds with each other to produce collaborative results. This starts to approach some of the higher-level activity in the human mind that cannot be modeled through straightforward, linear programming. As the writer describes, the competition between the generative and discriminative engines produces a quality that can almost be described as social: a malaise or conflict or, as the author puts it, “anxiety” that is an essential part of the human experience.
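For readers who want a concrete picture of that tension, here is a minimal, generic adversarial-training sketch in PyTorch. It is not the deep stubborn network from the article, and the data and layer sizes are invented; it simply shows a generative and a discriminative network being trained against each other.

```python
# Minimal sketch of a generative and a discriminative network "at odds":
# the discriminator tries to tell real samples from generated ones, and the
# generator tries to fool it. A generic GAN-style loop with illustrative data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise -> fake 1-D "samples"; Discriminator: sample -> real/fake score.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(200):
    # "Real" data: samples from a normal distribution centered at 3.0.
    real = torch.randn(64, 1) + 3.0
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Neither network ever "settles": each one's improvement is the other's loss,
# which is the built-in tension the article likens to anxiety.
print(float(d_loss), float(g_loss))
```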

Applying words like “anxiety” and “doubt” to machine learning models is inherently a bit of a contradiction. It shows how much progress we have made in constructing machines that can think like us — but it also shows why those machines are not fully or even remotely functional in social roles. They cannot deal with the indecision and anxiety that are produced by their mechanics — and so they cannot serve customers who need this higher-level functionality. This is easy to understand in an elementary sense — we know that although IVR systems can tell people what hours the shop is open, or give people directions to the location, they can’t help customers with a broken toilet or guide them through how to negotiate a better rate on services. However, we don’t know exactly why this is unless we scratch the surface of these cognitive models and start learning about what machine learning can and can’t do.
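One way to see the scope problem is to remember that a scripted IVR is, at bottom, a lookup over a fixed menu tree. The menu entries and keywords in this sketch are invented for illustration; the point is that anything outside the tree falls through to a deflection.

```python
# Minimal sketch of a scripted IVR as a fixed lookup: anything the designers
# anticipated gets a canned answer, and everything else falls through.
# Menu entries and keywords are invented for illustration.
MENU = {
    "hours":      "We are open 9am to 5pm, Monday through Friday.",
    "directions": "We are located at 123 Main Street, two blocks from the station.",
    "billing":    "Your current balance is available after entering your account number.",
}

def handle(utterance: str) -> str:
    text = utterance.lower()
    for keyword, response in MENU.items():
        if keyword in text:
            return response
    # Open-ended problems ("my toilet is broken", "can I negotiate a better
    # rate?") have no branch in the tree, so the system can only deflect.
    return "Sorry, I didn't understand. Please listen to the following options again."

print(handle("What are your hours?"))
print(handle("My toilet is broken and water is everywhere!"))
```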

Faced with this choice, many companies will stubbornly continue to focus on the possibilities of automation. They will rely on the prestige of new technologies and their ability to dazzle the general public. They will throw their eggs into the basket of expanding what IVR can do (many of them led there by profit-seeking vendors). Other companies, arguably the smarter ones, will simply employ humans to direct business communications in ways that genuinely enhance the customer experience.
