The Virtual Chair

The last decade has seen a technology industry in overdrive, a furious wave of innovations coming one after the other, with more promised on the horizon. The blending of cloud computing, IoT, network virtualization and mobile device functionality has thrown the outline of cutting-edge technology into a nebulous space. However, there’s a clear consensus that machine learning, as a subset of AI, is the most fundamental new frontier on which the next generation of enterprise and consumer technologies will be built.

All of the methods and tools that make up the machine learning industry, the algorithms and training sets, simulated annealing and equilibria, vectors and matrices, can seem dull and overly mathematical, too esoteric to really reveal what ML has to offer. So it’s instructive to take a different look at what may be coming our way sooner rather than later: a comprehensive shift in the way we view technology as a whole, and in how we will embrace our digital peers as they start to develop.

To that end, consider the virtual chair.

In looking at the emergence of new software capabilities over the personal computer era, and even further back to the days of punch cards, it can be helpful to focus on a particular object and its treatment by each of the new principles we have created. There are reasons why “object-oriented” design became so popular with the advent of a new group of programming languages, and stayed in our lexicon afterward. One primary reason is that an object is an excellent way to comprehend the digital world that we interact with and, increasingly, live in. Since this kind of comprehension is becoming necessary, the “object” might help us make new technologies more egalitarian, so that they better serve a wider range of users.

The chair is, in some ways, an arbitrary example. It’s one of many such objects that might be printed on flash cards, installed in virtual spaces, or, today, included in training sets. It is, overall, one of a practically infinite number of “classes” created ex nihilo in the digital world.

At the beginning of the information age, the chair was only a sketch, perhaps a label applied to a linear program written in numbered lines of code. Certainly, mass production facilities began to label chairs digitally as units of production, and might even have stored some rudimentary data on the properties of office furniture.

Since information was limited to what could be produced on the early curve of Moore’s law, the early chair was likely just a collection of text characters, or bits intended to be drawn on a monochrome screen. The chair would only manifestly become “virtual” if some programmer had the time and the determination to hand-code its dimensions and other data into a mainframe or, later, a workstation, as in Ellen Ullman’s legendary novel “The Bug,” where an embattled coder puts together virtual “organisms” endowed with certain properties and allows them to “grow” and evolve in a world of code. That example was ahead of its time, although in retrospect, it didn’t take too many years to move from a BASIC world to the age of “big data.”

Ten years after the millennial change, that’s where we found ourselves: enamored with “big data” and awed at the terabytes that could be rendered to create real, vibrant virtual objects with real heft, things you could “hold in a (digital) hand” and examine for real insight. In reality, the change happened slowly, by tiny increments, as Moore’s law progressed and programming methods followed. By gradations, as big data fleshed out what could be held in the average container, the virtual chair became a real work of art, with exact dimensions, color, texture and other properties defined and manipulated in the intricate logic gate halls of fast processors.

But although big data offered the complexity to “make digital things real,” it was also still purely deterministic. Through most of its tenure, big data has been applied through strictly logical input and output, and the castles it builds rise entirely at the whim of the engineer who writes the code.

Now, with machine learning, there is a fundamental break in this principle: for the first time, technologies can work from a mix of probabilistic and deterministic inputs. Computers can produce unpredictable outcomes! The ability of computers to “learn” is the ability to take in data, filter it through probabilistic layers, and produce a model of something that no human maker planned out. In other words, going back to the virtual chair: while big data programming allowed us to define a piece of furniture to complex specifications in a virtual space, machine learning essentially builds the chair for us, and knows before we do what the finished product is going to look like.
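The contrast can be sketched in a few lines of Python. This is a toy illustration only; every function name, property and number below is invented for the example:

```python
import random

# Deterministic, "big data" style: every property of the chair is
# specified in advance by the programmer.
def build_chair_deterministic():
    return {"legs": 4, "seat_height_cm": 45.0, "color": "oak"}

# Probabilistic, "machine learning" style: properties are estimated from
# example chairs (a toy stand-in for a training set), then sampled with
# noise, so the finished product is not fully planned in advance.
TRAINING_CHAIRS = [44.0, 45.5, 46.0, 43.5, 45.0]  # observed seat heights

def build_chair_learned(rng=random):
    mean = sum(TRAINING_CHAIRS) / len(TRAINING_CHAIRS)
    spread = max(TRAINING_CHAIRS) - min(TRAINING_CHAIRS)
    return {
        "legs": 4,  # still a deterministic input
        "seat_height_cm": rng.gauss(mean, spread / 4),  # probabilistic input
        "color": rng.choice(["oak", "walnut", "steel"]),
    }
```

The first function always returns the same chair; the second returns a slightly different chair every time, shaped by its training data rather than by an explicit specification.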

But before there’s too much fanfare over this benchmark of achievement, it makes sense to ask what rules will govern the mix of deterministic and probabilistic inputs that we will be using to “build virtual chairs.”

Think of a poker player, such as John Malkovich’s character “Teddy KGB” in the very human film “Rounders,” sitting at the table, examining another player for ‘tells.’ Deterministic programming tells us what happens if the other player makes eye contact: “IF (eye contact) THEN (x).” Big data analytics, its more sophisticated form, tells us how many times eye contact has been made in the past, forecasting outcomes. Machine learning purports to tell us whether there will be eye contact, according to training data, and what that means. But as a model, how the algorithms interpret the training data depends on how we treat the weighted inputs: for example, on the difference between guessing at human intentions and guessing at physical outcomes that seem random. Will there be eye contact? Will a player move a hand? Machine learning systems progress beyond tabulating results toward complex modeling that, again, depends on its parameters, although a real and growing element of self-determination and automation is at work. We have to know the rules, we have to know how to apply them, and we have to know what they mean.
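The distinction between the rule and the model can be sketched in Python, assuming a hypothetical record of past hands (all of the data here is invented):

```python
# The deterministic rule: "IF (eye contact) THEN (x)".
def rule_based(eye_contact: bool) -> str:
    return "bluff_suspected" if eye_contact else "no_read"

# Hypothetical hand records: (eye_contact, was_bluff) pairs.
HISTORY = [(True, True), (True, False), (False, False),
           (True, True), (False, True)]

def learned_probability(eye_contact: bool) -> float:
    """Estimate P(bluff | eye_contact) from past hands.

    How this input is weighted against others is exactly the kind of
    parameter choice the modeler has to make and justify.
    """
    matching = [bluff for ec, bluff in HISTORY if ec == eye_contact]
    return sum(matching) / len(matching) if matching else 0.5
```

The rule always fires the same way; the learned estimate shifts as the history grows, which is the probabilistic element the passage above describes.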

Machine learning will build us the virtual chair, but what else will it build?

What will our chair look like when it is delivered to us, and what will its design depend on?

One of the best clues is the common use of image processing algorithms to translate visual inputs into logic. ML programs “look” at something and identify it – that’s one of the bellwethers of their nascent intelligence, and a big insight into how the learning will work. If programs can be made to process images according to logic, many inherent rules are built into that process, and the contours of that logic become a little more knowable.
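A drastically simplified sketch of “looking and identifying” is a nearest-centroid rule over tiny 3x3 binary images. The silhouettes and labels below are invented for illustration and stand in for what real systems learn from millions of photographs:

```python
# Crude 3x3 silhouettes, flattened row by row (hypothetical "training" prototypes).
CHAIR = [1, 0, 0,
         1, 1, 1,
         1, 0, 1]
TABLE = [1, 1, 1,
         1, 0, 1,
         1, 0, 1]

def distance(a, b):
    # Squared pixel-by-pixel difference between two flattened images.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image):
    # Label the image with whichever prototype it sits closest to.
    labels = {"chair": CHAIR, "table": TABLE}
    return min(labels, key=lambda name: distance(image, labels[name]))
```

Even with one pixel of noise, the rule still finds the nearest prototype, which is the inherent, knowable logic the paragraph above points to: the classifier’s behavior is fully determined by its distance measure and its stored examples.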

To use the poker player analogy, the outcomes will be goal-oriented. Perhaps an ML program will take in images of the opposing players, parse them for meaning, and deliver results that reach a more solid “Turing point” of AI-completeness, where we see the program as a living, breathing player (especially if it is paired with realistic, human-styled robotics).

In the end, the bulk of what we will enjoy based on ML engines will be simply a reflection of ourselves, our tastes and behaviors and tendencies, filtered and modeled and fed back to us, chatbots that use our responses to build their own, parroting our impulses. But the significance of moving beyond pure determinism in the digital world shouldn’t be lost on us – as technology obtains the power to build, that’s one more giant capability that humans surrender as exclusively their own, moving us closer to a time when digital entities become, if not our equals, a much more confounding facsimile.