
Three Big Myths About AI

July 16 has been designated as “Artificial Intelligence Appreciation Day,” a day to reflect on the positives that AI has delivered to humankind.

Lee Se-dol probably won’t be celebrating.

Lee came to fame as a (nearly) undefeatable master of the ancient game of Go, a two-player contest vastly more complex than chess; the board has more possible configurations than there are atoms in the universe. In 2016, Lee went toe-to-toe against AlphaGo®, a program that came out of Google's DeepMind® AI project, and he lost the match four games to one.

Three years later, the rise of AlphaGo finally compelled Lee to retire from competitive play.

Go is a game of deep intuition, and for decades it was assumed that no machine would ever best its human masters. The version of AlphaGo that faced Lee studied records of expert human games and then sharpened itself through self-play; the speed at which it could absorb the board's nuances and intricacies was startling. Its successor, the AlphaGo Zero™ program, went further and learned the game entirely from scratch, playing against itself millions upon millions of times. After just three days of that self-teaching, AlphaGo Zero defeated the version that had beaten Lee by 100 games to none.
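AlphaGo's real training pipeline pairs deep neural networks with tree search and is far beyond a blog post, but the core idea of improving purely through self-play can be sketched on a toy game. Below is a rough illustration in Python: tabular Q-learning teaching itself Nim, where players alternately take one to three stones and whoever takes the last stone wins. Every name and number here is invented for illustration; this is a simplification of the principle, not DeepMind's method.

import random
from collections import defaultdict

# Toy self-play learner: the agent improves only by playing against itself,
# with no human game records involved. Q[(stones_left, move)] estimates how
# good a move is for the player about to move.
Q = defaultdict(float)
ALPHA, EPSILON, GAMES = 0.1, 0.2, 50_000

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick_move(stones, explore=True):
    moves = legal_moves(stones)
    if explore and random.random() < EPSILON:
        return random.choice(moves)            # occasional exploration
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(GAMES):
    stones, history = 15, []                   # (state, move) per turn
    while stones > 0:
        move = pick_move(stones)
        history.append((stones, move))
        stones -= move
    # Whoever made the last move won; walk back through the game,
    # flipping the reward's sign each turn for the alternating players.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

print([pick_move(s, explore=False) for s in range(1, 16)])

Run long enough, the learned policy tends toward the classic Nim strategy of leaving the opponent a multiple of four stones, knowledge no human ever typed in.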

What was genuinely startling in the 2016 match between Lee and Google's DeepMind program was a move in the second game, move number 37, which was initially taken for a blunder on the computer's part. It wasn't. The machine had settled on a line of play so unconventional that commentators assumed no strong human player would ever have chosen it, and that seemingly illogical move ultimately won the game for AlphaGo.

But artificial intelligence is valuable for much more than winning ancient games of strategy. AI's capacity for deep learning, and its ability to "teach itself," brings us to Myth Number One:

Machine learning is NOT AI.

The terms are squishy and often (incorrectly) used interchangeably. While machine learning does use historical data to execute a function, it's not trying to mimic human behavior or solve complex problems. For years, smart thermostats have been described as devices that employ AI to learn the climate settings humans prefer in buildings. All these thermostats are doing is imitating HVAC patterns that humans have previously set so the device can adjust temps automatically throughout the day. The thermostat has no concept of "hot" or "cold" per se, nor can it truly differentiate between users or perform the myriad other functions that might make it seem more "human-like." Machine learning is a subset of AI, but the two terms have very different definitions. (A true AI application can be found in the 1 Beyond cameras and software Crestron recently acquired: an intelligent video package that tracks presenters in videoconferences, crops the frame, performs facial-recognition functions, and more.)
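To make the distinction concrete, here is a minimal sketch in Python of what that kind of "learning" thermostat is actually doing: recording the setpoints its users chose in the past and replaying their average. The class and method names are hypothetical, but notice that nothing in it understands hot or cold; it's pure imitation.

import statistics
from collections import defaultdict

# A "learning" thermostat in miniature: it memorizes what humans did.
class PatternThermostat:
    def __init__(self):
        self.history = defaultdict(list)   # hour of day -> past setpoints (°F)

    def record_manual_setting(self, hour, temp_f):
        """Called whenever a human adjusts the temperature."""
        self.history[hour].append(temp_f)

    def target_for(self, hour):
        """Replay the average of past human choices; fall back to a default."""
        past = self.history[hour]
        return statistics.mean(past) if past else 70.0

stat = PatternThermostat()
for day in range(5):                       # a week of someone waking up cold
    stat.record_manual_setting(7, 72)
    stat.record_manual_setting(22, 66)
print(stat.target_for(7), stat.target_for(22))   # 72.0 and 66.0: imitation, not intelligence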

AI ISN’T about to achieve something resembling human intelligence.

While the machine that beat Lee Se-dol at Go exhibited an incredible ability to play the game, and to recognize its human opponent's patterns at blinding speed, it took no joy in that victory. It didn't celebrate (nor did it feel bad for Lee), and it didn't treat the matches as a learning experience it could apply elsewhere. "What, if anything, can I learn playing Go that will help me play games such as chess?" is a question AlphaGo won't be asking anytime soon. That kind of cross-domain reasoning is the province of what's called "artificial general intelligence," something researchers have been pursuing since the 1950s. The programs that can, say, mimic human speech and respond to a variety of questions have significant gaps in their ability to process queries a human being would find immediately nonsensical. As this piece from MIT points out, if you ask one such program who the U.S. President was when Columbus arrived in the Americas, it will attempt to spit out a name instead of realizing the question is ridiculous on its face.

AI will NOT one day replace all of our jobs.

The Big Scary Headlines that began to appear (in the middle of COVID, no less) claimed that AI was poised to take a massive number of jobs away from human beings. The number fluctuated from prognosticator to prognosticator (it's hard to count things that haven't happened yet), but by any measure it was big: an estimate of around 50% of all human occupations has popped up more than once.

That's not to say no jobs will be lost, but millions of jobs will be made easier (or even created) by the proper implementation of AI. The daunting task of deploying cybersecurity measures for a global enterprise, for example, is made vastly easier when an AI program is rapidly scouring endless lines of code to find a threat. Even then, these systems need a human to weed out false positives, among other checks.
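That division of labor is easy to picture in code. The sketch below (in Python, with an invented scoring rule and invented field names) has the machine score a flood of events and flag the outliers, while anything flagged lands in a human review queue instead of triggering action on its own.

from dataclasses import dataclass

# Machine flags, human decides: a toy triage loop for security events.
@dataclass
class Event:
    source: str
    failed_logins: int
    bytes_out: int

def anomaly_score(e: Event) -> float:
    """Crude hand-rolled score; a real system would use a trained model."""
    return e.failed_logins * 0.5 + (e.bytes_out / 1_000_000) * 0.3

THRESHOLD = 2.0

def triage(events):
    # The program scours everything; only flagged items reach a person.
    return [e for e in events if anomaly_score(e) > THRESHOLD]

review_queue = triage([
    Event("10.0.0.5", failed_logins=1, bytes_out=20_000),     # routine
    Event("10.0.0.9", failed_logins=8, bytes_out=5_000_000),  # flagged
])
for e in review_queue:
    print(f"Needs human review (could be a false positive): {e.source}")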

Ultimately, though, there are things the human brain can do that machines cannot. You can identify a school bus lying on its side as a school bus in an unusual position, while an AI might read it as a snowplow. From embedded bias to basic common sense, AI makes mistakes so obvious that humans can often spot them instantly.

So, what jobs will AI create? We're not quite at the point where the machines themselves come up with the Next Big Idea. From modernization to maintenance, artificial intelligence can't grow, or be trusted, without human counterparts creating and nurturing the programs and machines, and keeping them from running amok.

Courtesy of Crestron
