Artificial intelligence (AI) may play the villain in your favourite science-fiction epic, but learning machines are becoming invaluable tools in nearly every industry. The technology looked very different at the beginning of the decade, though.

As the end of 2019 looms, sending us headlong into the next incarnation of the roaring ’20s, let’s host a decade challenge for artificial intelligence. How has this technology changed over the last 10 years?

Deep Blue, Polaris and Watson

The history of AI in the 2010s goes back quite a bit further than the beginning of the decade. In 1997, IBM's Deep Blue became the first computer to defeat a reigning world chess champion, winning a six-game match against Garry Kasparov. It won not because it understood chess better than a human, but because it could evaluate more than 200 million positions every second.
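
Deep Blue's strength came from brute-force game-tree search rather than from learning. The core idea is minimax: look ahead through possible moves and replies, assume the opponent answers with the move that is worst for you, and pick the branch with the best guaranteed outcome. A minimal sketch on a made-up two-move game tree (the leaf values here are purely illustrative):

```python
# Minimax search on a toy game tree. Chess engines like Deep Blue apply
# this idea at enormous scale, evaluating millions of positions per second.

def minimax(node, maximizing):
    """Return the best guaranteed score from `node`.
    A node is either a number (a leaf's static evaluation)
    or a list of child nodes (positions reachable in one move)."""
    if isinstance(node, (int, float)):  # leaf: just report its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: we pick a move, then the opponent picks the
# reply that is worst for us.
tree = [
    [3, 5],  # move A: opponent will answer with 3
    [2, 9],  # move B: opponent will answer with 2
    [4, 6],  # move C: opponent will answer with 4
]
print(minimax(tree, True))  # move C guarantees the best score: 4
```

Real engines add pruning and handcrafted evaluation functions on top of this, but the skeleton is the same exhaustive look-ahead.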

Then, in 2008, a poker-playing program named Polaris won the Man-Machine Poker Championship in Las Vegas against a team of professionals from Stoxpoker. The first match was declared a draw and the human team won the second, but Polaris dominated the third and fourth matches.

That brings us up to the last decade. In 2011, Watson, another AI from IBM, won the million-dollar first prize on Jeopardy! by defeating champions Ken Jennings and Brad Rutter.

At the beginning of the decade, engineers and programmers were focused on creating AI that could mimic human capabilities and even exceed them. Everyone was talking about the Turing Test and whether humanity would ever be capable of creating artificial life.

Artificial Assistants

The 2010s also saw the birth of what is now the most well-known AI assistant on the planet: Siri. Originally developed by Siri Inc., a spin-off of research at SRI International, the technology was acquired by Apple and integrated into the iPhone in 2011. Siri was the first of its kind, a machine-learning assistant that would learn and adapt to its user's individual preferences. It took Amazon until 2014 to release Alexa.

Microsoft's Cortana, named for the AI character that assists Master Chief in the "Halo" games, didn't make its public debut until 2014. It didn't reach the desktop until 2015, when Microsoft built it into the Windows 10 operating system.

Today, we can talk directly to Google Assistant, and even Samsung has its own proprietary assistant, Bixby, introduced in 2017. Virtual assistants might date back to Clippy, the little paperclip that fielded natural-language questions in Microsoft Office in the 1990s, but the technology has changed a lot since Siri was introduced. Back then, you were lucky if Siri could understand your question and offer the right answer. Today, it often seems like your virtual assistant knows what you're going to ask before you ever say its name.

Disaster Response and Image Analysis

2013 was an exciting year for AI for several reasons. First, there were the DARPA Robotics Challenge Trials. SCHAFT Inc.'s bipedal S-One robot won the trials, scoring 27 out of 32 points across eight disaster-response tasks. It navigated a simulated disaster zone, completing tasks that ranged from driving a vehicle and walking over debris to cutting through a wall and climbing a ladder. These are exactly the capabilities needed to assist in search and rescue, or to make a damaged building safe for emergency services to enter.

Today, we can use AI to accelerate disaster relief, helping search-and-rescue personnel find victims of both natural and human-made disasters in time to render assistance.

2013 also saw the birth of NEIL, the Never Ending Image Learner, at Carnegie Mellon University. The system analyzes photos from the internet 24 hours a day, seven days a week, with the goal of teaching itself common associations without any assistance from its human team. This is valuable because, while we don't always know how to teach these learning programs explicitly, properly designed systems are becoming quite skilled at teaching themselves.

Google and Microsoft took this kind of image recognition to the next level in 2015, when their entries in the ImageNet Large Scale Visual Recognition Challenge classified images into more than 1,000 categories with accuracy rivaling human annotators. The winning systems used deep artificial neural networks, layered structures loosely inspired by the way our brains work.
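
Under the hood, a neural network classifier is just layers of weighted sums passed through simple non-linear functions, with a softmax at the end turning raw scores into class probabilities. A minimal, pure-Python sketch of the forward pass (the weights below are arbitrary placeholders, not trained values, and real image networks are vastly deeper):

```python
import math

def relu(v):
    """Non-linearity: negative values become zero."""
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """One fully connected layer: weighted sums plus biases."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

def softmax(v):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(v)) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

# A toy two-layer classifier over 3 classes; weights are made up.
W1 = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, 0.0, -1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 1.0]]
b2 = [0.0, 0.0, 0.0]

x = [1.0, 2.0]                       # a tiny "image" of two pixel values
hidden = relu(dense(x, W1, b1))      # first layer plus non-linearity
probs = softmax(dense(hidden, W2, b2))
print(probs)                         # three probabilities, one per class
```

Training consists of nudging those weights, via gradient descent on millions of labeled images, until the highest probability lands on the correct category.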

A Warning and a Game of Go

While most of the industry is excited about the possibilities of artificial intelligence, as with most technologies we've invented as a species, there is always the potential for these discoveries to be exploited for harm. That risk prompted prominent figures like Stephen Hawking, Elon Musk and Apple co-founder Steve Wozniak to sign a 2015 open letter calling for a ban on the development and use of autonomous weapons. The letter was also signed by more than 3,000 researchers in the fields of AI and robotics.

At this point in the technology's history, AI had proven its superiority at chess, poker and even trivia, and it could now claim one more: the challenging strategy game of Go. Google DeepMind's AlphaGo beat two champions, Fan Hui in 2015 and Lee Sedol in 2016, conquering one of the most difficult strategy games in the world.

Advancing by Leaps and Bounds

The last three years of the decade have been the most exciting when it comes to advances in AI technology. Programs began mastering imperfect-information games like poker, where hidden cards mean raw search power alone isn't enough. An OpenAI program even defeated a professional player in a one-on-one Dota 2 (Defense of the Ancients 2) demonstration at a professional tournament in 2017.

AlphaGo continued its winning streak in early 2017, taking 60 straight online games against top professionals while playing under the pseudonym "Master."

By 2018, a natural language processing AI developed by Alibaba's Institute of Data Science and Technologies scored better than human participants on a reading-comprehension test from Stanford University.

Today, the applications for AI are more varied and exciting than ever. Everyone from the U.S. military to hospitals to companies deploying facial recognition relies on AI, because these systems keep improving as storage and processing power grow.

It’s hard to project where this technology might go in the coming years. However, when we look at how far it’s come in the last decade, we can’t help but be a little excited about the potential applications for AI we’ll see in the future.

About the author: Jenna Tsui is a Texan tech writer who loves to learn about new trends in data science, AI, and machine learning.

You can follow her on Twitter for more articles, or check out The Byte Beat for more tech insights.