The History and Evolution of Artificial Intelligence

1964–1967


ELIZA

A program that could carry on an apparently intelligent dialogue without a human scripting its responses in real time.

Significance

Demonstrated a machine’s ability to react to language stimuli and “think” on its own.

Implications

Showed scientists the power of machines to recognize and process language.

Although scientists have been interested in the concept of artificial intelligence since the early 1950s, AI remained just that—an idea—for at least another decade.[2]

As computers evolved over the following years, they became less expensive and were able to process increasing quantities of data. Machines that were once only accessible to universities and other well-funded institutions now had a “seat at the table” in more organizations. With increased exposure and the machines' ability to store more commands and "knowledge," experts began transforming AI from a concept into a reality in the 1960s.

One of the earliest and most impactful innovations was the interactive ELIZA program, created by Joseph Weizenbaum of the Massachusetts Institute of Technology (MIT). Through simple pattern recognition, an early form of Natural Language Processing (NLP), ELIZA could engage in a conversation, simulating a psychotherapist without requiring real-time programming to generate responses.
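
To make that mechanism concrete, here is a minimal sketch in Python of the kind of keyword-and-template matching ELIZA relied on. The rules, responses, and the `respond` function are invented for illustration; they are not Weizenbaum's original DOCTOR script.

```python
import re

# Hypothetical keyword-and-template rules, illustrating the mechanism.
# These are invented examples, NOT Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."  # used when no pattern matches


def respond(utterance: str) -> str:
    """Return a canned, pattern-based reply; no understanding is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK


print(respond("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```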

Experts and government entities worldwide were interested in AI's potential for transcribing and translating languages. To realize this goal, the U.S. government—namely, the Defense Advanced Research Projects Agency (DARPA)—funded AI research at several institutions in the 1970s. Other governments followed suit with similar initiatives.

While these initial efforts proved difficult and led to funding cuts (the "AI Winter"), the decade still saw major progress in Natural Language Processing (NLP), the interaction between computer programs and human language. Through NLP, machines could process multiple documents simultaneously and, by analyzing the language, deduce overarching themes and conclusions—a significant step toward broader machine intelligence.

1970


Natural Language Processing

A program to process and analyze large volumes of documentation to identify themes and overarching ideas.

Significance

The first program that required no human interaction to reach intelligent conclusions.

Implications

Precursor to more sophisticated AI, like machine learning and deep learning.

1997


Deep Blue

IBM’s chess-playing computer.

Significance

The first machine to beat a reigning world chess champion and grandmaster (Garry Kasparov of Russia).

Implications

Showed the world that a machine could react to real-time actions and mimic human decisions, even those of the most brilliant minds.

Machine Learning

Throughout the 1990s and the 2000s, the world saw the arrival of even more dynamic Artificial Intelligence through machine learning—a form of AI that uses data and algorithms “to imitate the way that humans learn, gradually improving accuracy.”[3]

When we use AI today, it is often in the form of machine learning (ML)—programs that can learn, thereby improving their performance from the data they ingest. Machine learning allows computers to get better on their own, without programmers giving them explicit instructions for every different function.

While a subset of AI, machine learning is limited. Instead of enabling a computer to function autonomously like a human, machine learning is about teaching a computer how to perform a specific task by recognizing patterns.[4]

For example, an ML program can learn to accurately tag a photo with “cat” or “dog” by analyzing thousands of labeled images. It cannot, however, apply that “knowledge” to write a poem about pets. Such content generation would require a deeper level of AI.
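
As a rough illustration of learning patterns from labeled examples, the sketch below trains a classifier with scikit-learn on a toy data set. The numeric features, labels, and values are all invented for the example; a real image tagger would learn from pixel data rather than two hand-picked measurements.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy stand-ins for image features: [weight_kg, ear_length_cm].
# Both the features and the labels are invented for illustration.
features = [
    [4.0, 7.5], [3.5, 8.0], [5.0, 7.0],        # cats
    [20.0, 12.0], [30.0, 14.0], [8.0, 10.0],   # dogs
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# The model "learns" by finding patterns that separate the labels;
# no programmer writes an explicit rule for what makes a cat a cat.
model = DecisionTreeClassifier(random_state=0).fit(features, labels)

print(model.predict([[4.2, 7.8]]))    # likely ['cat']
print(model.predict([[25.0, 13.0]]))  # likely ['dog']
```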

The Last Decade of AI Innovation: At the Intersection with Big Data

Artificial intelligence is all about data.

AI enables humans to better understand (and optimally leverage) the massive amounts of data available for the taking. It is a tool that separates the relevant data from the excess.

We can’t talk about AI without talking about Big Data, which, according to global research and advisory firm Gartner, is “high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”[5]

Big Data differs from typical data sets, which are manageable and easily digestible. Big Data, by contrast, refers to large volumes of complex, dynamic assets that require sophisticated tools to access and process.

Big Data took on a new meaning in the mid-2000s with the rise of cloud computing, when machines began exchanging information without being physically connected. The possibilities for gathering, storing, and exchanging data with other sources became seemingly endless.

Today, the success of any modern business hinges on Big Data, or the ability to mine vast volumes of information and intelligence for essential insights. This has been a critical component in the rise of artificial intelligence, a technology that enables users to process and analyze Big Data. As a result, AI—and especially machine learning—has become increasingly crucial for businesses across all industries.

When AI is applied to Big Data within a machine, the results are transformational for businesses. Machine learning can (see the sketch after this list):

Detect deviations in data

Determine the probability of future outcomes

Discover patterns within data structures that are too big for humans to comprehend
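
As an illustrative sketch of the first capability, the snippet below flags deviations in a synthetic data stream using scikit-learn's IsolationForest. The readings, the injected outliers, and the contamination rate are all invented stand-ins for a real Big Data feed.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated sensor readings: mostly routine values plus a few extremes.
# The data is synthetic, standing in for a real high-volume feed.
rng = np.random.default_rng(seed=0)
readings = rng.normal(loc=100.0, scale=5.0, size=(500, 1))
readings[::100] += 40.0  # inject occasional deviations

# IsolationForest flags points that are easy to isolate, i.e. outliers;
# contamination is an assumed guess at the share of anomalies.
detector = IsolationForest(contamination=0.02, random_state=0).fit(readings)
flags = detector.predict(readings)  # 1 = normal, -1 = anomaly

print(f"Flagged {int((flags == -1).sum())} of {len(readings)} readings as deviations")
```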

AI Today… and Beyond

AI has come a long way over the past several decades, and the evolution continues. Developments are accelerating, so now is the time to determine AI's role in your organization.

In the upcoming sections of this ebook, we will delve deeper into AI and its varying levels of complexity. We will also examine the technology's role in the property insurance industry, from the insurance provider space to the realm of restoration.

[2] https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

[3] https://www.ibm.com/topics/machine-learning

[4] https://cloud.google.com/learn/artificial-intelligence-vs-machine-learning

[5] https://www.gartner.com/en/information-technology/glossary/big-data
