Artificial intelligence is not a new concept. In some form, it has been around for thousands of years. In ancient Egypt and China, engineers built automatons capable of surprisingly complex movements, and the ancient Greeks told stories of machines that could think their own thoughts and act independently. Today, AI is so widespread that it appears in relatively simple applications and can even carry out entire conversations through conversational AI software.
In modern times, the concept of artificial intelligence can be traced back to philosophers who tried to describe human thought as a kind of symbolic system, one that might eventually be recreated with technology. The term artificial intelligence itself wasn't coined until around 1956, when it was introduced at the Dartmouth Workshop held at Dartmouth College. Attendees saw great promise in the idea, believing that within a generation artificial intelligence could be developed far enough to have a significant impact on society.
The development of artificial intelligence was not an easy task. From around 1974 to 1980, interest in the field went relatively stagnant, a period now known as the “AI winter.” It wasn’t until the British government began funding research in the 1980s, in order to compete with developments in Japan, that the field regained its popularity. It went through another slow period during the late ’80s and early ’90s, until IBM’s Deep Blue demonstrated AI’s potential by defeating world chess champion Garry Kasparov in 1997. In 2011, an AI called Watson won Jeopardy! by defeating two of the show’s most successful past champions.
More and more often, artificial intelligence is able to convince experts that they may be communicating with real people, thereby passing what is known as the Turing Test. While this test served for years as the standard for judging whether a machine’s intelligence could pass for human, many believe it is no longer sufficient for the modern age. Researchers have proposed updated versions that demand more than human-like conversation: the original test considered only the external behavior of the machine, while newer proposals also examine the methods used to produce that behavior.
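To make the “external behavior only” point concrete, here is a minimal sketch in Python of the imitation game’s structure. The respondent functions and the single-question format are simplifications of my own, not part of Turing’s original description; the detail it illustrates is that the judge sees only text, never the sources, so the verdict rests entirely on outward behavior.

```python
import random

def machine_respondent(prompt: str) -> str:
    """Stand-in 'machine': a trivially canned reply (hypothetical)."""
    return f"That is a thought-provoking question about {prompt!r}."

def human_respondent(prompt: str) -> str:
    """Stand-in 'human': an answer typed at the console."""
    return input(f"[human, please answer] {prompt}\n> ")

def run_round(prompt: str) -> bool:
    """One round of the imitation game. Returns True if the judge
    misidentifies the machine as the human (the machine 'passes')."""
    respondents = [("machine", machine_respondent),
                   ("human", human_respondent)]
    random.shuffle(respondents)  # hide which answer came from which source

    # The judge sees only the text of each answer, labeled A and B.
    # Nothing about the underlying process is visible -- this is the
    # property critics of the original test object to.
    for label, (_, respond) in zip("AB", respondents):
        print(f"Answer {label}: {respond(prompt)}")

    guess = input("Which answer was the human, A or B? ").strip().upper()
    human_label = "A" if respondents[0][0] == "human" else "B"
    return guess != human_label

if __name__ == "__main__":
    fooled = run_round("What did you dream about last night?")
    print("The machine passed this round." if fooled
          else "The judge spotted the machine.")
```

A proposal that also inspects the methods behind the behavior would, by contrast, need access to more than these two strings of text.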
One of the main things that hindered the development of artificial intelligence was the lack of processing power available to run the programs. As technology continues to improve and hardware can process information faster with affordable data storage, artificial intelligence is developing rapidly and being used in many different applications. If you have used the internet in the past week, you have probably interacted with some form of artificial intelligence without even realizing it. It’s not just used for science anymore.