History of Artificial Intelligence

AI Tips Dec 15, 2023

What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is a technology, or more precisely, a field of modern science that explores ways to teach computers, robotic systems, and analytical systems to think intelligently, much like humans. The dream of intelligent assistant robots emerged long before the invention of the first computers.

In the mid-1950s, the capabilities of computing machines, especially their ability to carry out long chains of calculations quickly and without error, greatly impressed people. Fantastic ideas about thinking machines immediately sprouted in the minds of scientists and writers. It was during this period that the first artificial intelligence technologies began to take shape.

Research in the field of AI proceeds by studying human mental abilities and translating the findings into computational models. Artificial intelligence thus draws on many sources and disciplines, including computer science, mathematics, linguistics, psychology, biology, and engineering. Using machine learning on vast arrays of data, computers attempt to simulate human intelligence.

The main goals of AI are quite transparent:

  1. Creating analytical systems that exhibit intelligent behavior, can learn independently or under human supervision, make predictions, and formulate hypotheses based on data.
  2. Implementing human intelligence in machines – creating assistant robots that can behave like humans: think, learn, understand, and perform assigned tasks.

History of Artificial Intelligence

The term "artificial intelligence" is attributed to John McCarthy, the founder of programming and inventor of the Lisp language. In 1956, the future Turing Award laureate demonstrated a prototype AI program at Carnegie Mellon University.

The idea of intelligent robots fascinated humanity in the early 20th century. The renowned Czech writer Karel Čapek introduced the term "robot" to the public in his play "R.U.R." (Rossum's Universal Robots), first staged in 1921.

The foundations for understanding and creating neural networks were laid in 1943-45, and in 1950 Alan Turing published his landmark paper on machine intelligence, which also considered how a computer might play chess. The first programming language designed for artificial intelligence work, Lisp, appeared in 1958.

During the 1960s and 1970s, researchers demonstrated that computers could process natural language at a reasonably high level. In the mid-1960s, ELIZA, one of the first programs able to hold a conversation in English, was developed. During this period, AI began to attract the attention of governmental and military organizations in the United States, the Soviet Union, and other countries. By the 1970s, the U.S. Department of Defense was funding street-mapping projects, early precursors of modern navigation systems.

In 1969, researchers at the Stanford Research Institute (SRI) created Shakey, an AI-driven robot capable of moving autonomously, perceiving certain aspects of its surroundings, and solving simple tasks.

Four years later, in 1973, the University of Edinburgh developed Freddy, Scotland's contribution to early AI: a robot capable of using computer vision to locate and assemble various objects.

Artificial intelligence also developed rapidly in the Soviet Union. Academicians A.I. Berg and G.S. Pospelov oversaw the creation, in 1954-64, of the "ALPEV LOMI" program, which automatically proved theorems. In the same period, Soviet scientists developed the "Kora" algorithm, which modeled the activity of the human brain in pattern recognition. In 1968, V.F. Turchin created the REFAL symbolic data processing language.

The 1980s marked a breakthrough for AI. Researchers developed machine learning systems and expert systems – intelligent consultants that offered solution options, were capable of self-learning at a basic level, and communicated with humans in a limited but natural language.

In 1997, the famous chess computer Deep Blue defeated world chess champion Garry Kasparov. Around the same period, Japan was developing a sixth-generation computer project based on neural networks.

Interestingly, back in 1988 another chess program, Deep Thought, had defeated International Grandmaster Bent Larsen. After this man-machine confrontation, Garry Kasparov stated:

"If an intellectual machine can outplay the best in chess, it means it can compose the best music, write the best books. I can't believe it. When I find out that scientists have created a computer with an intelligence rating of 2800, equal to mine, I will challenge the machine to a chess match to defend the human race."

In the 2000s, interest in robotics revived. AI actively entered the space industry and found applications in everyday life: smart home systems and "advanced" household devices emerged. MIT's Kismet robot learned to recognize and simulate emotions, while the Nomad robot explored remote regions of Antarctica.

Some experts date the beginning of an era of technological singularity to 2008 and project that it will reach its zenith around 2030: the integration of humans with computing machines, the enhancement of human brain capabilities, and the introduction of biotechnologies.

Recommended to read: Ethical issues in AI

AI Principles

Before delving into the technological principles essential for the development of artificial intelligence, it's worth familiarizing ourselves with the ethical laws of robotics. Isaac Asimov formulated these laws in his 1942 short story "Runaround":

  1. A robot (or artificial intelligence system) may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Before Asimov's story, artificial intelligence was most often associated with Mary Shelley's Frankenstein, in which an artificially created humanoid intelligence rebels against humanity, a theme later revisited in the Hollywood blockbuster "Terminator."

Interestingly, in 1986 Asimov added another point to the laws of robotics, calling it the "zeroth" law:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Having clarified the ethical laws, let's move on to the technological principles of artificial intelligence:

Machine Learning (ML) is the principle of developing AI based on self-learning algorithms. Human involvement is limited to loading information into the machine's "memory" and setting goals. There are several ML methodologies: supervised learning, where a human sets specific goals or tests a hypothesis; unsupervised learning, where the outcome of data processing is unknown, and the computer independently discovers patterns; deep learning, a hybrid approach involving the processing of extensive data sets and the use of neural networks.
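
To make these methodologies concrete, here is a minimal sketch in Python (assuming NumPy and scikit-learn are installed; the toy data and numbers are invented for illustration) contrasting supervised learning, where a human supplies the labels, with unsupervised learning, where the computer finds groupings on its own:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy data: six samples, two numeric features each.
X = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],
              [5.0, 5.2], [4.8, 5.1], [5.2, 4.9]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels supplied by a human

# Supervised learning: the goal is set by a human via the labels y.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict([[1.0, 1.1], [5.1, 5.0]]))

# Unsupervised learning: no labels; the algorithm discovers the two groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered clusters:", km.labels_)
```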

Neural Network is a mathematical model that mimics the structure and functioning of nerve cells in a living organism. Ideally, it is a self-learning system. In technological terms, a neural network is a collection of simple processing units working together on one large-scale task, much as a supercomputer is, in essence, a network of many ordinary computers.
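
As a rough illustration, here is a minimal sketch in plain NumPy (the weights and input values are invented for illustration) of how a tiny neural network passes a signal from two inputs through a layer of artificial "neurons" to a single output:

```python
import numpy as np

def sigmoid(z):
    # Smooth activation that loosely mimics a nerve cell "firing".
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # weights: 2 inputs -> 3 hidden neurons
W2 = rng.normal(size=(3, 1))   # weights: 3 hidden neurons -> 1 output

x = np.array([[0.5, -1.0]])    # one input sample with two features
hidden = sigmoid(x @ W1)       # each hidden neuron combines the inputs
output = sigmoid(hidden @ W2)  # the network's final response
print(output)
```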

Deep Learning is considered a separate AI principle used for detecting patterns in vast amounts of information. For such daunting tasks, computers employ neural networks with many stacked layers trained on large data sets.
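
For a sense of what "deep" means in practice, here is a minimal sketch (assuming the PyTorch library; the layer sizes are arbitrary) of a model that stacks several layers of the kind shown above:

```python
import torch
import torch.nn as nn

# Several stacked layers: depth is what lets the model pick out
# increasingly abstract patterns in large data sets.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.randn(8, 16)   # a batch of 8 samples with 16 features each
print(model(x).shape)    # -> torch.Size([8, 1])
```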

Cognitive Computing is an AI branch that studies and implements processes of natural interaction between humans and computers, resembling interactions between people. The goal is to fully imitate higher-order human activities like speech, creative and analytical thinking.

Computer Vision is an AI direction used for recognizing graphic and video images. Today, machine intelligence can process and analyze graphic data, interpreting information based on the surrounding environment.
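
As a small illustration of the low-level processing involved, here is a minimal sketch (assuming the opencv-python package and an image file named photo.jpg, both of which are assumptions made for illustration) that extracts object outlines from a picture:

```python
import cv2

image = cv2.imread("photo.jpg")                 # load the graphic data
if image is None:
    raise FileNotFoundError("photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # reduce to intensity values
edges = cv2.Canny(gray, 100, 200)               # detect object outlines
cv2.imwrite("edges.jpg", edges)                 # save the processed result
```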

Speech Recognition and Synthesis. Computers can already understand, analyze, and reproduce human speech. We can control programs, computers, and gadgets with voice commands through assistants such as Siri, Google Assistant, Yandex's Alice, and others.
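
As a rough sketch of how such voice control can be wired together (assuming the SpeechRecognition and pyttsx3 packages, a working microphone, and internet access for the free Google recognizer; none of this is prescribed by the article):

```python
import speech_recognition as sr  # speech -> text
import pyttsx3                   # text -> speech (synthesis)

recognizer = sr.Recognizer()
with sr.Microphone() as source:  # requires a microphone and the PyAudio package
    print("Say a command...")
    audio = recognizer.listen(source)

text = recognizer.recognize_google(audio)  # send audio to the free recognizer
print("You said:", text)

engine = pyttsx3.init()
engine.say(f"You said {text}")   # reproduce the phrase as synthesized speech
engine.runAndWait()
```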

Moreover, it is hard to imagine artificial intelligence without powerful graphics processors, which carry out the bulk of the intensive data processing. For integrating AI into various programs and devices, Application Programming Interfaces (APIs) are essential. APIs make it easy to add artificial intelligence technologies to computer systems of any kind: home security, smart homes, CNC equipment, and more.
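
As an illustration of the API idea, here is a minimal sketch of sending an image to an AI service over HTTP (assuming the requests package; the endpoint URL, credential, and response field are hypothetical placeholders, not a real service):

```python
import requests

API_URL = "https://example.com/v1/image-recognition"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # placeholder credential

# A home-security system could send a camera frame for recognition.
with open("door_camera.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )

result = response.json()
if result.get("person_detected"):  # hypothetical response field
    print("Someone is at the door")
```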

AI Application Sphere

Artificial intelligence gradually permeates all sectors of human activity, making ordinary software systems intelligent:

  1. Medicine and Healthcare: Computer systems monitor patients and assist in interpreting diagnostic results such as ultrasound, X-ray, and tomography images. Intelligent systems can even identify diseases based on patient symptoms and suggest optimal treatment options. In the Google Play store, one can find lifestyle assistant apps that read pulse and body temperature when a finger touches the phone screen, estimate a person's stress level, and offer advice on how to reduce it.


    Read More: Artificial Intelligence in healthcare


  2. Retail Sales in Online Stores: Many are familiar with Google's and Yandex's targeted advertising. Retailers use it to offer products and services based on user interests. For example, if you visited an online swimsuit store, looked at certain models, and read specifications, you will see swimsuit ads on other websites for some time. Similar principles apply to the "similar products" sections in online stores: analytics systems study user behavior, determine purchasing preferences, and show offers they estimate to be relevant (a minimal sketch of this idea appears after this list).


  3. Politics: Intelligent machines helped Barack Obama win his second presidential election. For his 2012 campaign, the then-President of the United States hired a top data analysis team. Specialists used intelligent machines to calculate the best day, state, and audience for Obama's speeches. According to experts' estimates, this gave a 10-12% advantage.


  4. Industry: Artificial intelligence can analyze data from various production areas and regulate equipment load. Moreover, intelligent machines are used to forecast demand in different industrial sectors.

  5. Gaming Industry, Education: Artificial intelligence is actively applied by game developers. Smart machines and robotics are gradually being integrated into the educational processes of most countries.
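
As promised in the retail example above, here is a minimal sketch of the "similar products" idea (toy view counts and hypothetical product names): items whose audience profiles most resemble the one just viewed are recommended first:

```python
import numpy as np

# Rows: products; columns: how often three user segments viewed each product.
products = ["swimsuit A", "swimsuit B", "winter coat"]
views = np.array([
    [30.0, 5.0, 2.0],
    [28.0, 6.0, 1.0],
    [1.0, 2.0, 40.0],
])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

viewed = 0  # the product the user just looked at
scores = {i: cosine_similarity(views[viewed], views[i])
          for i in range(len(products)) if i != viewed}
best = max(scores, key=scores.get)
print("Because you viewed", products[viewed], "-> recommend", products[best])
```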

Key Issues with AI

Understanding that the capabilities of artificial intelligence at this stage of development are not limitless, let's list the main difficulties:

Machine learning is only possible based on a data set. This means that any inaccuracies in the information significantly affect the end result.

Intelligent systems are limited to a specific type of activity. In other words, a smart system configured to detect tax fraud cannot identify manipulations in the banking sector. We are dealing with narrowly specialized programs that are far from human multitasking.

Intelligent machines are not autonomous. To sustain their "life," a whole team of specialists and substantial resources are required.

Conclusion

We have familiarized ourselves with the concept of artificial intelligence, reviewed its fundamental ethical and technological principles, and examined the main obstacles to the development of AI. Artificial intelligence is closely tied to the development of computer technology and of other sciences such as mathematics, statistics, and combinatorics.

You may also like: 7 artificial intelligence companies to invest in 2024
