Artificial intelligence has seen rapid advances lately.
After ChatGPT and Bard, we now have Gemini, Google’s most capable AI model to date.
AI has taken the world by storm, but it didn’t happen suddenly. AI technology has developed over many years.
The Birth of AI: 1956
The desire to create machines that can think and act on their own dates back to ancient times.
However, AI was formally born in 1956, when John McCarthy coined the name “Artificial Intelligence” at the Dartmouth Workshop.
Before 1956, there were theoretical foundations of AI, especially in the 1940s and 1950s.
Some of the valuable theoretical contributions before 1956 are:
Warren McCulloch and Walter Pitts proposed a mathematical model of the artificial neuron in 1943. Their model could perform simple logical functions (a minimal sketch appears after this list).
Donald Hebb proposed the famous Hebbian Learning Rule in 1949, which is considered a foundational concept of artificial neural networks.
In 1950, Alan Turing published his renowned paper “Computing Machinery and Intelligence.” In it, he introduced the “imitation game,” now known as the Turing test, to assess whether a machine can exhibit intelligent behavior equivalent to that of a human.
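To make the two neural-network ideas above concrete, here is a minimal sketch in Python. It is purely illustrative: the threshold value, learning rate, and activity pairs are invented for the example, not taken from the original 1943 and 1949 papers.

```python
# A McCulloch-Pitts neuron: binary inputs summed against a fixed threshold.
# With a threshold of 2 it computes logical AND, one of the "simple logical
# functions" the 1943 model could express.
def mcculloch_pitts_and(x1, x2, threshold=2):
    return int(x1 + x2 >= threshold)

print([mcculloch_pitts_and(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]

# Hebb's 1949 rule in its simplest form: the connection between two units is
# strengthened in proportion to the product of their activities
# ("neurons that fire together wire together").
def hebbian_update(weight, pre, post, learning_rate=0.1):
    return weight + learning_rate * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # hypothetical activity pairs
    w = hebbian_update(w, pre, post)
print(w)  # the weight grows only on the steps where both units were active
```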
Two practical implementations of AI before 1956 are:
Marvin Minsky, along with Dean Edmonds, created the first neural network machine, SNARC, in 1951. It used about 3,000 vacuum tubes to simulate a network of 40 neurons.
In 1951, Christopher Strachey wrote a program for checkers. In the same year, Dietrich Prinz wrote a program for chess.
The Era of AI Maturation: 1956-1979
Once the field had a name, researchers gained a clear direction and recognition, and AI matured through a steady stream of developments.
1958: The first programming language for AI, LISP (List Processing), was created by John McCarthy.
1959: Arthur Samuel created the first self-learning program for checkers and described it in his paper “Some Studies in Machine Learning Using the Game of Checkers.” This was the first published use of the term “machine learning.”
1964: Daniel G. Bobrow wrote an AI program to solve word problems in algebra. His program was named STUDENT.
1965: Edward Feigenbaum and his team launched Dendral, an AI project for organic chemistry. Dendral was AI’s first expert system: it could identify unknown organic compounds from mass-spectrometry data, a task that previously required expert chemists.
Think of an expert system as a computer program that applies a set of rules to known facts in order to solve problems the way a human specialist would.
In other words, an Expert System is a computer system that imitates the decision-making ability of a human expert. Expert systems were the first AI software to successfully solve complex, real-world problems.
An Expert System has two subsystems. The first is the Knowledge Base, which contains the facts and the rules. The second is the Inference Engine, which applies those rules to the known facts to derive new conclusions.
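As a rough illustration of that two-part structure, here is a toy expert system in Python. The facts and rules are invented for the example (real systems such as Dendral and MYCIN encoded hundreds of expert-written rules); only the knowledge-base and inference-engine split mirrors the description above.

```python
# Knowledge Base: known facts plus if-then rules written by a domain expert.
facts = {"gram_positive", "shape_cocci", "grows_in_chains"}      # hypothetical facts
rules = [
    ({"gram_positive", "shape_cocci"}, "is_coccus"),             # hypothetical rules
    ({"is_coccus", "grows_in_chains"}, "likely_streptococcus"),
]

# Inference Engine: forward chaining, i.e. keep applying rules whose premises
# are already satisfied until no new facts can be derived.
def infer(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# {'gram_positive', 'shape_cocci', 'grows_in_chains', 'is_coccus', 'likely_streptococcus'}
```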
1964-1967: Joseph Weizenbaum created a natural language processing program called ELIZA. It was the first chatbot in the history of AI, although its responses were canned and based on simple pattern matching.
1968-70: Terry Winograd wrote a program called SHRDLU. It was a remarkable achievement for its time: it could converse in plain English and carry out commands in a simulated world of blocks.
Late 1960s: Marvin Minsky and Seymour Papert built a robot arm (the Minsky Arm) that could stack blocks.
1972: WABOT-1, the first intelligent humanoid robot in AI history, was created in Japan. It had limb control, an artificial mouth that let it speak in Japanese, and artificial eyes and ears.
1972: MYCIN, an early expert system that grew out of the Dendral work, was developed. It could identify the bacteria behind severe infections, diagnose blood-clotting diseases, recommend suitable antibiotics, and adjust the dosage to the patient.
First AI Winter: 1974-1980
Machine translation failed badly: the 1966 ALPAC report concluded that it had not lived up to the expectations set by AI researchers, and funding for the projects was discontinued.
In 1969, Marvin Minsky and Seymour Papert published the book Perceptrons, which exposed the limitations of single-layer neural networks. Research on such networks largely ground to a halt as a result.
The final blow was the Lighthill report, “Artificial Intelligence: A General Survey,” published by James Lighthill in 1973. Lighthill cast a critical eye on AI research and concluded that its results fell far short of the expectations and promises of the early pioneers.
Funding dried up and public interest waned amid the criticism and disappointing results. This era is therefore called the First AI Winter.
AI Boom: 1980-1987
This period is known as the AI boom because of the frequent developments and research in the field. Corporations poured well over a billion dollars into AI, and an industry grew up to serve them, including companies such as Symbolics, Lisp Machines, and Aion.
1980: XCON, an expert system for configuring computer orders, went into commercial use. Corporations were quick to adopt expert systems like it, drawn by their ability to answer questions and solve real business problems quickly.
1981: The Japanese government allocated a massive $850 million to AI through its Fifth Generation Computer project.
1982: John Hopfield showed that the Hopfield net, a form of neural network, could learn and process information by storing patterns and recalling them from partial or noisy inputs.
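A minimal sketch of that idea, assuming nothing beyond NumPy: two toy patterns are stored as Hebbian outer products in a symmetric weight matrix, and a corrupted pattern is recovered by repeatedly thresholding the weighted inputs. The patterns themselves are arbitrary examples.

```python
import numpy as np

# Two toy patterns to store (values are +1/-1, chosen arbitrarily for the example).
p1 = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
p2 = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])

# Hebbian storage: sum of outer products, with the diagonal zeroed out.
W = (np.outer(p1, p1) + np.outer(p2, p2)).astype(float)
np.fill_diagonal(W, 0)

# Recall: start from a noisy copy of p1 (one bit flipped) and let the state settle.
state = p1.copy()
state[0] = -state[0]
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, p1))  # True: the network recovers the stored pattern
```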
1986: Parallel Distributed Processing was published by David Rumelhart and James McClelland. It described multi-layer neural networks trained with backpropagation, overcoming key limitations of the single-layer perceptron.
1989: HiTech and Deep Thought, chess-playing programs developed at Carnegie Mellon, defeated chess masters.
1989: The book “Analog VLSI Implementation of Neural Systems” was published by Carver Mead and Mohammed Ismail.
Second AI Winter: 1987-1993
Even though there were promising results from some AI inventions, like XCON and Lisp Machines, AI faced a serious setback in the second AI winter.
The main reason was the introduction of desktop computers by Apple and IBM. These machines were cheaper and easier to maintain than specialized AI hardware, so they quickly gained popularity and AI systems were left behind.
The rapid success of desktop computers led to cuts in AI funding, as DARPA and other investors shifted their money toward mainstream computer hardware and development.
Note : DARPA is a research and development body of the US Department of Defense. It has been funding AI since its beginning.
AI had not met expectations and had few practical uses at the time, and Japan’s Fifth Generation Computer project also ended in failure in 1991.
By 1993, these challenges had forced more than 300 AI companies to close their doors for good.
With funding gone, AI research stalled during this time, and the commercial development of AI largely stopped.
Rise of AI: 1993-2011
AI had suffered funding cuts for more than a decade across the two AI winters, which halted progress and innovation in the field. It was difficult for AI systems to compete with desktop computers by offering intelligent, low-cost alternatives.
After all these setbacks, AI began to regain momentum in the mid-1990s, as new research and inventions revived the field.
AI researchers examined where earlier efforts had gone wrong and why the technology had fallen short of expectations, and they focused on specific, well-defined problems where AI could demonstrate real value. Computers also had far more processing power, which made it much easier to run complex AI programs.
1997: IBM’s Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. Its victory showed that computers could rival human strategic thinking in a well-defined domain.
1999: Sony introduced the first consumer model of AIBO , which is a series of robotic dogs. It was an AI robot, and Sony continued manufacturing and developing new models until 2006.
2000: Kismet, the first AI robot that could recognize and simulate emotions, was developed by Cynthia Breazeal (work on it began in the late 1990s). It had expressive ears, lips, eyebrows, a jaw, and a movable head.
2002: The first-generation Roomba (an automatic cleaning robot) was introduced to the market. It could sense dirt and detect objects in its path.
2005: Stanley, a Stanford University robot vehicle, drove 131 miles autonomously to win the DARPA Grand Challenge, completing an unrehearsed desert course without any human intervention.
2006: Facebook, Netflix, and other companies started using AI to attract more users through targeted advertising and to improve the user experience on their platforms.
2007: A robot vehicle built at Carnegie Mellon successfully completed 55 miles in an urban environment. It could understand road traffic and drive autonomously while obeying traffic rules and keeping other vehicles safe.
2011: IBM presented Watson, a question-answering machine developed by a team led by David Ferrucci. Watson defeated two former champions of Jeopardy!, an American television quiz show.
The Current Era: 2011-Present
AI advanced a great deal in the 1990s and 2000s, but much of that work was presented simply as computer science: developers worried that the “AI” label would make it harder to secure funding and move their projects forward.
AI has swiftly made its way into diverse fields, including big-data analysis, search algorithms, social media, banking services, and beyond. It has enabled immense progress, but it also carries risks if deployed without sufficient care.
As the 21st century unfolded, businesses caught on to how crucial it is to make sense of their data. Also, the rise of cheap computers, speedy processors, quick internet, and vast storage kicked AI up a notch.
2011: Apple released Siri, which proved to be one of the most successful voice-powered personal assistants. Siri was later incorporated into various devices to comprehend and respond to users’ instructions.
2012: Andrew Ng and Jeff Dean introduced a neural network that could recognize cats. The network was shown 10 million unlabeled images taken from random YouTube videos, spanning roughly 20,000 object categories, and it learned to recognize cats on its own without ever being told what a cat is.
2013: DeepMind trained a neural network with a deep reinforcement learning algorithm to play Atari video games. Starting with very little prior knowledge, it learned 49 games purely from the screen and the rewards (game scores) it received, and it played several of them better than human experts.
In the same year, Tomas Mikolov and his co-authors published (and patented) Word2vec. It learns word embeddings on its own from raw text, so that related words end up close together in vector space.
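A sketch of what “learning word embeddings on its own” looks like in practice, using the gensim library’s Word2Vec implementation rather than the original code; the tiny corpus and parameter values here are made up for illustration.

```python
from gensim.models import Word2Vec

# A toy corpus: each sentence is a list of tokens. Real training uses billions of words.
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["dog", "chases", "the", "cat"],
    ["cat", "chases", "the", "mouse"],
]

# vector_size is the embedding dimension; min_count=1 keeps even rare words.
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, seed=42)

# Words that appear in similar contexts end up with similar vectors.
print(model.wv.most_similar("king", topn=3))
```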
2014: Facebook introduced its DeepFace face-recognition system to identify people in photos. It could reliably determine whether two different images show the same person.
To match Google’s advanced work in deep learning, Baidu ramped up its investment in AI technology.
Microsoft started research for its virtual assistant Cortana.
2015: Over 1,000 AI experts signed an open letter warning of a military AI arms race . Advocates appealed for prohibiting robotic weapons systems that could select and engage targets without human input.
The original AlphaGo defeated a professional Go player, Fan Hui.
2016: Sophia, a social humanoid robot, was unveiled by Hanson Robotics. The robot could recognize human emotions and mimic them convincingly, and it caused quite a stir.
Google’s AlphaGo beat Lee Sedol .
2017: DeepMind released AlphaGo Zero, a version of AlphaGo that taught itself the game through reinforcement learning, starting from random play and knowing nothing but the rules.
Nvidia researchers trained a generative adversarial network on a large pool of celebrity photos to craft entirely new, believable faces of people who never existed.
Saudi Arabia gave citizenship to the humanoid robot Sophia .
Google introduced AutoML , which could create machine learning software at a low cost and with minimal human involvement.
Governments started using facial recognition systems to detect criminals and wanted persons.
2018: Google introduced Duplex , which could make calls on behalf of users and book appointments.
Google’s involvement in developing AI for US military drones became public. The decision caused widespread frustration among the company’s staff and experts in artificial intelligence, and Google later decided not to renew the contract.
OpenAI introduced OpenAI Five , which could play Dota 2 and beat amateur players. It even took on pro gamers but couldn’t clinch the win.
OpenAI created Dactyl, a robotic hand trained with reinforcement learning to manipulate physical objects; it was later able to solve a Rubik’s Cube one-handed even under physical disturbances.
2019: Samsung researchers demonstrated a deepfake system that could create a talking-head video from just a single picture.
GPT-2 was introduced by OpenAI in 2019, with the full model released late that year. Given a prompt of a few sentences, it could generate surprisingly coherent text.
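Because the GPT-2 weights were publicly released, prompt-based generation like this can be reproduced with a few lines of Python via Hugging Face’s transformers library (a sketch: “gpt2” is the small released checkpoint, and the prompt is just an example).

```python
from transformers import pipeline

# Load the small publicly released GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Give it a few words and let it continue the text.
prompt = "Artificial intelligence has developed over many years, and"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```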
2020: According to a survey , over 50% of the respondents started using AI in product development, manufacturing, marketing, supply-chain management, service operations, risk, finance, etc.
AI stepped in to help COVID researchers sift through heaps of data, tackling the virus more effectively.
Baidu demonstrated the automated driving capability of its AI system, which combined AI and sensors to navigate busy city streets safely. It also introduced a 5G Remote Driving Service that lets an operator drive the car remotely in case of an emergency.
OpenAI’s GPT-3 took the world by storm. The deep learning model could generate code, prose, poetry, and more from short text prompts.
2021: OpenAI revealed the first version of DALL-E early in the year, an AI model that turns a short text description into a vivid image.
Google released TensorFlow 3D, a library for developing and training AI models on 3D data such as point clouds and 3D scenes.
IBM launched a cloud-based AI platform that could invent new molecular structures.
GitHub launched Copilot, an AI pair programming tool to help developers code better and with less hassle.
2022: DALL-E 2 was launched by OpenAI, a much-improved version that could generate more realistic images at four times the resolution of the original.
DeepMind created AlphaCode, which ranked within the top 54% of participants in competitive programming contests involving genuinely complex problems.
OpenAI released the famous chatbot ChatGPT, whose capabilities surprised the public and put AI’s progress on full display.
2023: OpenAI released GPT-4, available to consumers through ChatGPT Plus. It is considerably more capable than GPT-3.
Google launched Gemini, its most capable AI model to date, positioned as a direct challenger to GPT-4.
We have seen the rise of AI tools, such as ChatGPT, Jasper.ai, AIaaS (AI-as-a-Service), AutoML, etc. We have witnessed powerful tools like Gemini at the end of the year, and the next year could be more exciting in terms of AI development and research.
AI Future: 2024 and The Years to Come
We have had a deep look at AI history from its birth. Recently, the drive for quick and impactful results in AI has noticeably accelerated.
With companies like IBM, Baidu, Google, Microsoft, OpenAI, and others investing heavily, the future of AI looks bright.
But along with that, there are risks of autonomous weapons, breach of privacy, job loss, deepfakes, social manipulation, socioeconomic inequality, etc.
That is why AI researchers and experts want the research and development of AI to be regulated. In 2023, Elon Musk, Steve Wozniak, and others signed an open letter calling for a pause on training the most powerful AI systems. Advancing AI holds the potential for significant societal and human impact, and that impact needs to be navigated carefully.
Governments will probably start regulating AI soon, although we don’t know exactly how yet.