What is AI and Why the Hype? (Simplified)
What is all this talk of AI really about?
"What we usually think of as Artificial Intelligence (AI) today, when we see human-like robots and holograms in our fiction, talking and acting like real people and having human-level or even superhuman intelligence and capabilities, is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on earth yet.
What we actually have for AI today is much simpler and much more narrow Deep Learning (DL) that can only do some very specific tasks better than people. It has fundamental limitations that will not allow it to become AGI."
Citation: Brent Oster, "Artificial General Intelligence: A Revolution Beyond Deep Learning and The Human Brain"
What Types of AI are there?
There are effectively two types of AI in common use of the term today: Narrow AI and Artificial General Intelligence. At present, everything you see and hear about is some type of Narrow AI. The field is divided into more distinct areas, but these two categories cover the majority of what is discussed in media outlets.
Narrow AI (Weak AI):
Designed to perform a narrow task (e.g., facial recognition, internet searches, grammar checking, shopping recommendations, spell checking).
Relies on patterns and human-generated data.
Everything to date is some form of Narrow AI. Examples: Siri, Alexa, ChatGPT, grammar checkers in word processors, etc.
Artificial General Intelligence (Strong AI):
Has generalized human cognitive abilities.
Can perform any intellectual task that a human being can.
Purely theoretical; it does not yet exist.
Is AI New?
No. AI has been actively researched and developed for over 70 years, with significant progress and applications emerging particularly in the last few decades due to increases in computing power.
1940s-1950s: Theoretical groundwork for AI was laid by pioneers such as Alan Turing, who proposed the idea of machines that could simulate human intelligence. His 1950 paper, "Computing Machinery and Intelligence," introduced the Turing Test as a criterion for machine intelligence.
1956: The term "artificial intelligence" was coined during the Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. This event is considered the birth of AI as a distinct field of study.
1950s-1960s: Early AI programs, such as the Logic Theorist (1955) by Allen Newell and Herbert A. Simon and the General Problem Solver (1957), demonstrated the potential of machines to solve problems and perform logical reasoning.
1970s-1980s: The development of expert systems, such as DENDRAL for chemical analysis and MYCIN for medical diagnosis, showcased the practical applications of AI in specific domains. These systems used rule-based approaches to mimic the decision-making abilities of human experts.
1990s-2000s: The resurgence of AI was driven by advances in machine learning, a subfield of AI that focuses on developing algorithms that allow machines to learn from data. Techniques like neural networks, support vector machines, and decision trees gained prominence.
2010s: The advent of deep learning, which uses multi-layered neural networks, revolutionized AI. Breakthroughs in image and speech recognition, natural language processing, and autonomous systems were achieved. Notable examples include IBM's Watson winning "Jeopardy!" in 2011 and Google's DeepMind developing AlphaGo, which defeated human champions in the game of Go in 2016.
2022: OpenAI introduced ChatGPT, a generative AI chatbot built on large language models and massive computing power. This branch of AI became feasible only because of the enormous computing and data-storage capacity in modern data centers, which is now accessible to the general public.
Why all the hype?
AI technologies have made significant advancements in recent years thanks to increases in computing power, showing great promise in various fields. These advances include faster development of new electronic components, accelerated drug testing, advanced data analysis, and more.
Is all the hype warranted?
No, this is another example of the Tech Hype train. So many people have a stake in its financial future that exaggerated claims are the norm.
It is now commonplace for companies to slap an "AI" label on websites and tools even when nothing about how the product operates has changed since before the hype. New companies pop up overnight claiming to "fix AI" or "protect you from AI," capitalizing on fear and the general lack of information. Established companies are rushing products out the door to capture the hype (and, consequently, the money), releasing software and products in half-baked form.
Unfortunately, this kind of hype is normal in the tech sector, and 90% of it is a pile of baloney.