What is AI?
Although artificial intelligence (AI) has become something of a household term since the public release of OpenAI’s ChatGPT in November 2022, defining AI in a universally agreed-upon and easy-to-understand way has proved elusive.
In the National Artificial Intelligence Initiative Act of 2020, for example, “[t]he term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”1
Similarly, according to the Organization for Economic Cooperation and Development’s (OECD) most recent definition, “an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”2 Meanwhile, Britannica defines it more simply as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”3
So, what is AI? For our purposes, and drawing on the definitions above, we can understand AI as a machine-based system that uses algorithms and data to perform cognitive functions commonly associated with human beings.
What Are the Types of AI?
One widely used classification system describes four types of artificial intelligence:
- Reactive
- Limited-memory
- Theory-of-mind
- Self-aware
These types of AI are listed in order of complexity, with reactive machines being the simplest type and self-aware AI being the most advanced.
Reactive AI
Reactive machines are the least complex type of AI. They simply respond to, or react to, inputs (hence the name). They cannot form memories or use past experiences to inform or improve their present actions; in short, they cannot “learn.” IBM’s Deep Blue, a chess-playing computer, is a widely cited example of a reactive machine.
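To make this concrete, here is a minimal sketch in Python (a toy thermostat invented for this illustration, not any real product). A reactive machine is essentially a fixed mapping from the current input to an action, with nothing remembered between calls:

```python
def reactive_agent(temperature_f: float) -> str:
    """A reactive 'thermostat': the same input always produces the
    same action, because nothing is remembered between calls."""
    if temperature_f < 68:
        return "heat on"
    if temperature_f > 74:
        return "cooling on"
    return "idle"

print(reactive_agent(65))  # "heat on" -- today, tomorrow, always
```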
Limited-memory AI
The next category is limited-memory AI. This type of AI possesses reactive-machine capabilities plus the ability to analyze various types of historical or pre-existing information (data, events, images, etc.) and then use the results of that analysis to inform current and future decisions. In other words, limited-memory AI is able to learn from past experiences. Self-driving cars and virtual assistants are two examples of limited-memory AI.
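Building on the toy thermostat above, this hypothetical sketch adds a short memory: the agent decides based on a rolling window of recent readings rather than on the latest input alone. (Real limited-memory systems such as self-driving cars are vastly more complex; the window size and thresholds here are invented.)

```python
from collections import deque

class LimitedMemoryAgent:
    """Acts on the average of recent observations, so a single
    noisy reading does not flip the decision."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # old readings age out

    def act(self, temperature_f: float) -> str:
        self.history.append(temperature_f)
        avg = sum(self.history) / len(self.history)
        if avg < 68:
            return "heat on"
        if avg > 74:
            return "cooling on"
        return "idle"

agent = LimitedMemoryAgent()
for reading in [67, 66, 80, 67, 66]:  # one noisy spike at 80
    print(agent.act(reading))         # the spike alone never triggers cooling
```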
Theory-of-mind AI
In psychology, theory of mind is “the ability of the human mind to attribute mental states to others, [and] is a key component of hot cognition [emotional and social cognition].”4 In simpler terms, it is the ability to recognize that another’s mental states (i.e., their thoughts, beliefs and feelings) may be different from your own. Human beings infer (or develop theories about, hence “theory of mind”) what is in the minds of other humans by evaluating others’ words and actions and combining that with their own understanding of human behavior.
Theory-of-mind AI understands the mental states of the intelligent beings (humans) it is interacting with. That is, these machines understand humans’ thoughts, beliefs and feelings and can adjust their interactions based on this understanding—like humans do in social interactions with other humans.
As of the time of this writing, research and development of theory-of-mind AI is ongoing, and there is debate as to whether certain large-language-model chatbots have achieved human-level performance on theory-of-mind tasks.
Self-aware AI
The final category of AI is self-aware AI. As its name suggests, this type of AI possesses a human-level consciousness of itself. At present, self-aware AI is purely conceptual, and scientists are focusing their efforts on developing more advanced limited-memory and theory-of-mind AI.
General Intelligence (AGI)
Artificial general intelligence (AGI) describes an AI system that is at least as capable as a human at most tasks.5 As of the time of this writing, however, AGI lacks a consensus definition.
There are currently a number of unresolved questions when it comes to defining AGI. For example: Is achieving consciousness or self-awareness a necessary component of AGI? Does AGI hinge on a machine’s processes or capabilities, or both? Does AGI encompass narrow and general intelligence or just general intelligence, and must it also be reliably accurate? Must it operate autonomously or at least semi-autonomously?
In their November 2023 paper, researchers from Google DeepMind address the lack of a single AGI definition by proposing a six-principle framework for defining it, one that focuses on (1) capabilities, not processes; (2) generality and performance; (3) cognitive and metacognitive tasks; (4) potential, not deployment; (5) ecological (real-world) validity; and (6) the path to AGI, not the endpoint.5
But as this is only a proposed framework for defining AGI, the reality, for the time being, is that what you read about AGI in one article or paper will not necessarily fully align with what you read in another—right down to the question of whether AGI currently exists or is years away from being developed.
Machine Learning vs. Deep Learning
Machine learning and deep learning are subsets of artificial intelligence. Machine learning has many everyday applications: it is the technology that drives viewing recommendations for movies and TV shows, provides targeted advertising on social media and makes interactions with Alexa or Siri possible. The main difference between classic machine learning and deep learning systems is how the algorithms learn.
What is machine learning?
Machine learning is a subset of AI that can be used to make predictions. In classic machine learning, human intervention is needed for a computer system to identify patterns, learn, perform specific tasks and provide accurate results. For instance, to inform a computer algorithm, a human can create labeled datasets (supervised learning) that directly identify an object’s distinguishing characteristics.6
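As a concrete illustration of what “labeled data” means, here is a tiny, made-up dataset in Python. A human has measured each object’s distinguishing characteristics (the features) and attached the correct answer (the label); generalizing from these examples to new, unseen objects is the algorithm’s job:

```python
# A hand-labeled toy dataset: features chosen and labels supplied
# by a human expert (all values invented for this example).
labeled_fruit = [
    {"weight_g": 150, "diameter_cm": 7.0, "label": "apple"},
    {"weight_g": 160, "diameter_cm": 7.2, "label": "apple"},
    {"weight_g": 115, "diameter_cm": 5.5, "label": "orange"},
    {"weight_g": 120, "diameter_cm": 5.8, "label": "orange"},
]
```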
What is deep learning?
Deep learning, or deep machine learning, does not require human experts to provide the computer algorithm with labeled datasets or distinguishing characteristics. Deep learning systems have the ability to ingest raw data and make sense of it without human intervention.6 This makes possible the use of larger data sets in making predictions and recommendations.7 Deep learning systems require the use of neural networks, so called because they mimic the neural networks in the human brain. A “deep” neural network is one that comprises more than three layers.8
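Here is a minimal sketch of such a network, written with PyTorch (one popular deep learning library, chosen here purely for illustration; the layer sizes are arbitrary):

```python
import torch.nn as nn

# A small fully connected network. With three hidden layers plus an
# output layer, it exceeds the three-layer threshold mentioned above,
# so it qualifies as a "deep" neural network.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),  # hidden layer 1
    nn.Linear(32, 32), nn.ReLU(),  # hidden layer 2
    nn.Linear(32, 32), nn.ReLU(),  # hidden layer 3
    nn.Linear(32, 2),              # output layer (e.g., 2 classes)
)
print(model)
```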
Strong AI and Weak AI
Strong AI is a theoretical type of artificial intelligence in which the machine has an intelligence equal to that of human beings. It is “theoretical” because it does not currently exist; in order for strong AI to exist, the machine would have to be self-aware (and as previously discussed, self-aware AI has not been developed).9
Weak AI, or narrow AI, describes AI that is not equal to human intelligence; it is the only type of AI that exists today.9 Though sophisticated weak AI systems, like OpenAI’s ChatGPT, may appear to possess human cognitive abilities, these machines can only simulate them.
The practical difference is that strong AI would be able to perform a number of different tasks and could even teach itself to solve new problems, whereas weak AI can only perform the specific task it was designed for (e.g., driving a car, acting as a virtual assistant, powering a chatbot).
How Does AI Work?
As discussed above in the section on limited-memory AI, artificial intelligence ingests data, analyzes that data and then uses the results of that analysis to make decisions and/or predictions. AI systems can adapt to new data and acquire skills in this way thanks to learning algorithms.
AI may also rely on neural networks to analyze patterns and correlations in big data. Large amounts of data are needed to train deep learning models (those that use more than three layers of neural networks). Because deep learning systems use new data to make themselves “smarter,” the more they are used, the more accurate they become.10
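To see “learning from data” in miniature, the toy loop below fits a line to a handful of invented points by gradient descent: predict, measure the error, adjust, repeat. Real learning algorithms train far richer models on far more data, but the basic cycle is the same:

```python
# Toy learning algorithm: fit y = w * x by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
w = 0.0                                      # initial guess

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                         # nudge w toward less error

print(round(w, 2))  # ~2.04: the pattern extracted from the data
```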
Supervised Learning and Unsupervised Learning
What is supervised learning?
Supervised learning is a machine-learning approach that requires human intervention. In this type of machine learning, a human expert provides an algorithm with a labeled dataset and instructions on what to do with that data. Predicting housing prices, forecasting the weather, projecting stock-price trends and recognizing text are just a few examples of applications involving supervised learning.
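Here is a minimal supervised-learning sketch using scikit-learn (one widely used machine-learning library, assumed here for illustration; the house sizes and prices are invented). The human supplies labeled examples; fit() is the training step; predict() applies what was learned to new input:

```python
from sklearn.linear_model import LinearRegression

# Labeled training data: house size (sq ft) -> known sale price.
X = [[800], [1200], [1500], [2000]]       # inputs (features)
y = [150_000, 210_000, 255_000, 330_000]  # human-supplied labels

model = LinearRegression().fit(X, y)  # supervised training
print(model.predict([[1700]]))        # -> about 285,000 for a new house
```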
What is unsupervised learning?
Unsupervised learning is a machine-learning approach in which no labeled dataset or set of instructions is provided to the algorithm. It is entirely up to the algorithm to identify underlying patterns, correlations, similarities, differences, etc. and to provide output based on what it has found. Unsupervised learning can be useful in data exploration or in targeted marketing, for example. Some potential benefits of unsupervised machine learning are time savings (since there is no need for a human to label/classify any data); cost savings (unlabeled data is less expensive); identification of novel insights; and a reduced chance of introducing human error and/or bias through the data-labeling process.11
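And here is a minimal unsupervised sketch, again using scikit-learn with invented numbers: no labels are provided, and the k-means algorithm must discover the grouping on its own:

```python
from sklearn.cluster import KMeans

# Unlabeled data: [annual spend ($), visits per month] for 6 customers.
X = [[100, 2], [120, 3], [110, 2],     # low spend, few visits...
     [900, 14], [950, 15], [880, 13]]  # ...high spend, many visits

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g., [0 0 0 1 1 1]: groups found without labels
```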
Generative AI Models and ChatGPT
Generative AI is artificial intelligence that creates new outputs (content) based on the data it has been trained on. Generative AI models ingest huge amounts of data and then employ deep-learning algorithms to produce (generate) original content (e.g., images, text, audio).12 OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) may be one of the most well-known generative AI chatbots as of the time of this writing.
ChatGPT is built on a large language model (LLM), a type of deep learning algorithm that is pretrained on massive datasets and is designed specifically to carry out language-related tasks. Its underlying transformer (the T in GPT) is a set of neural networks consisting of an encoder and decoder, which give the machine the ability to extract meanings from and understand the relationships between words in a sequence.13 In other words, ChatGPT’s foundation is an algorithm that enables it to process natural-language inputs to predict and output the next word, and the word after that, and the word after that—until it has created a complete, coherent and (usually/hopefully) contextually accurate response.
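ChatGPT’s transformer is vastly more sophisticated, but the core idea of predicting the next word from the words before it can be illustrated with a toy bigram model (the training sentence below is made up):

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    following[current][nxt] += 1

# Greedy generation: repeatedly emit the most likely next word.
word, output = "the", ["the"]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g., "the cat sat on the"
```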
Generative AI is not confined to chatbots, however. Midjourney is another example of generative AI, but instead of producing conversational dialogue like ChatGPT, it creates images based on users’ text descriptions—that is, it converts text prompts into AI art.
Applications of AI
AI has a number of useful applications. We have already touched on a few familiar applications of AI in daily life, such as virtual assistants and self-driving cars, but the technology holds current and future promise for various industries in different and perhaps even surprising ways.
For example, AI has the potential to enhance full-stack development by promoting efficiency (automating certain repetitive processes) and by providing developers with insights and recommendations derived from its data analysis. This, in turn, could help free up developers’ time and help them to make more informed decisions.
Here are some other examples of applications of AI:
- AI in Healthcare: AI has a range of current and potential applications in the healthcare sphere. For example, AI is able to provide automated diagnosis and prognosis based on medical imaging, complementing human expertise. AI has been used in “the diagnosis of breast and prostate cancer from MRI, the diagnosis of COVID-19 from medical images, and fault detection in health management…. AI-based methods have led to state-of-the-art results in lesion detection and classification.”14 AI is also being used in biomedical research involving early diagnosis of Alzheimer’s disease, prediction of blood glucose levels using wearable sensors, enhanced image analysis in colorectal screening, and cyber-physically assistive clothing to help reduce lower back pain.15
- AI in Engineering: Generative AI has the potential to revolutionize the design process for engineers. While current text-to-image programs are not suitable for engineering work, the development of “text-to-design” programs could help to streamline engineering design in the future.16
- AI in Finance: Artificial intelligence in the finance industry has many useful applications. It can be used to streamline traditional manual banking processes, inform investment decisions, enable real-time credit approvals, improve fraud protection and cybersecurity, and help financial institutions maintain compliance with laws and regulations.17
- AI in Education: AI in education can be used to provide support to students with disabilities; to help educators successfully teach and provide support to students who fall on the “long tail” of learning variations, instead of teaching only to the middle or the most common learning pathways; to adapt to a student’s learning process, focusing on a student’s strengths and working through obstacles; and to enhance student–teacher feedback loops.18
- AI in Business: AI has myriad uses in business, including in targeted advertising, customer service (chatbots), big data analysis, supply chain and logistics optimization, and measuring customer satisfaction.
- AI in Everyday Life: Music and media streaming services, facial recognition technology on your smartphone, quick-reply and sentence suggestions that pop up as you type emails, customer service chatbots and navigation apps are just a few examples of real-world applications of AI.
Explore DigitalCrafts’s Certificate Bootcamp Programs
DigitalCrafts offers convenient online bootcamp programs that utilize hands-on practice and project-based learning methods, do not require prior experience and can be completed in as little as 15 weeks. We also offer a complimentary AI training course, Introduction to Artificial Intelligence & ChatGPT, to learners who complete a flex- or full-time program. This three-week course is designed to provide non-programmers with a foundational understanding of AI.
Software Development Certificate Bootcamp
Study front-end and back-end web development in DigitalCrafts’s Software Development Certificate Bootcamp. Learners who complete this certificate program may pursue a potential career path as a full-stack software developer. Courses include the following:
- Introduction to Full Stack Software Development
- Web Page Design and Layout
- Introduction to JavaScript
- Creating Interactive Content with JavaScript
- Databases and Data-driven Content
- Full Stack Solutions
Cybersecurity Certificate Bootcamp
Study offensive and defensive tools, tactics and strategies in cybersecurity in the Cybersecurity Certificate Bootcamp—with no prior computer science or security experience required. This bootcamp is suitable for motivated learners who are serious about launching a career path in the cybersecurity field. Courses include the following:
- IT Fundamentals for Cybersecurity
- Networking Fundamentals
- Cybersecurity Fundamentals
- Ethical Hacking and Penetration Testing
- Network Defense & Countermeasures
- Cybersecurity Operations
DigitalCrafts is a CompTIA Authorized Partner. Learners who successfully complete the cybersecurity flex program will receive one voucher for one of the following CompTIA exams:*
- CompTIA A+
- CompTIA Security+
- CompTIA Network+
- CompTIA PenTest+
- CompTIA Cybersecurity Analyst (CySA+)
* Additional study and preparation are always recommended and may be needed before taking any exam. DigitalCrafts cannot guarantee that graduates of this program will be eligible to take third-party certification examinations. Certification requirements for taking and passing these exams are controlled by outside entities and are subject to change without notice to DigitalCrafts.
DigitalCrafts cannot guarantee employment, salary, or career advancement. Not all programs are available to residents of all states.
1 15 U.S.C. § 9401(3), https://uscode.house.gov/view.xhtml?req=(title:15%20section:9401%20edition:prelim (last visited 8/21/2024).
2 Stuart Russell, Karine Perset, & Marko Grobelnik, “Updates to the OECD’s Definition of an AI System Explained,” OECD.AI Policy Observatory (11/29/2023), https://oecd.ai/en/wonk/ai-system-definition-update.
3 “Artificial Intelligence,” Britannica.com, https://www.britannica.com/technology/artificial-intelligence (last visited 8/21/2024).
4 F. Cuzzolin, A. Morelli, B. Cîrstea, & B.J. Sahakian, “Knowing Me, Knowing You: Theory of Mind in AI,” Psychol Med. 50, no. 7 (May 2020): 1057–1061, doi: 10.1017/S0033291720000835, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7253617/.
5 Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, & Shane Legg, “Levels of AGI: Operationalizing Progress on the Path to AGI” (Google DeepMind, 11/4/2023), https://doi.org/10.48550/arXiv.2311.02462.
6 IBM Data & AI Team, “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?,” IBM (7/6/2023), https://www.ibm.com/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks/.
7 “What Is Machine Learning?,” IBM, https://www.ibm.com/topics/machine-learning (last visited 8/21/2024).
8 “What Is Artificial Intelligence (AI)?,” IBM, https://www.ibm.com/topics/artificial-intelligence (last visited 8/21/2024).
9 “What Is Strong AI?,” IBM, https://www.ibm.com/topics/strong-ai (last visited 8/21/2024).
10 “Artificial Intelligence (AI): What It Is and Why It Matters,” SAS, https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html (last visited 8/21/2024).
11 “Unsupervised Learning: Algorithms and Examples,” AltexSoft, https://www.altexsoft.com/blog/unsupervised-machine-learning/ (last visited 8/21/2024).
12 Published in collaboration with Visual Capitalist, “What Is Generative AI? An AI Explains,” World Economic Forum (2/26/2023), https://www.weforum.org/agenda/2023/02/generative-ai-explain-algorithms-work/.
13 “What Are Large Language Models (LLM)?,” AWS, https://aws.amazon.com/what-is/large-language-model/ (last visited 8/21/2024).
14 Efrat Shimron & Or Perlman, “AI in MRI: Computational Frameworks for a Faster, Optimized and Automated Imaging Workflow,” Bioengineering (Basel) 10, no. 4 (Apr. 20, 2023): 492, doi: 10.3390/bioengineering10040492, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10135995/.
15 U.S. Department of Health & Human Services, National Institutes of Health, National Institute of Biomedical Imaging & Bioengineering, “Artificial Intelligence (AI),” NBIB, https://www.nibib.nih.gov/science-education/science-topics/artificial-intelligence-ai (last visited 8/21/2024).
16 Joseph Flaig, “FEATURE: How AI Is Already Changing Engineering—and the Role of the Engineer,” Institution of Mechanical Engineers (2/18/2023), https://www.imeche.org/news/news-article/feature-how-ai-is-already-changing-engineering-and-the-role-of-the-engineer.
17 “What Is AI in Finance?,” Hewlett Packard Enterprise, https://www.hpe.com/us/en/what-is/ai-in-finance.html (last visited 8/21/2024).
18 U.S. Department of Education, Office of Educational Technology, “Artificial Intelligence,” https://tech.ed.gov/ai/ (last visited 8/21/2024).