Chapter 12: Must-Know Terminology

Overview

AI comes with its own vocabulary. Learning these key terms helps you understand discussions about AI systems, how they work, and how to use them effectively.

This chapter serves as your quick-reference glossary — clear, simple definitions you’ll encounter often in the AI world.

AI Terminology at a Glance

[Image: collage of AI terminology, including model, data, prompts, tokens, and neural networks]
These terms form the core vocabulary of modern artificial intelligence.

Artificial Intelligence (AI)

The field of creating systems that can perform tasks requiring human-like intelligence — such as recognizing patterns, making predictions, or generating content.

Machine Learning (ML)

A branch of AI where algorithms learn from examples instead of being explicitly programmed. The system improves over time as it is exposed to more data.
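To make "learning from examples" concrete, here is a toy sketch (not a real ML library, just illustrative Python): instead of hard-coding the rule y = 2x + 1, the program discovers it from example pairs by repeatedly nudging two numbers to reduce its error.

```python
# Toy machine learning: discover the rule y = 2x + 1 from examples,
# rather than programming it explicitly.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]  # inputs and correct outputs

w, b = 0.0, 0.0        # the model's two parameters, initially knowing nothing
learning_rate = 0.05

for _ in range(2000):  # repeated exposure to the examples
    for x, y in examples:
        prediction = w * x + b
        error = prediction - y
        # Nudge each parameter in the direction that shrinks the error
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The same idea, scaled up to billions of parameters and vast datasets, is how modern AI models are trained.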

Neural Network

A computational structure inspired loosely by the human brain. Neural networks learn patterns by adjusting connections (weights) between layers of nodes.
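The structure described above can be sketched in a few lines. This is a deliberately tiny, assumed example (two layers, hand-picked weights, no training): each layer multiplies its inputs by weights, sums them, and squashes the result with a nonlinearity.

```python
import math

def sigmoid(x):
    # Squashes any number into the range 0..1
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    # One output per node: weighted sum of the inputs, then sigmoid
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)))
            for row in weights]

hidden_weights = [[0.5, -0.6], [0.3, 0.8]]  # 2 inputs -> 2 hidden nodes
output_weights = [[1.2, -0.7]]              # 2 hidden nodes -> 1 output

x = [1.0, 0.0]                      # an example input
hidden = layer(x, hidden_weights)   # first layer of nodes
output = layer(hidden, output_weights)
print(output)                       # a single value between 0 and 1
```

Training adjusts the numbers in those weight tables until the network's outputs match the desired answers.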

Transformer

A breakthrough model architecture introduced in the 2017 paper "Attention Is All You Need". Transformers use a mechanism called attention, which lets them weigh relationships across long sequences of text or data.

Transformers power modern large language models and many multimodal AI systems.
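A rough sketch of what attention computes, using made-up two-number vectors standing in for three words: each word scores its relevance to every other word, and the scores are normalized into weights. Real transformers do this with learned, high-dimensional vectors, but the shape of the calculation is the same.

```python
import math

def softmax(scores):
    # Turn raw scores into weights that sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 2-dimensional vectors standing in for words in a sentence
vectors = {"the": [0.1, 0.2], "cat": [0.9, 0.4], "sat": [0.7, 0.5]}

query = vectors["sat"]  # how strongly should "sat" attend to each word?
scores = [sum(q * k for q, k in zip(query, vectors[w])) for w in vectors]
weights = softmax(scores)

for word, weight in zip(vectors, weights):
    print(f"{word}: {weight:.2f}")
```

The model then blends information from the other words in proportion to these weights, which is how it tracks relationships across a long passage.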

Large Language Model (LLM)

A type of AI model trained on massive amounts of text. LLMs generate language by predicting the most likely next token based on context.

Examples include ChatGPT, Claude, Gemini, and Llama.
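A toy version of "predicting the most likely next token": this sketch just counts which word follows which in a tiny made-up corpus. Real LLMs use neural networks and vastly more data, but the goal, picking the most probable continuation, is the same.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran on the grass".split()

# Record which token follows which, and how often
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    # Return the token seen most often after the given one
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" appears most often after "the"
```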

Parameter

A numerical value inside a neural network that gets adjusted during training. Parameters encode everything the model has learned. Frontier models may contain billions or trillions of them.

Token

A unit of text that AI models process — often a whole word, part of a word, or symbol. Both prompts and outputs are counted in tokens.

A model's context window is the amount of text it can consider at once. Today's models handle context windows from hundreds of thousands of tokens to, in some cases, over a million.
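For a feel of how text becomes tokens, here is an illustrative splitter. Real models use learned subword schemes (such as byte-pair encoding) that often break words into pieces, so this simple word-and-punctuation split only approximates their behavior.

```python
import re

def tokenize(text):
    # Split into word chunks and individual punctuation marks
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("AI doesn't 'think' like humans.")
print(tokens)
print(len(tokens), "tokens")
```

Note that punctuation and word fragments each count as tokens, which is why token counts are usually higher than word counts.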

Hallucination

When an AI system generates incorrect, fabricated, or misleading information with confidence. Hallucinations occur because AI predicts patterns — it does not verify facts on its own.

Bias

Unfair or skewed results that appear when AI models learn from biased or imbalanced data. Reducing bias requires diverse training data and careful evaluation.

Alignment

The effort to ensure AI systems behave safely and according to human values. Alignment research aims to prevent harmful or unintended behavior.

Inference

The process of using a trained model to generate predictions or outputs. Every message you send to an AI system triggers inference.

Training

The process of teaching a model by exposing it to large datasets and adjusting its parameters. Training requires significant computing power and time.

Generative AI

AI systems that create new content — such as text, images, audio, or code — based on patterns learned from data. Examples include chatbots, image generators, and video synthesis tools.

Multimodal AI

AI models capable of processing more than one type of input — such as text, images, audio, and video. Multimodal models unlock capabilities like understanding charts, describing images, or answering questions about videos.

Retrieval-Augmented Generation (RAG)

A technique where an AI model first retrieves relevant information from trusted sources in real time, then uses that information to produce grounded, more accurate answers.

RAG improves reliability and reduces hallucinations by supplementing the model’s knowledge with real data.
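The two-step shape of RAG, retrieve then generate, can be sketched as follows. This assumed example scores documents by simple word overlap; production RAG systems use vector embeddings for retrieval, but the structure is the same: find relevant text, then place it in the prompt.

```python
documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(question):
    # Score each document by how many question words it shares (a stand-in
    # for the embedding-based similarity search real RAG systems use)
    q_words = set(question.lower().split())
    return max(documents,
               key=lambda d: len(q_words & set(d.lower().split())))

question = "How tall is the Eiffel Tower?"
context = retrieve(question)

# The retrieved text is placed into the prompt so the model's answer
# is grounded in real data rather than memory alone
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```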

Key Takeaway

These terms form the core vocabulary of modern AI. Understanding them helps you communicate clearly, evaluate AI behavior, and use AI tools with confidence.

The more fluent you are with this terminology, the easier it becomes to navigate the rapidly evolving world of artificial intelligence.

End of Chapter 12: Must-Know Terminology
