Glossary of Terms

A quick reference guide to the technical and instructional design terms used throughout this book.

A

Agent / Agentic Workflow

An AI system capable of autonomous decision-making and tool use to achieve a high-level goal. Unlike a standard chatbot that just answers a prompt, an agent can plan steps, browse the web, or execute code to complete a complex task.
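
To make the loop concrete, here is a minimal sketch in Python. The call_llm() helper and the two tools are hypothetical placeholders for a real model API and real integrations; only the plan-act-observe shape of an agentic workflow matters here.

```python
# Minimal agentic loop: the model chooses an action, we run it, and the result
# is fed back in until the model declares the goal complete.
# call_llm() is a hypothetical stand-in for a real model API call.

def call_llm(prompt: str) -> str:
    return "FINISH: draft outline complete"  # placeholder model response

TOOLS = {
    "search_web": lambda query: f"(search results for '{query}')",
    "run_code": lambda code: f"(output of running: {code})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(history) + "\nNext action?")
        if reply.startswith("FINISH:"):          # the agent decides it is done
            return reply.removeprefix("FINISH:").strip()
        tool, _, arg = reply.partition(" ")       # otherwise: pick a tool, use it
        history.append(TOOLS.get(tool, lambda a: "unknown tool")(arg))
    return "Stopped after max_steps."

print(run_agent("Draft an outline for an onboarding module"))
```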

Artificial Intelligence (AI)

The simulation of human intelligence processes by machines, especially computer systems. In this book, we primarily focus on Generative AI and Large Language Models.

C

Context Window

The amount of information (measured in tokens) an LLM can retain and process in its "working memory" during a single conversation. A larger context window allows the model to "read" and reference larger documents.
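
As a rough illustration, the sketch below estimates whether a long document fits in a context window using the 1,000 tokens ≈ 750 words rule of thumb (see Token). The 128,000-token window size is illustrative, not a specification for any particular model.

```python
# Rough fit check: estimate tokens from word count (1 token ≈ 0.75 words)
# and compare against an illustrative context window size.

def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)

CONTEXT_WINDOW = 128_000  # illustrative size in tokens, not a vendor spec

handbook = "word " * 90_000  # stands in for a ~90,000-word employee handbook
needed = estimate_tokens(handbook)
print(f"Estimated tokens: {needed:,} -> fits: {needed <= CONTEXT_WINDOW}")
```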

Co-Intelligence

A concept popularized by Ethan Mollick describing a partnership model where humans and AI work together in a symbiotic loop, with each party enhancing the capabilities of the other.

G

Generative AI (GenAI)

A subset of AI focused on creating new content—including text, images, audio, and code—in response to user prompts. Examples include ChatGPT, Claude, and Midjourney.

H

Hallucination

When an AI model generates output that sounds fluent and confident but is factually incorrect or nonsensical. This occurs because LLMs predict words based on probability, not truth.

Human-in-the-Loop (HITL)

A design methodology where human judgment is integrated into the AI workflow to verify accuracy, check for bias, and ensure ethical standards are met before content reaches the learner.
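
A minimal sketch of the pattern, with hypothetical generate_draft() and request_review() helpers standing in for a real model call and a real review queue: the AI output is held until a person approves it.

```python
# Human-in-the-loop gate: nothing reaches learners until a person approves it.

def generate_draft(topic: str) -> str:
    return f"(AI-generated draft about {topic})"  # placeholder for a model call

def request_review(draft: str) -> bool:
    # In a real workflow this would route the draft to a reviewer
    # (email, LMS queue, etc.); here we simulate an approval decision.
    print("Review requested for:", draft)
    return True

def publish_if_approved(topic: str):
    draft = generate_draft(topic)
    if request_review(draft):   # human judgment gates the release
        return draft
    return None                 # rejected drafts never reach the learner

print(publish_if_approved("data privacy basics"))
```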

L

Large Language Model (LLM)

A type of AI model trained on massive amounts of text data. It uses the statistical patterns learned from that data to understand, summarize, generate, and predict text. Examples include GPT-4, Claude 3.5, and Gemini.

Learning Management System (LMS)

A software application for the administration, documentation, tracking, reporting, and delivery of educational courses. AI is increasingly being integrated into LMS platforms to provide personalized recommendations.

P

Prompt Engineering

The art and science of crafting inputs (prompts) to guide Generative AI models to produce optimal outputs. It involves techniques like persona adoption, chain-of-thought reasoning, and constraint setting.
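
The sketch below shows how those techniques can be combined into a single prompt. The wording is illustrative, and the send() stub stands in for whatever chat model you use.

```python
# Assembling a prompt from common techniques: persona adoption,
# constraint setting, and a chain-of-thought style instruction.

persona = "You are a senior instructional designer."
constraints = "Keep the answer under 150 words and write for new hires."
reasoning = "Think through the learning objectives step by step before answering."
task = "Draft three quiz questions on our expense-report policy."

prompt = "\n".join([persona, constraints, reasoning, task])

def send(prompt: str) -> str:
    return "(model response)"  # hypothetical stand-in for a real API call

print(send(prompt))
```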

R

RAG (Retrieval-Augmented Generation)

A technique that connects an LLM to a specific, private knowledge base (like a company handbook). Before answering a question, the AI retrieves relevant facts from this trusted source, significantly reducing hallucinations.
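
A toy version of the pipeline, assuming a simple keyword-overlap search in place of the vector search real systems use, and a hypothetical answer_with_llm() stub: retrieve the most relevant passage, then ask the model to answer from it.

```python
# Toy RAG pipeline: retrieve the most relevant passage, then generate an
# answer grounded in it. Keyword overlap stands in for embedding search.

HANDBOOK = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be submitted within 30 days of purchase.",
    "Remote workers should attend the quarterly on-site planning week.",
]

def retrieve(question: str) -> str:
    words = set(question.lower().split())
    return max(HANDBOOK, key=lambda p: len(words & set(p.lower().split())))

def answer_with_llm(prompt: str) -> str:
    return "(grounded model answer)"  # placeholder for a real model call

question = "How many vacation days do employees accrue each month?"
context = retrieve(question)
prompt = f"Answer using only this source:\n{context}\n\nQuestion: {question}"
print(answer_with_llm(prompt))
```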

S

SCORM (Shareable Content Object Reference Model)

A set of technical standards for e-learning software products. It tells programmers how to write their code so that it can "play well" with other e-learning software.

Synthetic Learners

AI-simulated personas that mimic the behaviors, questions, and misconceptions of real students. Designers use them to stress-test curriculum and practice handling difficult classroom scenarios.
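
One way to set this up is a persona prompt like the sketch below; the learner traits and the simulate() stub are illustrative placeholders, not a prescribed format.

```python
# Building a synthetic learner persona to stress-test a lesson draft.

persona = {
    "name": "Jordan",
    "background": "career changer, new to spreadsheets",
    "habit": "asks 'why does this matter?' and misreads formulas",
}

system_prompt = (
    f"Role-play as {persona['name']}, a learner who is {persona['background']} "
    f"and who typically {persona['habit']}. React to the lesson below with the "
    "questions and misconceptions this learner would realistically have."
)

def simulate(system_prompt: str, lesson: str) -> str:
    return "(simulated learner reactions)"  # placeholder for a real model call

print(simulate(system_prompt, "Lesson 1: Introduction to VLOOKUP"))
```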

T

Token

The basic unit of text that an LLM reads and generates. A token can be a word, part of a word, or a character. A useful rule of thumb is 1,000 tokens ≈ 750 words.
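
The rule of thumb, expressed as a quick estimate in code. Actual counts vary by model and tokenizer, so treat the result as an approximation.

```python
# Rough token estimate from the 1,000 tokens ≈ 750 words rule of thumb.
# Actual counts depend on the model's tokenizer.

def rough_token_count(text: str) -> int:
    return round(len(text.split()) * 1000 / 750)

lesson = "Welcome to the course. " * 200   # an 800-word passage
print(rough_token_count(lesson))           # ≈ 1,067 tokens
```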

Z

Zero-Shot Prompting

Asking an AI model to perform a task without providing any examples. (Contrast with Few-Shot Prompting, where you provide a few examples of the desired output format).
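
The contrast in miniature: the same classification task framed zero-shot and few-shot. The example comments and labels are illustrative, and classify() stands in for a real model call.

```python
# Zero-shot vs. few-shot: the same task, with and without worked examples.

task = "Label this learner comment as POSITIVE or NEGATIVE: 'The module dragged on.'"

zero_shot = task  # no examples, just the instruction

few_shot = "\n".join([
    "Comment: 'Loved the interactive quiz.' -> POSITIVE",
    "Comment: 'The audio kept cutting out.' -> NEGATIVE",
    task,
])

def classify(prompt: str) -> str:
    return "(model label)"  # hypothetical stand-in for a real model call

print(classify(zero_shot))
print(classify(few_shot))
```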