Chapter 9

Glossary

Definitions of key terms used in DSPy and LLM development.

Adapter
A component that translates between a module's signature and the concrete prompt or message format sent to the language model, formatting input fields into a request and parsing the model's reply back into output fields.
BootstrapFewShot
An optimizer in DSPy that runs the program on training inputs and keeps the traces that pass the metric as few-shot demonstrations in the prompt.
ChainOfThought
A reasoning technique where the model generates a sequence of intermediate steps before arriving at the final answer.
Compilation
The process of optimizing a DSPy program by training its parameters (like instructions and examples) against a metric.
Context
Information provided to the language model to help it answer a query, often retrieved from external documents.
Demonstration
An example input-output pair included in the prompt to guide the model's behavior (few-shot learning).
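A minimal sketch of how demonstrations end up in a prompt: each input-output pair is rendered ahead of the actual query so the model can imitate the pattern. The field names and formatting here are illustrative, not DSPy's internal format.

```python
# Illustrative few-shot prompt assembly: demonstrations precede the query.
demos = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is the capital of France?", "answer": "Paris"},
]

def build_prompt(demos, question):
    # Render each demonstration, then the unanswered query.
    lines = [f"Question: {d['question']}\nAnswer: {d['answer']}" for d in demos]
    lines.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(lines)

print(build_prompt(demos, "What is 3 + 3?"))
```

Optimizers such as BootstrapFewShot automate the choice of which pairs to include.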
DSPy
Declarative Self-improving Python. A framework for programming with language models as composable modules.
Evaluation
The process of measuring the quality of a system's outputs using defined metrics and a test dataset.
Hallucination
A phenomenon where a language model generates incorrect or nonsensical information that appears plausible.
Metric
A function that takes an example and a prediction (and optionally specific trace information) and returns a score indicating quality.
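A minimal metric sketch, assuming an exact-match criterion. DSPy metrics receive the gold example, the prediction, and an optional trace; here plain dicts stand in for DSPy's Example and Prediction objects.

```python
# A simple metric: score is True when the predicted answer matches the
# gold answer, ignoring case and surrounding whitespace.
def exact_match(example, prediction, trace=None):
    return example["answer"].strip().lower() == prediction["answer"].strip().lower()

print(exact_match({"answer": "Paris"}, {"answer": " paris"}))  # True
print(exact_match({"answer": "Paris"}, {"answer": "Lyon"}))    # False
```

Metrics can also return graded scores (e.g., a float in [0, 1]) rather than a boolean.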
Module
A building block in DSPy (like dspy.Predict or dspy.ChainOfThought) that encapsulates a transformation from input to output.
Optimizer
An algorithm (often called a Teleprompter) that tunes the parameters of a DSPy program (prompts, examples) to maximize a metric.
Predictor
A module that uses a language model to predict outputs based on inputs and a defined signature.
RAG (Retrieval-Augmented Generation)
A pattern where relevant documents are retrieved and fed to the LLM to ground its responses in specific data.
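A toy sketch of the RAG pattern using word-overlap retrieval; a real system would use a vector index and pass the assembled prompt to an LLM. The corpus and function names are illustrative.

```python
# Toy retrieval-augmented generation: retrieve the best-matching document,
# then place it as context ahead of the question.
corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain on Earth.",
]

def retrieve(query, docs, k=1):
    # Rank documents by word overlap with the query (stand-in for a
    # real similarity search).
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def rag_prompt(query):
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("What is the capital of France?"))
```

Grounding the model in retrieved context reduces (but does not eliminate) hallucination.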
Signature
A declarative specification of the input/output behavior of a DSPy module (e.g., "question -> answer").
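The string form names input fields, an arrow, then output fields, with commas separating multiple fields. A minimal parser for this notation (a sketch; DSPy's own parser also handles type annotations):

```python
# Parse "inputs -> outputs" signature strings into field-name lists.
def parse_signature(sig):
    inputs, outputs = sig.split("->")
    return ([f.strip() for f in inputs.split(",")],
            [f.strip() for f in outputs.split(",")])

print(parse_signature("question -> answer"))
# (['question'], ['answer'])
print(parse_signature("context, question -> answer"))
# (['context', 'question'], ['answer'])
```

Signatures can also be declared as classes with field objects when descriptions or types are needed.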
Teleprompter
The older term for an Optimizer in DSPy; responsible for automatically generating and selecting effective prompts.