AI & ML Tech Glossary
Clear definitions of 500+ AI, ML, and systems terms, built for professionals.
What You'll Find in This Glossary
Get the latest and most used terms in AI/ML and never miss any reference
AI Foundations
Core concepts that explain how modern AI systems work. And learn essential terms around models
Generative & Agentic AI
Terms covering generative models, autonomous agents, and AI workflows.
AI Systems & Infrastructure
Concepts related to deploying, scaling, and operating AI systems. Includes tooling, architectures
Machine Learning & Data
Key terminology across supervised, unsupervised, and applied ML. Covers data pipelines, features.
Popular Terms
Get the latest and most used terms in AI/ML and never miss any reference
Parameter-Efficient Fine-Tuning (PEFT)
PEFT adapts a pretrained model by training only a small number of additional or selected parameters, which reduces the compute, memory, and storage required compared...
Reinforcement Learning from Human Feedback (RLHF)
RLHF aligns a model with human preferences by training a reward model from human judgments and then using reinforcement learning to optimize the model to...
Agent-to-Agent Protocol (A2A)
Agent-to-Agent Protocol (A2A) is a structured messaging pattern that lets AI agents delegate tasks, share results, and coordinate safely using shared schemas, retries, and traceable...
Prompt Leakage
Prompt leakage is when an AI system unintentionally reveals hidden prompts or sensitive context—like system instructions, tool schemas, or private RAG documents—often due to prompt...
KV Cache Quantization
KV cache quantization stores a transformer’s attention KV cache in lower precision (e.g., INT8/INT4) to cut GPU memory use and serve more long-context requests concurrently,...
Hybrid Search
Hybrid search combines keyword (BM25) and vector (embedding) retrieval to improve relevance—capturing exact matches like IDs and error codes while still handling paraphrases and semantic...
RAG Groundedness
RAG groundedness measures how well an LLM’s answer is supported by the retrieved context. It focuses on claim-level faithfulness and citation support, helping reduce hallucinations...
Reranking (RAG Reranker)
Reranking is a second-stage retrieval step that reorders initially retrieved RAG chunks using a stronger relevance model (often a cross-encoder) so the LLM receives higher-precision...
Go from concepts to career application
Join a free live session on applying AI concepts in real interviews
Explore free webinar



