Agent, RAG, MCP & ML


Anatomy of an AI agent

How a modern LLM agent is put together: sense inputs, think (deterministic, LLM, or hybrid), act and observe in a loop, finish conditions, plus optional steps (evaluate, memory, description, planning, chain-of-thought, ask), and how RAG grounds answers.
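The sense–think–act–observe loop above can be sketched in a few lines. This is a minimal illustration, not a production agent: `think` here is a deterministic stand-in for an LLM policy, and the `calculator` tool and finish condition are assumptions for the example.

```python
# Minimal sense–think–act–observe loop with a finish condition and a step budget.
def run_agent(task, tools, max_steps=5):
    observation = task                               # sense: the initial input
    history = []
    for _ in range(max_steps):
        action, arg = think(observation, history)    # think: pick a tool or finish
        if action == "finish":                       # finish condition
            return arg
        observation = tools[action](arg)             # act, then observe the result
        history.append((action, arg, observation))
    return observation                               # fallback when the budget runs out

def think(observation, history):
    # Deterministic stand-in for an LLM policy: call the tool once, then finish.
    if not history:
        return "calculator", observation
    return "finish", history[-1][2]

# Toy tool: evaluates an arithmetic expression (illustration only; eval is unsafe).
tools = {"calculator": lambda expr: str(eval(expr))}
```

In a real agent, `think` would format the history into a prompt and parse the model's chosen action from its reply.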

Read More · 27.03.2026

AI Agent Loop Patterns

Twelve ReAct-style agent loop patterns: think–act–observe and extensions (dialogue, description, multi-tool, reflection, memory, planning, chain-of-thought, and learning) with pros, cons, and model notes.

Read More · 20.03.2026

Anatomy of RAG

Typical vector RAG pipeline: ingest, chunk, embed, store, query, retrieve, rerank, augment, generate — plus core components (data layer, chunking, embeddings, storage, retriever, generator).
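The chunk–embed–store–retrieve–augment steps above fit in a short sketch. Bag-of-words overlap stands in for real embeddings here, and the prompt template, chunk size, and sample documents are illustrative assumptions.

```python
# Toy RAG pipeline: chunk -> "embed" (bag of words) -> store -> retrieve -> augment.
def chunk(text, size=40):
    # Fixed-size character chunking; real pipelines usually count tokens.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(passage):
    # Stand-in embedding: the set of lowercased words.
    return set(passage.lower().split())

def retrieve(store, query, k=1):
    # Rank stored chunks by word overlap with the query.
    q = embed(query)
    ranked = sorted(store, key=lambda c: len(embed(c) & q), reverse=True)
    return ranked[:k]

def augment(query, contexts):
    # Assemble the generation prompt from retrieved context plus the question.
    return "Context:\n" + "\n".join(contexts) + f"\n\nQuestion: {query}"

docs = ["MCP is a client-server protocol.", "RAG grounds answers in retrieved text."]
store = [c for d in docs for c in chunk(d)]
prompt = augment("What grounds answers?", retrieve(store, "What grounds answers?"))
```

A real pipeline swaps the word-overlap scorer for vector similarity over learned embeddings and adds a reranking pass before augmentation.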

Read More · 27.03.2026

RAG Design Patterns

Twenty-two practical retrieval-augmented generation patterns: from fixed-size chunking, optional overlap, metadata per chunk, contextual retrieval, context-aware and layout-aware chunking, hierarchy (child/parent), late chunking, to re-ranking, query rewrite, query expansion, multi-query, multi-hop query RAG, agentic RAG, graphs, RAG Wiki, vectorless RAG, CAG (cache-augmented generation), domain-tuned embeddings, self-reflective RAG, RAGAS evaluation, and hybrid RAG, plus three stacks teams actually ship and mistakes to avoid.
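The first two patterns in the list, fixed-size chunking with optional overlap, are simple enough to show directly. Sizes here are in characters for readability; production systems usually count tokens. The specific sizes are illustrative assumptions.

```python
# Fixed-size chunking with optional overlap: consecutive chunks share `overlap`
# characters so that sentences split at a boundary still appear intact somewhere.
def chunk_with_overlap(text, size=10, overlap=3):
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

chunks = chunk_with_overlap("abcdefghijklmno", size=10, overlap=3)
# Each chunk's tail repeats as the next chunk's head.
```

The "metadata per chunk" pattern just pairs each string with its source document ID and position, so retrieved chunks can be cited and their parents fetched.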

Read More · 24.03.2026

Context Engineering

Context engineering is assembling the full operating envelope for an LLM app before each request: instructions, grounded knowledge and live data, memory, tools, and user-specific facts, so behavior stays consistent and on-policy.
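Assembling that envelope often amounts to concatenating named sections in a fixed order. A minimal sketch, assuming an arbitrary section layout and sample values; the headings, ordering, and example content are not a standard, just one way to keep the assembly deterministic.

```python
# Deterministically assemble the operating envelope into one prompt.
def build_context(instructions, knowledge, memory, tools, user_facts):
    sections = [
        ("Instructions", instructions),
        ("Knowledge", "\n".join(knowledge)),     # grounded facts and live data
        ("Memory", "\n".join(memory)),           # prior-turn state worth keeping
        ("Tools", ", ".join(tools)),             # tool names the model may call
        ("User", user_facts),                    # user-specific facts
    ]
    # Empty sections are dropped so the prompt stays compact.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections if body)

prompt = build_context(
    instructions="Answer using only the Knowledge section.",
    knowledge=["Refund window is 30 days."],
    memory=["User asked about order #123 earlier."],
    tools=["lookup_order", "refund"],
    user_facts="Customer since 2021.",
)
```

Keeping assembly in one function makes the envelope auditable: every request can be logged and replayed with the exact context the model saw.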

Read More · 27.04.2026

LLM Engines

Reference list of LLM and embedding models by provider: OpenAI, Google, Cohere, Voyage AI, Qwen, DeepSeek, Kimi. Generative models, reasoners, embeddings, rerankers.

Read More · 21.03.2026

Evaluation of LLM systems

Offline and online evaluation for LLM products: task benchmarks, human and model judges, RAG and tool-use metrics, safety and latency, and how to tie scores to release gates and regression checks.
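Tying scores to release gates can be as small as the sketch below: score a system against a golden set and compare against a threshold. Exact match is the simplest possible metric; the dataset, the gate value, and the toy system are all illustrative assumptions.

```python
# Tiny offline eval harness: exact-match scoring plus a release gate.
def exact_match(pred, gold):
    return pred.strip().lower() == gold.strip().lower()

def evaluate(system, dataset, gate=0.8):
    # Score the system on every (question, answer) pair, then apply the gate.
    score = sum(exact_match(system(q), a) for q, a in dataset) / len(dataset)
    return score, score >= gate          # (metric value, pass/fail for release)

dataset = [("capital of France?", "Paris"), ("2+2?", "4")]
toy_system = lambda q: {"capital of France?": "Paris", "2+2?": "4"}[q]
score, passed = evaluate(toy_system, dataset)
```

In practice `exact_match` is replaced by task-appropriate metrics (model judges, RAG faithfulness, latency budgets), but the gate logic stays this simple: a number, a threshold, and a pass/fail decision wired into CI.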

Read More · 02.06.2026

Security for LLM and agent systems

Production-style checklist: secure ingest and index, API and edge, LLM and prompt safety, plus privacy and ops, mirroring the Security deep dive on Agent-RAG architectures.

Read More · 09.02.2026

Production Agent-RAG Architectures

Enterprise knowledge copilot as agentic RAG: ingest and hybrid retrieval, optional ReAct, grounding and citations, with workflow, typical tools, and when it fits.

Read More · 30.03.2026

Model Context Protocol (Anthropic)

The Model Context Protocol (MCP) as defined by Anthropic is a standardized client–server protocol and runtime ecosystem for connecting AI applications to external tools, data sources, and prompts. It is not an architectural pattern for designing agents; it is the concrete protocol and specification that enables applications like Claude, Cursor, and other MCP clients to discover and use Resources, Tools, and Prompts exposed by MCP servers.
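On the wire, MCP is JSON-RPC 2.0. The sketch below shows a `tools/list` request and response as plain dicts, with no transport layer; the method name and the `inputSchema` field follow the MCP specification, while the `search_docs` tool itself is an illustrative server-defined example.

```python
import json

# JSON-RPC 2.0 request a client sends to discover a server's tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Shape of the server's response: each tool carries a name, a description,
# and a JSON Schema for its arguments (the "inputSchema" field per the spec).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_docs",   # illustrative tool, not from the spec
                "description": "Search the indexed documentation.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            }
        ]
    },
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
```

After discovery, the client invokes a tool with a `tools/call` request carrying the tool name and arguments matching its `inputSchema`.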

Read More · 02.01.2026

Machine learning and ML models

What machine learning is, how ML models relate to training and inference, and typical applications — plus how LLMs, RAG, and agents fit into the same picture.
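The training-versus-inference split above can be shown in miniature with least-squares fitting of a one-dimensional linear model: training produces parameters from data, inference applies those frozen parameters to new input. Pure stdlib; the data is an illustrative assumption.

```python
# Training: learn slope and intercept from (x, y) pairs by least squares.
def train(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # the learned parameters ("the model")

# Inference: apply the frozen parameters to an unseen input.
def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3], [2, 4, 6])      # learns y = 2x from three examples
```

LLMs follow the same split at vastly larger scale: training fixes billions of parameters once, and every chat turn is inference over those frozen weights.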

Read More · 16.04.2026