A field guide from the team
building enterprise agentic AI.
We are ASCENDING — an AWS Advanced Consulting Partner that builds Jarvis AI, a governance-first, MCP-native agent platform. This is the public research arm of that team: what we've learned shipping agentic systems for enterprises, written for the people doing the same work.
On our desks
Anthropic's MCP changelog, the NIST AI RMF update, and too much coffee.
Four pillars, one argument.
The enterprise AI conversation in 2026 still circles the same four topics. Each pillar here is a long read, not a landing page. Start where your quarter is burning hottest.
Agentic AI
The operating theory of autonomous agents — where they work, where the hype still outruns the evidence, and which workloads actually cost less than the human work they replace.
Model Context Protocol
MCP went from Anthropic research draft to foundation-backed standard in thirteen months. A reader-friendly reference for clients, servers, and gateways.
AI Governance
Policy templates, approval workflows, and the uncomfortable organizational questions — written alongside CISOs who already filed their ISO 42001 paperwork.
Enterprise RAG
Retrieval is still the hardest part of the stack. A pillar on document pipelines, re-rankers, evals, and when agentic RAG earns its seat.
What we published this quarter
A reader's guide to evaluating MCP gateways
The evaluation criteria we use when readers ask which gateway to pilot: tool-level authorization, credential brokering, per-tool observability, egress enforcement, and policy-as-code. Drawn from the published documentation of the ~15 MCP gateway vendors tracked in this space.
How to measure AI agent ROI without embarrassing yourself
Productivity-minute arithmetic is how the first wave of agent programs embarrassed themselves. A framework from CFO-side reviewers who now require direct P&L impact.
Moveworks vs Glean, after the ServiceNow acquisition
Moveworks closed into ServiceNow at $2.85B in late 2025. A side-by-side rebuilt from public product documentation, Moveworks' and Glean's own homepages, AWS Marketplace listings, and analyst commentary.
Practitioner-written, openly sponsored.
We are not an independent publication. We are the ASCENDING team that ships Jarvis AI — the same people building the gateway, the governance layer, and the MCP integrations we write about. Writing from inside the problem is the point; pretending otherwise would be dishonest and bad for trust.
Every claim is anchored to a public source we can link to — vendor documentation, standards bodies (ISO, NIST, Linux Foundation), analyst reports (Gartner, Futurum), and peer-reviewed papers.
Every page that discusses Jarvis opens with a disclosure. Every comparison that includes Jarvis marks it clearly. We rank Jarvis honestly in our own tables — where it loses, we say so.
Pricing pages are dated. Comparisons show sources column-by-column. When our reading is directional rather than authoritative, we say so on the page, not in a footnote.
Most read this month
Who writes here
Every piece carries a byline and — where the claim is load-bearing — a separate reviewer. Contributors' LinkedIn profiles are linked from every byline for transparent verification.
Founder and editor of Explore Agentic. Writes across the enterprise agentic AI stack: MCP, governance, and the buying cycles that determine what actually ships.
Covers MCP server implementation patterns, A2A protocol design, and the runtime trade-offs platform teams face when shipping multi-agent systems.
Covers AWS-native agent infrastructure: Bedrock, AgentCore Runtime, and the deployment patterns that survive enterprise security review.
Covers the identity layer of governed AI: OAuth/OIDC for MCP, RBAC propagation, and the on-behalf-of patterns that pass security review.
Writes the Enterprise RAG pillar and the retrieval- and evaluation-heavy glossary entries on Explore Agentic.
Covers natural-language data interfaces: text-to-SQL, semantic layers, and the edge cases that make BI agents production-fragile.
Covers vector search, embedding models, and the evaluation frameworks that separate retrieval that works from retrieval that demos well.
Covers customer programs and case study methodology. The practitioner side of how Jarvis customers actually deploy and what gets measured.
Covers product strategy for enterprise AI: positioning, pricing, and the buyer journey from pilot to procurement. Anchors the comparisons library.
Covers customer outcomes and the storytelling that turns post-deployment data into actionable case studies, including the metrics that don't show up in the dashboard.
Covers AI governance, procurement, and enterprise buying cycles. Reviews every comparison and playbook on Explore Agentic before publication.
Covers go-to-market patterns for enterprise AI: partner ecosystems, channel motions, and the procurement-to-pilot bridge.
Covers AI vendor evaluation, RFP cycles, and the procurement questions enterprise buyers actually negotiate: cost, contractual data terms, and exit clauses.
Advises on the agentic AI pillar and reviews technical claims across the site before publication.
What this hub answers, in plain English.
The nine questions our readers — CIOs, AI leads, and platform architects — ask before they bookmark this site. Each answer links into the deeper pillar where it is sourced.
- 01
What is agentic AI?
- Agentic AI describes systems built around an autonomous loop — observe, reason, act, and replan without waiting for a human click. The distinction matters: a workflow with a language model bolted on is not an agent. Real agents can rewrite their own plan mid-execution when a tool call fails or new evidence surfaces. Gartner counted only about 130 vendors shipping anything that meets that bar in mid-2025, against thousands of self-described agentic vendors.
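The loop described above can be sketched in a few lines. This is an illustrative skeleton, not any vendor's framework: `reason` stands in for a model call that produces or rewrites a plan, and `tools` is a plain dict of callables.

```python
# A minimal sketch of the agentic loop: observe, reason, act, replan.
# All names here are illustrative assumptions, not a real framework API.

def run_agent(goal, tools, reason, max_steps=10):
    """Drive a plan-act-observe loop until the plan is exhausted or steps run out."""
    plan = reason(goal=goal, observations=[])   # initial plan from the model
    observations = []
    for _ in range(max_steps):
        if not plan:
            return observations                 # plan exhausted: done
        step = plan.pop(0)
        try:
            result = tools[step["tool"]](**step["args"])   # act
        except Exception as exc:
            result = {"error": str(exc)}        # a failed tool call is evidence,
                                                # not a crash
        observations.append({"step": step, "result": result})
        # replan: the model sees all evidence so far and may rewrite the plan
        plan = reason(goal=goal, observations=observations)
    return observations
```

The replanning call at the bottom of the loop is the line that separates an agent from a workflow with a model bolted on.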
- 02
What is the Model Context Protocol (MCP)?
- MCP is the open standard agents use to discover and call external tools. Anthropic released the draft in November 2024; by December 2025 it had been donated to the Linux Foundation and adopted by every major model provider. Servers expose tools, resources, and prompts; clients (the agent runtime) consume them through a uniform JSON-RPC interface. The practical payoff: one integration per backend, instead of one per agent framework.
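Under the hood, those client-to-server messages are plain JSON-RPC 2.0. A sketch of the two requests behind a tool invocation, per the public spec; the transport (stdio or HTTP) and the `search_tickets` tool are assumptions for illustration:

```python
import json

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope, as MCP clients send them."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Discover what the server exposes, then call one tool by name.
list_req = make_request(1, "tools/list", {})
call_req = make_request(2, "tools/call", {
    "name": "search_tickets",                        # hypothetical server tool
    "arguments": {"query": "open P1 incidents"},
})
```

The uniform envelope is the whole payoff: the backend team writes `search_tickets` once, and any MCP client can discover and call it.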
- 03
How is agentic AI different from enterprise RAG?
- Vanilla RAG retrieves passages from your corpus and stuffs them into a prompt — a one-shot read-and-reply. Agentic systems can chain multiple retrievals, call other tools between them, and decide when they have enough evidence. Agentic RAG (the hybrid) is now standard for any retrieval workload that requires cross-document reasoning. Vanilla RAG is still the right answer for short factual questions where one retrieval will do the job.
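The difference is visible in code. A sketch of the agentic-RAG hop loop, under stated assumptions: `retrieve` stands in for your vector store and `judge` for a model call that decides whether the evidence suffices or proposes the next query.

```python
# A sketch of agentic RAG: retrieve, judge sufficiency, reformulate, repeat.
# `retrieve` and `judge` are placeholder assumptions, not a specific library.

def agentic_rag(question, retrieve, judge, max_hops=3):
    """Chain retrievals until the model judges the evidence sufficient."""
    evidence, query = [], question
    for _ in range(max_hops):
        evidence.extend(retrieve(query))        # one retrieval hop
        verdict = judge(question, evidence)     # enough to answer yet?
        if verdict["sufficient"]:
            break
        query = verdict["next_query"]           # reformulate and hop again
    return evidence
```

Vanilla RAG is this function with `max_hops=1` and no judge: one retrieval, straight into the prompt.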
- 04
How does AI governance change when agents are autonomous?
- Static review boards do not catch agents that change behavior between runs. Governance has to move to runtime: tool-level authorization, per-call audit trails, policy-as-code that gates tool execution, and guardian agents that supervise primary agents. ISO/IEC 42001 and the NIST AI Risk Management Framework both expect this loop for agentic deployments. The compliance question stops being "what model did you use" and becomes "what did the agent actually do, and who approved it."
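What policy-as-code gating a tool call looks like in miniature, with an invented policy shape and audit schema (no specific product's format is implied):

```python
import datetime

# Illustrative policy: who may call which tool, and with what ceiling.
POLICY = {
    "refund_customer": {"roles": {"support_lead"}, "max_amount": 500},
}

def gated_call(agent_role, tool, args, execute, audit_log):
    """Check policy, record an audit entry, and only then execute the tool."""
    rule = POLICY.get(tool)
    allowed = (
        rule is not None
        and agent_role in rule["roles"]
        and args.get("amount", 0) <= rule.get("max_amount", float("inf"))
    )
    audit_log.append({                      # per-call audit trail, allow or deny
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": agent_role, "tool": tool, "args": args, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"policy denied {tool} for {agent_role}")
    return execute(**args)
```

Note that the audit entry is written before the allow/deny branch, so denials leave the same evidence trail as approvals — which is exactly what "what did the agent actually do" audits need.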
- 05
Who edits Explore Agentic, and why does that matter for trust?
- This hub is published by ASCENDING — the AWS Advanced Consulting Partner that builds Jarvis AI, a governance-first, MCP-native agent platform. We disclose this on every page that mentions Jarvis or a competitor, and we mark the Jarvis row in every comparison table. Writing from inside the problem lets us show production patterns instead of secondary-source summaries; the visible disclosure is there so readers can weigh the source.
- 06
How often is the hub updated?
- The four pillars — Agentic AI, MCP, AI Governance, Enterprise RAG — are reviewed each quarter and refreshed against the latest standards drafts, vendor releases, and analyst reports. Glossary entries update on demand whenever the underlying spec or category shifts. Every page carries a visible "Updated [month, year]" stamp; the publish and modified dates also appear in the page's Article structured data so search engines see the freshness directly.
- 07
Should we build our own agents or buy an enterprise platform?
- Build for unique workflows where the agent loop is itself a competitive moat — proprietary planning logic, regulated decision paths, or deeply embedded internal tools. Buy for the long tail: support triage, sales research, IT helpdesk, document workflows, employee Q&A. The economics in 2026 favor buy-then-extend: a governance-first platform like Jarvis AI gives you the gateway, registry, audit trail, and MCP integrations on day one, then you author the prompts and tools that encode your specific work. The DIY path adds 6–12 months of platform engineering before the first agent ships, plus the ongoing burden of keeping pace with model and protocol churn. Most enterprises that started DIY in 2024 are now consolidating onto a platform.
- 08
How do you measure ROI on enterprise agentic AI?
- Productivity-minute arithmetic — "each agent saves N minutes per task" — is how the first wave of programs embarrassed themselves in 2024–2025. The framework that survives CFO review pins agent value to direct P&L impact: reduced fully-loaded support cost per ticket, deflected hires in a backfill plan, accelerated revenue from faster sales-research cycles, or recovered margin from automated reconciliation. Salesforce reported $1.7M of new sales pipeline generated by Agentforce in year one as a benchmark of the magnitude that lands in board-deck slides. Our /playbooks/ai-agent-roi piece walks through the model line-by-line, including the metrics CFOs reject and the substitutes that hold up.
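The arithmetic behind the P&L-anchored framing, with invented numbers purely for illustration: value is cost-per-ticket reduction times deflected volume, net of platform cost — not minutes saved.

```python
# A worked sketch of P&L-anchored agent ROI. All figures are hypothetical.

def support_roi(tickets_per_year, cost_per_ticket, deflection_rate,
                platform_cost_per_year):
    """Annual net P&L impact of agent-deflected tier-one tickets."""
    gross_savings = tickets_per_year * deflection_rate * cost_per_ticket
    return gross_savings - platform_cost_per_year

# e.g. 120k tickets/yr at $18 fully loaded, 25% deflection, $300k platform:
net = support_roi(120_000, 18.0, 0.25, 300_000)   # → 240000.0
```

The same structure works for the other anchors in the list: swap tickets for loan files or reconciliations and cost-per-ticket for the fully-loaded unit cost a CFO will actually sign off on.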
- 09
Which industries have deployed agentic AI in production right now?
- Customer support and IT helpdesk are the furthest along — Moveworks (now part of ServiceNow), Salesforce Agentforce, and Zendesk's AI agents are all generally available with reference customers reporting 20–40% deflection of tier-one tickets. Financial services have moved on document-heavy workflows: KYC reviews, loan processing, claims triage, with governance layered to satisfy regulators. Healthcare is slower but real for revenue-cycle management and prior-authorization workflows. The pattern: bounded workflows with rich evidence trails and clear undo paths deploy first; open-ended creative or strategic work stays human-led for now.
The team writing this ships Jarvis AI
This hub is the editorial layer. Jarvis is where the patterns we cover — governance, registry, guardrails — get deployed. If you're scoping a program rather than just reading, the product page is the next step.
ascendingdc.com/jarvis-ai — ASCENDING's enterprise agentic AI platform.