Security builder & leader

Realm Labs: RSAC 2026 Innovation Sandbox Profile

← Back to comparison

This profile was compiled in March 2026 using AI tooling guided by security product strategy guidance from Lenny Zeltser's MCP server. The analysis was performed by AI without direct human validation, to demonstrate the capabilities of AI agents guided by an expert framework. Outside this demo, a human analyst would conduct iterative conversations with the AI agent to arrive at more accurate conclusions.

Executive Summary

Realm Labs builds AI security tools that monitor how large language models reason internally, not just what they output. Founded in 2023 by former Symantec and Splunk AI security researcher Saurabh Shintre, the company offers three products: an AI firewall (OmniGuard), an internal observability platform (Prism), and a data governance/DLP tool (DataRealm). The company raised $5 million from Crosspoint Capital Partners at RSAC 2026 and counts Anthropic among its early customers. Its core differentiator is “internal observability,” which claims to inspect attention patterns, chain-of-thought processes, and token probabilities inside LLMs during inference.

Company Overview

FieldDetailEvidence
Founded2023Company website; LinkedIn
HeadquartersSunnyvale, CaliforniaCompany website
Funding$5M from Crosspoint Capital Partners (March 2026)NSFOCUS RSAC analysis; Security Boulevard
StageSeedInferred from funding amount, team size, and product maturity
Employees~6 (6 named on website)Company page
Key InvestorsCrosspoint Capital Partners, Tola Capital, First Rays Venture Partners, Silver Buckshot, Firestreak VenturesCompany website; First Rays announcement

Investor Track Records:

Problem Definition and Market Opportunity

Enterprises deploying AI face a visibility gap. Current guardrails and observability tools monitor inputs and outputs but treat models as black boxes. This creates blind spots for prompt injection, data leakage, hallucination, and privilege escalation in AI applications.

The problem intensifies as organizations move from simple chatbots to autonomous AI agents. Agents that access internal data, execute multi-step tasks, and interact with external APIs expand the attack surface beyond what traditional perimeter-based monitoring can cover. Palo Alto Networks’ Chief Security Intelligence Officer has called AI agents “the biggest insider threat of 2026”, a framing Realm Labs explicitly endorses in its marketing.

The AI security market is growing rapidly. Cybersecurity startups raised $9.4 billion in H1 2025 alone, the highest in three years. AI trust and security has become a distinct product category with dedicated budget lines. [Confirmed, third-party financial reports cited by company]

Realm Labs targets the intersection of three markets: AI guardrails, AI observability, and data loss prevention for AI. Each market is nascent, with no dominant incumbent. [Inferred]

Product Capabilities

Realm Labs ships three products:

1. Realm OmniGuard (AI Firewall) Blocks harmful content, jailbreaks, and prompt injections across text, audio, images, and video. Supports 50+ languages. Key specs include 20-100ms latency for text, sub-100ms for audio, and streaming support. Processes audio directly without text conversion and detects harmful visual content (including AI-generated) without OCR. Deployable on-premises via Docker, Nvidia Triton, or custom ML infrastructure across AWS, GCP, Azure, and Oracle Cloud. [Confirmed, company claim, partially verifiable via playground.realmlabs.ai]

2. Realm Prism (AI Observability) The flagship product. Monitors AI systems across five layers: infrastructure (CPU/GPU, memory), data (drift, completeness, RAG timeliness), application (LLM logs, API errors, response times), internal reasoning (attention patterns, chain of thought, token probabilities), and output quality. Offers four deployment modes: batch analysis, real-time sidecar, inline guardrails, and generative endpoint for open-weight models. Claims to capture 10,000+ “thought patterns” for anomaly detection. [Confirmed, company claim, unverifiable without independent testing]

The internal observability layer is the core differentiator. Realm claims to identify “regions in the LLM where harmful information is stored” and detect when queries access those regions before harmful outputs materialize. The technical mechanism for this inspection is not publicly documented. [Confirmed, company claim, unverifiable]

3. DataRealm (Data Governance and DLP) Discovers and classifies sensitive unstructured data across SharePoint, Box, and Google Workspace. Lightweight browser-based endpoint agent blocks sensitive data uploads to AI tools (ChatGPT, Cursor, Claude) in real time. Supports document-level permission enforcement for RAG applications and RBAC. Claims petabyte-scale scanning “in days not months.” [Confirmed, company claim, unverifiable]

SOC 2 Compliance: The company website displays a SOC 2 badge, suggesting completed or in-progress compliance certification. [Confirmed, company claim, verifiable]

Competitive Positioning

Realm Labs competes across a fragmented landscape of AI guardrail, observability, and security vendors. Its positioning rests on a single technical claim: it inspects model internals, not just inputs and outputs.

CompetitorFocusRealm Labs Differentiator
HiddenLayerBroad ML/DL model security (MLDR/AIDR)Realm focuses exclusively on LLMs with deeper internal monitoring vs. HiddenLayer’s broader but shallower coverage
WhyLabsOpen-source AI observability, guardrailsRealm adds internal reasoning inspection beyond WhyLabs’ behavioral monitoring
Fiddler AIAI observability and explainabilityFiddler focuses on feature importance/drift; Realm targets internal chain-of-thought
CalypsoAIAI security (red-teaming + guardrails)Realm adds observability layer; CalypsoAI stronger on red-teaming
Protect AIML supply chain securityDifferent attack surface focus; Protect AI targets model supply chain
Robust Intelligence (acquired by Cisco)AI firewall and validationValidates acquisition interest in the space; Realm positions as next-gen alternative

The “internal observability” claim is compelling but unproven publicly. The company’s blog explains the concept but provides no technical architecture, peer-reviewed validation, or independent benchmarks. The competitive moat depends on whether this capability is real, defensible, and materially better than external-only approaches. [Inferred]

Go-to-Market and Traction

Named Customers:

CTF Challenge: Realm Labs hosted a public capture-the-flag challenge since September 2025, with 100+ participants attempting 2,000+ attacks across a four-layer defended chatbot. Zero successful breaches at the fourth layer as of October 2025. [Confirmed, company claim via NSFOCUS analysis]

Playground Demo: A public playground at playground.realmlabs.ai allows hands-on testing of OmniGuard capabilities. [Confirmed, verifiable]

Conference Presence: Attended Black Hat and DEF CON 2025. Named RSAC 2026 Innovation Sandbox finalist. [Confirmed, company website + RSAC press release]

Revenue and ARR: Not publicly disclosed.

GTM Model: Enterprise sales with demo-driven approach. No self-serve pricing visible. The Anthropic relationship suggests a bottom-up adoption path through AI-native companies. [Inferred]

Team and Credibility

The team combines deep AI security research credentials with production engineering experience at major tech companies.

Saurabh Shintre, CEO and Founder PhD in Electrical and Computer Engineering from Carnegie Mellon University. B.Tech from IIT Bombay. 7+ years as Principal/Sr. Principal Researcher at Symantec (2016-2021) and Principal Threat Scientist at Splunk (2021-2023). 10+ patents, 2,000+ citations in AI security, adversarial ML, and cryptography. Published at USENIX Security, AsiaCCS, and IEEE TDSC. RSA Conference AI/ML track program committee member (2018-2023). Featured on CNBC and in Washington Post. Named Future Leader at the Science and Technology in Society Forum in Kyoto. [Confirmed, third-party: Google Scholar, DBLP, RSA Conference]

Akash Mukherjee, Cofounder and Head of Engineering Security Leader for AI/ML at Apple (2023-2024), where he worked on Private Cloud Compute (PCC). Tech Lead for software security at Google (2020-2023), where he co-developed SLSA (Supply-chain Levels for Software Artifacts). Previously at Proofpoint and Credit Karma. Best-selling cybersecurity author (The Complete Guide to Defense in Depth, Packt). Also serves as advisor to Tola Capital. USC, IIT BHU. [Confirmed, third-party: LinkedIn, Packt]

Piotr Mardziel, Head of AI PhD in Computer Science from University of Maryland (2008-2014). Post-doc and Systems Scientist at Carnegie Mellon University (2015-2020). Software Engineer at TruEra (2021-2024), an AI observability company later acquired by Snowflake. 16+ years of research experience in trustworthy ML and AI interpretability. [Confirmed, third-party: LinkedIn, CMU]

Saahil Agrawal, Founding ML Engineer AI/ML Lead at Abnormal AI and Walmart. Stanford University, IIT Madras. [Confirmed, company website]

Freyam Mehta, Founding Engineer AI/ML security researcher at AI Vulnerability Database (AVID). Published at CHI ‘24 on bias in GenAI systems. Previously at Oracle (cloud security). IIIT Hyderabad. [Confirmed, third-party: LinkedIn]

Nina Wei, Founding Designer Founding designer at Lamini, AI Fund, and Baidu Research. University of Washington. [Confirmed, company website]

Advisors:

The advisory board is strong. Jason Clinton’s Anthropic CISO role directly connects to the Anthropic customer relationship. Paul Kocher is a cryptography legend. Nicole Perlroth brings media credibility and investor alignment. [Inferred]

Trust Readiness

RSAC Judging Criteria

RSAC does not publish an official judging rubric. The five criteria below are extrapolated from press descriptions of what judges evaluate: the problem a company addresses, the originality of its technology, its go-to-market strategy and team, market validation, and product demonstration.

CriterionScore (1-5)Assessment
Problem/Market5AI security is the defining challenge of 2025-2026. Prompt injection, data leakage, and model manipulation are real enterprise risks with no dominant solution. Market timing is ideal.
IP Originality5”Internal observability” is a genuinely novel claim in a market crowded with input/output guardrails. The 10+ patent portfolio and academic publication record support deep technical IP. The mechanism is not publicly documented, which limits independent verification.
GTM/Team4CEO has rare combination of published AI security research (2,000+ citations), Symantec/Splunk experience, and RSAC program committee membership. CTO brings Apple PCC and Google SLSA pedigree. Anthropic’s Deputy CISO and cryptography legend Paul Kocher serve as advisors.
Validation/Revenue2Anthropic is a named customer. Two additional unnamed enterprise customers exist. Revenue, ARR, and customer count are not disclosed. The company has only 6 employees.
Product/Demo4Three shipping products with public playground at playground.realmlabs.ai. Multiple deployment modes (batch, sidecar, inline, endpoint). Multi-cloud and on-prem support. SOC 2 badge displayed.

Overall RSAC Fit: 20/25. Realm Labs checks the core Innovation Sandbox criteria with novel technology in an urgent market, credible technical founders, early enterprise validation, and a demonstrable product. The “internal observability” narrative is tailor-made for the competition’s emphasis on innovation.

Startup Readiness Assessment

This eight-dimension assessment appears in the comparison matrix on the main page. It evaluates broader startup readiness using dimensions from the security product analysis framework. Five dimensions overlap with the RSAC criteria above. Three are added: funding efficiency, category clarity, and incumbent defensibility.

DimensionScore (1-5)Assessment
Problem Clarity5AI security is a top-of-mind enterprise priority with clear buyer urgency. Prompt injection, data leakage, and model manipulation are well-documented risks that every AI-deploying organization faces.
Capability Depth5”Internal observability” is genuinely novel. 10+ patents and peer-reviewed publications support the technical claim. Three shipping products cover firewall, observability, and DLP. Public playground at playground.realmlabs.ai demonstrates real functionality.
Market Timing5Enterprise AI adoption is accelerating while security tooling lags behind. No dominant vendor has emerged. Cisco’s acquisition of Robust Intelligence validates the market. Buyers are actively seeking solutions.
Team Credibility5PhD-level AI security research from Carnegie Mellon. CTO co-developed SLSA at Google and worked on Apple PCC. Head of AI has deep interpretability background at CMU. Advisors include Paul Kocher (SSL 3.0 inventor) and Anthropic’s Deputy CISO.
GTM Proof3Anthropic as a named customer is a strong signal. Two additional unnamed enterprises exist. No revenue disclosed. The Anthropic advisory relationship (Jason Clinton, Deputy CISO) and deep academic credentials suggest engagement beyond what is publicly visible. Score reflects a small upward adjustment for inferred traction.
Funding Efficiency3Only $5M raised for 6 employees shipping three products. Capital-efficient but potentially underfunded to compete with HiddenLayer, WhyLabs, and platform vendors entering the space.
Category Clarity3AI security is a recognized market, but Realm targets three sub-segments simultaneously (guardrails, observability, DLP). Buyers may struggle to classify the product against a single budget line.
Incumbent Defensibility3Foundation model providers (OpenAI, Anthropic, Google) may internalize observability. Cisco acquired Robust Intelligence. Internal observability may only work for open-weight models, limiting addressable market.

Overall: 32/40.

Key Risks

  1. Unproven core claim. The “internal observability” mechanism is not publicly documented with technical detail. No peer-reviewed paper, independent benchmark, or third-party audit validates the claim that Realm can inspect LLM reasoning in production. The blog explains the concept but not the implementation. If this capability is overstated, the primary differentiator collapses.

  2. Tiny team, broad product surface. Six employees shipping three products (firewall, observability, DLP) across four modalities and 50+ languages is ambitious. Sustaining quality across OmniGuard, Prism, and DataRealm simultaneously will strain resources. The risk is that no single product achieves depth before larger competitors catch up.

  3. Crowded and converging market. HiddenLayer, WhyLabs, Fiddler, CalypsoAI, and platform vendors (Cisco/Robust Intelligence, Palo Alto) are all building AI security products. Foundation model providers (OpenAI, Anthropic, Google) may internalize observability features. Realm Labs could get squeezed between incumbents adding AI security and AI companies adding native observability.

  4. Limited disclosed traction. Only three customers (one named), no revenue figures, and $5M in total funding. The Anthropic relationship is notable but could reflect an advisory/design-partner arrangement rather than a commercial contract. Customer depth and retention are unknown.

  5. Open-weight model dependency. Internal observability likely requires access to model weights and activations. This works for open-weight models (Llama, Mistral) but may not apply to closed API-based models (GPT-4, Claude) unless deployed through Realm’s generative endpoint. This could limit addressable market.

  6. Single investor concentration. The $5M round appears to come from Crosspoint Capital Partners alone. While Crosspoint is a strong cybersecurity investor, a single-investor seed round provides less market validation than a syndicated round with multiple institutional leads.

Sources

← Back to comparison