Skip to content

HiddenLayer AI Security Platform Documentation

HiddenLayer provides security for AI through its AI Security Platform. The Platform provides detection and response for Generative AI and traditional AI models to detect prompt injections, adversarial AI attacks, and digital supply chain vulnerabilities. The Platform delivers an automated and scalable defense tailored for Generative AI, enabling fast deployment and proactive responses to attacks without necessitating access to private data or models.

One Platform, Four Modules

AI Discovery

Know what AI exists before it becomes a risk


AI Discovery automates discovery and inventory of cloud providers, providing a centralized inventory and dashboard of AI assets. This includes in production and in development models, applications, datasets, and dependencies. This ensures end-to-end visibility of the AI pipeline across teams.


AI Discovery enables automatic model scanning as models are discovered, as well as displaying asset relationships that maps how assets are connected to each other in a top down hierarchical view.


What it covers:

  • HiddenLayer Console
  • AI Asset Inventory
  • AI Asset Discovery

AI Attack Simulation

Continuously test AI like attackers would


AI Attack Simulation is a comprehensive platform for testing and strengthening your AI system's security defenses.


AI Attack Simulation provides automated security testing for your AI systems through two complementary approaches: System Prompt Evaluation and Red Team Evaluation. Together, these evaluation types help you build robust, secure AI applications.


What it covers:

  • Automated Red Teaming for AI (AutoRT)
  • Adversarial threat simulation
  • Security policy validation
  • System prompt hardening
  • Continuous security testing



AI Supply Chain Security

Ensure only trusted AI enters production


AI Supply Chain Security analyzes Machine Learning Models to identify hidden cybersecurity risks and threats such as malware, vulnerabilities, and integrity issues.


Its advanced scanning engine is built to analyze your machine learning models, meticulously inspecting each layer and component to detect possible signs of malicious activity, including malware, tampering, and backdoors.


What it covers:

  • Model Scanner for scanning and analysis
  • AI Bill of Materials (AIBOM)
  • Model genealogy and integrity
  • AI risk management
  • AI governance and security posture management

AI Runtime Security

Detect and stop AI attacks in real time


AI Runtime Security is a real-time input and output monitor for hosted or custom LLMs.


AI Runtime Security detects malicious input prompts and undesired output, and can block content from being sent to the LLM or returned to the user.


It has different modes of operation which can be flexibly employed, depending on the architecture already in place and the desired level of integration.


What it covers:

  • AI Detection & Response (AIDR)
  • AI firewall and guardrails
  • Agentic and MCP protection
  • Real-time attack detection and response