Early Access · All certifications are currently free.

Standards Alignment

SXM certification generates documented evidence toward major AI governance frameworks. We do not certify compliance. We help you demonstrate it.

Framework Coverage

NIST AI RMF (AI 100-1)

US national framework for trustworthy AI

  • Maps to all 7 characteristics of Trustworthy AI
  • 5 characteristics actively covered, 1 partial, 1 on roadmap
  • Three-pillar scoring aligns with NIST measurement approach
~85% Coverage
View NIST AI RMF →

ISO/IEC 42001:2023

International standard for AI management systems

  • Clause 6.1: SXM evaluation provides risk assessment evidence
  • Clause 8.4: Re-certification supports lifecycle monitoring
  • Clause A.6: Three-pillar scoring covers system performance across the life cycle
  • Clause A.7: Data handling declarations in manifests
Partial Coverage
View ISO/IEC 42001 →

EU AI Act

European regulation on artificial intelligence

  • Article 9: SXM provides documented risk evaluation
  • Article 15: Its accuracy, robustness, and cybersecurity requirements map directly to our three pillars
  • Article 17: Evaluation reports, version tracking, re-certification
  • High-risk AI systems need conformity assessments; SXM reports serve as supporting evidence
Partial Coverage
View EU AI Act →

Colorado AI Act (SB 24-205)

US state law with NIST-based safe harbour

  • Explicitly cites NIST AI RMF compliance as grounds for affirmative defence
  • Penalties up to $20,000 per violation
  • SXM certification provides documented evidence of NIST alignment
  • Strongest direct link between SXM evidence and legal protection
Strong Alignment
View Colorado SB 24-205 →

NIST AI RMF: Detailed Mapping

How SXM certification provides evidence toward each of the seven characteristics of Trustworthy AI defined by the NIST AI Risk Management Framework.

Valid and Reliable

AI systems that are valid and reliable per their intended purpose

  • Functional testing (40% of overall score)
  • Author-provided test case validation
  • Auto-generated edge case testing
  • Input/output schema validation (sketched below)
  • Version tracking and re-certification on updates
✅ Active
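
To make the schema validation step concrete, here is a minimal sketch using Python's jsonschema library. The schema and outputs are hypothetical examples, not SXM's actual manifest format.

```python
# Minimal sketch of output schema validation. The schema below is a
# hypothetical example, not SXM's actual manifest format.
from jsonschema import ValidationError, validate

# A skill might declare its output contract in its manifest like this:
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["summary", "confidence"],
}

def conforms(skill_output: dict) -> bool:
    """Return True if the output matches the declared schema."""
    try:
        validate(instance=skill_output, schema=OUTPUT_SCHEMA)
        return True
    except ValidationError as err:
        print(f"Schema violation: {err.message}")
        return False

print(conforms({"summary": "ok", "confidence": 0.97}))    # True
print(conforms({"summary": "ok", "confidence": "high"}))  # False: wrong type
```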

Safe

AI systems that do not endanger human life, health, property, or the environment

  • Failure mode documentation required in manifest
  • Performance degradation detection
  • Graceful failure testing
✅ Active

Secure and Resilient

AI systems that withstand unexpected adverse events and maintain confidentiality and integrity

  • 23 attack payloads across 5 categories
  • CVE scanning and source verification
  • HMAC-SHA256 signed reports and hash chain (see the sketch below)
  • SBOM generation (CycloneDX)
  • Load testing (p50/p95/p99)
  • Blockchain attestations on Polygon
✅ Active
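
The signed-report and hash-chain bullets above can be illustrated with a short sketch. The signing key, report fields, and chain layout are illustrative assumptions, not SXM's internal format.

```python
# Sketch: HMAC-SHA256 signed reports chained into a tamper-evident log.
# Key, fields, and layout are assumptions for demonstration only.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-secret"

def sign_report(report: dict, prev_hash: str) -> dict:
    """Link a report to its predecessor, then sign the linked entry."""
    payload = json.dumps({"report": report, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return {
        "report": report,
        "prev_hash": prev_hash,
        "entry_hash": hashlib.sha256(payload).hexdigest(),
        "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify(entry: dict, prev_hash: str) -> bool:
    """Recompute the signature; any edit to the entry or its history fails."""
    payload = json.dumps({"report": entry["report"], "prev_hash": prev_hash},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

# Each entry commits to the hash of the previous one, so editing any
# historical report breaks every later entry_hash in the chain.
e1 = sign_report({"skill": "demo-skill", "score": 92}, "0" * 64)
e2 = sign_report({"skill": "demo-skill", "score": 94}, e1["entry_hash"])
print(verify(e2, e1["entry_hash"]))  # True unless either entry was altered
```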

Accountable and Transparent

Appropriate mechanisms for accountability and transparency throughout the AI lifecycle

  • HMAC-SHA256 signed evaluation reports
  • Tamper-evident hash chain audit log
  • Public evaluation reports
  • Blockchain attestations on Polygon (independently verifiable)
  • Open source evaluator patterns
✅ Active

Explainable and Interpretable

Mechanisms to understand why an AI system made a particular decision or prediction

  • Detailed evaluation reports with per-pillar breakdowns
  • Specific failure descriptions and score justifications
  • Roadmap: require skills to declare explanation methods
⚠ Partial

Privacy-Enhanced

AI systems that protect individual and proprietary data throughout the lifecycle

  • Data handling declarations in manifests
  • Permission scope validation
  • Sanitised logging, no PII stored
  • Roadmap: automated data flow analysis
✅ Active

Fair (Bias Managed)

AI systems that manage and mitigate harmful biases

  • Not yet covered in current evaluation pipeline
  • Roadmap: bias detection for LLM-based skills
⚪ Roadmap

What We Evaluate

Our evaluation pillars map to the NIST AI RMF characteristics. Here is exactly what SXM tests.

🔒 Privacy-Enhanced

NIST: Protecting individual and proprietary data throughout the AI lifecycle

  • ✅ Data handling declaration required in every manifest
  • ✅ Permission scope validation (what resources the skill accesses)
  • ✅ No PII stored by SXM beyond contact email
  • ✅ Sanitised logging (no sensitive data in logs)
  • 🔜 Automated data flow analysis (coming soon)

What we check

Does the skill declare what data it handles? Does it request only necessary permissions? Does it log or transmit user data externally?
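
For illustration only, a data-handling declaration and a permission-scope check could look like the sketch below. The field names and allowed scopes are assumptions, not SXM's published manifest schema.

```python
# Sketch: hypothetical manifest declaration plus a permission-scope check.
ALLOWED_SCOPES = {"network:fetch", "fs:read"}  # assumed policy for this skill

manifest = {
    "name": "demo-skill",
    "data_handling": {
        "collects_pii": False,
        "stores_user_data": False,
        "transmits_externally": ["api.example.com"],
    },
    "permissions": ["network:fetch", "fs:read", "fs:write"],
}

def excess_permissions(manifest: dict) -> list[str]:
    """Return requested permissions that fall outside the allowed scope."""
    return [p for p in manifest["permissions"] if p not in ALLOWED_SCOPES]

print("over-broad permissions:", excess_permissions(manifest) or "none")
# -> over-broad permissions: ['fs:write']
```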

🛡️ Secure and Resilient

NIST: Withstanding unexpected adverse events and maintaining confidentiality and integrity

  • ✅ 23 attack payloads across 5 categories (prompt injection, indirect injection, data exfiltration, system prompt extraction, permission probing)
  • ✅ Dependency CVE scanning via OSV.dev
  • ✅ Source verification (manifest vs repository)
  • ✅ HMAC-SHA256 signed evaluation reports
  • ✅ Tamper-evident hash chain audit log
  • ✅ SBOM generation (CycloneDX)
  • ✅ Living evaluator with OWASP/MITRE/CVE patterns
  • ✅ Blockchain attestations (immutable on Polygon)
  • ✅ Performance benchmarking under load (100 sequential requests)
  • ✅ Latency profiling (p50/p95/p99)
  • ✅ Performance degradation detection

What we check

Can the skill be tricked into executing unintended actions? Are its dependencies free of known vulnerabilities? Does it maintain performance under sustained load?
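
As one concrete piece of this pillar, dependency CVE scanning via OSV.dev can be sketched in a few lines against the public OSV API. The package queried is an arbitrary example; SXM's actual scanner internals are not shown here.

```python
# Sketch: querying the public OSV.dev API for known vulnerabilities in a
# single dependency. The package below is an arbitrary example.
import json
import urllib.request

def osv_advisories(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known OSV advisories for one package version."""
    body = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# An old release with known advisories, to show a non-empty result:
for vuln in osv_advisories("requests", "2.19.0"):
    print(vuln["id"])  # CVE/GHSA identifiers affecting that version
```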

⚙️ Valid and Reliable

NIST: Systems that are valid and reliable per their intended purpose

  • ✅ Functional verification against declared inputs/outputs
  • ✅ Author-provided test case validation
  • ✅ Auto-generated edge case testing
  • ✅ Three-pillar scoring with 90/100 threshold and 85/100 security floor
  • ✅ Version tracking and re-certification on updates
  • ✅ Input/output schema enforcement
  • ✅ Functional score (40% of overall) tests output correctness

What we check

Does the skill produce correct outputs for valid inputs? Does it handle edge cases? Are outputs grounded in actual data?
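
A sketch of the pass/fail logic these thresholds imply is below. The 40% functional weight comes from the list above; the even split between the other two pillars is an assumed placeholder, since only the functional weight is stated.

```python
# Sketch: the 90/100 overall threshold plus the 85/100 security floor.
# Only the functional weight (40%) is stated above; the 30/30 split for
# the remaining pillars is an assumption for illustration.
WEIGHTS = {"functional": 0.40, "security": 0.30, "performance": 0.30}

def certifiable(scores: dict[str, float]) -> bool:
    """Weighted overall score must reach 90, and security must reach 85."""
    overall = sum(WEIGHTS[pillar] * scores[pillar] for pillar in WEIGHTS)
    return overall >= 90 and scores["security"] >= 85

print(certifiable({"functional": 95, "security": 88, "performance": 90}))
# -> True (overall 91.4, security above the floor)
print(certifiable({"functional": 99, "security": 80, "performance": 99}))
# -> False (security floor violated despite a high overall score)
```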

👁 Accountable and Transparent

NIST: Appropriate mechanisms for accountability and transparency throughout the AI lifecycle

  • ✅ HMAC-SHA256 signed evaluation reports
  • ✅ Tamper-evident hash chain audit log
  • ✅ Public evaluation reports viewable by anyone
  • ✅ Blockchain attestations on Polygon (independently verifiable)
  • ✅ Open source evaluator patterns
  • ✅ Detailed per-pillar score breakdowns with justifications

What we check

Can anyone independently verify the evaluation? Is there a tamper-evident record? Are scores justified with specific evidence?

🛠 Safe

NIST: Systems that do not endanger human life, health, property, or the environment

  • ✅ Failure mode documentation required in manifest
  • ✅ Performance degradation detection
  • ✅ Graceful failure testing
  • ✅ Re-certification when new threats discovered
  • ✅ Automatic stale detection

What we check

Does the skill document how it fails? Does it degrade gracefully or crash? Is it re-evaluated when the threat landscape changes?
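
A graceful-failure probe can be sketched as below. run_skill is a hypothetical stand-in for however the harness actually invokes a skill; the point is the assertion shape: malformed input should yield a structured error, never an unhandled crash.

```python
# Sketch: graceful-failure probing. `run_skill` is a hypothetical stand-in
# for the real skill invocation, not an SXM API.
MALFORMED_INPUTS = [None, "", {"unexpected": object()}, "x" * 1_000_000]

def run_skill(payload):
    """Placeholder skill: rejects bad input with a structured error."""
    if not isinstance(payload, str) or not payload:
        return {"error": "invalid input", "code": "BAD_REQUEST"}
    return {"result": payload.upper()}

def degrades_gracefully() -> bool:
    for bad in MALFORMED_INPUTS:
        try:
            out = run_skill(bad)
        except Exception as exc:  # an unhandled crash fails the probe
            print(f"crash on input: {exc}")
            return False
        if "error" not in out and "result" not in out:
            return False  # the response must be structured either way
    return True

print(degrades_gracefully())  # True for this placeholder skill
```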

Evidence Toward Compliance

SXM is the first AI skill certification platform to generate documented evidence toward NIST AI RMF, ISO/IEC 42001, EU AI Act, and Colorado AI Act alignment in a single automated pipeline.

Get Your Skill Certified

Important Disclaimer

SXM certification provides technical evidence that may support compliance efforts. It does not constitute legal compliance certification. Organisations should consult qualified legal and compliance professionals for formal compliance assessments. SXM is not a law firm, auditor, or accredited certification body.