Last updated: 23 February 2026
SXM certification is one component of a responsible AI governance strategy. It is not a substitute for comprehensive security review, compliance assessment, or professional advice.
SXM evaluations assess AI skills against a defined set of automated tests covering functional verification, security patterns, and performance benchmarks. These tests represent known vulnerability patterns and best practices as of the evaluation date. The AI security landscape evolves rapidly, and no automated evaluation can guarantee complete coverage of all possible risks.
A passing SXM certification (Standard or Hardened) indicates that a skill performed well against our test battery. It does not constitute a guarantee that the skill is free from vulnerabilities, suitable for production deployment in all contexts, or compliant with any regulatory standard. Organisations deploying AI skills in sensitive environments should conduct additional testing appropriate to their risk profile.
We recommend treating SXM certification as one layer in a broader AI governance programme, combined with additional controls suited to your organisation's risk profile.
Content on this website, including security advisories and certification reports, is provided for informational purposes only. It does not constitute professional security, legal, or compliance advice. Consult qualified professionals for decisions affecting your organisation's security posture.
SXM certifies skills developed by third parties. We do not control, endorse, or take responsibility for the ongoing behaviour of certified skills after evaluation. Developers may modify their skills after certification. If a certified skill's source code changes, we recommend re-evaluation.
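SXM does not publish a tooling API for change detection; as an illustration only, a deployer could notice post-certification source drift by fingerprinting a skill's files with a content hash and comparing it to the hash recorded at certification time. The function name and directory layout below are hypothetical:

```python
import hashlib
from pathlib import Path

def skill_fingerprint(skill_dir: str) -> str:
    """Return a deterministic SHA-256 digest of every file in a skill directory.

    Files are visited in sorted order, and each file's relative path is mixed
    into the hash alongside its contents, so renames as well as edits change
    the fingerprint.
    """
    digest = hashlib.sha256()
    root = Path(skill_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(root).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()
```

A deployer might store the fingerprint alongside the certification record and re-run the check before each deployment; any mismatch would be a cue to seek re-evaluation.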
SXM services are provided on an "as is" basis. We make reasonable efforts to maintain service availability but do not guarantee uninterrupted access to evaluations, reports, or the certification registry.