What is Certifyi’s AI Risk Framework?
Certifyi’s AI Risk Framework helps AI‑first startups and digital teams manage AI risk in a structured, audit‑ready way. It connects your AI systems and data flows to governance, risk, and compliance (GRC) workflows and automates tasks like evidence collection, risk assessments, and reporting. The result is real‑time insight into the vulnerabilities and control gaps that matter to customers, auditors, and regulators.
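To make that linkage concrete, here is a minimal sketch in Python of how an AI system, its controls, and collected evidence might hang together. The class and field names are illustrative assumptions, not Certifyi’s actual data model or API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Evidence:
    description: str        # e.g. "access-log export for Q1"
    collected_on: date

@dataclass
class Control:
    name: str               # e.g. "Model-change approval"
    evidence: list[Evidence] = field(default_factory=list)

    def has_gap(self) -> bool:
        # A control with no evidence behind it is a reportable gap.
        return not self.evidence

@dataclass
class AISystem:
    name: str
    controls: list[Control] = field(default_factory=list)

    def control_gaps(self) -> list[str]:
        return [c.name for c in self.controls if c.has_gap()]

# Example: one system with one evidenced control and one gap.
chatbot = AISystem("support-chatbot", controls=[
    Control("Access logging", [Evidence("SIEM export Q1", date(2024, 3, 31))]),
    Control("Model-change approval"),   # no evidence yet -> flagged
])
print(chatbot.control_gaps())  # ['Model-change approval']
```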
ISO/IEC 42001 – AI management system (AIMS)
Certifyi aligns its AI risk workflows with ISO/IEC 42001, the first global standard for managing AI systems responsibly. In practice, this means:
Ethical governance – helping you define and document policies for transparency, accountability, and fairness in AI operations.
Risk‑based approach – structuring how you identify, assess, and treat risks across the AI lifecycle, from data to deployment (a minimal scoring sketch follows this list).
Continuous improvement – tracking actions, incidents, and monitoring results so your AI controls improve over time instead of staying static.
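As one illustration of the risk‑based approach, the sketch below scores risks by likelihood and impact and picks a treatment. The 1–5 scales and the threshold are hypothetical; ISO/IEC 42001 leaves the scoring methodology to your own AIMS:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    title: str
    lifecycle_stage: str    # "data", "training", "deployment", ...
    likelihood: int         # 1 (rare) .. 5 (almost certain)
    impact: int             # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self, threshold: int = 12) -> str:
        # Illustrative rule only: treat anything at or above the threshold.
        return "treat" if self.score >= threshold else "accept and monitor"

risks = [
    AIRisk("Training data contains PII", "data", likelihood=4, impact=4),
    AIRisk("Model drift degrades accuracy", "deployment", 3, 2),
]
for r in risks:
    print(f"{r.title}: score={r.score} -> {r.treatment()}")
```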
NIST AI RMF (AI Risk Management Framework)
Certifyi incorporates concepts from the NIST AI Risk Management Framework to help you surface and treat key AI risks. Example capabilities include:
Bias detection – logging AI use cases, datasets, and model behaviour so you can identify and mitigate bias in training data and outputs (a toy fairness check follows this list).
Adversarial robustness – capturing controls and tests designed to protect models from manipulation, prompt injection, or other malicious inputs.
Societal and human‑rights impact – recording impact assessments and approvals for higher‑risk AI systems, so you can demonstrate due diligence to boards, auditors, or regulators.
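For the bias‑detection capability, here is a toy example of the kind of fairness check you might run over logged decisions. Demographic parity is just one of many possible metrics, and the NIST AI RMF does not prescribe a specific one:

```python
def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Difference in positive-outcome rate between groups.

    `outcomes` pairs a group label with a binary decision (1 = approved).
    A large gap is a prompt to investigate the data and model,
    not proof of bias on its own.
    """
    by_group: dict[str, list[int]] = {}
    for group, decision in outcomes:
        by_group.setdefault(group, []).append(decision)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Toy logged decisions: group A approved 3/4, group B approved 1/4.
log = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(f"parity gap: {demographic_parity_gap(log):.2f}")  # 0.50
```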
Google SAIF (Secure AI Framework)
Certifyi applies Google’s Secure AI Framework (SAIF) principles across the AI lifecycle to strengthen your security posture. This includes support for:
Model integrity checks – documenting and monitoring how models are built, approved, and promoted so changes are reviewed and tampering can be detected (see the digest‑check sketch after this list).
Data exfiltration prevention – aligning access‑control, logging, and encryption controls with SAIF‑style guidance to reduce the risk of sensitive data leaking through AI systems.
Incident‑response workflows – capturing playbooks, contacts, and evidence so AI‑related security incidents can be triaged and reported quickly.
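The model‑integrity idea can be illustrated with a simple digest comparison: record a hash of the approved artifact, then recompute it before promotion. The file name and contents below are made up; in practice the approved digest would live in a model registry or signed manifest:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Simulate an approved artifact and record its digest at approval time.
artifact = Path("classifier-v3.onnx")
artifact.write_bytes(b"model weights v3")
approved_digest = sha256_of(artifact)

# Later, before promotion: recompute and compare. Any mismatch means the
# artifact changed since approval, so promotion should be blocked.
print("intact:", sha256_of(artifact) == approved_digest)   # True

artifact.write_bytes(b"model weights v3 + tamper")         # simulate tampering
print("intact:", sha256_of(artifact) == approved_digest)   # False
```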
EU AI Act
Certifyi helps organisations prepare for their obligations under the EU AI Act by structuring AI risk and documentation in line with the regulation’s concepts. For example, you can:
Classify AI systems by risk – log use cases and assign them to prohibited, high‑risk, limited‑risk, or minimal‑risk categories based on intended use (a toy lookup sketch follows this list).
Generate transparency documentation – maintain records that explain how AI‑assisted decisions are made, what data is used, and what controls are in place.
Evidence human oversight – record approval steps, review processes, and escalation paths for high‑risk AI systems that require human involvement.
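As a toy illustration of risk classification, the sketch below maps intended uses to categories via a lookup table. The mapping is invented for the example; real classification under the EU AI Act depends on the Act’s annexes and legal analysis:

```python
# Illustrative only: actual classification under the EU AI Act is a legal
# determination, not a keyword lookup.
RISK_CATEGORIES = {
    "social scoring by public authorities": "prohibited",
    "cv screening for hiring": "high-risk",
    "customer-support chatbot": "limited-risk",
    "email spam filtering": "minimal-risk",
}

def classify(intended_use: str) -> str:
    # Unknown use cases fall through to manual review rather than a default tier.
    return RISK_CATEGORIES.get(intended_use.lower(), "needs legal review")

for use in ["CV screening for hiring", "internal code autocomplete"]:
    print(f"{use}: {classify(use)}")
```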