Certifyi Responsible AI Policy
Last Updated: 3 September 2025
This Responsible AI Policy (“Policy”) applies to all use, development, and deployment of artificial intelligence (AI) and machine learning (ML) systems, features, and functionalities at Certifyi.
1. Purpose
Certifyi is committed to building and using AI systems that are safe, fair, accurate, and aligned with human values. This Policy sets out our requirements, prohibitions, and practices for responsible AI adoption across the organization.
2. Prohibitions
AI at Certifyi must NOT be used for:
Creating or spreading deliberate disinformation, or otherwise deceiving others.
Violating the privacy rights of others, including unlawful monitoring, tracking, identification, or unauthorized use of personal data.
Depicting or impersonating any person’s voice, likeness, or identity without explicit consent.
Harming or exploiting minors in any manner, including grooming or abuse.
Harassing, harming, or encouraging harm to individuals or groups.
Circumventing safety controls or attempting to make AI systems perform in ways that violate this Policy or applicable laws.
Developing or deploying lethal applications (e.g., autonomous weapons) without robust human oversight and ethical review.
3. Responsible AI Requirements
Human Oversight: Critical or high-impact decisions (such as those affecting access to opportunities, healthcare, legal status, or financial products) must involve appropriate human review, thorough testing, and risk mitigation.
Transparency: Certifyi will document AI/ML use cases, intended purpose, and limitations, and disclose AI involvement where meaningful impacts occur.
Accuracy & Evaluation: Outputs from AI/ML must be tested for reliability, accuracy, and appropriateness for their context. Users must be informed that generative AI results can be probabilistic or inaccurate.
Law & Regulations: All AI/ML deployments must comply with local, national, and international laws, including (if applicable) the EU AI Act and relevant data protection regulations (such as GDPR).
4. Accountability
Responsibility: All Certifyi employees and users are responsible for ensuring compliance with this Policy.
Reporting & Compliance: Any concerns or violations should be reported to the Ethics & Compliance Officer. Certifyi will investigate all credible reports and may suspend or revoke access to AI systems as necessary.
Review: This Policy will be reviewed periodically and updated in line with technological and regulatory advances.
5. Resources
Certifyi provides training, tools, and resources to support responsible development and use of AI. Employees and users should seek additional guidance when in doubt about ethical or legal use of AI.