AI Governance, Risk & Compliance (GRC) Managers
2021.ai ApS
On our steep growth trajectory, 2021.AI now has open international positions for highly skilled and detail-oriented AI GRC Managers. Join our team and help deliver Responsible AI by implementing AI systems and helping companies align with regulatory requirements. As an AI GRC Manager, you will guide our clients on Responsible AI and on ensuring compliance with both existing and emerging AI regulations, privacy laws, and ethical principles.
Your role will involve working closely with cross-functional teams at our clients, including legal, data privacy, product management, and engineering. In a consultancy setting, you will help ensure that AI systems are designed, implemented, used and monitored so that legal, regulatory, and organizational governance standards are met. You will also advise our clients and partners on the impact of new and evolving AI policies and assist in implementing frameworks for responsible AI.
About you
You are a driven and ambitious individual who is passionate about using AI to make a positive impact on the world.
You have experience working with AI, AI model risk management, data security, and IT security, and you are eager to help our clients navigate this complex landscape. Independent, motivated, and solution-oriented, you excel in both teamwork and client relationships, whether collaborating with internal teams or working directly with clients on-site or remotely.
With a background in compliance and/or risk management, you are dedicated to guiding clients on their journey toward responsible AI.
About the role
In this international role, you will work closely with our global customers to help them leverage the full functionality of 2021.AI's GRACE AI Governance platform, guiding them in achieving compliance with the EU AI Act and other AI and model risk management standards. You will play a key part in ensuring that our clients can effectively navigate Responsible AI and complex regulatory requirements, integrating compliance best practices into their AI systems and operations.
Key responsibilities
- Regulatory Compliance & Risk Management Advisory: Provide guidance on relevant global AI regulations (such as the EU AI Act and other national or regional AI-related policies). Help design processes using the GRACE AI platform that ensure compliance with these laws and guidelines, and support the ethical design and deployment of AI systems, with a focus on fairness, transparency, accountability, and bias mitigation.
- Risk Management: Conduct risk assessments to identify and address potential compliance gaps or ethical concerns within AI models, algorithms, and data processing pipelines.
- Responsible AI Training and Awareness: Provide training to client teams on Responsible AI, AI compliance, privacy laws, and ethical considerations. Help raise awareness about compliance risks and best practices within the organization.
Skills and qualifications
- Master's degree in Business Compliance, Risk Management, Computer Science, Data Science, or a related field.
- 5+ years of work experience after completing your Master's degree.
- Proven experience (2+ years) in AI compliance, risk management, data privacy, IT security, or related fields.
- Knowledge of AI regulations, data protection laws (e.g., GDPR, CCPA), and ethical AI principles.
- Familiarity with AI/ML algorithms, data governance, and technical aspects of AI systems.
- Proven project management skills gained through hands-on experience in the field.
- Consultancy experience from a software vendor or GRC vendor is a strong plus.
- Ability to analyze complex regulatory environments and translate them into actionable business processes.
- Strong communication skills, both written and verbal, with the ability to present complex compliance issues to non-technical stakeholders.
- Attention to detail and a commitment to maintaining the highest standards of integrity and compliance.