AI Assurance ML Ops SME
(124578)
Overview
Reference
124578
Salary
£600 - £700/day
Job Location
United Kingdom, England, Greater London, London
Job Type
Contract
Posted
04 September 2025
Morela is supporting our client in securing an experienced AI Assurance ML Ops professional to join a high-impact programme. This is a chance to ensure AI/ML systems are safe, auditable, and fully aligned with regulatory and ethical standards in complex operational environments.
Role Overview: You will integrate assurance into every stage of the AI/ML lifecycle. From data preparation to production deployment, you will embed governance, risk management, and monitoring practices that make AI systems reliable, transparent, and compliant.
Key Responsibilities:
- Establish and maintain assurance processes within ML Ops pipelines
- Apply governance and risk frameworks throughout model development and deployment
- Implement and lead monitoring practices for bias, data drift, model drift, explainability, and system robustness
- Provide guidance on test, evaluation, verification, and validation (TEV&V) practices for models under development or in production
- Partner with Data Science, Risk, and Compliance teams to align AI systems with organisational, ethical, and regulatory requirements
Required Expertise:
- Extensive ML Ops / ML Engineering experience, particularly in assurance, governance, and monitoring
- Hands-on knowledge of ML Ops platforms and tools (Kubeflow, MLflow, SageMaker, Azure ML, or equivalent)
- Proven ability to deploy and maintain AI solutions in regulated or complex operational settings
- Strong grounding in responsible AI practices, including explainability and fairness
- Experience ensuring AI systems comply with regulatory frameworks (EU AI Act, ISO, NIST, or industry standards)
- Skilled at translating assurance requirements into technical processes and collaborating across multidisciplinary teams
Preferred Skills:
- Practical experience implementing responsible AI practices directly in ML pipelines
- Ability to operationalise bias detection, monitoring, explainability, and TEV&V within ML Ops
- Demonstrated capacity to turn regulatory guidance into actionable pipeline processes
Why This Role: This is a rare opportunity to influence the trustworthiness and compliance of AI/ML systems in real-world deployments. Join a programme that prioritises responsible AI while working with expert teams on cutting-edge projects.
Morela is proud to support our client in this crucial AI assurance initiative.