Job Description
Role Overview
Qualifyze is on an ambitious journey to embed AI capabilities across the business – from internal tooling to customer-facing products. As our AI Platform Engineer, you will sit at the heart of this transformation, acting as the critical bridge between cutting‑edge AI adoption and the rigorous standards our industry demands.
The role focuses on owning the evaluation framework for AI tools and models, guiding teams in responsible integration, and ensuring every AI initiative meets our security, compliance, and cost governance standards. This is a high‑impact, cross-functional role with visibility across Engineering, Product, Legal, and Operations.
Main Responsibilities
AI Evaluation & Validation
Define and maintain, in collaboration with the AI Team Lead, a structured framework for evaluating AI tools, models, and vendors – covering capability, reliability, bias, and explainability.
Conduct structured proof‑of‑concepts (PoCs) and benchmark assessments for AI solutions under consideration.
Produce clear validation reports with go/no‑go recommendations for stakeholders.
Stay up to date with the evolving AI landscape (LLMs, agents, automation tools) and proactively surface relevant opportunities.
Integration Support
Partner with engineering and product teams to design sound AI integration patterns aligned with Qualifyze's technical architecture.
Define and enforce best practices for AI integrations across teams.
Support teams through the full integration lifecycle, from architecture review to post‑deployment monitoring.
Security & Compliance
Assess AI tools and integrations against information security policies, data privacy regulations (GDPR, ISO 27001, ISO 27701), and sector‑specific requirements.
Collaborate with the InfoSec team to evaluate risks such as data leakage, prompt injection, model poisoning, and third‑party dependencies.
Maintain an AI risk register and ensure mitigations are properly documented and tracked.
Contribute to the company's AI governance policy and keep it current as the regulatory landscape (e.g., EU AI Act, ISO 42001, GAMP 5) evolves.
Cost Governance
Define usage guidelines and guardrails to prevent cost overruns in AI‑heavy workloads.
Partner with Finance and Engineering to forecast AI spend and identify optimisation opportunities (model selection, caching, batching strategies).
Enablement & Culture
Act as an internal AI advisor, running workshops, writing guidelines, and creating resources that help teams use AI confidently and responsibly, in line with team working agreements.
Champion a culture of responsible AI adoption across the company.
Main Requirements
3+ years of experience in a technical role involving AI/ML systems, software engineering, or data engineering.
Hands‑on experience with LLM‑based solutions (OpenAI, Anthropic, Mistral, or similar) and cloud AI services (AWS or GCP).
Strong understanding of information security principles, particularly as they apply to AI systems (data handling, access controls, third‑party risk).
Familiarity with data privacy regulations and compliance frameworks.
Ability to communicate technical findings clearly to both engineering teams and business stakeholders.
Structured, methodical mindset.
Nice‑to‑Have
Experience in regulated industries (pharma, life sciences, healthcare, or fintech).
Knowledge of the EU AI Act, ISO 42001, or other AI governance frameworks.
Background in FinOps or cloud cost management.
Experience building internal tooling or developer enablement programs.
What We Offer