AI Productivity Lead (Java + Angular)
We are looking for an AI Productivity Lead to standardise and scale AI-assisted engineering practices across a Java (Spring) and Angular delivery organisation. You will be accountable for improving developer throughput and predictability while maintaining or improving code quality, security, and long-term maintainability. You will define the team’s AI-enabled delivery workflow, create enablement materials, establish quality guardrails, and instrument productivity and quality metrics to drive continuous improvement.
Requirements
- Strong experience delivering production systems in Java (typically Spring) and Angular/TypeScript.
- Demonstrated leadership in engineering standards: code review practices, testing strategy, refactoring, and maintainability.
- Hands-on experience using AI developer tools to accelerate delivery while preventing regressions and hallucination-driven errors.
- Working knowledge of CI/CD, static analysis, dependency hygiene, and secure development practices.
- Strong written communication and facilitation skills for specs, playbooks, and cross-team alignment.
Preferred Qualifications
- Experience introducing developer productivity programs and measuring results (DORA, cycle time, PR/defect metrics).
- Familiarity with observability and performance profiling (backend and frontend).
- Experience in regulated or security-conscious environments with clear data handling requirements.
- Prior ownership of internal engineering enablement, onboarding, or platform/tooling initiatives.
Responsibilities
- Own the AI-enabled delivery workflow from ticket intake through release (clarification, planning, implementation, testing, and documentation).
- Build and maintain playbooks, templates, and a prompt library for common Java/Angular tasks (bug fixes, refactors, tests, performance work).
- Establish and enforce governance: PR review standards, Definition of Done, testing expectations, and risk escalation paths for sensitive changes.
- Improve CI/CD and developer tooling to support consistent quality gates (linters, static analysis, test automation, code scanning).
- Enable the team through onboarding, workshops, office hours, and coaching to reduce rework and increase self-sufficiency.
- Track and report outcomes using delivery and quality metrics (DORA plus defect and review indicators), and run experiments to validate improvements.
- Define security and privacy practices for AI usage, train the team on them, and ensure safe handling of credentials, customer data, and proprietary code.