Job Description
The AI Manager owns outcomes for the team's LLM and GenAI portfolio by translating business problems into an LLM roadmap, leading delivery of production-grade LLM solutions, and ensuring reliability, scalability, governance, and measurable business impact.

Requirements
Advanced English level
5+ years in software/AI roles, with 2+ years focused on LLM/GenAI
2+ years leading teams and delivering cross-functional products
Engineering degree (Software Engineering a plus)
Python and/or Node.js for LLM application development
API design, service integration, and cloud deployment patterns
RAG quality tuning (chunking, retrieval strategy, grounding, evaluation sets)

Job Activities:
Own the LLM product portfolio - Define the GenAI roadmap, prioritize use cases, and deliver measurable business impact.
Deliver production LLM solutions end to end - Lead discovery, MVP, pilot, and scale phases with clear milestones, acceptance criteria, and success metrics.
Set LLM technical standards and architecture - Establish patterns for RAG, prompting, tool calling, evaluation, and monitoring so solutions are consistent and maintainable.
Operate and govern LLM systems - Ensure reliability, cost controls, security, privacy/PII handling, and responsible AI guardrails.
Lead the team and execution cadence - Coach engineers, run planning/reviews/demos, enforce accountability, and raise delivery quality.
Manage stakeholders and adoption - Align with business owners and partners (Product/Eng/Security/Ops), drive rollout, and communicate trade-offs and progress.
Own the LLM platform architecture - Define the reference architecture for LLM services (APIs, RAG layers, tool integrations, identity/access, secrets management) and ensure designs are scalable, secure, and reusable across teams.
Establish DevOps/SRE practices for AI services - Implement CI/CD standards, environment promotion (dev/test/prod), release gating, automated regression/eval tests, rollback strategies, and on-call/incident workflows for LLM applications.
Build observability and cost governance into the stack - Standardize logging/tracing/metrics, quality dashboards, token and latency monitoring, budget alerts, and usage analytics to control reliability and unit costs at scale.

Benefits
Christmas bonus (above law)
Savings fund & voluntary savings
Profit sharing (PTU)
Vacation days (above law)
Vacation premium
Personal days
Major medical insurance
Management bonus