SPECIFIC DUTIES AND RESPONSIBILITIES
- Design, build, and deploy AI/ML solutions using GCP Vertex AI, BigQuery ML, and AI Platform for model training, deployment, and serving
- Develop and maintain platform services for AI agent orchestration, RAG pipelines, and LLM integration using Python, FastAPI, and Flask
- Collaborate with cross-functional teams to support CI/CD pipelines for ML models, GitLab workflows, and MLOps automation
- Work with GCP services including Cloud Functions, Cloud Run, Dataflow, and Pub/Sub to build event-driven AI architectures
- Build and optimize data pipelines using BigQuery, Dataflow, and Apache Airflow for AI model training and inference
- Contribute to automation, observability, and reliability initiatives for AI systems using GCP Cloud Operations and New Relic
- Lead and mentor team members in AI/ML engineering best practices, fostering a culture of learning and innovation
- Explore and integrate cutting-edge AI capabilities (LLMs, vector databases, prompt engineering) into platform solutions
- Develop customized AI solutions and integrations to meet development teams' requirements, leveraging GCP APIs, SDKs, and LangChain
- Create technical documentation, tutorials, and training materials to support AI/ML adoption and facilitate knowledge transfer
- Stay up to date on the latest AI/ML technologies, GCP products, and industry trends, serving as a subject matter expert for the organization
- Analyze existing AI workflows and processes to identify areas for optimization and efficiency gains
- Implement scalable AI systems and tools to automate repetitive tasks, streamline ML operations, and enhance developer productivity
COMPETENCIES
Core Competencies (Must-have Competencies)
- 7+ years of experience in Software Engineering
- 3+ years of experience focusing on AI/ML engineering, specializing in scalable AI infrastructure and model deployment
- 3+ years of experience in Python for AI/ML development (TensorFlow, PyTorch)
- 2+ years of experience with GCP AI/ML services (Vertex AI, BigQuery ML, AI Platform, AutoML)
- 1+ years of experience building and deploying production AI systems including LLM integration, RAG architectures, and vector databases
- 5+ years of experience with Git, Docker, and Kubernetes for containerized ML workloads
- Strong problem-solving skills and a proactive attitude toward learning new AI technologies
Complementary Competencies (Good-to-have Competencies)
- Experience with LangChain or other LLM orchestration frameworks
- Knowledge of vector databases and semantic search
- Experience with agentic AI frameworks, ReAct patterns, or autonomous agent architectures
- Familiarity with prompt engineering, fine-tuning, and LLM evaluation techniques
- Knowledge of service mesh, API gateways, or event-driven architecture on GCP
- Contributions to open-source AI/ML projects or technical communities
- Experience with Cohere, Anthropic Claude, or Google Gemini
- Knowledge of testing frameworks for ML systems (unit tests, model validation, A/B testing)
QUALIFICATIONS
Educational Qualification/s
- University degree in computer science, software engineering, or other relevant discipline, or equivalent combination of education and experience.