Value Quantification: Pre-Model Development, Model Provisioning: Kubernetes, Kibana, Model Monitoring, Cloud Computing, Python/PySpark, SAS/SPSS, Great Expectations, Evidently AI, Deployment Strategies (A/B, Blue-Green, Canary), Model Testing, Tools (Kubeflow, BentoML), Integration Testing, ML Frameworks (TensorFlow, PyTorch, scikit-learn, CNTK, Keras, MXNet), Value Quantification: Post-Model Deployment, Model Experimentation, R/RStudio
Specialization
ML Engineering: AI/ML Engineer
Job Requirements
Skills
• Data Engineering: SQL, BigQuery, Apache Airflow
• Cloud: GCP (BigQuery, Dataflow)
• Programming: Python (data processing, basic scripting) and Java
• Data Handling: Data modeling, query optimization

Roles and Responsibilities
• Designed, developed, and optimized complex data pipelines to ingest, process, and store data from Google Cloud Storage (GCS) to BigQuery (load-job sketch below).
• Built API-integrated workflows to fetch data from external APIs, process responses, and load results into downstream systems (API-to-BigQuery sketch below).
• Implemented Pub/Sub-based event-driven pipelines supporting both real-time and batch data processing, including triggers and message handling (subscriber sketch below).
• Created and managed BigQuery DDLs, views, and authorized views, ensuring secure data access through appropriate roles and permissions (authorized-view sketch below).
• Improved ETL pipeline and SQL query performance, reducing processing time and enhancing BigQuery warehouse efficiency (partitioning sketch below).
• Conducted DEV and UAT testing, validating business logic, data quality, and end-to-end pipeline stability.
• Applied business transformation logic to convert raw data into analytics-ready datasets.
• Generated weekly and monthly SQL-based reports and automated their distribution via email for business stakeholders (report sketch below).
• Collaborated with client and business stakeholders to gather requirements and deliver accurate, efficient, and interpretable data solutions.
• Developed interactive dashboards using Looker Studio to enable clear data visualization and reporting.
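A minimal sketch of the GCS-to-BigQuery ingestion described above, using the google-cloud-bigquery client; the project, bucket, and table names are hypothetical placeholders, not details from the actual engagement.

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project ID

# Configure a batch load: CSV with a header row, schema auto-detected,
# rows appended to the target table.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders.csv",    # placeholder GCS URI
    "example-project.analytics.orders",      # placeholder table
    job_config=job_config,
)
load_job.result()  # block until the load finishes; raises on failure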
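The API-integrated workflows could look like the following sketch: fetch JSON over HTTP, keep the fields the warehouse expects, and stream the rows in with insert_rows_json. The endpoint, response shape, and table ID are assumptions made for illustration.

import requests
from google.cloud import bigquery

API_URL = "https://api.example.com/v1/metrics"      # placeholder endpoint
TABLE_ID = "example-project.analytics.api_metrics"  # placeholder table

resp = requests.get(API_URL, timeout=30)
resp.raise_for_status()  # fail fast on HTTP errors

# Keep only the fields the downstream table expects.
rows = [
    {"metric": r["name"], "value": r["value"], "ts": r["timestamp"]}
    for r in resp.json()["results"]  # assumed response shape
]

client = bigquery.Client()
errors = client.insert_rows_json(TABLE_ID, rows)  # streaming insert
if errors:
    raise RuntimeError(f"BigQuery insert errors: {errors}")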
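An event-driven consumer in the style of the Pub/Sub pipelines above, following the standard google-cloud-pubsub subscriber pattern; the subscription name is hypothetical, and the callback body stands in for whatever load or transformation the message triggers.

from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # Process the payload (e.g. trigger a BigQuery load), then ack so
    # Pub/Sub does not redeliver the message.
    print(f"Received: {message.data.decode('utf-8')}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull.result(timeout=60)  # listen briefly in this sketch
    except TimeoutError:
        streaming_pull.cancel()            # shut down the stream cleanly
        streaming_pull.result()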
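Creating and authorizing a view, as in the DDL work above: the view exposes a restricted column set, and the source dataset's access list is updated so the view can be queried without direct table access. Dataset, view, and column names are placeholders.

from google.cloud import bigquery

client = bigquery.Client()

# DDL: a view exposing only the columns business users need.
client.query(
    """
    CREATE OR REPLACE VIEW `example-project.reporting.orders_v` AS
    SELECT order_id, status, order_date
    FROM `example-project.analytics.orders`
    """
).result()

# Authorize the view against the source dataset so consumers can query
# the view without read access to the underlying table.
dataset = client.get_dataset("example-project.analytics")
view = client.get_table("example-project.reporting.orders_v")
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])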
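One common way to achieve the BigQuery performance gains mentioned above is to partition and cluster warehouse tables so date-bounded queries scan less data; this sketch names that technique explicitly, with placeholder tables and columns.

from google.cloud import bigquery

client = bigquery.Client()

# Rebuild the table partitioned by day and clustered by a frequent
# filter column, so date-filtered queries prune most of the data.
client.query(
    """
    CREATE OR REPLACE TABLE `example-project.analytics.orders_opt`
    PARTITION BY DATE(order_date)
    CLUSTER BY region AS
    SELECT * FROM `example-project.analytics.orders`
    """
).result()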
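The weekly report automation could be sketched as follows: run the report query, attach the result as a CSV, and send it over SMTP. The query, addresses, and SMTP host are illustrative assumptions, and to_dataframe additionally requires pandas and db-dtypes.

import smtplib
from email.message import EmailMessage
from google.cloud import bigquery

client = bigquery.Client()
df = client.query(
    """
    SELECT region, SUM(revenue) AS revenue
    FROM `example-project.analytics.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY region
    """
).to_dataframe()

msg = EmailMessage()
msg["Subject"] = "Weekly revenue report"
msg["From"] = "reports@example.com"        # placeholder addresses
msg["To"] = "stakeholders@example.com"
msg.set_content("This week's revenue report is attached.")
msg.add_attachment(
    df.to_csv(index=False).encode("utf-8"),
    maintype="text", subtype="csv", filename="weekly_revenue.csv",
)

with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder SMTP host
    smtp.starttls()
    smtp.send_message(msg)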