AuxoAI is hiring a Senior Applied AI Engineer to design and deploy production-grade computer vision systems that operate reliably in real-world environments.
This role focuses on building end-to-end visual intelligence systems that combine deep learning, classical computer vision techniques, and multimodal models. It is not limited to model training; it requires strong ownership of system design, deployment, and real-world performance.
You will work on systems that perform perception, understanding, and reasoning over visual data, and integrate these capabilities into larger AI platforms and agent-based workflows.
You will also work on problems where existing approaches may not be sufficient, and will be expected to combine deep learning, geometric methods, and multimodal reasoning to build robust, production-grade systems.
Location – Mumbai / Bangalore / Hyderabad / Gurgaon (Hybrid – 3 days per week in office)
Responsibilities:
- Design and deploy computer vision systems for tasks such as:
- Object detection, segmentation, and tracking
- Scene understanding and structured perception
- Video understanding and temporal reasoning
- Build and optimize models using architectures such as:
- CNNs (ResNet, EfficientNet)
- Vision Transformers (ViT, Swin, DeiT)
- Detection/segmentation models (YOLO, DETR, Mask R-CNN)
- Develop multimodal systems combining vision and language:
- CLIP-style models
- Vision-language models (VLMs)
- Visual grounding and captioning systems
- Implement algorithms for:
- Multi-object tracking (SORT, DeepSORT, ByteTrack)
- Feature matching and representation learning
- Temporal modeling (RNNs, Transformers for video)
- Apply geometric and classical computer vision methods where relevant:
- Camera calibration
- Epipolar geometry
- Pose estimation
- 3D reconstruction or depth estimation
- Optimize systems for:
- Low-latency, real-time inference
- Throughput and scalability
- Edge and distributed deployment
- Design and build data pipelines for:
- Annotation workflows
- Dataset curation
- Synthetic data generation
- Integrate vision systems into:
- Multimodal AI pipelines
- Agent-based systems
- Decision-making workflows
Requirements:
- 5+ years of experience building computer vision systems in production environments
- Strong experience with deep learning frameworks (PyTorch / TensorFlow)
- Hands-on experience with:
- Detection, segmentation, or tracking systems
- Model training, fine-tuning, and evaluation
- Strong understanding of:
- Representation learning
- Loss functions (contrastive loss, focal loss, etc.)
- Evaluation metrics (mAP, IoU, precision/recall)
- Experience building and deploying end-to-end vision systems, not just training models
Candidates whose primary experience is limited to academic projects or model experimentation without real-world deployment may not be a fit for this role.
Nice to Have:
- Experience with multimodal systems (vision + language)
- Familiarity with models such as CLIP, BLIP, or Flamingo
- Experience with 3D vision:
- NeRFs
- SLAM
- Point clouds
- Experience with video understanding:
- Action recognition
- Event detection
- Experience building data engines:
- Active learning
- Hard negative mining
- Experience working with large-scale datasets and distributed training pipelines