Quadric has created an innovative general-purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted at neural network (NN) inference workloads in a wide variety of edge and endpoint devices, ranging from battery-operated smart-sensor systems to high-performance automotive and autonomous vehicle systems. Unlike other NPUs and neural network accelerators in the industry today, which can only accelerate a portion of a machine learning graph, the Quadric GPNPU executes both NN graph code and conventional C++ DSP and control code.
Role:
You will be joining the data science team focused on model optimization for Quadric's custom GPNPU architecture. You will research, prototype, and implement novel quantization algorithms tailored to our hardware constraints. Beyond applying existing techniques, you'll develop custom low-precision methods that maximize performance on the Chimera GPNPU. Your work will directly shape the quantization capabilities in the Chimera SDK and influence future hardware features.
This California Bay Area-based engineering role is intended to be primarily in-office at our Burlingame location, with the ability to commute regularly. We believe strong technical collaboration, rapid iteration, and shared problem-solving are best supported by working together in person. The team and company also gather periodically for onsite meetings and offsite events to connect, collaborate, and align on priorities.
Responsibilities:
- Design statistically rigorous experiments to compare post-training quantization (PTQ), quantization-aware training (QAT), and mixed-precision schemes on vision, language, and multimodal models.
- Implement custom quantization algorithms from scratch, adapting existing techniques or developing novel approaches to match Chimera GPNPU's unique architectural features and numerical formats.
- Build calibration datasets; develop Python notebooks/dashboards to track accuracy, latency, power, and memory trade-offs.
- Perform layer-level error analysis to guide numerical-format choices.
- Partner with the compiler team to convert your findings into turnkey SDK flows and reference configs.
- Publish internal white papers and external benchmarks, and present results to customers and at industry events.
- Monitor academic literature in compression and efficient inference; translate promising ideas into reproducible prototypes.