Adel Research Group (ARG) at Princeton University invites applications for full-time Postdoctoral Research Associate position(s). The postdoc(s) will conduct interdisciplinary research in robotics and autonomous systems, with an emphasis on robot learning and real-world deployment for construction-scale robotic fabrication, human–robot collaboration, and advanced manufacturing. The position(s) are advised by Dr. Arash Adel (Assistant Professor of Architecture; Core Faculty, Princeton Robotics; Associated Faculty, Computer Science) and include opportunities for collaboration across Princeton Robotics labs. Appointments are for one year, with the possibility of renewal contingent on satisfactory performance and continued funding.
About Us
ARG is an interdisciplinary laboratory for advanced research at the intersection of robotics, artificial intelligence (AI), and computational design. We develop learning-enabled and perception-driven robotic systems that operate in complex, unstructured environments, with particular focus on: 1) Robot learning and embodied AI; 2) Multimodal perception and state estimation; 3) Closed-loop planning and control; and 4) End-to-end design-to-fabrication workflows. ARG’s research has been supported by highly competitive grants, including a National Science Foundation Future of Work at the Human-Technology Frontier grant.
Job Description
We seek outstanding Postdoctoral Research Associate(s) with a strong interest in interdisciplinary research on robotics and autonomous systems within the context of construction robotics, human–robot collaboration, and advanced manufacturing. The role is especially suitable for candidates with a strength in robot learning and autonomy who want to see learning-based methods deployed on real, construction-scale robotic platforms. Example research directions may include:
Training visuomotor policies for dexterous manipulation and long-horizon assembly, leveraging imitation learning, reinforcement learning, diffusion policies, and/or VLA-style approaches
Integrating multimodal perception with task and motion planning for robust building-scale assembly, using RGB/RGB-D sensing, multi-view geometry, and 3D reconstruction
Enabling reliable sim-to-real deployment for long-horizon tasks, including domain randomization, uncertainty-aware control, and safety/constraint handling
Designing and evaluating human–robot collaboration and shared autonomy for construction workflows, including interaction paradigms, communication interfaces, and performance metrics
Developing closed-loop pipelines for building-scale additive manufacturing, including online state estimation, adaptive toolpath re-planning, and process control to improve geometric fidelity, inter-layer adhesion, and robustness to environmental and material variability
Primary responsibilities include:
Leading collaborative research projects from problem formulation through experimental validation and publication
Conducting research and disseminating results through top-tier conferences and journals (e.g., ICRA, CoRL, The International Journal of Robotics Research, Automation in Construction, Advanced Engineering Informatics)
Collaborating on large-scale robotically fabricated demonstrators (planning, integration, field testing, and iteration)
Contributing to lab operations (e.g., maintaining core software stacks, supporting experimental protocols, and mentoring graduate/undergraduate researchers)
Collaborating with the PI and team on proposal development and research translation
Required qualifications:
Doctoral degree in Robotics, Computer Science, Mechanical/Aerospace Engineering, Civil/Construction Engineering, Architecture, or a related discipline
Excellent track record of research and publications in areas relevant to the position
Strong scientific writing and communication skills
Excellent programming skills (Python required; C++ preferred; C#/Unity a plus if you work in simulation or digital-twin environments)
Fluency in English
Desired qualifications:
Experience with the Robot Operating System (ROS; ROS 2 preferred)
Experience with sensing technologies, computer vision, and perception
Demonstrated experience with machine learning for robotics (e.g., training/evaluating policies in PyTorch/JAX; imitation learning, reinforcement learning, or diffusion-based policy learning)
Experience with Generative AI, specifically Large Language Models (LLMs), Vision-Language Models (VLMs), and/or Vision-Language-Action Models (VLAs) as applied to robotics (e.g., grounding, tool use, policy learning, or multimodal reasoning)
Experience developing custom tools and end effectors for robotic assembly and advanced manufacturing
Experience with robotics simulation and evaluation (e.g., MuJoCo, Isaac Sim/Gym, PyBullet) and/or motion planning frameworks (e.g., MoveIt/OMPL)
Evidence of strong software engineering practices (version control, testing, documentation, reproducibility) and, ideally, open-source contributions
Application Instructions
Applicants must apply online via the application portal and submit:
Cover letter (summarizing research fit and interests)
Curriculum vitae (CV)
Three publication samples (conference and/or journal papers)
Portfolio (optional; recommended for candidates with substantial design/fabrication work)
Contact information for three references
We will review applications on a rolling basis, and the application portal will stay open until the position is filled. For full consideration, please submit your application as soon as possible.
This position is not eligible for sponsorship of an H-1B visa requiring consular processing.
If you have questions regarding the position(s), please contact Dr. Arash Adel ([email protected]). We look forward to receiving your application.