Senior Perception Engineer

TAU Ventures

Other Engineering

San Francisco, CA, USA

Posted on May 14, 2026

Chef Robotics is accelerating the deployment of intelligent machines in the physical world, starting with food production — the sector facing the largest labor shortage in the U.S., with 1.14M unfilled jobs today and 3.1M projected by 2030. These roles can't be offshored, making robotics essential to keeping production onshore and strengthening America's manufacturing base.

Our AI-powered robots automate food prep and assembly in commercial kitchens and food manufacturing, and have already produced over 110 million meals in production — generating the world's largest proprietary dataset for deformable food manipulation. Backed by investors including Kleiner Perkins, Construct, Bloomberg Beta, and Promus Ventures, and built by a team from Cruise, Zoox, Google, Tesla, and Amazon Robotics, Chef is rapidly scaling with multiple multi-year contracts and a mission to put an intelligent robot in every commercial kitchen.

About the Role

Chef Robotics is building autonomous robots that work alongside humans in commercial food preparation environments — and perception is at the heart of what makes them reliable. As a Perception Engineer, you will own the full stack of how our robots see and understand the world: from integrating cutting-edge camera hardware, to training production-grade deep learning models, to ensuring those models perform accurately and efficiently in real-time on the factory floor.
You will work on some of the most technically rich problems in applied robotics — dense instance segmentation of deformable food items, real-time inference under tight latency constraints, sensor fusion, and robust tracking in cluttered, dynamic environments. You will not just train models; you will design the pipelines that gather and curate data, define the architectures that balance accuracy and speed, and own the deployment and field troubleshooting of what you build.
We are a small, high-ownership team. We work onsite five days a week and move with startup urgency — you will be expected to go deep technically while staying pragmatic about what ships.

In this role, you will:

  • Design, train, and optimize deep learning models for detection, segmentation, pose estimation, and classification — with a focus on real-world robustness over benchmark performance.
  • Build low-latency inference pipelines that approach real-time performance; profile and optimize models for deployment on embedded and edge hardware.
  • Develop and improve multi-object tracking algorithms for reliable identification and motion prediction of items across frames.
  • Solve challenging perception problems specific to food robotics: deformable objects, occlusions, varying lighting, and high visual similarity between categories.
  • Own the end-to-end ML lifecycle: data collection strategy, annotation tooling, dataset curation, augmentation pipelines, model training, evaluation, deployment, and field debugging.
  • Develop tooling to monitor model performance in production and drive continuous improvement cycles.
  • Partner closely with robotics, hardware, and software engineers to translate perception capabilities into reliable end-to-end robot behaviors.
  • Help define the perception roadmap and influence technical direction as the team grows.
  • Assist in integrating new cameras and sensors for enhanced robotic vision.

What You Bring:

  • BS, MS, or PhD in Computer Science, Robotics, Electrical Engineering, or a closely related field.
  • 5+ years of combined research and industry experience in computer vision and machine learning, with a track record of shipping perception systems to production.
  • Deep expertise in at least two of: instance/semantic segmentation, object detection, 3D perception, or multi-object tracking.
  • Strong Python skills; experience building production-quality, maintainable code — not just research prototypes.
  • Hands-on experience with deep learning frameworks (PyTorch strongly preferred) and the full training pipeline from data to deployed model.
  • Experience working with RGB-D sensors, depth cameras, and point cloud data.
  • Proven ability to build and optimize models for low-latency, real-time inference.
  • Familiarity with ROS or similar robotics middleware.

Nice-to-have:

  • Experience using simulation environments (e.g., Isaac Sim, Gazebo) for synthetic data generation, domain randomization, and sim-to-real transfer of perception models.
  • C++ proficiency for performance-critical modules and embedded deployment.
  • Experience with cloud ML infrastructure (GCP, AWS) and containerization (Docker, Kubernetes).
  • Background in autonomous vehicles, warehouse robotics, or other perception-heavy robotics applications.
  • Contributions to open-source CV/ML projects or publications in top-tier venues (CVPR, ECCV, NeurIPS, etc.).

Chef Robotics is solving one of the hardest problems in AI and robotics, and we ship. Our robots are in production today, generating real data that trains the next generation of food AI, and we're scaling fast with multiple multi-year enterprise contracts. If you want to build physical AI with real-world deployments and real impact, Chef is the place.

$170,000 - $240,000 USD per year

Chef is an early-stage startup where equity is a major part of the compensation package. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position. Within the range, individual pay is determined by additional factors, including job-related skills, experience, and relevant education or training.
In addition to salary and early-stage equity, we offer a comprehensive benefits package that includes medical, dental, and vision insurance, commuter benefits, flexible paid time off (PTO), catered lunch, and 401(k) matching.