Groundbreaking, Ground-Based Autonomy
About Forterra
Forterra develops autonomous systems tailored for ground-based operations in critical industrial and defense applications. As a pioneer in driverless technology, the company delivers real-time, resilient perception platforms that support both civilian and military operations. Recognized as a trusted ground autonomy partner by the U.S. Department of Defense, Forterra enables mission success in high-stakes environments through advanced AI-driven mobility solutions.
About the Role
This opportunity involves developing real-time perception capabilities for autonomous vehicles operating in complex, unpredictable environments. The role focuses on building and optimizing core perception algorithms, including object detection, semantic segmentation, tracking, and OCR, across multiple sensor modalities such as cameras, LiDAR, and radar. It’s a systems-level engineering position, ideal for someone who thrives on bridging deep learning research with real-world deployment.
Responsibilities
- Design and implement perception algorithms for detection, segmentation, classification, and OCR across diverse sensor types
- Optimize and monitor deep learning models used in real-time environments
- Design data pipelines for model training, validation, and performance evaluation
- Stay informed on cutting-edge research in machine learning and perception technologies
- Collaborate with engineering and field teams to validate systems in live deployments
- Analyze real-world data to identify and resolve system-level perception issues
Required Skills
- Proficiency in machine learning model development and deployment workflows
- Demonstrated experience building real-time perception systems, including object tracking and segmentation
- Programming expertise in Python and C++ within robotic or embedded system environments
- Familiarity with inference optimization and deploying models to edge devices (e.g., NVIDIA Jetson, TensorRT)
- Strong understanding of the ML training lifecycle, data annotation, and model evaluation
- Ability to debug field-collected perception data and rapidly iterate on improvements
- Experience maintaining thorough documentation of algorithms and test processes
- Problem-solving mindset with initiative to own complex technical challenges
Preferred Qualifications
- Bachelor’s, Master’s, or PhD in Computer Science, Electrical Engineering, or Robotics, or equivalent experience
- 2+ years of hands-on academic or industry experience in a related role
- Background in SLAM, visual odometry, or multimodal sensor fusion
- Familiarity with ROS, Docker, and CI/CD systems for robotics software development
- Experience managing ML datasets and training infrastructure
- Knowledge of deploying AI systems under real-time performance constraints
- Experience with cloud infrastructure, particularly AWS