Our approach

General robotic intelligence

Research meets industry

Perfecting each component

Instead of mastering specific tasks separately, Covariant robots learn general abilities such as robust 3D perception, physical affordances of objects, few-shot learning, and real-time motion planning. This allows them to adapt to new tasks the way people do — by breaking complex tasks into simple steps and applying general skills to complete them.

Bringing practical AI robotics into the physical world is hard. It requires giving robots a level of autonomy that demands breakthroughs in AI research. That’s why we have assembled a team that has published cutting-edge research at the top AI conferences and journals, with more than 50,000 collective citations. Alongside our researchers, we’ve brought together a world-class engineering team to create new types of highly robust, reliable, and performant cyber-physical systems.

We’re only as strong as our weakest link. If one component lags, the whole system fails. That’s why we’ve built a full-stack team, investing in making each robotic component world-class, from software systems to hardware stations, from AI algorithms to end-effectors.

Building on the team's academic research

Before Covariant, we conducted pioneering AI research at U.C. Berkeley, OpenAI, and other top institutions.
  • IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, June 2020

    Embodied Language Grounding with Implicit 3D Visual Feature Representations

    Mihir Prabhudesai, Hsiao-Yu Fish Tung, Syed Ashar Javed, Maximilian Sieb, Adam W. Harley, Katerina Fragkiadaki

  • International Conference on Robotics and Automation (ICRA), Paris, France, May 2020

    Guided Uncertainty Aware Policy Optimization: Combining Model-Free and Model-Based Strategies for Sample-Efficient Learning

    Michelle Lee*, Carlos Florensa*, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox

  • International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, April 2020

    Subpolicy Adaptation for Hierarchical Reinforcement Learning

    Alexander Li*, Carlos Florensa*, Ignasi Clavera, Pieter Abbeel

  • Neural Information Processing Systems (NeurIPS), Vancouver, Canada, December 2019

    Goal Conditioned Imitation Learning

    Yiming Ding*, Carlos Florensa*, Mariano Phielipp, Pieter Abbeel

  • Conference on Robot Learning (CoRL), Osaka, Japan, November 2019

    Graph-Structured Visual Imitation

    Maximilian Sieb*, Zhou Xian*, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki

  • International Conference on Machine Learning (ICML), Long Beach, USA, June 2019

    Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules

Daniel Ho, Eric Liang, Ion Stoica, Pieter Abbeel, Xi Chen

  • Neural Information Processing Systems (NeurIPS), Montreal, Canada, December 2018

    Evolved Policy Gradients

    Rein Houthooft, Richard Y. Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, Pieter Abbeel

  • IEEE-RAS International Conference on Humanoid Robots (Humanoids), Beijing, China, November 2018

    Data Dreaming for Object Detection: Learning Object-Centric State Representations for Visual Imitation

    Maximilian Sieb, Katerina Fragkiadaki

  • IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018

    Domain Randomization and Generative Models for Robotic Grasping

    Joshua Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel

  • Conference on Robot Learning (CoRL), Zurich, Switzerland, October 2018

    Model-Based Reinforcement Learning via Meta-Policy Optimization

    Ignasi Clavera*, Jonas Rothfuss*, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel


Want to join our team?

Interested in working at Covariant? We’re hiring!