MIT recently unveiled a novel approach to training robots that mirrors how large language models (LLMs) are trained: on vast and varied data. Unlike traditional methods that rely on small, task-specific datasets, the new model takes a far more expansive approach.
### Imitation Learning Challenges
The researchers note that imitation learning, in which a robot learns by observing a person demonstrate a task, often fails when conditions change even slightly, such as different lighting, a new environment, or unexpected obstacles. Because the demonstrations do not cover those variations, the robot lacks the data it needs to adapt.
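To make the limitation concrete, here is a minimal sketch of behavior cloning, the standard form of imitation learning. The dimensions, network, and synthetic data are illustrative assumptions, not MIT's actual setup; the point is that the policy only ever sees what the demonstrator recorded.

```python
# Minimal behavior-cloning sketch: a policy network is fit to
# (observation, action) pairs recorded from demonstrations.
# Shapes and synthetic data here are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7  # e.g. flattened camera features, 7-DoF arm command

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),
)

# Stand-in for logged demonstrations; in practice these come from
# teleoperation or human video, and only cover the conditions the
# demonstrator happened to encounter (one lighting setup, one scene).
demo_obs = torch.randn(1024, OBS_DIM)
demo_act = torch.randn(1024, ACT_DIM)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(200):
    pred = policy(demo_obs)
    loss = nn.functional.mse_loss(pred, demo_act)  # match demonstrated actions
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained policy interpolates within the demonstration distribution.
# An unseen lighting change or obstacle shifts the observations outside
# that distribution, and no data tells the policy how to respond.
```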
### Heterogeneous Pretrained Transformers
To address this issue, the team developed a new architecture called Heterogeneous Pretrained Transformers (HPT). HPT pools data from different sensors, robot bodies, and environments into a shared representation, on which a large transformer model is then trained. The researchers found that the bigger the transformer, the better the resulting output.
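The sketch below illustrates one plausible reading of that idea: modality-specific input projections ("stems") map heterogeneous inputs such as camera features and joint states into a common token space, a shared transformer trunk processes the tokens, and a small robot-specific head decodes actions. All layer sizes, the two example modalities, and the class name are assumptions for illustration, not the published HPT implementation.

```python
# Sketch of a heterogeneous pretrained transformer: per-modality stems,
# a shared transformer trunk, and a robot-specific action head.
# All dimensions and modality choices are illustrative assumptions.
import torch
import torch.nn as nn

D_MODEL = 256  # shared token width (assumed)

class HPTSketch(nn.Module):
    def __init__(self, vision_dim=512, proprio_dim=14, act_dim=7):
        super().__init__()
        # Stems: one small projector per sensor type, so data recorded
        # with different hardware can land in the same token space.
        self.vision_stem = nn.Linear(vision_dim, D_MODEL)
        self.proprio_stem = nn.Linear(proprio_dim, D_MODEL)
        # Trunk: the shared transformer trained across all datasets;
        # scaling this component is what improved results.
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=8, batch_first=True
        )
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)
        # Head: maps trunk output to one particular robot's action space.
        self.action_head = nn.Linear(D_MODEL, act_dim)

    def forward(self, vision_feats, proprio):
        # vision_feats: (batch, n_patches, vision_dim); proprio: (batch, proprio_dim)
        tokens = torch.cat(
            [self.vision_stem(vision_feats),
             self.proprio_stem(proprio).unsqueeze(1)],
            dim=1,
        )
        h = self.trunk(tokens)
        # Pool the token sequence and decode an action command.
        return self.action_head(h.mean(dim=1))

model = HPTSketch()
action = model(torch.randn(2, 16, 512), torch.randn(2, 14))
print(action.shape)  # torch.Size([2, 7])
```

Under this stem-trunk-head split, adapting to a new robot would only require a fresh stem or head, while the shared trunk carries what was learned across all the pretraining data.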
### Future Prospects
The ultimate goal is a universal robot brain that can simply be downloaded and used, with no additional training required. While the work is still in its early stages, the researchers are optimistic that further scaling will lead to significant breakthroughs in robotic policies, just as scaling did for large language models.
The research was supported by the Toyota Research Institute (TRI), which has been actively working on robot training and recently partnered with Boston Dynamics to combine its robot-learning research with Boston Dynamics' advanced hardware.
