Figure 3 (IMAGE)
Caption
Analysis of the kind of data these models use (knowledge domain), how the zero-shot generalization task is specified (task indicator), which component the sequence model is deployed as (what to pre-train), how the model is pre-trained, and how the pre-trained model is used. Abbreviations used in the table: language model (LM), language and vision model (LVM), and behavior cloning (BC).
Credit
Muning WEN, Runji LIN, Hanjing WANG, Yaodong YANG, Ying WEN, Luo MAI, Jun WANG, Haifeng ZHANG, Weinan ZHANG.
Usage Restrictions
none
License
Original content