ToolLLM is a project that builds a large-scale dataset for training language models with tool-use capabilities. It collects instructions involving real-world APIs and introduces a new annotation approach that improves efficiency. On this data the project trains ToolLLaMA, a model that handles both single-tool and complex multi-tool instructions well. The released artifacts include the ToolLLaMA-7b, ToolLLaMA-7b-LoRA, and ToolLLaMA-2-7b models, along with a tool retriever that selects relevant APIs for each instruction. The models are evaluated with pass rate and preference metrics and compare favorably against other models. Overall, ToolLLM empowers language models to understand and use real-world tools effectively.
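The pass rate mentioned above is, at its core, the fraction of test instructions the model completes successfully within a limited budget of tool-call steps. A minimal sketch of that computation, with illustrative names rather than ToolLLM's actual evaluation code:

```python
def pass_rate(results):
    """Compute the pass rate from per-instruction outcomes.

    results: list of booleans, True if the model solved that
    instruction within the allowed budget of tool calls.
    Returns the fraction of solved instructions (0.0 if empty).
    """
    if not results:
        return 0.0
    return sum(results) / len(results)

# Example: 3 of 4 instructions solved within budget
print(pass_rate([True, True, False, True]))  # 0.75
```

The preference metric is complementary: instead of scoring each run in isolation, a judge compares two models' answers to the same instruction and records which one it prefers.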