What models are supported for fine-tuning? Is Llama 3 supported for fine-tuning?
Yes. Llama 3 (8B and 70B) is supported for fine-tuning with LoRA adapters, and the resulting models can be deployed for inference via our serverless and on-demand options. Capabilities include:
- LoRA adapter training for flexible model adjustments
- Serverless deployment support for scalable, cost-effective usage
- On-demand deployment options for high-performance inference
- A variety of base model options to suit different use cases
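To illustrate why LoRA adapter training is lightweight, here is a minimal sketch of the parameter-count arithmetic. LoRA factorizes each weight update into two low-rank matrices, so only `rank * (d_in + d_out)` parameters are trained per adapted matrix. The 4096 hidden size used in the example matches Llama 3 8B; the function name and rank choice are illustrative, not part of any specific API.

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # LoRA represents the weight update as B @ A, where
    # A has shape (rank, d_in) and B has shape (d_out, rank),
    # so the adapter adds rank * (d_in + d_out) trainable parameters.
    return rank * (d_in + d_out)

# Example: one 4096x4096 attention projection (Llama 3 8B hidden size)
# at rank 8 trains only 65,536 parameters, versus ~16.8M for full
# fine-tuning of that same matrix.
per_matrix = lora_param_count(4096, 4096, rank=8)
print(per_matrix)  # 65536
```

This is why a LoRA adapter can be trained and swapped per use case while the base model weights stay frozen and shared.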
For a complete list of models available for fine-tuning, refer to our documentation.