- No infrastructure: Train frontier models without managing GPUs or RL infra
- Production-ready: Built-in tracing, monitoring, security & one-click deploy
- Fast iteration: From evaluator setup to deployed model in hours, not weeks
Quickstart: Pick Your Training Approach
Single-Turn Training
⏱️ 15 minutes
Best for: Testing locally, simple task training
How it works: Iterate on your evaluator, then use it to train a small model on Fireworks.
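The single-turn loop centers on an evaluator: a function that scores each model completion, and that score becomes the reward for the RL update. A minimal toy sketch in plain Python — the function name, signature, and checks here are illustrative only, not the eval-protocol API:

```python
def evaluate(prompt: str, completion: str) -> float:
    """Toy evaluator (purely illustrative): reward completions that
    mention the expected keyword and stay brief."""
    score = 0.0
    if "fireworks" in completion.lower():  # hypothetical task-specific check
        score += 0.5
    if completion.count(".") <= 1:         # brevity bonus
        score += 0.5
    return score  # reward in [0, 1] guides the policy update

print(evaluate("Which platform trains the model?", "Fireworks does."))  # → 1.0
```

Because the same function runs locally and during training, you can iterate on its logic against sample completions before launching a job.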
Remote Agents
⏱️ 1-2 hours
Best for: Agents, multi-turn workflows, existing services
How it works: Rollouts happen in your environment. Connect via HTTP with tracing.
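"Rollouts happen in your environment" means your service exposes an HTTP endpoint the trainer can call with a task and receive a scored rollout back. A minimal stdlib sketch of that shape — the route, payload fields, and agent stub are assumptions, not the documented protocol:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def run_agent(task: str) -> dict:
    """Stand-in for your agent loop; returns a transcript plus a score."""
    reply = f"echo: {task}"
    return {"messages": [{"role": "assistant", "content": reply}], "score": 1.0}

class RolloutHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Hypothetical contract: trainer POSTs {"task": ...}; we run the
        # agent in our own environment and return the rollout + reward.
        length = int(self.headers["Content-Length"])
        task = json.loads(self.rfile.read(length))["task"]
        payload = json.dumps(run_agent(task)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep server logs quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RolloutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate one trainer call against the local endpoint.
req = Request(f"http://127.0.0.1:{server.server_port}",
              data=json.dumps({"task": "ping"}).encode(),
              headers={"Content-Type": "application/json"})
rollout = json.loads(urlopen(req).read())
print(rollout["score"])  # → 1.0
server.shutdown()
```

The design point: the trainer never needs access to your tools, databases, or credentials — it only sees the transcript and the score your endpoint returns.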
Secure Training (BYOB)
⏱️ 2-4 hours
Best for: Sensitive data, compliance, enterprise
How it works: Training data never leaves your GCS/S3 bucket. Full data isolation.
Launch Training
Prerequisites & Validation
Requirements, validation checks, and common errors before launching
CLI (eval-protocol)
Fast, scriptable, reproducible. Perfect for automation and iteration
Web UI
Visual, guided, beginner-friendly. Great for exploring options
Already familiar with firectl? You can create RFT jobs directly.
RFT Concepts
How RFT Works
The RL training loop explained
Evaluators
How reward functions guide training
Environments
Local vs remote evaluation environments
Parameter Tuning
Optimize your training configuration
Cost Estimator
Estimate and optimize your training costs
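Before launching, a back-of-the-envelope estimate helps: token volume scales as epochs × prompts × rollouts per prompt × tokens per rollout. Every number below is a placeholder assumption, not Fireworks pricing:

```python
# All figures are illustrative placeholders, not Fireworks pricing.
epochs = 3
prompts = 1_000
rollouts_per_prompt = 4          # sampled completions per prompt per epoch
tokens_per_rollout = 2_000       # prompt + completion tokens
price_per_million_tokens = 1.00  # USD, placeholder rate

total_tokens = epochs * prompts * rollouts_per_prompt * tokens_per_rollout
cost = total_tokens / 1_000_000 * price_per_million_tokens
print(f"~{total_tokens:,} tokens → ${cost:.2f}")  # → ~24,000,000 tokens → $24.00
```

Rollouts per prompt and epoch count are the levers that dominate cost, so tune those first when the estimate comes back too high.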