API Documentation
A Pythonic API your ML team already knows how to use.
Gym-compatible environments. Vectorized rollouts. First-class support for PyTorch and JAX. Hosted training infrastructure that scales from one GPU to a thousand.
Python 3.11
```python
import kyros

# Load a Transfer Verified environment
env = kyros.make("kitchen_residential_v4.2", randomize=True)

# Train your policy with the standard PPO loop
trainer = kyros.PPOTrainer(env, num_envs=4096)
policy = trainer.fit(steps=10_000_000)

# Score it against the transfer eval suite
report = kyros.evaluate(policy, suite="manipulation_v1")
print(f"transfer_likelihood: {report.score:.2f}")
```

Five lines from import to evaluation.
Every Kyros environment implements the standard Gymnasium interface, returns batched observations, and integrates with the trainer of your choice.
- Gymnasium-compatible environments and wrappers
- Vectorized: 4,096+ parallel envs on a single GPU
- Reference trainers (PPO, SAC, BC) included
- Bring your own trainer — it just works
- Deterministic seeding and full run reproducibility
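As a reference for the interface contract, here is a minimal sketch of the standard Gymnasium step loop. The `StubEnv` class is a hypothetical stand-in, not a Kyros environment; it only illustrates the `reset()`/`step()` shape (five-tuple returns, seeded resets) that Gymnasium-compatible environments follow:

```python
import random


class StubEnv:
    """Hypothetical stand-in showing the Gymnasium interface shape.

    A real environment would come from kyros.make(...); this stub only
    demonstrates the reset/step contract.
    """

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._t = 0

    def reset(self, seed=None):
        # Deterministic seeding: the same seed replays the same episode.
        if seed is not None:
            self._rng = random.Random(seed)
        self._t = 0
        obs = self._rng.random()
        return obs, {}  # Gymnasium returns (observation, info)

    def step(self, action):
        self._t += 1
        obs = self._rng.random()
        reward = float(action)        # toy reward for illustration
        terminated = self._t >= 5     # fixed-length toy episode
        truncated = False
        return obs, reward, terminated, truncated, {}


def rollout(env, seed=0):
    """Run one episode with the standard Gymnasium loop."""
    obs, info = env.reset(seed=seed)
    total, done = 0.0, False
    while not done:
        action = 1  # fixed policy, for illustration
        obs, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    return total


total = rollout(StubEnv(), seed=42)
```

The same loop drives any Gymnasium-compatible environment, which is why an off-the-shelf trainer can be swapped in without glue code.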
SDK surface area.
Environments
make(), list(), versions(). Load, fork, and version any environment from the marketplace.
Trainers
PPOTrainer, SACTrainer, BCTrainer. Or pass any callable as a custom trainer.
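To make "pass any callable" concrete, here is a toy custom trainer sketch. The `trainer(env) -> policy` calling convention, the hill-climbing logic, and the stand-in objective are all illustrative assumptions, not Kyros' documented contract:

```python
import random


def hill_climb_trainer(env, steps=200, seed=0):
    """Toy custom trainer: 1-D hill climbing on a stand-in objective.

    NOTE: the trainer(env) -> policy shape is an assumption for
    illustration; consult the SDK for the actual signature.
    """
    rng = random.Random(seed)
    theta = 0.0

    def score(t):
        # Stand-in for an episode return; peaks at t == 0.5.
        return -(t - 0.5) ** 2

    for _ in range(steps):
        candidate = theta + rng.uniform(-0.1, 0.1)
        if score(candidate) > score(theta):
            theta = candidate  # keep the improving perturbation
    return theta


policy = hill_climb_trainer(env=None)
```

Because the trainer is just a callable, it can be swapped for the reference PPO/SAC/BC trainers without changing the surrounding pipeline.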
Hosted runs
kyros.cloud.launch() to run a training job on managed GPU infrastructure with full logs.
Eval & webhooks
Programmatic transfer evaluation, with webhook callbacks when policies cross thresholds.
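A webhook consumer on your side might look like the sketch below. The payload fields (`policy_id`, `suite`, `score`) are hypothetical examples, not Kyros' documented schema; the sketch only shows the parse-and-gate pattern for threshold callbacks:

```python
import json


def handle_webhook(raw_body, threshold=0.9):
    """Parse a hypothetical eval webhook payload and decide whether the
    policy crossed the promotion threshold.

    The field names here are illustrative assumptions.
    """
    payload = json.loads(raw_body)
    return {
        "policy_id": payload["policy_id"],
        "suite": payload["suite"],
        "promote": float(payload["score"]) >= threshold,
    }


# Example payload a callback endpoint might receive (fabricated values).
example = json.dumps(
    {"policy_id": "pol_123", "suite": "manipulation_v1", "score": 0.93}
)
decision = handle_webhook(example)
```

In production this function would sit behind an HTTP endpoint and should also verify the request signature before trusting the payload.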