Train, test and improve your models declaratively. Scale up to hundreds of GPUs with a single command.
Dataset management, model traceability and reproducibility are handled for you. Models are tested, scored and trended automatically.
Deployment to production with a single command. Quality gates ensure you only deploy improved models.
Get in touch and we'll contact you to talk about how Peraspera.io can make your machine learning development faster and easier.
Our declarative, low-code framework handles most machine learning tasks with simple definitions
# Train a model without a line of imperative code
@Requires(
    train=Dataset('iris.train'),
    parameters=Parameters('my-hyperparameters'),
    estimator=Estimator(xgb.XGBClassifier)
)
@Produces(Model('iris-model'))
def train_model(train, parameters):
    pass
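For comparison, this is roughly the imperative boilerplate those two decorators take off your plate. The sketch below uses plain pandas and xgboost; the file name, the 'species' label column and the hyperparameter value are illustrative assumptions, not part of Peraspera.io.

# A rough sketch of the manual equivalent, for contrast only.
# 'iris.train.csv', the 'species' column and n_estimators=100 are assumptions.
import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder

train = pd.read_csv('iris.train.csv')                # load the training split yourself
X = train.drop(columns=['species'])
y = LabelEncoder().fit_transform(train['species'])   # encode labels by hand

model = xgb.XGBClassifier(n_estimators=100)          # wire up hyperparameters by hand
model.fit(X, y)                                      # train
model.save_model('iris-model.json')                  # persist and version it yourself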
# Train your model with a single command
$ ml train
Trained model iris in 0.1 seconds
Never email another CSV file again
# Define a dataset and it becomes available to everyone
def create_dataset():
    Dataset('iris').of('https://example.com/iris.csv').split(test=0.2)

# Reference a dataset by name and it appears like magic
@Requires(
    train=Dataset('iris.train')
)
def train_model(train):
    df: pd.DataFrame = train.get()
    ...
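If you're curious what that one-liner stands in for, here's roughly the manual version with pandas and scikit-learn. The URL and the 80/20 split mirror the definition above; the output file names are illustrative assumptions.

# A rough sketch of the manual equivalent of Dataset('iris').of(...).split(test=0.2).
import pandas as pd
from sklearn.model_selection import train_test_split

iris = pd.read_csv('https://example.com/iris.csv')          # (150, 5), as in the table below
train_df, test_df = train_test_split(iris, test_size=0.2)   # (120, 5) and (30, 5)

# ...and then the splits get emailed around as CSVs, which is exactly the step this replaces.
train_df.to_csv('iris.train.csv', index=False)
test_df.to_csv('iris.test.csv', index=False)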
# We handle version control for you
$ ml datasets
Name        Parent  Type       Shape    Revisions
----------  ------  ---------  -------  ---------
iris                DataFrame  (150,5)  1
iris.train  iris    DataFrame  (120,5)  1
iris.test   iris    DataFrame  (30,5)   1
We track model performance over time for you.
# Run tests like you would any program with any dataset you want
$ ml test iris-model --dataset=new-iris
Accuracy  Recall  Precision  F1
--------  ------  ---------  ----
0.9       0.99    0.99       0.99
# Easily track metrics over time
$ ml metrics iris-model
Revision  Test       Accuracy  Recall  Precision  F1
--------  ---------  --------  ------  ---------  ----
1         iris.test  0.93      0.99    0.99       0.99
2         new-iris   0.9       0.99    0.90       0.99
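The metrics themselves are the standard ones, so a row of that table can be reproduced by hand in a few lines of scikit-learn. In the sketch below, y_true and y_pred are placeholders for your test labels and model predictions.

# Reproducing the metrics above with scikit-learn; y_true and y_pred are placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ['setosa', 'virginica', 'versicolor', 'virginica']   # test labels (placeholder)
y_pred = ['setosa', 'virginica', 'versicolor', 'versicolor']  # model predictions (placeholder)

print('Accuracy ', accuracy_score(y_true, y_pred))
print('Recall   ', recall_score(y_true, y_pred, average='macro'))
print('Precision', precision_score(y_true, y_pred, average='macro'))
print('F1       ', f1_score(y_true, y_pred, average='macro'))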
We handle the complexity of training models at any scale.
# Train on your local machine
$ ml train
Training locally on 16 CPU cores
# or train on a dozen GPUs
$ ml train --on my-gpu-cluster
Training on 12 GPU cluster
For every model you train, we automatically track the estimator, datasets, hyperparameters, and code that created it.
$ ml models
Name        Type           Revisions
----------  -------------  ---------
iris-model  XGBClassifier  1
$ ml model iris-model
Dataset
    train = iris.train@1
    test  = iris.test@1
Estimator = xgboost.XGBClassifier
Hyperparameters = my-hyperparameters@124
# To check in your code, model, dataset, and hyperparameters, just do
$ ml save
Saved iris-model revision=12345
Deploy models to production with a single command
# Declare a request handler for your model
@Server(model=Model('iris-model'))
def request_handler(model, request):
    pass
# Deploy to the cloud
$ ml deploy iris-model
Endpoint: https://peraspera.io/api/iris-guesser/iris-model

$ curl -s -X POST \
    -d '{"sepal_length":5.9,"sepal_width":3.0,"petal_length":5.1,"petal_width":1.8}' \
    https://peraspera.io/api/iris-guesser/iris-model
{ "species": "virginica" }
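The same call from Python, if curl isn't your thing. This is a plain requests call against the endpoint shown above, assuming the endpoint accepts a JSON body.

# Calling the deployed endpoint from Python; assumes the endpoint accepts JSON.
import requests

features = {
    "sepal_length": 5.9,
    "sepal_width": 3.0,
    "petal_length": 5.1,
    "petal_width": 1.8,
}
response = requests.post(
    "https://peraspera.io/api/iris-guesser/iris-model",
    json=features,
)
print(response.json())   # {"species": "virginica"}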
We'd love to talk to you about how we can help your team perform better. Grab a time slot on our calendar and we'll be in touch.