Platform features

Cubonacci offers a number of features modeled around the requirements of mature machine learning operations and the natural workflow of data scientists and engineers.

Keep using the tools you know and love

Cubonacci does not dictate which machine learning libraries you should be using. As a data scientist, you can keep using the tools that you are familiar with, and plug your project into Cubonacci seamlessly.

One-click deployment

When your model is ready, Cubonacci packages it for deployment as an API, a batch process, or a stream consumer. Auto-generated client code in many languages helps you bootstrap consumption of model predictions in downstream systems.
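As an illustration of what such generated client code does, the sketch below builds a JSON prediction request and parses a response. The endpoint URL, payload shape, and field names are assumptions for illustration, not Cubonacci's actual API.

```python
import json

# Hypothetical endpoint -- the real auto-generated client depends on
# your model's schema and the chosen deployment type.
ENDPOINT = "https://example.invalid/models/churn/predict"

def build_request(features: dict) -> str:
    """Serialize a feature dict into a JSON prediction request body."""
    return json.dumps({"instances": [features]})

def parse_response(body: str) -> float:
    """Extract the first prediction score from a JSON response body."""
    return json.loads(body)["predictions"][0]

request_body = build_request({"tenure_months": 12, "monthly_spend": 49.9})
# A downstream system would POST request_body to ENDPOINT; here we
# simulate the round trip with a canned response.
simulated_response = '{"predictions": [0.87]}'
score = parse_response(simulated_response)
```

A generated client in another language would follow the same shape: serialize features, call the deployment, deserialize the score.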

Notebooks

Would you prefer the interactive experience of notebooks or the robustness of a production-grade implementation? What if you don’t have to choose? Cubonacci features a tailor-made notebook solution that combines the convenience of notebooks for machine learning development with the reliability of structured code and version control.

DevOps best practices

Cubonacci embraces the philosophy and best practices of the related field of DevOps. Git is enforced as the single source of truth: all hyperparameter searches, trained models, and deployments are traceable back to the commit that produced them, providing full lineage in production.

Scalable hyperparameter search

By leveraging the power of Kubernetes, Cubonacci scales your workloads elastically. Unleash the power of the cloud by parallelizing your hyperparameter search, all the while keeping a clear overview of your expenses.
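The fan-out pattern behind a parallel search can be sketched with Python's standard library; the toy objective and grid below are illustrative stand-ins, and a platform like Cubonacci would distribute the same pattern across Kubernetes pods rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(params):
    """Stand-in for training a model and returning a validation score.
    A real objective would fit a model with these hyperparameters."""
    lr, depth = params
    return -abs(lr - 0.1) - abs(depth - 5)  # toy score, peaks at lr=0.1, depth=5

# Cartesian product of candidate hyperparameter values.
grid = list(product([0.01, 0.1, 1.0], [3, 5, 7]))

# Each trial is independent, so the grid fans out across workers and
# the scores fan back in; order is preserved by map.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(evaluate, grid))

best = max(zip(scores, grid))[1]
```

Because trials share nothing, the same code parallelizes across processes or machines with only the executor swapped out.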

Monitoring

Never lose any sleep over your machine learning models in production. Cubonacci leverages monitoring and alerting techniques to notify you when one of your models needs closer attention, so it claims your time only when necessary.

Scheduled model retraining

The world is continuously changing and the amount of data is increasing daily. Keeping models up to date is essential to maximize performance. Retraining models regularly mitigates the impact of concept drift and increases the value of productionized models.
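The core of a retraining schedule is the staleness check; a minimal sketch, assuming an illustrative weekly cadence rather than any setting Cubonacci prescribes:

```python
from datetime import datetime, timedelta

# Illustrative cadence; in practice this is a per-model policy choice.
RETRAIN_INTERVAL = timedelta(days=7)

def needs_retraining(last_trained: datetime, now: datetime) -> bool:
    """Return True when the last training run is older than the interval."""
    return now - last_trained >= RETRAIN_INTERVAL

now = datetime(2024, 1, 15)
stale = needs_retraining(datetime(2024, 1, 1), now)    # 14 days old
fresh = needs_retraining(datetime(2024, 1, 12), now)   # 3 days old
```

A scheduler evaluates this check periodically and, when it fires, kicks off the same training pipeline that produced the current model.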

Testing

Cubonacci provides A/B and multi-armed bandit deployments to properly validate the performance of machine learning models when complete offline validation is not possible, for example in personalization or recommender systems. Even when offline evaluation is available, models can be deployed in shadow mode to verify that production performance is on par with an existing deployment.
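A multi-armed bandit deployment routes traffic toward the better-performing model while still exploring the alternatives. A minimal epsilon-greedy sketch, with simulated per-model conversion rates standing in for real online feedback (the rates and epsilon are illustrative, not Cubonacci defaults):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """Pick a model index: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def update(estimates, counts, arm, reward):
    """Incremental-mean update of the chosen arm's estimated reward."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

rng = random.Random(0)
true_rates = [0.05, 0.12]            # simulated conversion rate per model
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(5000):
    arm = epsilon_greedy(estimates, rng=rng)
    reward = 1.0 if rng.random() < true_rates[arm] else 0.0
    update(estimates, counts, arm, reward)
# Over many rounds, the better model tends to serve most of the traffic
# while the weaker one still receives occasional exploratory requests.
```

Shadow-mode deployment is the complementary tactic: the candidate model receives a copy of live traffic and its predictions are logged but never served, so it can be compared against the incumbent risk-free.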
