Machine learning lifecycle management

Continuous Integration / Continuous Delivery for data science

Train, deploy and maintain in no time

Developing a machine learning model is an iterative research process that takes time. Because industrializing these models is complex, deployment usually happens only after many iterations. Cubonacci minimizes the time between development and deployment by applying a DevOps mindset to machine learning.

Core principles

Focus

Many of the challenges around deploying and maintaining machine learning models recur from project to project. Instead of solving deployment, testing, monitoring, and scaling complexities on a case-by-case basis, Cubonacci provides a solution that lets data scientists focus on their expertise in every unique project they work on, using their favorite tools, with minimal setup.

Agility

Handovers between development and deployment are costly. Cubonacci minimizes the time-to-market of machine learning models: during development, models can be trained in a distributed manner, after which they can be deployed in seconds. Deployments scale automatically with load, keeping costs down while maintaining uptime and performance.

Confidence

The overview, quality assurance, and testing methods of traditional software have been hard to apply to machine learning projects. Cubonacci takes the DevOps mindset that has transformed software development and applies these concepts to the machine learning life cycle. Through the visual user interface, all stakeholders get a full overview of their machine learning projects and control over their state. Model and experiment tracking, role-based access control, monitoring, and alerting allow for unprecedented confidence.

Workflow

  • The data scientist maintains code and configuration in Git (sketched below)
  • A new model is validated and built for every new code version
  • Every model version is available for experimentation and comparable to previous versions
  • One-click testing, deployment, and monitoring
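
The exact project contract is not spelled out above, so the following is a minimal sketch of what "code and configuration in Git" could look like: a repository exposing training and prediction entry points that the platform can call for every code version. The names train/predict, the params keys, and the scikit-learn model are illustrative assumptions, not Cubonacci's documented interface.

    # model.py -- hypothetical project entry points (illustrative only).
    from sklearn.ensemble import RandomForestClassifier

    def train(data, params):
        # The platform supplies the data and a hyperparameter configuration
        # taken from the versioned config file in the repository.
        model = RandomForestClassifier(
            n_estimators=params["n_estimators"],
            max_depth=params["max_depth"],
        )
        model.fit(data["X_train"], data["y_train"])
        return model

    def predict(model, X):
        # One entry point can back both batch scoring and the live API.
        return model.predict(X)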

Training

  • Distributed training
  • Experiment tracking
  • Smart optimization (sketched after this list)
  • Save best model
  • Active learning
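
To make these bullets concrete, here is a minimal sketch that assumes nothing about Cubonacci's internals: a plain random search stands in for smart optimization, every trial is recorded for experiment tracking, and the best-scoring model is kept. The callables train_fn and evaluate_fn are hypothetical, like the entry points sketched above.

    # Illustrative stand-in for hyperparameter optimization with tracking.
    import random

    def random_search(train_fn, evaluate_fn, data, n_trials=20):
        experiments = []                         # one record per trial
        best_score, best_model = float("-inf"), None
        for trial in range(n_trials):
            params = {
                "n_estimators": random.choice([50, 100, 200]),
                "max_depth": random.choice([3, 5, 10, None]),
            }
            model = train_fn(data, params)       # could run on a remote worker
            score = evaluate_fn(model, data)     # e.g. validation accuracy
            experiments.append({"trial": trial, "params": params, "score": score})
            if score > best_score:               # keep the best model
                best_score, best_model = score, model
        return best_model, experiments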

Deployment

  • One-click API
  • Horizontal autoscaling
  • Rollback to previous versions
  • Same model in batch and API
  • A/B testing and multi-armed bandit (see the sketch after this list)
  • Shadow-run multiple models
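
The multi-armed bandit bullet can be illustrated with an epsilon-greedy router, a standard bandit strategy for gradual rollouts; this is a generic sketch, not Cubonacci's implementation, and "reward" stands for whatever online success metric the deployment reports back.

    # Generic epsilon-greedy routing between deployed model versions.
    import random

    class BanditRouter:
        def __init__(self, models, epsilon=0.1):
            self.models = models                      # version name -> model
            self.epsilon = epsilon
            self.rewards = {version: [] for version in models}

        def choose(self):
            # Explore a random version occasionally (or while no feedback
            # exists yet); otherwise exploit the best average reward so far.
            if random.random() < self.epsilon or not any(self.rewards.values()):
                return random.choice(list(self.models))
            return max(
                self.rewards,
                key=lambda v: sum(self.rewards[v]) / max(len(self.rewards[v]), 1),
            )

        def record(self, version, reward):
            self.rewards[version].append(reward)

Shadow running fits the same shape: shadow versions receive every request and their predictions are logged for comparison, while only the chosen version's response is returned to the caller.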

Maintenance

  • Scheduled retraining
  • Performance monitoring
  • Alerting
  • Smart input/output validation (sketched after this list)
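
As a sketch of what input validation can mean in practice, the example below learns per-feature bounds from the training data and alerts when a serving-time input falls outside them. This is a generic drift guard, not Cubonacci's actual validation logic; the alert callable is a placeholder for whatever alerting channel is configured.

    # Learn per-feature min/max during training; check every request against it.
    def learn_bounds(X_train):
        return [(min(col), max(col)) for col in zip(*X_train)]

    def validate(x, bounds, alert):
        # Flag any feature that falls outside its training-time range.
        for i, (lo, hi) in enumerate(bounds):
            if not lo <= x[i] <= hi:
                alert(f"feature {i} out of range: {x[i]!r} not in [{lo}, {hi}]")
                return False
        return True

    bounds = learn_bounds([[0.1, 5], [0.9, 12], [0.4, 8]])
    validate([0.5, 30], bounds, alert=print)   # alerts: feature 1 out of range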

Taking data science to the next level

Focus. Agility. Confidence.