Deploying an MLFlow Remote Server with Docker, S3 and SQL

MLFlow is an open-source platform for managing your machine learning lifecycle. You can either run MLFlow locally on your system, or host an MLFlow Tracking server, which allows multiple people to log models and store them remotely in a model repository for quick deployment and reuse.

In this article, I’ll show you how to deploy MLFlow on a remote server using Docker, an S3-compatible object store of your choice (MinIO or Ceph), and a SQL backend (SQLite or MySQL).
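As a quick smoke test for such a setup, here is a minimal client-side sketch. The specifics are all assumptions about your particular deployment: the tracking server at http://localhost:5000, the MinIO endpoint at http://localhost:9000, the credentials, and the experiment name are placeholders for your own values.

```python
import os
import mlflow

# Assumed endpoints and credentials -- replace with your deployment's values.
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://localhost:9000"  # MinIO/Ceph gateway
os.environ["AWS_ACCESS_KEY_ID"] = "minio-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minio-secret-key"

# Point the client at the remote tracking server.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("remote-server-smoke-test")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.27)
    # The artifact is uploaded to whatever S3 bucket the server was
    # started with as its artifact root.
    with open("notes.txt", "w") as f:
        f.write("logged from a remote client")
    mlflow.log_artifact("notes.txt")
```

If the run, its metrics, and the artifact all show up in the MLFlow UI, the SQL backend and the S3 artifact store are both wired up correctly.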

Read more
Deploying a Spark Model with REST Inference API

Deploying a machine learning model built with Apache Spark isn’t as straightforward as deploying a PyTorch or TF model, especially when you’re planning on having a REST API for inference requests. One way of going about it is to use MLeap, but that would require modifications to the training code, as MLeap relies on its own serialization.

The best approach that I’ve found is using Openscoring and PMML (Predictive Model Markup Language). PMML is an XML-based markup language that stores your predictive model, and Openscoring is used to create the inference REST API. The steps for doing so are as follows:
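The full steps are behind the link below; as a rough preview, here is a hedged sketch of the export-and-serve flow. It assumes a fitted pyspark.ml PipelineModel, the pyspark2pmml package with the JPMML-SparkML jar on the Spark classpath, and an Openscoring server on its default http://localhost:8080/openscoring base URL; the model id SparkModel and the input field names are made up for illustration.

```python
import requests
from pyspark2pmml import PMMLBuilder

# Assumes `spark` is an active SparkSession (launched with the JPMML-SparkML
# jar on its classpath), `df` is the training DataFrame, and `pipeline_model`
# is a fitted pyspark.ml PipelineModel.
PMMLBuilder(spark.sparkContext, df, pipeline_model).buildFile("model.pmml")

OPENSCORING = "http://localhost:8080/openscoring"  # assumed default base URL

# Deploy: PUT the PMML document under a model id of your choosing.
with open("model.pmml", "rb") as f:
    requests.put(f"{OPENSCORING}/model/SparkModel",
                 headers={"Content-Type": "application/xml"}, data=f)

# Infer: POST arguments keyed by the model's input field names
# (the names below are placeholders).
response = requests.post(f"{OPENSCORING}/model/SparkModel",
                         json={"id": "record-001",
                               "arguments": {"sepal_length": 5.1,
                                             "sepal_width": 3.5,
                                             "petal_length": 1.4,
                                             "petal_width": 0.2}})
print(response.json()["results"])
```

A nice property of this route is that the serving side never needs Spark at all: Openscoring evaluates the PMML document on a plain JVM, so inference doesn’t pay for a SparkContext.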

Read more