Canonical Voices

Posts tagged with 'machine learning'

Carmine Rimi

This article is the first in a series of machine learning articles focusing on model serving. I assume you’re reading this article because you’re excited about machine learning and quite possibly Kubeflow as well. You might have done some model training and are now trying to understand how to serve those models in production. There are many ways to serve a trained model, both within Kubeflow and outside of it. This post should help you explore some of those alternatives and what to consider when choosing between them.

Here’s a summary of what we’ll explore in this article:

  • What is model serving?
  • How do applications interact with models?
  • What is the Kubeflow approach to model serving?
  • Model serving examples
  • Developer Setup

As the title suggests, this article is only the first part in a series of posts. Sign up to the newsletter to be notified of the next post in this series, as well as technical posts discussing:

  • TensorFlow Serving
  • TensorRT Serving
  • TensorFlow.js
  • Seldon Core
  • Kubeflow Serving

What is model serving?

In simple terms, it is making a trained model available to other software components. How you’ve arrived at a trained model – what framework you used to produce it – will play a role in what options are available to you. And you may not have produced the trained model yourself – there are open source, pre-trained models that can be used today, models that were trained on data that you may not have access to. BERT is one example of a publicly available pre-trained model. We’ll discuss BERT in more detail in a future article.

How do applications interact with models?

Probably the most immediate concern is determining how you want to integrate the model into your application. Should it be embedded? Should other systems be able to access it? Is scaling a concern?

For embedded model serving, the model can be compiled into the application and accessed via native function calls. This could be done within a Python application, or it could be done from within a JavaScript application in a browser.
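To make the embedded approach concrete, here is a minimal sketch in Python. It assumes a TensorFlow model has already been trained and exported to a local ./model directory; the path, input shape, and values are placeholders for illustration, not taken from a specific example:

    # Minimal sketch of embedded model serving: the application loads the
    # trained model directly and calls it via native function calls.
    # Assumes a Keras/TensorFlow model was previously saved to ./model.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("./model")

    # A single input row; the shape and values are purely illustrative.
    sample = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)

    prediction = model.predict(sample)
    print(prediction)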

For API model serving – where others can access your model dynamically – the most common approach is to put a REST API in front of the model. Most of the popular frameworks, like TensorFlow, come with native mechanisms for this, and the examples linked below are a good place to start. But API model serving creates another concern – does it need to scale? For instance, assume your model can handle 100 requests a second. Is that enough? Could there be a spike of 5,000 requests a second? If so, you need to think about scaling the model.
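To make the API pattern concrete, here is a rough sketch of a client calling a model exposed over TensorFlow Serving’s REST API. The model name my_model, the port 8501, and the input values are assumptions for illustration:

    # Sketch of a client hitting a REST endpoint placed in front of a model.
    # Assumes TensorFlow Serving is running locally with a model named
    # "my_model" on its default REST port, 8501.
    import json
    import requests

    payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}  # illustrative input

    response = requests.post(
        "http://localhost:8501/v1/models/my_model:predict",
        data=json.dumps(payload),
    )
    print(response.json()["predictions"])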

What is the Kubeflow approach to model serving?

Fortunately, there are a few frameworks included with Kubeflow that help accomplish both tasks – putting an API in front of your model and allowing it to scale based on demand. The Kubeflow community has included a couple of examples using different frameworks – a TensorFlow Serving example and a Seldon example. The community is also in the middle of creating a new, generic approach to model serving. This new approach is still in flight, and we will write more about it once it is closer to release.

Model serving examples

Using a crawl, walk, run approach, one of the best next steps is to work through some of the examples below so that you can get grounded in the manual approach to serving models. Once you have a low-level understanding of how these pieces work, try the more automated approach with Kubeflow. In summary, if you are just getting started, I suggest these steps (a small sketch of the first step follows the list):

  1. Basic TensorFlow example 
  2. REST TensorFlow example
  3. Kubernetes TensorFlow example
  4. Kubeflow TensorFlow example
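To give a flavour of what the first step involves, the sketch below trains a trivial Keras model and exports it in the SavedModel layout (a versioned directory) that the later serving steps expect. The data and architecture are placeholders, not the contents of the linked example:

    # Sketch of step 1: train a tiny model and export it for serving.
    import numpy as np
    import tensorflow as tf

    # Toy data standing in for a real training set.
    x = np.random.rand(100, 4).astype(np.float32)
    y = (x.sum(axis=1) > 2.0).astype(np.float32)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x, y, epochs=5, verbose=0)

    # Export as a SavedModel under a numeric version directory,
    # the layout TensorFlow Serving looks for.
    model.save("export/my_model/1")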

Developer Setup

An easy way to explore the examples above is to get access to the Ubuntu platform. This starts with the Ubuntu operating system. If you’re on a Windows or a Mac desktop, you can start with Multipass – a native application for Windows, Mac, and Linux that will let you create a virtual machine. Here’s a complete list of software that you are free to use:

  • Multipass – A mini-cloud on your Mac, Windows or Linux workstation
  • MicroK8s – A single package of K8s that installs on Linux
  • Kubeflow – The Machine Learning Toolkit for Kubernetes

Michael Hall

Late last year Amazon introduced a new EC2 image customized for Machine Learning (ML) workloads. To make things easier for data scientists and researchers, Amazon worked on including a selection of ML libraries in these images so they wouldn’t have to go through the process of downloading and installing them (and oftentimes building them) themselves.

But while this saved work for the researchers, it was no small task for Amazon’s engineers. To keep offering the latest version of these libraries, they had to repeat this work every time there was a new release, which was quite often for some of them. Worst of all, they didn’t have a ready-made way to update those libraries on instances that were already running!

By this time they’d heard about Snaps and the work we’ve been doing with them in the cloud, so they asked if Snaps might be a solution to their problems. Normally we wouldn’t Snap libraries like this; we would encourage applications to bundle them into their own Snap package. But these libraries had an unusual use case: the applications that needed them weren’t meant to be distributed. Instead, the application would exist to analyze a specific data set for a specific person. So as odd as it may sound, the application developer was the end user here, and the library was the end product, which made it fit into the Snap use case.

To get them started I worked on developing a proof of concept based on MXNet, one of their most used ML libraries. The source code for it is part C++, part Python, and Snapcraft makes working with both together a breeze, even with the extra preparation steps needed by MXNet’s build instructions. My snapcraft.yaml could first compile the core library and then build the Python modules that wrap it, pulling in dependencies from the Ubuntu archives and PyPI as needed.

This was all that was needed to provide a consumable Snap package for MXNet. After installing it you would just need to add the snap’s path to your LD_LIBRARY_PATH and PYTHONPATH environment variables so it would be found, but after that everything Just Worked! For an added convenience I provided a python binary in the snap, wrapped in a script that would set these environment variables automatically, so any external code that needed to use MXNet from the snap could simply be called with /snap/bin/mxnet.python rather than /usr/bin/python (or, rather, just mxnet.python because /snap/bin/ is already in PATH).
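As a quick illustration, a small script like the following (a hypothetical example, not part of the snap itself) could be run with /snap/bin/mxnet.python instead of the system Python, and the MXNet library bundled in the snap would be picked up automatically:

    # Quick check that MXNet from the snap is importable and working.
    import mxnet as mx

    a = mx.nd.array([1, 2, 3])
    b = mx.nd.array([4, 5, 6])

    # Element-wise sum computed with MXNet's NDArray API.
    print((a + b).asnumpy())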

I’m now working with upstream MXNet to get them building regular releases of this snap package, to make it available to Amazon’s users and anyone else. The Amazon team is also seeking similar snap packages for their other ML libraries. If you are a user of or contributor to any of these libraries, and you want to make it easier than ever for people to get the latest and greatest versions of them, let’s get together and make it happen! My MXNet example linked above should give you a good starting point, and we’re always happy to help you with your snapcraft.yaml in #snapcraft on rocket.ubuntu.com.

If you’re just curious to try it out yourself, you can download my snap and then follow along with the MXNet tutorial, using the above-mentioned mxnet.python for your interactive Python shell.
