It has been a while since I last compiled an update, almost two months in fact, since I've been on the road a lot. The upside is that there is now a lot to report on: from model interoperability discussions to self-paced course material to exciting (new) tooling to community updates. Without further ado, let's get to it!
So, a new player in town! From this article we learn that Polyaxon:
… is used for building, training, and monitoring large-scale deep learning applications.
OK, that sounds interesting. It does indeed support a range of libraries, from TensorFlow to scikit-learn. I have yet to give it a spin myself, and I'd be interested to hear if you have any hands-on stories to share with the community. Check it out via polyaxon.com.
Model Interoperability & Interchange
One oftentimes challenging task is to export a model from one environment and then import that very model into a different one. You run into this if, for example, your data scientists use R to create a model while your app uses something Python-based like TensorFlow.
I stumbled upon a really detailed and helpful article on model interoperability that explains how trained models can be persisted and reused across machine learning libraries and environments.
Two formats stood out for me: the Open Neural Network Exchange (ONNX) format, backed by Microsoft and Facebook, as well as Apple's Core ML. It's likely too early to hope for standardization on a certain format, but I suppose keeping an eye on those two wouldn't hurt.
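To make the interchange idea concrete: formats like ONNX define a rich, library-neutral graph representation, but the core principle is simply "serialize the learned parameters in a format any runtime can read". Here's a deliberately minimal sketch of that principle using plain JSON and a made-up toy linear model (this is not ONNX, and all names here are illustrative):

```python
import json

# A toy "model": linear coefficients learned in one environment,
# e.g. by your data scientists in R or scikit-learn.
trained = {"coef": [0.5, -1.2], "intercept": 3.0}

# Export: serialize to a library-neutral format.
blob = json.dumps(trained)

# Import in a different environment and rebuild the predictor.
def predict(model_json, features):
    model = json.loads(model_json)
    return sum(c * x for c, x in zip(model["coef"], features)) + model["intercept"]

print(predict(blob, [2.0, 1.0]))  # 0.5*2.0 - 1.2*1.0 + 3.0 = 2.8
```

Real interchange formats go far beyond this, of course: they also capture the computation graph, operator semantics, and tensor types, which is exactly what makes ONNX and Core ML interesting.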
Tooling & Algorithms
- Sarasra/models … a collection of TensorFlow models and examples
- inc0/video_detection … a hands-on walkthrough of a TensorFlow model for object detection in video streams
- deepmipt/DeepPavlov … an open source library for building end-to-end dialog systems and training chatbots
- MXNet … a training and inference framework with a Gluon API by Amazon
- Binder … turns a GitHub repo into a collection of interactive Jupyter notebooks
- Reptile … a meta-learning algorithm that works by repeatedly sampling a task and performing stochastic gradient descent on it
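Reptile's core loop is surprisingly simple: sample a task, run a few steps of SGD on it, then nudge the meta-parameters toward the task-adapted parameters. Here's a toy sketch of that loop (my own illustration, not the reference implementation) where the "model" is a single scalar parameter and each task asks it to match a random target under a squared loss:

```python
import random

random.seed(0)

def inner_sgd(theta, target, steps=5, lr=0.25):
    """Plain SGD on one task; gradient of (theta - t)^2 is 2*(theta - t)."""
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)
    return theta

theta = 0.0    # meta-parameters (the initialization Reptile learns)
epsilon = 0.1  # outer (meta) step size

for _ in range(1000):
    target = random.uniform(0.0, 10.0)    # sample a task
    adapted = inner_sgd(theta, target)    # SGD on that task
    theta += epsilon * (adapted - theta)  # Reptile update: move toward adapted weights

# theta ends up near 5.0, the mean task target: a good initialization
# from which a few SGD steps can adapt to any newly sampled task.
print(theta)
```

The trick is that the outer update amounts to an averaging over task-adapted solutions, which is why the learned initialization adapts quickly to new tasks.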
Articles & Videos
- Setting up a GPU Enabled Kubernetes for Deep Learning by Steven Cheng of Buffer
- Continuous Delivery to Kubernetes for Machine Learning by Anita Buehrle via DZone
- Simplifying machine learning on open hybrid clouds with Kubeflow by Kirat Pandya, Hybrid ML Technical Lead, Google Cloud
- Let's Flow within Kubeflow by Ala Raddaoui of the Intel AI group
- From the CNCF webinar video series: Machine Learning in the Datacenter with Nick Chase of Mirantis
The awesome Ce Gao, part of the Kubeflow team, maintains a Kubeflow Weekly on GitHub. Thanks for this!
Google is increasingly investing in the machine learning space: on the one hand, you can now learn about it (at a high level) via
ai.google, and, for aspiring practitioners, they're now offering an online machine learning crash course. Well done :)
Great use case here and thanks for sharing this: How Booking.com Uses Kubernetes for Machine Learning.
We recently had an OpenShift Commons machine learning briefing where I had the opportunity to provide an intro to KAML-D, a collaborative machine learning workbench leveraging TensorFlow, JupyterLab, dotmesh, Kubernetes, and many other open source components. You can also check out the design, if you're interested.
Header photo by Quino Al / Unsplash