Why do we need MLOps
The year is about to wrap up, and it's been a good one. Let us know what you've enjoyed most about the community in Slack.
 
Coffee Session
What is MLOps
Niklas Kühl, Data Science Managing Consultant at IBM Consulting and an author of the famous MLOps paper, shared his knowledge of what it means to do MLOps.

MLOps MLOps!!!!
In the early years, industry expectations for applying ML in software solutions were low because ML was largely research-focused. Research was driven mostly toward developing new architectures and algorithms, and less toward their application.

In today's world, expectations for ML applications are quite high. Realistic expectations for ML products must be carefully spelled out from the beginning.

The ultimate goal of MLOps is to ensure that every ML endeavor results in an ML product. Achieving this requires many principles and roles to interface with one another.

ML Budgeting
When implementing ML at scale, optimizing both engineering and infrastructure costs is one of the most significant challenges.

This is largely a result of the many different roles required to excel at MLOps.

At IBM, they build ML products in a way that is productive and scalable from day zero, which makes it possible to estimate the value of an ML product at very early stages. This makes the product tangible to customers.

Another trick is to demo the shiny parts of the product early, to convey what it is trying to achieve and get the customer hooked.

Luckily, there are tools that help carry out rapid development cycles. This helps secure the budget needed to build a successful MLOps team and ML product.
 
Past Meetup
Vertex AI Workshop
Sascha Heyer, a Senior ML Engineer at DoiT, was the guest host at this meetup. He talked about how Vertex AI works and how it is used at every stage of the ML pipeline.

Why Vertex AI
For most ML projects, the journey to putting a model into production starts in a Jupyter notebook environment. This is where you implement a proof of concept (POC) of your ML solution.

Over time, the POC code needs to be converted into standard, robust code for your solution.

Vertex AI automatically provisions and scales the infrastructure to train your models. Training can be done at any scale on powerful machine types with accelerators such as GPUs and TPUs.

Vertex AI Training
The training code needs to be packaged with all its dependencies to run training jobs on Vertex AI.

The two main ways to package the training code and its dependencies are custom containers and a Python source distribution.

The custom container approach is very flexible. It typically involves packaging the training code and dependencies into a Docker container and uploading it to the Google Container Registry.

In the Python source distribution approach, the training code and dependencies are uploaded to a Google Cloud Storage location. With this method, it is harder to track code versions properly.
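As a rough illustration of the two packaging routes, the CLI steps might look like the sketch below. The project IDs, image names, module names, and bucket paths are placeholders (not from the talk), and the exact flags depend on your setup:

```shell
# Custom container route: bake the training code and its dependencies
# into a Docker image, push it, and point a Vertex AI custom job at it.
docker build -t gcr.io/my-project/trainer:latest .
docker push gcr.io/my-project/trainer:latest

gcloud ai custom-jobs create \
  --region=us-central1 \
  --display-name=my-training-job \
  --worker-pool-spec=machine-type=n1-standard-8,replica-count=1,container-image-uri=gcr.io/my-project/trainer:latest

# Python source distribution route: build an sdist and stage it in
# Cloud Storage, where Vertex AI can pick it up for a prebuilt container.
python setup.py sdist --formats=gztar
gsutil cp dist/trainer-0.1.tar.gz gs://my-bucket/staging/trainer-0.1.tar.gz
```

This is a sketch of the general flow, not a copy-paste recipe; see the Vertex AI docs for the full worker-pool and accelerator options.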


 
Blog post
Redis Vector Search
Tyler Hutcherson and Samuel Partee wrote this blog.

November 7th marked the official close of the first Redis Vector Similarity Search (VSS) Engineering Lab using the arXiv scholarly papers dataset. This lab aimed to expand the envelope of semantic search and intelligent document processing applications.

Not too long ago, Sam Partee covered vector search basics, and Tyler Hutcherson explored intelligent document search in a series of posts dedicated to the topic. While vector similarity search has proven itself at companies like Google, Microsoft, Facebook, and Amazon, Redis is focused on bringing it to the masses.
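To make the core idea concrete, here is a toy sketch of vector similarity search in plain NumPy. The "embeddings" are made up for illustration; Redis VSS does this at scale, with indexing, over real model embeddings:

```python
import numpy as np

def cosine_similarity(query, vectors):
    # Normalize the query and each document vector; the dot product
    # of unit vectors is the cosine similarity for each document.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return v @ q

# Toy "document embeddings" (in practice these come from an embedding model).
docs = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0])

scores = cosine_similarity(query, docs)
best = int(np.argmax(scores))  # index of the most similar document
```

Semantic search is this ranking step applied to embeddings of text, so that documents near the query in vector space are returned first.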


In this blog, we highlight how three solutions from the fantastic hackathon submissions used vector similarity search.
 
We Have Jobs!!
There is an official MLOps community jobs board now. Post a job and get featured in this newsletter!
IRL Meetups
Luxembourg — December 15
Bristol — December 16

Los Angeles — December 17
Toronto — January 31

Thanks for reading. This issue was written by Nwoke Tochukwu and edited by Demetrios Brinkmann and Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.


