GPUs, CPUs and how to manage all that fun

The Ask Me Anything with the Fiddler team was awesome! Thank you to everyone who participated! Check out the artifacts we left behind in the #ask-me-anything channel!

Keep forgetting about our events? Stay up to date with what's going on in the community by subscribing to our
public calendar.

Coffee Session
Real-Time at Uber
Last week we released our system design review of Uber's exactly-once real-time ML ads platform. This week, one of the main authors of the Uber blog post joined us to tell the story behind it.

Why real-time? Jacob worked at Shopify on a recsys for merchant ads. From that experience, he learned to think hard about whether real-time complexity is actually needed: unless the new data arriving every millisecond will genuinely change the prediction, it probably isn't. That said, for some use cases, having real-time capabilities can open up a world of potential products to build on top of the platform. One way this could play out is giving merchants notifications when ads aren't performing as well as in previous campaigns.

Surprisingly (or not), many of the constraints came from within Uber itself. Could Jacob and team have used a different messaging bus than Kafka? Technically, yes, but considering Uber is one of the biggest contributors to the Kafka project, it wouldn't have made sense.

Here is a tweet with a few other takeaways, or click below to check out the full episode.

Past Meetup
The Role of Resource Management in MLOps
Fractionalizing GPUs? Scaling GPUs up and down as you see fit for inference and training? These may seem like pipe dreams. Ronen, CTO of Run:ai, along with Gijsbest, came to the meetup to show us how this can actually be possible.

Allocating the right resources can be a major issue, especially if you are a data scientist. Understanding how to set up a system at the lower levels is not for the faint of heart. Ronen talked about the work the team has put into making that process easier for everyone involved, and some of the main challenges that came with it.

Bonus Round - We also just published a blog post with the Run:ai team on why you should use GPUs for your end-to-end data science workflows – not just for model training and inference, but also for ETL jobs.
Current Meetup
DataOps is a Software Engineering Challenge
This week we'll have with us Micha Kunze, Lead Data Engineer at Maersk.

Micha's team delivers millions of forecasts a day for the global operations of one of the largest ocean logistics companies in the world. They need systems that are reliable yet able to change quickly.

In this talk, Micha will share how they achieved this by following simple software engineering practices, and how you can apply them in your own work!

Join us today at 9am PST/5pm BST by clicking the button below!
Sponsored Post
8 Best Practices to Improve Your Model Performance
Model Performance Management (MPM) serves as the centralized control system at the heart of ML workflows, tracking and monitoring model performance at all stages. Powered by Explainable AI, MPM is essential for model risk management, model governance, and standardizing MLOps.

Improve your model performance with these 8 best practices used by industry leaders.
We Have Jobs!!
There is now an official MLOps Community jobs board. Post a job and get featured in this newsletter!
Best of Slack
Best of Slack is its own newsletter now. Sign up for it here.
See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.
