
Yep, we back

Past Meetup
The Million Dollar Problem
We all know someone who experienced (or experienced firsthand) how Covid affected models in production, with changes in literally everything, and it showed us the importance of model monitoring.

During this meetup, the Loka Team took us through the typical steps in a monitoring system, providing an introduction to several managed and open-source solutions, including SageMaker Model Monitor and Evidently AI. Each tooling breakdown was accompanied by a hands-on demo.
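The core idea behind the drift checks these tools automate can be sketched in a few lines. Below is a Population Stability Index (PSI) check, a common drift heuristic; note this is an illustrative sketch, not the actual implementation in SageMaker Model Monitor or Evidently AI, and the bucketing and thresholds are assumptions:

```python
import math
from collections import Counter

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.

    Rule of thumb (varies by team): PSI < 0.1 stable, 0.1-0.25
    moderate shift, > 0.25 significant shift worth investigating.
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bucket_pcts(xs):
        # Assign each value to a bucket, clamping to the reference range.
        idx = (min(max(int((x - lo) / width), 0), bins - 1) for x in xs)
        counts = Counter(idx)
        total = len(xs)
        # Smooth empty buckets to avoid log(0).
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(bins)]

    ref_pct, cur_pct = bucket_pcts(reference), bucket_pcts(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_pct, cur_pct))

# Identical distributions score zero; a shifted one scores high.
ref = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in ref]
print(psi(ref, ref))      # → 0.0
print(psi(ref, shifted))  # large value, well above the 0.25 alarm level
```

In production, `reference` would be the training (or baseline) distribution of a feature and `current` a recent window of serving traffic, with the check scheduled per feature.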

So... what are you waiting for? Let's get started!
Coffee Sessions
Wicked Data
I love when we have community members on the Coffee Session. It affirms what a unique, brilliant, and helpful group of people we have on our Slack and in the community. This time, we were joined by one of the most prolific and helpful members, Skylar Payne!

Skylar, with his past experiences at LinkedIn and Google, consistently brings thoughtful takes on how to make the right engineering tradeoff based on the team's goal. I think this is a crucial part of being an effective (not just good) engineer. We explored with Skylar some of his frameworks for how to make tradeoffs between traditional data engineering work and focusing more on modeling (as many ML professionals are prone to do).

We also explored Skylar's great new article on his blog Data Chasms about Wicked Data. The contrast between machine learning systems and analytics systems inspired him to write this piece. It's a consistent theme we talk about (like in Mike and Erik's talk), and Skylar shared his valuable opinions about what ML systems can learn from analytics systems. As an ML engineer myself, I really appreciate these discussions because they show me how the "modern data stack" is going to meet the future of ML systems, which have to use the data to create business value.

Listen to this talk to hear a very special and skilled community member drop some serious knowledge!

Till next time,
Vishnu
Current Meetup
Reasons Model Monitoring Fails
You may have noticed this is the month of monitoring talks! Right now we are on talk three of four! I never want to hear a model-gone-rogue story again! With the amount of knowledge shared in the meetups during monitoring month, there are no excuses!

This week we are talking to the CEO of Superwise, Oren Razon. With over 15 years of experience leading the development, deployment, and scaling of ML products, Oren is an expert ML practitioner specializing in MLOps tools and practices.

Previously, Oren managed machine learning activities at Intel’s ML center and operated a machine learning boutique consulting agency, helping leading tech companies such as Sisense, Gong, and AT&T build their machine learning-based products and infrastructure.

The meetup will be about The Not So Talked About Reasons Model Monitoring Fails. It’s tempting to focus on the technical and dive into drift and data anomalies, but there are other critical organizational challenges that can negatively impact your ML operations just as severely.

Oren will cover both hard organizational challenges like building signal vs noise tolerances, and soft organizational challenges like stakeholder identification and aligning expectations. We’ll also share some best practices on how to lead model observability discovery in your organization and build measurable KPIs for success.


Sub to our public calendar or click the button below to jump into the meetup on Wednesday at 9am PST/5pm BST.
Reading Group
The ML Test Score
This Friday, December 3, we will have a special guest who has hands-on experience operationalizing The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction (Breck et al. 2017) from Google Research. Link: https://lu.ma/5msbgqt4

Skylar Payne is a Machine Learning Engineer at HealthRhythms and has several years of experience as an ML Tech Lead focused on data science and engineering problems at Google and LinkedIn. Skylar also seeks to help data scientists and data engineers bridge the gap between them.

Our motivation is to better understand how we can improve reliability, reduce technical debt, and lower long-term maintenance costs in our ML projects. From this perspective, and bearing in mind that the behavior of an ML system depends strongly on data and models that cannot be fully specified a priori, we will discuss how to apply the ML Test Score.

The rubric consists of a set of 28 tests that measure how ready for production a given ML system is.

Jump on our reading group session this Friday at 6pm CET to engage in the discussion. If you are not able to come, feel free to watch it later on the MLOps Community YouTube channel, as it will be recorded.
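As a taste of the paper's scoring scheme: each of the 28 tests (seven per section) earns half a point if performed manually with results documented and a full point if automated, and the overall score is the minimum across the four sections, so the weakest area caps readiness. A minimal sketch under that reading (the example assessment is made up):

```python
# Sketch of the ML Test Score rubric from Breck et al. 2017.
# 28 tests split across four sections; manual = 0.5 pt, automated = 1.0 pt.
# The final score is the MINIMUM section score: your weakest area wins.

SECTIONS = ("data", "model", "infrastructure", "monitoring")
POINTS = {"automated": 1.0, "manual": 0.5, "not done": 0.0}

def section_score(test_statuses):
    """test_statuses: list of 'automated' / 'manual' / 'not done'."""
    return sum(POINTS[status] for status in test_statuses)

def ml_test_score(results):
    """results: dict mapping section name -> list of test statuses."""
    return min(section_score(results[s]) for s in SECTIONS)

# Hypothetical assessment: strong data/infra tests, weak monitoring.
example = {
    "data": ["automated"] * 7,
    "model": ["automated"] * 4 + ["manual"] * 3,
    "infrastructure": ["automated"] * 7,
    "monitoring": ["manual"] * 2 + ["not done"] * 5,
}
print(ml_test_score(example))  # → 1.0, capped by the monitoring section
```

The min-over-sections design is the interesting bit: automating every data test buys you nothing on the final score until monitoring catches up.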
Best of Slack
Jobs
See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.


