plus leadership, testing and a guide to monitoring tools
Apologies, updated links included.

He’s back! And he’s here to make sure you’re RAG ready!

Not only has Rahul Parundekar developed the intro course to Q&A systems, but he’s now got a RAG project course on offer.

And get this - it’s FREE! He’s offering mentorship and support for you to build your own Retrieval Augmented Generation project. FOR FREE!

But places are limited, so you’ll have to be quick.

And don't forget to tell us what you want from the Learning Platform here!

All treat, no trick! Happy Halloween! 🎃
MLOps Community Podcast
The Future of Feature Stores and Platforms // Mike Del Balso & Josh Wills // MLOps Podcast #186

Everyone loves a deal, and we’ve got one for you here!

This podcast isn’t just 2 for 1 on guests, it’s kind of a 2 for 1 podcast as a whole!

The original plan was to have Mike as co-host and chat with Josh, but that went out the window when Josh didn’t show!

So in the first half I have a great chat with Mike: we go into his experience working at Google and Uber, and how that inspired his work on feature infrastructure and the development of Tecton as a feature platform.

I mean, that’s a whole podcast right there, right?

But then boom, Josh joins us, and they go deep into what the future might hold for feature stores and platforms and what they’d like to see. That’s another podcast right in this one! What a bargain!

All this and it’s not even Black Friday yet!


Lessons on Data Teams Leadership // Luigi Patruno // MLOps Podcast #185

Move over LLMs!
A new beast is in town!
LLLMs.
Don’t worry, it doesn’t stand for Ludicrously Large Language Models or even Larger Large Language Models.

It stands for: Luigi’s Leadership Language Model.

He gets into his career from graduation to management and what he’s learnt along the way, including balancing experimentation and confidence, viewing the data team as a profit center, and the significance of buy-in and commitment from stakeholders.

And it’s that last point that leads into the LLLM; to paraphrase:
"Your CEO doesn't care about machine learning, so you have to speak to them in a language that they care about. Figure out what that language is, and you will be much more successful."

Sound advice to live by, until someone develops LLLLMs, whatever they may be!

Job of the week

Senior Applied Scientist // Ramp (New York, Miami, remote)
Founded in 2019, Ramp powers the fastest-growing corporate card and bill payment platform in America, and enables tens of billions of dollars in purchases each year. We seek a Senior Applied Science leader to define the analytical frameworks and strategic roadmaps for growth optimization.

You will:
  • Employ statistical, machine learning, and econometric models on large datasets to evaluate channel performance
  • Build attribution models and investment frameworks to inform future investments
  • Contribute to the culture of Ramp’s data team by influencing processes, tools, and systems that will allow us to make better decisions in a scalable way
MLOps Community IRL Meetup
RAG: Hosting LLMs in the Enterprise // Bernard Camus // IRL Meetup Bristol #51

In this brief look at the theory and practice of RAG architecture for large language models, Bernard shares insights on the theory behind LLMs and their architecture.

It’s a good grounding in the theory before he discusses real-world use cases, their benefits and potential problems, and how RAG can be used to mitigate some of those issues, finishing with a quick comparison of self-hosting vs API-hosted models.
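If you’re wondering what that looks like in code, here’s a minimal sketch of the RAG loop the talk describes: retrieve the most relevant documents for a question, then stuff them into the prompt before calling the model. Everything in it (the toy corpus, the bag-of-words retriever, and the stubbed generate() call) is illustrative, not taken from Bernard’s talk.

```python
# Minimal RAG sketch: retrieve relevant documents, prepend them to the prompt,
# then call an LLM. The corpus, scorer, and generate() stub are placeholders.
from collections import Counter
import math

DOCS = [
    "Tecton is a feature platform for production machine learning.",
    "Retrieval Augmented Generation grounds LLM answers in your own documents.",
    "Self-hosting an LLM gives you control over data, at the cost of ops work.",
]

def _vec(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use an embedding model.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = _vec(query)
    return sorted(DOCS, key=lambda d: _cosine(q, _vec(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for a call to a self-hosted or API-hosted LLM.
    return f"[LLM answer based on prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(rag_answer("Why would a team self-host an LLM?"))
```

The self-hosting vs API-hosted choice Bernard compares only changes what sits behind generate(); the retrieval step stays the same either way.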

Watch it here
Live Event - Systematic ML Testing in partnership with Kolena
Struggling with untrustworthy AI models is a common challenge that will continue as AI becomes even more integrated into our daily lives. Learn how rigorous unit testing for machine learning dramatically increases trust in your AI/ML systems, and how you can attain deeper insights into ML model performance in a fraction of the time.

What we’ll cover:
  • Building an end-to-end model quality management process
  • How to standardize the product test coverage and release process
  • Model robustness, fairness, and biases
  • A blueprint to build an infrastructure for rigorous model testing
  • Live Q&A to get your answers in real time

REGISTER NOW!
Hidden Gem
There are many tools to monitor and report on the performance of models, but who is monitoring the monitoring tools?

This post is a useful guide to ML model monitoring and observability tools, and highlights that there is significant opportunity to expand and diversify the range of products in this field.

Many thanks to Alex Irina Sandu for the contribution.

Looking for a job?
Add your profile to our jobs board here
IRL Meetups
Oslo - November 1
Luxembourg - November 7
San Francisco - November 8
Amsterdam - November 14

Thanks for reading. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on X.
