plus the impact of LLMs on the tech stack, and a declarative feature engineering framework
Ms. Lockwood, I hope you’re proud.

She was my 3rd grade teacher, and always told me I needed to apply myself.

And that’s just what I’m doing on Nov 14 when I’ll be hosting the apply(ops) virtual conference.

There’s a load of great talks lined up that will focus on platforms and architectures for production machine learning projects.

Check out the full agenda here.

MLOps Community Podcast
Building Effective Products with GenAI // Faizaan Charania // MLOps Podcast #187

Some kids sit in school and daydream of being a famous film star.

Not Faizaan though, he sat on film sets and dreamed of going to school - he knew the real superstars are in machine learning!

At the grand age of 14, he turned his back on acting to pursue a life in tech.

So we dig into his career a little and he shares his experience of the dynamic between product managers and machine learning teams, stressing the value of product-oriented ML engineers. We also delve into assessing AI-driven experiences, using Large Language Models for quick testing and market entry, and examining the pros and cons of generative AI.

Most valuable though are the practical tips he offers based on his LinkedIn AI implementation experience, with transparency and trust in AI technologies as recurrent themes.

He’s well on his way to getting a star on the MLOps Walk of Fame!

In partnership with Tecton: Scaling ML Engineering at Flo Health
Flo Health is the maker of the most popular women’s health app in the world, with over 56 million monthly users. At Flo, ML is an engineering discipline — as a quickly growing company, their ML team faces significant operational challenges, such as a disjointed approach to ML, with systems spread across the company.

Join Flo Health and Tecton for this webinar on Thursday, December 7, to learn why they implemented a centralized ML platform and the myriad benefits realized, including enabling the team to:
  • Build and use the same pipelines for training and inference of their ML models
  • Leverage built-in materializations for the online store
  • Generate point-in-time correct joins for dataset collection from offline storage
  • Easily share features across teams and projects
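The point-in-time correct join in that list is worth a quick illustration. The idea is that each training label should only see feature values that existed at or before the label’s timestamp, so no future data leaks into the training set. Here is a minimal sketch using pandas (the column names and data are made up for illustration; Tecton’s actual implementation works differently at scale):

```python
# Sketch: a point-in-time correct join with pandas merge_asof.
# Each label row is joined only to the latest feature row observed
# at or before the label's timestamp, preventing future-data leakage.
import pandas as pd

labels = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_time": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-01-10"]),
    "label": [0, 1, 0],
})

features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "feature_time": pd.to_datetime(["2023-01-01", "2023-01-15", "2023-01-08"]),
    "sessions_7d": [3, 9, 5],
})

# merge_asof requires both frames sorted on their time keys
labels = labels.sort_values("event_time")
features = features.sort_values("feature_time")

training_set = pd.merge_asof(
    labels,
    features,
    left_on="event_time",
    right_on="feature_time",
    by="user_id",
    direction="backward",  # only match feature rows at or before event_time
)
print(training_set[["user_id", "event_time", "sessions_7d"]])
```

Note that the user’s Jan 20 label picks up the Jan 15 feature value, not an earlier or later one.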

Register for the webinar here
MLOps Community Podcast
Impact of LLMs on the Tech Stack and Product Development // Anand Das // MLOps Podcast #188

As the weekend comes around, you may be thinking about treating yourself to some takeaway.

But what to have? Pizza? Jambalaya? Shawarma? Tacos? Bao buns? So many choices! How to choose?

Why not go for all of them? That’s the approach of Anand and the rest of the folks at Bito! Well, at least when it comes to which LLM to use!

To be fair, they don’t use them all at once. He talks about how they look at which model they’re going to hit, and also the load balancer they've created to counter the rate limits.

He also talks about guardrails, making sure the prompts are right, giving as much context as you can, and weighing not just the cost of hosting locally but the security aspect of it too.

There are also some pearls of wisdom about how hallucinations are a necessary evil. The same answer could be wrong for one person, but right for another - just like a takeaway order, I suppose.

MLOps Community IRL Meetup
ML @ BT: Evolution, Lessons Learned and Looking Ahead // Manish Sharma // IRL Meetup #52 Bristol

As well as sharing a lovely picture of himself and a skeleton, in this talk Manish Sharma shares BT's journey in creating a scalable machine learning product, emphasizing stakeholder input and ethical considerations. Their AI Center of Excellence, featuring a data science unit and machine learning platform, plays a pivotal role.

He talks about how they have integrated Google Cloud services into their infrastructure to address challenges related to online serving of ML models, and delves into the structure of their ML projects, the libraries they use, the various components of their ML pipelines, their continuous integration and continuous deployment (CI/CD) practices, and the processes they follow for deploying ML models.

Watch it here
MLOps Community Mini Summit - Nov 8
Mini Summit Meetup on Nov 8 brought to you by LatticeFlow!

Data is often touted as the new oil.

It’s fundamental to what we do, but not without its headaches.

One source of pain is dealing with unstructured data; another is dealing with biases.

While those aren’t the only issues, they are the ones being addressed in this mini-summit.

Join Ben Epstein as he hosts two talks from CTOs Pavol Bielik (LatticeFlow) & David Garnitz (VectorFlow) that will help identify model blind spots and look at experimenting with different ingestion techniques.

Register here

Job of the week

ML Platform Engineer // Salesforce (US based)

Salesforce seeks a senior platform engineer to build the next generation of high-scale, large-data, and AI-driven products and features. The role encompasses architecture, design, implementation, and testing to ensure we build products right and release them with high quality. As part of the C360 Intelligent Automation team, you will be responsible for paving the way towards automation services orchestration platforms powered by AI.

Required qualifications include:
  • 6+ years of Platform Engineering working with public cloud computing architecture
  • Strong programming skills in Java, JavaScript, C++, or other object-oriented languages
  • Hands-on experience with performance measurement, evaluation, and optimization
  • Experience with CI/CD and microservice platforms
Hidden Gem
Chronon — A Declarative Feature Engineering Framework
Building features for your ML models is not an easy task, involving significant engineering and infrastructure efforts. To simplify this process, Airbnb developed an "easy-to-use" internal feature store with the following functionalities:
  • Centralized Data Computation: ML practitioners can define and compute data centrally for both training and production, ensuring consistency.
  • Versatile Data Ingestion: Diverse data sources like event streams, warehouse tables, and snapshots are supported.
  • Real-Time and Batch Processing: Flexibility to produce feature data in real-time for online services or in batches for training datasets.
  • Python API for Time-Based Aggregations: Robust Python API that treats time-based aggregation and windowing as first-class citizens, along with SQL-like transformations.
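To make the last bullet concrete, here is what a windowed, time-based aggregation means in practice, expressed with plain pandas (this is illustrative only, not Chronon’s actual API; the event data is invented). A framework like Chronon lets you declare something like “sum of purchases over the last 7 days” once and reuse it consistently for both training and serving:

```python
# Illustrative only (not Chronon's API): a time-based windowed
# aggregation. The window is keyed on event time, not row count.
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime(["2023-03-01", "2023-03-04", "2023-03-09", "2023-03-10"]),
    "purchase_amount": [10.0, 20.0, 5.0, 40.0],
}).set_index("ts")

# 7-day rolling sum over event time: each row aggregates the
# purchases in the preceding 7-day window ending at that timestamp
events["purchases_7d"] = events["purchase_amount"].rolling("7D").sum()
print(events)
```

The value of declaring this once in a framework is that the same definition drives both the batch backfill for training data and the streaming update path for online serving, which is exactly the consistency guarantee the first bullet describes.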

The article drills down into the specifics of each component. If you're exploring the world of feature stores, this read is a great resource to deepen your understanding.


Many thanks to Mohamed Sadek for the contribution.

Looking for a job?
Add your profile to our jobs board here
IRL Meetups
Luxembourg - November 7
San Francisco - November 8

Amsterdam - November 14
San Francisco - November 15
London - November 16
Scotland - November 16

Thanks for reading. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on X. The MLOps Community newsletter is edited by Jessica Rudd.
