We're doing it - we're going LIVE! June 25th. Put it in your diary. We're having the first in-person MLOps Community conference in San Francisco. You know it's going to be quality, not just because we're doing it, but because it's in the name: AI Quality Conference. Not only is there a fantastic set of speakers from the likes of Nvidia, Google, Cruise and You.com, but there'll also be some shenanigans to give it the MLOps Community vibe. 🕊️ There are still a few days left for early bird prices, and because we love you guys so much, get an extra 15% off using the code 'Community'. 💵
Data Engineering in the Federal Sector // Shane Morris // MLOps Podcast #223
How many bands must a young man sign before he's booked all the talent? And how many lines must a young man code before they're government-compliant? The answer, my friend, is all in his grin; the answer is all in his grill. Yeah, when the guest has a set of gold teeth, you know it’s going to be an interesting listen!
From booking acts to securing government contracts, we talked about Shane's varied career and the crucial role that data has played throughout. Now a board member at DataX and a senior executive advisor at Devis, he shares insights from navigating the unique challenges of government work. These include the clearance process, FedRAMP classifications, and the need to adapt the pitch when evangelizing about new tech to upgrade legacy systems without the focus on ROI and profit lines.
We also chat about his involvement with DataX and their innovative application of Rapide, a programming language designed for heavy-duty tasks. It's ideal for IoT devices as it’s capable of managing vast amounts of sensor data through parallelization and fast data processing.
That’s right, the run-times, they are a-changing.
MLOps Community Panel - Rewind
From MVP to Production Panel // AI in Production Conference
Ahead of our in-person conference, remind yourself how great our last virtual conference was! This panel, hosted by Alex Volkov, looked at challenges when moving AI models from an MVP phase into full-scale production, like integrating new data, refining processes, and managing versions. The panel of four, Eric Peter from Databricks, Donné Stevenson from Prosus Group, Phillip Carter from Honeycomb, and Andrew Hoh from LastMile AI, shared insights on AI model evaluation, from manual checks to rule-based and advanced language methods. They also highlighted quick feedback, ongoing training, and staged releases for user testing as crucial. Almost as crucial as you watching the panel!
Upcoming MLOps Community Mini Summit
AI Innovations: The Power of Feature Platforms
Join us for feature-packed talks on feature platforms!
Uncover the core components and challenges of modern feature platform architecture and learn about Tecton's approach to building a scalable, Python-powered feature platform for AI applications.
So, if you're looking to build large-scale real-time AI applications with Python, or maybe you're curious about the nuts and bolts of modern feature platforms, this event is for you! Register here
💡 Job of the week
Founding ML Engineer (LLM focus) // Zep (US, Remote)
Zep is building the long-term memory layer for the LLM application stack. You will be responsible for model selection, evaluation, and performance, and for the processes and tools that support those activities.
Responsibilities:
- Oversee the full lifecycle of LLM development, including model selection, fine-tuning, and evaluating both models and prompts.
- Establish and implement infrastructure, processes, and metrics.
- Enhance low-latency inference solutions.
- Collaborate across teams to develop, deploy, and maintain LLM applications.
Requirements:
- Minimum of 5 years in ML engineering, including at least 18 months dedicated to LLM applications.
- Expertise in model evaluation techniques and metrics, with hands-on experience in low-latency inference technologies.
- Proficiency in Python and familiarity with Kubernetes, AWS, and Azure.
March Model Madness Update!
And that's the buzzer! 🏀
It’s all over and you've crowned your winners!
Chat: Gemini-Pro 👑 Code: GPT-3.5 🏅 Instruct: GPT-4 🏆 Image: DALL-E 3 🥇
Thanks so much to everyone who submitted prompts and voted, you made it really fun.
Winners of the raffle will be notified soon, and all selected prompts, LLM answers, and vote summaries will be shared with the community too.
If you want to know more about the tech that powered all of this, find out more about Seaplane here.
Now it's back to the training camp to get ready for March Model Madness 2025!
MLOps Community IRL Meetup
Best Practices Towards Productionizing GenAI Workflows // Savin Goyal // IRL #72 Silicon Valley
Going with the flow at work can be a great way to help you stay calm – be Zen. But sometimes it can mean being dragged somewhere you don't want to go, ultimately ending up off-task. This talk emphasizes the importance of simplifying the user experience for data scientists so that they can focus on their core work rather than infrastructure management. Savin looks at Metaflow's role in streamlining GenAI workflows by simplifying data engineering, enhancing workflow monitoring, and supporting continuous integration and deployment, allowing data scientists to focus more on their primary tasks. Go along with this flow and click below!
How Tecton Helps ML Teams Build Smarter Models, Faster
Making quick progress vs. the time needed to experiment. How do you balance it? This blog looks at enhancing the efficiency of ML teams with feature engineering. It identifies key bottlenecks in the ML lifecycle, such as experimentation, productionization, governance, and serving, and illustrates how they can be addressed through a feature platform. A case study of Tide shows improved model accuracy and deployment speeds by simplifying feature engineering, managing data pipelines, and facilitating collaboration among teams. Take the time to read now, and make quick progress!
Amsterdam - April 11 (cheers to Nebius and Toloka)
Oslo - April 16
Stockholm - April 23 (shout out to Weights & Biases and Stormgrid)
Madrid - April 25