Fine-Tuning and LLM Cost Breakdown
Last chance to let everyone know what your favorite pastime is.
Coffee Session
Exercise Your Code

We sat down with Rob, the CEO of RackN, to go deep into the DevOps mentality. Our flavor of engineering was built on the backs of giants, so let's see how much of that can be transferred to MLOps (or, if you so please..... LLMOps 🤦‍♂️).

Here are a few key themes from the convo:

Complex Systems as Exercise: Reframe complexity and perceive it as exercise to keep you in shape rather than something scary.

Many people view complex systems as intimidating. Drawing on years of experience, Rob explains how complex systems are more resilient to failure than simple systems. They do not come for free, though! You must exercise your code, and Rob breaks down exactly what that means.

Shifting Perspective: Instead of asking why a system is complex, shift your focus to understanding and exercising the complex system. He highlights observability, transparency, and fixing the root cause of problems as essential elements in working with complex systems.

Challenges and Opportunities: Installation and updates, lack of standards, and confusion between business models and technology. He emphasizes the benefits of autonomy in managing software and the importance of easy installation/updates.

....By the way, Rob invented the cloud with a colleague back in the '90s, so you can guess we spent a good chunk of time talking about that.


Job of the week

Engineering Manager, Data Platform // KoBold Metals - Join a start-up using AI to enable the transition to electrification and help solve climate change.

Making mineral exploration data broadly accessible to humans (geologists and data scientists) and machines (machine learning pipelines) will improve our chances to discover the metals, such as lithium, copper, nickel, and cobalt, that are critical for the energy transition.
IRL Meetup
Me gusta MLOps mucho

If you were looking for an excuse to practice your Spanish, I've got just the thing for you. La Ciudad de México had its first in-person meetup ever, and we have a recording to prove it!

Come listen to Carl Wallace Handlin serenade you by talking all about how his team at Trully has overcome the "deployment gap" (models taking ages to reach prod) and what the ideal scenario looks like in his eyes.

For all you non-Spanish speakers, just paste the YouTube link into ChatGPT and I'm sure you will get a few key takeaways.

Video
Resources
LLM in Prod Recap

The team at TrueFoundry did some incredible work explaining 10 of their favorite talks from the most recent LLM in Production conference.

Read the in-depth analysis here, and expect more conference videos to drop this week!
Blog
A Good MLOps Project Starts With a Python Package

MLOps practitioners (rightfully) point out that running notebooks in production is bad software practice, but what are the alternatives? A simple script is not enough to capture the complexity of AI/ML projects, and rewriting a whole project in another programming language is both costly and time-consuming.

To solve this problem, the most efficient approach is to create a Python package that bundles the project's sources and assets into a code archive. However, building such a package can be a complex endeavor for newcomers. The Python ecosystem is vibrant, but also fragmented. Moreover, machine learning projects are more complex to develop than most other software applications.
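As a rough sketch of what that packaging step looks like, here is a minimal `pyproject.toml` (the project name, dependencies, and entry point are placeholder assumptions for illustration, not taken from the article):

```toml
# Hypothetical minimal package layout -- all names are placeholders.
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "my-mlops-project"          # placeholder project name
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    "pandas",
    "scikit-learn",
]

[project.scripts]
# Expose the training entry point as a CLI command instead of a notebook.
train-model = "my_mlops_project.train:main"
```

With a file like this in place, `pip install .` turns the project into an installable package with a proper command-line entry point, so production environments never have to execute notebooks.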

Read Now
Inference GPU Costs

Dive into any large-scale deployment of AI models, and you’ll quickly see the elephant in the room isn’t training cost – it’s inference.

Here’s a hard-hitting fact: many AI companies shell out over 80% of their capital on compute resources alone. In the words of Sam Altman, "compute costs are eye-watering." Well, welcome to the intriguing and complex world of comparing inference compute costs!

Now imagine you’re an AI start-up or integrating AI into existing systems. As you experiment with APIs from OpenAI and other closed platforms, you’re faced with a dilemma – stick with high-cost, closed-source APIs or switch to customizable open-source models.

Don’t worry, though. We’re here to guide you. In this blog, we’ll simplify the complex world of compute costs in AI, starting with Large Language Models (LLMs), helping you navigate this critical decision with clarity and confidence.
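To make the API-vs-self-hosted dilemma concrete, here is a back-of-envelope cost sketch. Every price and volume below is a hypothetical placeholder, not a quote from any provider; swap in your own numbers:

```python
# Back-of-envelope monthly inference cost:
# pay-per-token API vs. renting GPUs around the clock.
# All prices and volumes are hypothetical placeholders.

def api_monthly_cost(tokens_per_month, price_per_1k_tokens):
    """Cost of a pay-per-token hosted API."""
    return tokens_per_month / 1000 * price_per_1k_tokens

def gpu_monthly_cost(num_gpus, price_per_gpu_hour, hours_per_month=730):
    """Cost of keeping rented GPUs up 24/7 for self-hosted inference."""
    return num_gpus * price_per_gpu_hour * hours_per_month

# Hypothetical workload: 500M tokens/month at $0.002 per 1K tokens
api = api_monthly_cost(500_000_000, 0.002)   # -> 1000.0
# Hypothetical cluster: 2 GPUs at $2.50/hour, always on
gpu = gpu_monthly_cost(2, 2.50)              # -> 3650.0

print(f"API:         ${api:,.0f}/month")
print(f"Self-hosted: ${gpu:,.0f}/month")
```

The crossover point depends entirely on utilization: the API bill scales with tokens served, while the GPU bill is fixed whether the cluster is busy or idle.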

Continue Reading
Looking for a job?
Add your profile to our jobs board here
IRL Meetups
Luxembourg - July 05, 2023
Bristol - July 06, 2023
Austin, Texas - July 13, 2023
Atlanta - July 20, 2023
Chicago - July 24, 2023

Thanks for reading. This issue was written by Demetrios and edited by Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.
