Plus, MLOps vs LLMOps, RAGs, and Reasoning
Salutations and felicitations to this unusually word-rich introduction to our newsletter, in honor of International Thesaurus Day.

I remain in a quandary, contemplating whether such a commemoration of the breadth and variety of language would be embraced or frowned upon by those immersed in the world of Large Language Models.

But, let not the abundance of vocabulary cause you unease. It's also National Gourmet Coffee Day in America, presenting an impeccable segue to our podcasts!

So, cut out the fancy talk, grab a mug of Kopi Luwak, add a glug of Coffee-Mate and some Sweet'N Low for good measure, then treat your peepers to a look at our newsletter.
MLOps Community Podcast
Pioneering AI Models for Regional Languages // Aleksa Gordic // MLOps Podcast #203

LLMs are great. Of course, they are.

But, that doesn’t mean they don’t have issues, like hallucinations, being English-centric, possible copyright infringement, or the eventual and inevitable overthrowing and destruction of humanity.

Unfortunately, this week’s guest, Aleksa, isn’t working on the whole ‘save humanity’ issue, so we’re still doomed. But the good news is that more people will be able to understand the commands AI gives, thanks to Aleksa’s work on YugoGPT.

It's a generative model for South Slavic languages like Serbian, Croatian, Bosnian, and Montenegrin that aims to enhance businesses in the region. Given the range of languages in my life, it’s great to see, and I hope it inspires others.

And, if you’re not inspired by that, you should be inspired by his chutzpah in quitting big tech to forge his own path. We discuss the benefits and pitfalls of working in big tech versus starting your own venture, underscoring the appeal of entrepreneurial effort. Plus, he talks about his work on vision and image models, including Jarvis, a voice-controlled image manipulation and search app, and the saturation of the vision market.

Plus, anyone who has the know-how to get free GPUs is worth listening to!

MLOps at the Crossroads // Patrick Barker & Farhood Etaati // MLOps Podcast #204

There are some divisive topics people can get entrenched in:

The Beatles vs. The Stones
Vinyl vs. Digital
Electric vs. Acoustic

All are important debates to be had, but none are as important or era-defining as MLOps vs. LLMOps. Chances are you already have a view on this, but perhaps this episode might sway your perspective.

So, in the Blue Corner, we’ve got Farhood, who thinks LLMOps is just hype, a play to get investment like in the dot-com bubble days. He also makes the point that MLOps is so new that even that term isn’t used consistently across companies.

And, in the Red Corner, we have Patrick, who agrees there’s a lot of hype but sees it as a good thing, fostering innovation and creativity. He also shares his personal experience of a significant shift in skill sets when he transitioned jobs: some foundational knowledge is still relevant, but the practical application and toolsets are distinctly different in LLMOps.

So, have a listen and see if they can change your view!
Gen AI Roundtable in collaboration with QuantumBlack, 25 January
Do you build, or do you buy?

Join the QuantumBlack team on 25 January to discuss the different sides of buying vs building your own GenAI solution.

We will look at the trade-offs companies need to make - including some of the considerations of using black box solutions that do not provide transparency on what data sources were used.

Whether you are a business leader or a developer exploring the GenAI space, this talk will give you valuable insights to help you navigate this fast-moving field.

Click here to register now
MLOps Community IRL Meetup
Why Aren’t You Using RAGs? // Ejiro Onose // IRL Meetup #60 Lagos

Hugely proud to share our inaugural IRL video from Lagos, showcasing the global reach of our MLOps Community!

Ejiro's presentation on RAGs shines a light on the development of LangChain, covering its recent updates, RAG models, and practical challenges and solutions in production environments.

He delves into the use of LangChain's templates, the intricacies of document handling and segmentation, and deployment strategies using LangServe, which exposes chains as a RESTful API.
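The core RAG loop Ejiro describes can be sketched in a few lines of plain Python, with no libraries at all. This is a minimal illustration of the ideas only — segmenting documents into chunks, retrieving the most relevant ones, and assembling the augmented prompt — and the function names are illustrative, not LangChain's actual API; a real pipeline would use embeddings rather than word overlap.

```python
def chunk_text(text, size=200, overlap=50):
    """Split a document into overlapping chunks (crude stand-in for a text splitter)."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

def score(query, chunk):
    """Word-overlap relevance score (a real system would use vector embeddings)."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def retrieve(query, chunks, k=2):
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query, chunks):
    """Assemble the augmented prompt the LLM actually sees."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Serving `build_prompt` plus an LLM call behind an HTTP endpoint is, roughly, the job LangServe does for a LangChain chain.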

Let's hope this is the first of many Community firsts for 2024!

💡Job of the week

Engineering Manager – Feature Engineering team // Tecton

As an Engineering Manager for our Feature Engineering team, you will play a critical role in shaping the team’s strategic direction and implementation of Tecton’s core Feature Engineering SDK, APIs, and developer toolkit. Feature Engineering owns the abstractions and APIs used to build sophisticated ML pipelines used by thousands of developers. Your leadership and technical expertise will directly influence Tecton’s core user experience from sign-up through feature development and finally production.
You’ll partner with Product, Design, Marketing, and Field teams to define the roadmap that helps companies ranging from Fortune 100s to startups accelerate their path to real-time AI.

Qualifications

  • 7+ years of software engineering experience for high-scale products or infrastructure
  • 3+ years of experience working on ML engineering or developer-centric platforms
  • 2+ years of people management experience for a group of engineers
  • Passion for Product and building intuitive, high-quality developer tooling, APIs, and DSLs
  • Experience with Batch and Streaming Data Pipelines

Blogposts
Towards AGI: Making LLMs Better at Reasoning

I’ve heard some engineers affectionately refer to their models as their ‘babies.’

I used to think that it was because they were proud of something they had created, something they had shaped and taught.

Now, however, after spending several hours trying to deal with getting ‘warm ice’ for a drink, I think it may be because both still have some growing to do when it comes to reasoning.

For AI, there’s help from this blog. It explores the gaps in LLMs’ reasoning abilities and looks at ways to address them, like Reinforcement Learning from Human Feedback (RLHF), Chain-of-Thought prompting, and Diverse Verifier on Reasoning Steps (DIVERSE), which explores multiple reasoning paths.

It also looks at augmenting data processing with RAG for improved accuracy and presents advanced methodologies like FunSearch by DeepMind for creating diverse, high-quality prompts.
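The multiple-reasoning-paths idea behind DIVERSE can be sketched without any model at all: sample several chain-of-thought completions, extract each one's final answer, and let a verifier — here the simplest possible one, a majority vote — pick the winner. The `fake_llm` stub below is a hypothetical stand-in for a real sampled LLM call.

```python
from collections import Counter

def fake_llm(prompt, seed):
    """Stub for a sampled chain-of-thought completion. A real system would call
    an LLM with temperature > 0 so each sample takes a different reasoning path."""
    paths = [
        "3 apples + 4 apples = 7 apples. Answer: 7",
        "Start with 3, then add 4, giving 7. Answer: 7",
        "3 * 4 = 12. Answer: 12",  # a faulty reasoning path
    ]
    return paths[seed % len(paths)]

def final_answer(completion):
    """Pull the final answer out of a chain-of-thought completion."""
    return completion.rsplit("Answer:", 1)[-1].strip()

def vote(prompt, n_samples=5):
    """Majority vote over n sampled reasoning paths (the simplest 'verifier')."""
    answers = [final_answer(fake_llm(prompt, seed)) for seed in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

Because correct reasoning paths tend to agree on the answer while faulty ones disagree with each other, the vote usually lands on the right result even when individual samples go astray; DIVERSE replaces the raw vote with a trained verifier that also scores the reasoning steps themselves.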

I wonder if any of these will work with kids…

With thanks to Manas Singh for the contribution.
Looking for a job?
Add your profile to our jobs board here
IRL Meetups
London - January 18
Stockholm - January 18
Madrid - January 25
Frankfurt - January 25
Edinburgh - January 25
Seattle - January 25
Munich - January 30
Denver - January 30

Thanks for reading. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on X. The MLOps Community newsletter is edited by Jessica Rudd.


