|
|
|
|
Folks, we are putting together another survey. Just like the last one, all the responses will be open-sourced for everyone to use. This time the survey is about Evaluation of LLMs. Take 5 minutes and fill it out. Share it with your friends so we get a robust data sample.
|
|
|
|
|
|
|
Building Cody, an Open Source AI Coding Assistant // Beyang Liu
“The big question is, does it tell you why you can’t delete that one piece of code with the comment, ‘NEVER TOUCH THIS CODE’?”
Beyang Liu, CTO and cofounder of Sourcegraph, gives a great answer to this question in this episode of the podcast when talking about Cody, the new AI coding assistant.
I have been seeing lots of cool examples of Cody being used in the wild. So, I reached out to the creator himself to hear the whole story.
He shared the inspiration behind creating Cody. One of the interfaces they created involves inline autocomplete for your code, tapping into your codebase to offer predictive suggestions. The other takes the form of an explicit prompt-based interface, allowing you to inquire about your code.
Our conversation then delves into the exciting realm of language models and their potential to streamline software development. Beyang paints an interesting picture of a more inclusive process and how that could reshape the landscape for software developers down the line.
|
|
|
Job of the week
Senior MLOps Engineer (ML Platform) // Get Your Guide - As a Senior MLOps Engineer, you'll join the Machine Learning Platform team and have a chance to make an enormous impact across all data products.
You'll work closely with the Data Products teams to build systems that enable ML/AI algorithms to go to production. If you are interested in contributing to the success of a product that is used by millions of travellers around the world, then click the link.
|
|
|
|
|
|
|
We are so back. Let's be honest. This is the best conference. Ever. On the internet. And I am not the only one who says that. (Obviously, that would be a bit biased.) On top of being the self-proclaimed best virtual conference out there, it's in the AI niche. You couldn't have lucked out more. Please tell me, where else on the internet can you:
- Suggest lyrics about catastrophic forgetting (and I improvise them in real-time)
- Hear how real practitioners are actually leveraging Large Language Models
- Be guided through a meditation
- Do some semi-illegal betting to win swag
- Prompt inject an LLM for private data
That is not even mentioning talks from companies such as Mistral, AutoGPT, Pinterest, Honeycomb, Lamini, and so many more… October 3rd. 100% Virtual. See you there. Reserve Now
|
|
|
|
|
|
|
Agents. Love em. Hate em. They are damn interesting. My problem is they are bringing so much overhyped-ness to the AI world. Then I found this gem by Nathan Lambert. He explores the ins and outs of deploying generative AI and, you guessed it, agents. Nathan explains what these agents are all about. He then covers integration solutions like ChatGPT Plugins, LangChain, and Adept AI. I like it cause he calls a spade a spade. Reliability is a major factor. And security, trust, and data handling all stand out as major concerns for him. He breaks down the essential features needed to make functional, user-facing LLM agents work for businesses. Spoiler alert: it's hard. Check it out
|
|
|
|
|
|
|
En Camino a MLOPS // Jorge Vizcayno // Meetup IRL #47 Mexico
In this video from the recent meetup in Mexico, Jorge Vizcayno, the co-founder and CEO of Alima, takes the stage. He dives into a talk that's all about his journey and insights into the world of implementing and managing machine learning projects. Jorge gets hands-on with various frameworks and tools in the MLOps arena, throwing the spotlight on TFX, GitHub, Terraform, and AWS. He hammers home the significance of picking the right framework based on project needs and limitations. There's also an insightful discussion about integrating MLOps into startup projects and organizations. Jorge covers the use of DevOps, continuous integration, continuous deployment, and model evaluation. It's a valuable talk worth checking out. Watch Now
|
|
|
|
|
|
|
We all know the surge in LLMs has sparked a fascination with incorporating them into diverse software systems. But it's easier said than done. In this blog, Andrew McMahon delves into the operational implications for ML engineers.
In his analysis, Andrew highlights key hurdles, such as:
- Necessity for expanded infrastructure
- Escalated complexity in rollbacks
- Imperative to establish custom guardrails
Moreover, Andrew acknowledges the challenges of validating LLMs and delves into the landscape of prominent evaluation frameworks and datasets.
This blog is an extract from Andrew McMahon’s book Machine Learning Engineering with Python, Second Edition.
Read Now
|
|
|
|
|
Add your profile to our jobs board here
|
|
|
|
|
|
|
|
|
Thanks for reading. This issue was written by Demetrios and edited by Jessica Rudd. See you in Slack, on YouTube, and in podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.
|
|
|
|
|