Bits and bytes to atoms
The most requested topic in the LLMs-in-production survey we put out was fine-tuning.

We are having a virtual meetup on that tomorrow. More info below.
 
Coffee Session
LLMs at Scale

On this podcast, we had Nils Reimers, Director of Machine Learning at Cohere.

LLM Security
The goal of LLMs is to make humans more productive by automating uninteresting tasks. Unfortunately, bad actors can also harness these LLMs' capabilities. This raises security concerns that are quite challenging to manage: data poisoning, PII manipulation, personalized spam, phishing emails, hate speech, and more.

Security also forces a trade-off between the model provider and the user: either the provider respects user privacy by never having access to the data (since it cannot guarantee proper protection), or the provider does have access to user data and takes on the responsibility of effectively shielding it from bad actors.

One major drawback of using LLMs in production is latency. There is the objective time it takes to get an answer back from the model, and there is the time users perceive. Model providers apply all sorts of tricks to close that gap and improve the user experience.

They could make the wait fun by engaging users with quizzes, fun facts, and the like. Or they could make the wait feel shorter by streaming results to users as the model generates them, rather than returning the whole output in one batch.

Either way, the idea is to keep users occupied while the model generates the output.
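To make the streaming trick concrete, here is a minimal sketch assuming the 2023-era OpenAI Python client (other providers expose similar stream flags); the model name is a placeholder:

```python
# Streaming vs. one-batch output. Assumes the 2023-era OpenAI Python
# client; the model name is a placeholder.
import openai

def answer_in_one_batch(prompt: str) -> None:
    # The user stares at a spinner until the full response arrives.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

def answer_streaming(prompt: str) -> None:
    # Tokens are shown as they arrive: total generation time is the
    # same, but perceived latency drops sharply.
    chunks = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in chunks:
        print(chunk.choices[0].delta.get("content", ""), end="", flush=True)
```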


 
LLMs In Production
Agentic Relationship Management
Ashe Magalhaes, founder of Hearth AI, talked about technical approaches to agentic product relationship management.


Hearth AI is a stealth start-up that aims to build an agent that augments the human experience. Agents are AI systems that leverage large language models (their simulated world model) and human feedback to determine which actions to take and in which order.

Agentic products describe a class of products that execute a series of observations, thoughts, and actions to meet an objective.
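As a rough illustration of that loop, the sketch below shows a ReAct-style observation-thought-action cycle; the `llm` callable, the `tools` dict, and the pipe-separated reply format are illustrative assumptions, not Hearth AI's actual implementation:

```python
# Hypothetical observation -> thought -> action loop (ReAct-style).
# The llm callable and tools dict are caller-supplied stand-ins.
def parse_reply(completion: str) -> tuple[str, str, str]:
    # Assumes the model replies as "<thought> | <action> | <input>".
    thought, action, action_input = (p.strip() for p in completion.split("|", 2))
    return thought, action, action_input

def run_agent(objective: str, llm, tools: dict, max_steps: int = 10) -> str:
    history = f"Objective: {objective}\n"
    for _ in range(max_steps):
        # The LLM serves as the simulated world model: given everything
        # observed so far, it proposes the next thought and action.
        thought, action, action_input = parse_reply(llm(history))
        history += f"Thought: {thought}\nAction: {action}({action_input})\n"
        if action == "finish":  # the model decides the objective is met
            return action_input
        observation = tools[action](action_input)  # run the chosen tool
        history += f"Observation: {observation}\n"
    return "Stopped: step budget exhausted"
```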

Despite being glued to our screens, efficiently managing personal and professional networks still presents real complexity. Common failure modes include:

• Dropping relevant communication threads
• Wasting time on manual data entry tasks
• Losing key network details
• Failing to match people with the right opportunities at the right times
• Letting important relationships grow cold

However, agentic relationship management products that leverage AI agents can learn, understand, and act on behalf of the user. Hearth AI will synthesize and manage your network's complexity so that you can focus on your connections.

 
LLMs Panel Session
Data Privacy and Security
Diego Oppenheimer, Partner and CEO at Factory, moderated a panel session with Vin Vashishta, C-level Technical Strategy Advisor and Founder at V Squared, Saahil Jain, Engineering Manager at You.com, Gevorg Karapetyan, Co-Founder and CTO at ZERO Systems, and Shreya Rajpal, Founding Engineer at Predibase.

When thinking about the implications of data privacy and security for LLMs, it is no longer just about the data itself; it is about the patterns within the data that can be uncovered. That creates vulnerabilities for anyone with a significant amount of data out in public. How these foundation model APIs are used needs to be framed to address the data-privacy concerns that exist today.
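One common framing is to scrub obvious PII from prompts before they ever leave your infrastructure. The sketch below is deliberately naive (production systems use dedicated PII-detection tooling), and the two regex patterns are assumptions, not a complete list:

```python
import re

# Naive illustration of scrubbing obvious PII from a prompt before it
# is sent to a third-party LLM API. Real systems use dedicated
# PII-detection tooling; these two patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +1 (555) 010-2345."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```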

Many of these models are still susceptible to hallucinations, and technical work remains to reduce them. Hallucinations do not nullify the usefulness of these models. The goal is to ensure that users are given the right expectations when using language products or tools, that those products do not mislead users, and that users know how to use them responsibly.
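One lightweight way to set those expectations in a product is to check whether an answer can be grounded in retrieved source text and warn the user when it cannot. The sketch below uses a crude exact-substring heuristic purely for illustration; real groundedness checks are far more sophisticated:

```python
# Illustrative only: flag answers that cannot be grounded in retrieved
# sources, so the UI can warn users instead of presenting output as fact.
def grounded_fraction(answer: str, sources: list[str]) -> float:
    corpus = " ".join(sources).lower()
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 0.0
    # Crude heuristic: a sentence counts as supported only if it appears
    # verbatim in the source text.
    supported = sum(1 for s in sentences if s.lower() in corpus)
    return supported / len(sentences)

def present(answer: str, sources: list[str]) -> str:
    if grounded_fraction(answer, sources) < 0.5:
        return f"Unverified (may be hallucinated): {answer}"
    return answer
```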


 
Upcoming Meetup
Fine-Tuning LLMs
We will have Mark Kim-Huang, Co-Founder and Head of AI at Preemo Inc., on this podcast to talk about LLM best practices.

With the open-source community releasing foundation models at a blistering pace, there has never been a better time to develop an AI-powered product. This talk walks you through the challenges and state-of-the-art techniques that can help you fine-tune your own LLMs, and provides guidance on when a smaller model would be more appropriate for your use case.
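As a taste of what such fine-tuning can look like, here is a minimal parameter-efficient sketch using Hugging Face transformers with peft (LoRA); the checkpoint and hyperparameters are placeholders, not recommendations from the talk:

```python
# Minimal LoRA fine-tuning setup with Hugging Face transformers + peft.
# The model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-410m"  # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapters instead of all base weights,
# which is why fine-tuning is feasible without a large GPU cluster.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% trainable
# ...then train with transformers.Trainer or a custom loop on your data.
```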


 
Blog
  • Lesson Learned

This blog was written by Médéric Hurier.

This article presents the steps and lessons learned from his effort to shed light on the MLOps survey on LLMs. It starts by presenting the goal and questions of the survey. It then explains how he used ChatGPT to review the data and standardize the content. Finally, it evaluates ChatGPT's performance against a manual review.
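For flavor, the kind of standardization pass described there might look like the hypothetical sketch below, mapping free-text survey answers onto fixed categories; the category list and prompt wording are illustrative assumptions, and the client reflects the 2023-era OpenAI SDK:

```python
# Hypothetical sketch of standardizing free-text survey answers with
# ChatGPT. The category list and prompt are illustrative assumptions.
import openai

CATEGORIES = ["fine-tuning", "deployment", "monitoring", "cost", "other"]

def standardize(answer: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep labels as deterministic as possible
        messages=[{
            "role": "user",
            "content": (
                f"Classify this survey answer into exactly one of "
                f"{CATEGORIES}. Reply with the category only.\n\n{answer}"
            ),
        }],
    )
    return response.choices[0].message.content.strip().lower()
```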


 
We Have Jobs!!
From now on we will highlight one awesome job per week! Please reach out if you want your job featured.

• MLOps Engineer at Dexter Energy Services: Are you a passionate engineer who is eager to help accelerate the transition to a less carbon-intensive industry? This is a sign to join the engineering team of your dreams.
IRL Meetups

Sydney - May 17, 2023
Madrid - May 23, 2023
Toronto - May 23, 2023
San Francisco - May 24, 2023
Stockholm - May 25, 2023
Munich - May 25, 2023
Seattle - May 26, 2023
Denver - June 9, 2023
Amsterdam - June 15, 2023

Thanks for reading. This issue was written by Nwoke Tochukwu and edited by
Demetrios Brinkmann and Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.



