Here is the data from our LLM-in-production survey from a week ago. Really cool findings! And next week we will announce something very special.
By the way, we have re-activated our bi-weekly reading group. If you enjoy reading or are currently reading something, please join here.
Saahil Jain, Engineer at You.com, talks about the future of search at You.com in the current era of Large Language Models and Generative models.

You.com and the Search Landscape

You.com is an AI-powered search engine and one of the companies using Generative AI and NLP to advance and reinvent search. The most significant transition in search has been from the paradigm of classic information retrieval, which relies on keywords as the primary signal, to neural information retrieval, which made it possible to embed documents and perform semantic search. Lately, with the rise of conversational AI, the search paradigm has shifted toward providing a direct answer rather than a list of suggestions and related documents. However, sometimes getting a list of options is better than a single answer, and You.com takes search a little further by combining these different paradigms to give a better search experience.

Search Challenges and Trade-offs

The intent behind a search query is highly nuanced across use cases. Several factors come into play when delivering results to a user, such as the relevance of the content, latency, and throughput/cost.
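The shift from keyword matching to neural retrieval can be illustrated with a tiny sketch: represent documents and the query as vectors and rank by cosine similarity. This is a toy illustration, not You.com's actual stack; the bag-of-words `embed` function below is a stand-in for a real neural encoder.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a sparse bag-of-words count vector.
    # A real system would use a trained neural encoder here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, return the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "how to bake sourdough bread",
    "neural networks for information retrieval",
    "keyword based search engines",
]
print(search("neural search retrieval", docs, k=1))
```

Swapping the toy `embed` for a dense encoder plus an approximate nearest-neighbor index is what turns this sketch into a semantic search system.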
- Designing Machine Learning Systems by Chip Huyen.
Business Metrics and ML Metrics

When building an ML system for a business, development must be driven by business objectives, which are then translated into ML objectives that guide the development of ML models. In truth, most companies don't care about fancy ML metrics that don't move some business metric.
The ultimate goal of any project within a business is to increase profits, either directly or indirectly. For an ML project to succeed within a business organization, it's crucial to tie the performance of the ML system to the overall business performance.
Experiments are often required to determine the exact relationship between ML metrics and business metrics. Many businesses run experiments such as A/B tests to determine which model leads to the better business metrics, and then deploy that model regardless of whether it has better ML metrics.
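As a concrete sketch of that kind of experiment, here is a minimal A/B comparison where two models are judged on a business metric (conversion rate) with a two-proportion z-test, regardless of their offline ML metrics. The counts are made up for illustration.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    # z-statistic for the difference between two conversion rates,
    # using the pooled proportion under the null hypothesis.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Suppose model A wins on offline AUC, but model B converts more users.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

If the z-statistic clears the significance threshold, the business-metric winner (model B here) ships, which is exactly the point the book makes.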
- MLOps at Wolt // Stephen Batifol
At the last Helsinki meetup, the guest speaker was Stephen Batifol, Machine Learning/YAML Engineer at Wolt, who talked about the MLOps journey at Wolt.
Forecasting supply and demand, serving restaurant recommendations, and predicting delivery times: these are just a few examples of how Machine Learning is applied at Wolt. Now with over 20 million users, scaling the ML infrastructure has been a significant challenge.
Wolt is addressing these challenges with its own end-to-end MLOps platform on Kubernetes, integrating open-source frameworks, specifically Flyte, MLflow, and Seldon Core, in service of its primary focus: improving platform velocity.
- The emergence of the Full Stack ML Engineer
This blog post was written by Prassanna Ganesh Ravishankar, Senior Machine Learning Software Engineer at Papercup.
It draws a detailed map of the history of application development, from the long-standing paradigm of a monolithic application shipped to users with its logic packaged inside, to the client-server programming paradigm. This shift also changed the skills engineers need, and the full-stack machine learning engineer has emerged from it.
Read Here
- Tecton 0.6 release // Tecton
Tecton recently rolled out version 0.6, which includes new capabilities to simplify and accelerate feature engineering at scale. With these capabilities, data teams can approach feature engineering more efficiently, making it easier to serve data to end users in real-time predictive products.
In Tecton 0.6, data teams can develop and test features quickly with the flexibility of a Python notebook, using the new notebook-driven development capability. The release also adds a continuous mode for all Stream Feature Views, which lets users refresh stream features quickly.
Additionally, Tecton 0.6 offers new aggregation functions, such as First-n, First-n distinct, and Last-n, which previously required custom development. It also includes a new Stream Ingest API, an alternative way to send streaming data into Tecton, and ACL CLI commands that let users manage the Service Account lifecycle and inspect and modify Workspace roles programmatically.
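To make the new aggregation functions concrete, here is a plain-Python sketch of what a Last-n aggregation computes over a stream of keyed events. This illustrates the concept only; it is not Tecton's API, and the event data is invented for the example.

```python
from collections import deque

def last_n(events: list[tuple[str, str]], n: int) -> dict[str, list[str]]:
    # Keep only the n most recent values per key, in arrival order.
    # deque(maxlen=n) silently drops the oldest value once n is exceeded.
    state: dict[str, deque] = {}
    for key, value in events:
        state.setdefault(key, deque(maxlen=n)).append(value)
    return {k: list(v) for k, v in state.items()}

clicks = [("user_1", "a"), ("user_1", "b"), ("user_2", "x"), ("user_1", "c")]
print(last_n(clicks, n=2))  # each user's two most recent events
```

A managed feature platform does the same bookkeeping continuously over a stream and serves the result at low latency, which is why these functions previously required custom development.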
Thanks for reading. This issue was written by Nwoke Tochukwu and edited by Demetrios Brinkmann and Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.