
We are running an async reading group around the book Fundamentals of Data Engineering. You can join us and find all the details on how it's happening here.

 
Coffee Session
Feathr Feature Store
Feature stores and the journey of building Feathr. David Stein, a Senior Staff Software Engineer at LinkedIn, broke it down for us.

Feathr Background
Feathr is an open-source feature store from LinkedIn, formerly known as "Frame" when it was used internally to manage LinkedIn's numerous ML use cases.

It was born at a critical point, when LinkedIn recognized the burden of managing data (feature) preparation workflows that had been growing for years.

Feathr simplifies this workflow by making it possible to define a set of features as entities, compute them directly from their source data, and finally load them into either a training or inference context.

System Interaction with Feathr

The idea is to surface the parts of building and running feature preparation workflows that matter to ML engineers and data scientists, while abstracting away the complexity.

Features are viewed as attributes of entities such as job postings, advertisements, users, or feed items.

The innovation decouples the features from the code that creates them and the models that make use of them.

Feathr uses an "import library" style of design that creates a single source of truth for all features and their definitions across the entire workflow ecosystem, irrespective of the context they are used in.

Feathr's Unique Qualities
Feathr takes a "Swiss Army knife" approach to its design: expression languages and APIs that let engineers define features over available data assets as tensors, which can then be injected into any context of use.
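To make the "define once, use anywhere" idea concrete, here is a minimal sketch of defining and registering a feature with Feathr's Python client, loosely following its public quickstart. The source path, column names, feature name, and config file are illustrative placeholders, not details from the session.

```python
from feathr import (FLOAT, Feature, FeatureAnchor, FeathrClient,
                    HdfsSource, TypedKey, ValueType)

# Source data the feature is computed from (path is a placeholder).
batch_source = HdfsSource(
    name="tripDataSource",
    path="abfss://container@storage.dfs.core.windows.net/trip_data.csv",
    event_timestamp_column="pickup_datetime",
    timestamp_format="yyyy-MM-dd HH:mm:ss",
)

# The entity the feature is keyed on (e.g., a user or a job posting).
trip_key = TypedKey(
    key_column="trip_id",
    key_column_type=ValueType.INT64,
    description="trip identifier",
)

# A feature defined as an expression over the source columns.
f_trip_distance = Feature(
    name="f_trip_distance",
    key=trip_key,
    feature_type=FLOAT,
    transform="trip_distance",
)

# An anchor ties features to their source; build_features registers
# the definitions so any training or inference job can reuse them.
anchor = FeatureAnchor(
    name="trip_features",
    source=batch_source,
    features=[f_trip_distance],
)

client = FeathrClient(config_path="feathr_config.yaml")
client.build_features(anchor_list=[anchor])
```

Once built, the same definitions can be materialized to an online store for inference or joined against observation data for training, which is the single-point-of-truth property described above.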

 
Coffee Session
ML Deployment Computing
Luis Ceze, co-founder and CEO at OctoML, gave us something to think about on this pod.

DevOps = MLOps?

With the numerous definitions of MLOps out there, perspective becomes the defining factor, which begs the question: "Has MLOps been conflated to encompass more than what it is supposed to?"

It's not news that ML models cannot be treated like regular pieces of code at deployment. However, if that were possible, wouldn't that just be DevOps?

In that light, MLOps can be viewed as the entire process of model creation (i.e., from the data collection phase through model development), because that is the workflow it has to support.

Hardware dependency is one of the fundamental constraints that makes ML require special treatment, given the variety of hardware targets available.

To accommodate these needs, low-level code tightly couples the computation to the hardware, using device-specific datatypes and instructions on CPUs, GPUs, or accelerators. This enables things like parallel computing and maximum utilization of each device's capabilities.

Abstracting away this hardware layer frees engineers to focus on producing the best model rather than worrying about hardware infrastructure constraints.

TVM (Tensor Virtual Machine)
TVM is a framework for creating an Intermediate Representation (IR) of a model, irrespective of the software stack that produced it, which can then be optimized to run efficiently on any hardware target.

It abstracts the hardware details away in the lower layer and then maps the model from a higher layer across the abstraction interface.

It takes a compilation approach: the model's computation is compiled to binary code that runs directly on the hardware.
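As a rough illustration of that flow, here is a minimal sketch using TVM's Python API to compile an ONNX model for a generic CPU target. The model file, input name, and shape are placeholders.

```python
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a trained model (file name and input shape are placeholders).
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Frontend: translate the model into TVM's Relay IR,
# independent of the framework that produced it.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Backend: compile the IR for a concrete hardware target
# ("llvm" means a generic CPU; "cuda" would target an NVIDIA GPU).
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled binary directly on the device.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
```

Swapping the target string re-compiles the same Relay module for different hardware without touching the model, which is the abstraction described above.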

OctoML's Sauce
OctoML takes TVM a step further by automating the exploration of the tradeoffs that come with deploying models to different hardware targets, based on each target's specific instruction set.

The trick here is to use machine learning itself to choose the best compilation strategy for deploying a particular model.
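OctoML's platform itself is proprietary, but TVM's open-source auto-scheduler gives a flavor of the "ML to optimize ML" idea: a learned cost model predicts how fast candidate schedules will run and steers the search. A minimal sketch, continuing from the compilation example above (model and target are the same placeholders):

```python
import onnx
import tvm
from tvm import relay, auto_scheduler

# Same placeholder model and target as in the previous sketch.
mod, params = relay.frontend.from_onnx(
    onnx.load("model.onnx"), {"input": (1, 3, 224, 224)})
target = "llvm"

# Extract the tunable compute tasks from the model.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

# The task scheduler searches candidate schedules, guided by a learned
# cost model that predicts how fast each candidate will actually run.
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=200,
    measure_callbacks=[auto_scheduler.RecordToFile("tuning_log.json")],
))

# Re-compile, applying the best schedules found during the search.
with auto_scheduler.ApplyHistoryBest("tuning_log.json"):
    with tvm.transform.PassContext(
            opt_level=3,
            config={"relay.backend.use_auto_scheduler": True}):
        lib = relay.build(mod, target=target, params=params)
```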

 
Blog Post
What Does an MLE Actually Do?
We kicked off a new blog series to shed some light on what is expected of machine learning engineers (MLEs) at companies ranging from start-ups to large corporations, and from companies with mature ML programs to those just getting started.

Our first victim is Matheus Frata, a Tech Lead ML Engineer at Neoway.

I asked him over 15 questions. Check them all out, and tell me if you want to participate in this series by letting the rest of us know what you do!
 

Virtual Meetup
Demystifying Vertex AI
Pelayo is a Machine Learning Engineer working for Richemont SA.

Since Vertex AI is a recent addition to the MLOps landscape, many questions arise among its potential adopters.

The purpose of this meetup is to demystify Vertex AI. In this episode, we break down the pros, cons & lessons learned from the speaker’s Vertex AI journey.

Google is not paying him to say anything, so we are gonna hear the good, the bad, and the ugly. It's happening tomorrow at 9am PT / 5pm BST / 9:30pm IST.

We Have Jobs!!
There is an official MLOps community jobs board now. Post a job and get featured in this newsletter!
IRL Meetups
Israel — September 7
Seattle — September 15
Zurich — September 15
Denver — September 20
Amsterdam — September 21
Washington — September 22
Chicago — September 22
Toronto — September 27
Oslo — September 28
Berlin — October 6

Thanks for reading. This issue was written by Nwoke Tochukwu and edited by Demetrios Brinkmann and Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.


