Fit ML into the use-case context
In case anyone is around, I've got some special stuff planned for the apply(recsys) conference happening today.

Come join me and laugh a bit at some bad musical data songs. Link to register here

 
Coffee Session
Rules Engine X ML
On this podcast, we had the pleasure of talking with Jeremy Thomas Jordan, a Machine Learning Engineer at Duo Security. We talked about the good, the bad, and the ugly of rules engines and ML systems.

Using Rule Engines
Technically speaking, a rules engine is just software logic.
The million-dollar question is: when are rules engines good, and when are they not?


It's a tricky question, because rules can be an addition to, or a hedge around, the unpredictable nature of ML systems.

It's okay to start by deploying a rules engine (heuristics) and then adding ML on top of it, bit by bit.

When first building a piece of software, it's highly unlikely that the data needed for an ML solution will be available.

Rules are great for encoding our knowledge about the problem. As practitioners, domain knowledge comes in handy in the early stages of building a system. We will most likely have a good understanding of the problem long before we have a large amount of data for it.

Also, when a rules engine and an ML model are both in production, the ML model might introduce unwanted false positives. It's often easier to quickly replace the offending ML component with a rule than to try to source a new, higher-quality dataset for the model.
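The rules-first pattern described above can be sketched as a simple decision pipeline. This is an illustrative sketch only; the rules, events, and verdicts here are all hypothetical, not Duo's actual system:

```python
# Rules-first deployment: high-confidence rules handle what they can,
# and anything they don't cover falls through to an ML model
# (or a safe default, before a model exists).

def rule_known_bad_ip(event):
    # Hypothetical high-precision rule: block a hard-coded denylist.
    return "block" if event.get("ip") in {"203.0.113.7"} else None

def rule_trusted_device(event):
    return "allow" if event.get("device_trusted") else None

RULES = [rule_known_bad_ip, rule_trusted_device]

def ml_model(event):
    # Stand-in for a trained model; a safe default works until one exists.
    return "review"

def decide(event):
    for rule in RULES:
        verdict = rule(event)
        if verdict is not None:   # first matching rule wins
            return verdict
    return ml_model(event)        # ML only handles what rules don't

print(decide({"ip": "203.0.113.7"}))     # block
print(decide({"device_trusted": True}))  # allow
print(decide({"ip": "198.51.100.4"}))    # review (falls through to ML)
```

Note that swapping a misbehaving ML verdict for a rule is just prepending a new function to `RULES` — which is what makes this shape easy to operate.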

Rule Sprawl
From the ML perspective, it is easy to assume that a given solution would be better suited to a rules engine.

However, managing an ever-increasing number of rules, and making sure the system doesn't get unwieldy, is a delicate concern when building systems.

Defining a union of high-precision rules gives the most productive outcome, rather than relying on the interaction of noisy rules that prop each other up.

That said, rules won't catch everything, because optimizing for precision can harm recall. This is where ML complements rules well: it learns from the cases the rules miss and generalizes to better patterns.
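The "union of high-precision rules" idea can be made concrete: the combined policy fires if any one rule matches, so as long as each rule alone rarely misfires, the union stays precise, and the recall gap is left for ML. The spam rules below are made up for illustration:

```python
# A "union" policy fires if ANY high-precision rule matches.

def combine_or(rules):
    def policy(x):
        return any(rule(x) for rule in rules)
    return policy

# Hypothetical high-precision spam rules
rules = [
    lambda msg: "wire transfer" in msg.lower(),
    lambda msg: msg.isupper() and len(msg) > 20,
]
is_spam = combine_or(rules)

print(is_spam("Please confirm the WIRE TRANSFER"))  # True
print(is_spam("lunch at noon?"))                    # False
```

Contrast this with a weighted vote over many noisy rules: there, no single rule can be reasoned about in isolation, which is exactly the sprawl problem described above.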

In short, this division of labor is more a feature than a bug.
 
IRL Meetup
Designing ML Features
At the #16 Scotland IRL Meetup, we had Siwei Kang, the creator of PicoJar. She shared her experience fitting ML into the context of an application or problem.

PicoJar is a screenshot-based note-taking app. It's a really useful tool, given that screenshots are a quick way of bookmarking ideas with visual context. The goal is to improve and optimize the downstream experience so that people revisit their screenshots more often and act on them.

In this day and age, ML is a commodity. Solving a problem with ML is not so much a scientific problem as a creative one. Based on Siwei's experience, there are some important things to consider when designing ML systems.

ML Design Experience
Think twice before making the most critical feature of your product one that is powered by ML. The critical feature is the one users must rely on when using the product; if that feature doesn't work, the product is no good.

It is a better idea to make ML features complementary to the product. That leaves more room for tolerance when the feature occasionally fails the people using the product.

Ironically, a good ML problem is usually one a human expert could also solve.

Domain experts use their experience to give a more structured design to a solution, making it more reliable in real-life applications and use cases.

Also, it is important to keep ethics in mind. Privacy and responsibility will always be important concerns when building ML solutions.


 
Blog post
Monitoring ML Models
Duarte Carmo is a technologist and hacker whose work lies at the intersection of machine learning, data, software engineering, and people.

ML model monitoring is a delicate phase of the MLOps lifecycle, and understanding how to implement it is crucial. In this blog post, Duarte shows how to monitor your ML model in production using Evidently AI.
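Tools like Evidently automate drift reports over your model's inputs. As a rough illustration of the kind of statistic such tools compute (this is a hand-rolled sketch, not Evidently's API), here is a minimal population stability index (PSI) check between a reference sample and live data:

```python
import math

def psi(reference, current, bins=5):
    """Population Stability Index between two 1-D samples.
    PSI below ~0.1 is commonly read as 'no significant drift'."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(xs), 1e-6) for c in counts]

    ref, cur = hist(reference), hist(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

stable = [0.1 * i for i in range(100)]
drifted = [0.1 * i + 5 for i in range(100)]
print(psi(stable, stable))        # 0.0 for identical distributions
print(psi(stable, drifted) > 0.1) # True: clear drift, time to investigate
```

In production you would run a check like this on a schedule against a fixed reference window and alert on the threshold, which is essentially what a monitoring report automates for many features at once.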
 
We Have Jobs!!
There is an official MLOps community jobs board now. Post a job and get featured in this newsletter!
IRL Meetups
Paris — December 07
Bristol — December 07
Amsterdam — December 08
Denver — December 09
Luxembourg — December 15
Los Angeles — December 17
Toronto — January 31

Thanks for reading. This issue was written by Nwoke Tochukwu and edited by Demetrios Brinkmann and Jessica Rudd. See you in Slack, YouTube, and podcast land. Oh yeah, we are also on Twitter if you like chirping birds.


