New Tool Tuesday, Weekly round-up, and what's next in MLOps

Loads of new Slack channels for some of the coolest projects happening in the MLOps ecosystem. I highly recommend checking out #mlops-stacks and #learning-resources

New Tool Tuesday
Dikembe Mutombo
The One Commandment

I am a sucker for well-named products. When @Miguel Jaques mentioned he had created a new tool called Nimbo, I couldn't help myself. I had to dive in deeper for this edition of 'New Tool Tuesday'.

What it is:
Nimbo is a command-line tool that allows you to run machine learning models on AWS with a single command. It abstracts away the complexity of AWS so you can focus on building, iterating on, and delivering machine learning models.

Why it was created: Two Edinburgh college buddies were sick of how cumbersome AWS was to use. Miguel, a PhD in machine learning, and his co-founder Juozas, a software engineer, wanted to be able to run jobs on AWS (e.g. training a neural network) as easily as running them locally.

"All in all, we didn't like the current AWS user experience, and we thought we could drastically simplify it for the machine learning/scientific computing niche."

Having experienced that pain, they set out to provide commands that make it easier to work with AWS: checking GPU instance prices, logging onto instances, syncing data to/from S3, launching a Jupyter notebook with one command, and so on.

The lads decided to make Nimbo solely client-side, meaning that the code runs on your EC2 instances and your data is stored in your S3 buckets.

"We don't have a server; all the infrastructure orchestration happens in the Nimbo package."


How it works under the hood: Nimbo uses the AWS CLI and SDK to automate the many steps of running jobs on AWS.

Some examples of this include: launching an instance (on-demand or spot), setting up your environment, syncing your code, pulling your datasets from S3, running the job, and, when the job is done, saving the results and logs back to S3 and gracefully shutting down the instance.
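To make that lifecycle concrete, here is a minimal Python sketch of the sequence of AWS CLI calls such an automation performs. This is not Nimbo's actual code; the instance type, bucket name, and instance ID below are placeholder assumptions, though the CLI commands and flags themselves are real.

```python
# Rough sketch of the job lifecycle a Nimbo-style tool automates.
# NOT Nimbo's implementation; instance type, bucket, and <instance-id>
# are illustrative placeholders.

def plan_job(instance_type="p3.2xlarge", bucket="s3://my-bucket", spot=True):
    """Return the ordered AWS CLI invocations for one training run."""
    launch = ["aws", "ec2", "run-instances",
              "--instance-type", instance_type, "--count", "1"]
    if spot:
        # Spot instances are much cheaper than on-demand,
        # at the cost of possible interruption.
        launch += ["--instance-market-options", "MarketType=spot"]
    return [
        launch,                                                  # 1. launch instance
        ["aws", "s3", "sync", ".", f"{bucket}/code"],            # 2. push your code
        ["aws", "s3", "sync", f"{bucket}/datasets", "data/"],    # 3. pull datasets
        # 4. ...run the training job on the instance (e.g. over SSH)...
        ["aws", "s3", "sync", "results/", f"{bucket}/results"],  # 5. save results/logs
        ["aws", "ec2", "terminate-instances",
         "--instance-ids", "<instance-id>"],                     # 6. shut down
    ]
```

Each step could be handed to `subprocess.run`; the value a tool like Nimbo adds is wiring these together with environment setup and graceful shutdown so you never babysit the instance yourself.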

You can use a single command (nimbo pull results) to get the results onto your computer. One of the most annoying parts of working with AWS is IAM management and permissions. Miguel and Juozas decided to automate that too, because no one should have to suffer through that unwillingly.

Looking forward: The guys plan to add Docker support, one-command deployments, and GCP support. Who knows, maybe they will even chat with Pavlos and the Sagify folks, as it seems they are trying to address some of the same problems.

Past Meetup
We Are Live
Coding In Public

If you are under the impression that coding in front of people has to feel like a live interview coding test, Felipe is here to tell you it doesn't.

I was fascinated by the community and interest around live coding. Felipe, who has been at it for quite some time now, told us all about the fears and insecurities he had when he was just starting out on Twitch. He then relayed why it was nothing like what he imagined, and what has kept him coming back to stream multiple times every week.

The audience watching wants to go on a journey with you. They want to see you have to Google things because you don't know them off the top of your head. They want to see how you troubleshoot and overcome obstacles.

I realized this is a brilliant way of getting a peek into someone else's mind and learning new problem-solving skills.

Full conversation on YouTube and in podcast land.
Coffee Session
An MLOps Success Story
People, Process, Technology at Quby

This week, Demetrios and I spoke to Stephen Galsworthy, who was most recently the head of Quby, a smart energy tech company. This was an uplifting episode, because we didn't just hear why MLOps is hard; we heard what the benefits are when a company can get it right!

Listen to the podcast for the full story, but here's a little taste of why the episode was so interesting. Quby was able to bake machine learning algorithms into their products through investments in process and platform that paid off over several years. That success helped play a part in their recent exit to a Dutch utility company. Stephen shared some interesting and useful stories about the ideas Quby applied to ultimately make MLOps successful.

1. Invest in people: One of the remarkable things Quby did to make their MLOps work a success was to set up an internal "data university" for all the different parts of the organization. They recognized that every part of the business needed to understand ML in order to be successful, and they found a creative way to build the needed maturity.

2. Embed, instead of create, ML: As ML professionals, our first instinct can often be to look at a new problem and suggest a solution driven by the awesome power of machine learning. However, it can often be simpler and more impactful to infuse machine learning into an existing solution. Doing this avoids the intensive process needed to set up a wholly ML-driven solution. Quby applied this rationale intensively and saw the results in their smart thermostat product.

3. Leverage technology platforms: Finally, Quby invested heavily in a unified data and machine learning architecture that allowed ML projects to be spun up increasingly quickly. Stephen provided great insight into how effective platform design and adoption can dramatically decrease time to release and increase an organization's happiness with ML systems.

Demetrios and I thoroughly enjoyed digging into what Quby built with Stephen, and we hope you enjoy it too! You can find the full conversation in video format or in podcast land.

Till next time,
Vishnu
Current Meetup
Island Vibes
You, me, and a coconut...

Your model is not an island. For success, data science requires a high level of technical collaboration with other parts of the data organization.

How can we collaborate with these other parts of the organization more effectively?

Chris Bergh is gracing us with his presence this week to talk about these very issues. Bergh will also touch on DataOps vs MLOps vs ModelOps and why the hell there are so many damn Ops these days.

Bio: Chris Bergh is the CEO and Head Chef at DataKitchen. Chris has more than 25 years of research, software engineering, data analytics, and executive management experience. At various points in his career, he has been a COO, CTO, VP, and Director of Engineering. Chris is a recognized expert on DataOps. He is the co-author of the “DataOps Cookbook” and the “DataOps Manifesto,” and a speaker on DataOps at many industry conferences.

See you on Wednesday aka tomorrow at 5pm UK / 9am California. Click the button below to jump into the event, or subscribe to our public Google Calendar.
Sponsored
Do You MPM?
Optimizing MLOps with Model Performance Management

We’ve seen the rise of MLOps in an effort to enable IT teams and data science teams to collaborate and accelerate ML model development and deployment. Out of sudden necessity, many companies opted for a traditional APM framework when a dedicated ML framework would be better suited.

In this paper, Fiddler lays out the unique nature of machine learning, its challenges, and how to optimize MLOps with a disciplined Model Performance Management (MPM) framework. Here’s what you will find:

  • A snapshot of the ML lifecycle
  • The 8 unique challenges for ML models
  • A summary of how MPM solves these challenges

Download the “Optimizing MLOps with ML Model Performance Management” paper to read more.


Fiddler’s mission is to allow businesses of all sizes to unlock the AI black box and deliver transparent AI experiences to end-users. They enable businesses to build, deploy, and maintain trustworthy AI solutions. Fiddler’s next-generation ML Model Performance Management platform empowers data science and AI/ML teams to validate, monitor, explain, and analyze their AI solutions to accelerate AI adoption, meet regulatory compliance, and build trust with end-users. The platform provides complete visibility and understanding of AI solutions to customers. Fiddler works with pioneering Fortune 500 companies as well as emerging tech companies.
Best of Slack
Jobs
See you in Slack, YouTube, and podcast land. Oh yeah, and we are also on Twitter if you like chirping birds.


