
Leading AI/ML Teams

with Craig Martell, Head of LyftML @ Lyft

September 20, 2020

SPEAKER

Craig Martell - Head of LyftML @ Lyft

Craig Martell is Head of Lyft Machine Learning. He’s also an adjunct professor of Machine Learning for Northeastern University’s Align program.

“If I had to give one piece of advice about starting AI in your company, one of the first people I would hire is a really great data scientist, even if they can't code. Just so they're the one who's going to start training you and helping you think about how to gather data, how the modeling is going to work, what you're going to need, whether that feature that you want to build is even modelable in the first place. And that scientific knowledge has to permeate the entirety of your org.”

- Craig Martell

Prior to joining Lyft, he was Head of Machine Intelligence @ Dropbox, and led a number of AI teams and initiatives at LinkedIn, including the development of the LinkedIn AI Academy. Before LinkedIn, Craig was a tenured computer science professor at the Naval Postgraduate School specializing in natural-language processing (NLP). He has a Ph.D. in Computer Science from the University of Pennsylvania and is the co-author of the MIT Press book Great Principles of Computing.

Join us for the 2020 ELC Summit on 10/15!

We’ve transformed the 2020 ELC Summit into an immersive multi-day virtual experience PACKED with celebration and inspiration! We’ll have 100+ incredible speakers, well-rounded curated topics, hands-on practice through workshops plus other awesome immersive experiences. And you can meaningfully connect with tons of other eng leaders through our own custom-built platform!

You’ll connect with great people. You’ll walk away a better leader. And if you’re not careful… you might have some fun with us along the way :)

(early bird tickets available until 9/3)

Check it out HERE


Show Notes

  • An overview of the machine learning lifecycle (2:49)
  • The most expensive and time-consuming aspect of the machine learning lifecycle (6:07)
  • The key skills of a machine learning team (7:21)
  • How do you build an AI/ML Team and what are the different models? (8:41)
  • What to do if you’re an engineering manager with no AI/ML skills or experience (15:19)
  • How deep does your understanding of AI/ML have to be in order to lead effectively? (18:48)
  • How do you estimate project timelines for AI/ML teams? (19:15)
  • What are the biggest mistakes engineering leaders make managing AI/ML teams? (20:52)
  • How do you manage expectations in an organization that’s in the early days of AI/ML development? (21:33)
  • What are sources of technical debt unique to AI/ML systems? (22:13)
  • How do machine learning teams interface with product teams? (23:55)
  • AI/ML resources for executive engineering leaders (25:44)
  • When’s the right time to invest in AI/ML? (26:10)
  • Can you apply the Pareto Principle (80/20 rule) to AI/ML development? (26:40)
  • Takeaways (28:12)

Transcript

An overview of the machine learning lifecycle

Craig Martell: So what I wanted to talk about today is everything you wanted to know about machine learning management, but possibly didn't want to hear.

I'm gonna say, essentially: here's what a good structure for a machine learning team should look like, here are the components you need, maybe some suggestions for how you could fill those components, and then, if you actually want to manage that team, what you can do to prepare yourself to interact with my tribe.

I want to first start by talking about the machine learning lifecycle. It's very different from standard software engineering.

And I think the first thing you need to learn, if you're going to manage a machine learning team, is how machine learning gets built and what that lifecycle looks like. Its cycle is much longer, so it's a challenge to actually sync up with a two-week sprint, or even a six-week sprint. Or even quarterly sometimes, which my boss doesn't like me saying...

We're gonna talk about what skills make up machine learning and how to make sure that you have them on your team. I already said all this: how to potentially scale ML to all your teams, and what to do if you don't have an ML background but you're going to be managing one of these teams.

So the first thing you have to do is gather labeled data. Then you extract features from that labeled data; I'll talk about that in a second. You then have to decide what algorithm you want, and you take the labeled data and the algorithm and put them into this box called machine learning.

And what comes out of that box is a function. And that function maps features to a class. Who doesn't know what I mean by the word "features"? Features are abstractions of the data that you believe are predictive.

Okay. So for example, if we were going to do gender prediction just by using video recognition, what's a good feature for gender prediction?

Well, height might be a good feature for gender. Voice frequency, if we had that. Hair length. These are all good features for gender prediction. Are they a hundred percent accurate? No, not at all. So it's a function that maps those features to the likelihood of a class. That's why we call it a classification task: we are classifying things.

So some possible classification tasks are: will you click on this job or not? Or, when I search for results in Dropbox, is this your file? Did you click on it or not? Is it a good file for you or not? Those would be the two classes; that would be a binary classification. And the features might be the title, the last time you looked at that file, the content of the file relative to other files you've recently opened. These are the features. So we start with labeled data, we use an algorithm, and we generate a function which maps features to classes.
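
To make that concrete, here is a minimal sketch of the "box called machine learning" he describes, using scikit-learn. The file-relevance features and numbers are made up for illustration; they are not Dropbox's actual features.

```python
# A minimal sketch: labeled data plus an algorithm go in, a function
# from features to a class comes out. All numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a feature vector for one (user, file) pair:
# [title match score, days since last opened, content similarity].
X = np.array([
    [0.9, 1.0, 0.8],   # strong title match, opened yesterday
    [0.1, 30.0, 0.2],  # weak match, untouched for a month
    [0.7, 2.0, 0.9],
    [0.2, 60.0, 0.1],
])
y = np.array([1, 0, 1, 0])  # label: did the user click the file?

# Pick an algorithm; what comes out of training is the learned function.
model = LogisticRegression().fit(X, y)

# That function maps features to the likelihood of each class.
print(model.predict_proba([[0.8, 3.0, 0.7]]))  # e.g. [[0.03 0.97]]
```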

We then do an offline evaluation: how good are we doing, given a gold dataset that we consider ground truth?

And then we iterate. So we evaluate offline and we iterate. We evaluate offline and we iterate. You can see why our lifecycle is very different from software engineering's. We evaluate offline, then we iterate, and then we ship - boom! But we're not done, because then we have to evaluate online and iterate, and evaluate online and iterate, and evaluate online and iterate.

And "iterate" could mean choosing a new algorithm, choosing new features, or just gathering more data.
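
Here is a hedged sketch of that offline evaluate-then-iterate loop, where "iterate" means trying a different algorithm against a held-out gold set. Synthetic data stands in for real labeled data.

```python
# A sketch of the offline loop: train, evaluate against a gold set,
# iterate. The held-out split plays the role of the "gold" dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_gold, y_train, y_gold = train_test_split(
    X, y, test_size=0.2, random_state=0)

# "Iterate" here means trying a new algorithm; in practice it can also
# mean new features or simply gathering more data.
for name, model in {
    "logistic": LogisticRegression(),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_gold, model.predict(X_gold)))
```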

The most expensive and time-consuming aspect of the machine learning lifecycle

What do you think the most expensive and time-consuming aspect of this lifecycle is?

Data cleaning? I don't say it that way here, but okay, that's exactly right: gathering features from labeled data.

So I want you to pay attention to this as it's part of the rest of the story...

Gathering features from labeled data is a fairly manual task. It's a very scientific task, because you have to gather the data in a way that's fairly sampling the population for the task at hand. You have to sample it fairly. You have to figure out which features you actually need. And sometimes those features are very large vectors; sometimes they're projections into smaller spaces. The buzzword for that now is "embeddings." So there are lots of features that you have to build, but all of them depend specifically on your data and your problem.
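
A small sketch of two of those ideas, with synthetic data standing in for real labeled data: stratified sampling keeps the sample fair to the class balance, and a PCA projection stands in for the "embeddings" he mentions.

```python
# Fair sampling plus projection into a smaller space (illustrative only).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# 2,000 examples, 200 raw features, a rare positive class (10%).
X, y = make_classification(n_samples=2000, n_features=200,
                           weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the rare class at the same rate in train and test,
# so the model isn't built from a skewed sample.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Project the 200-dimensional vectors into a smaller 16-dimensional space.
X_train_small = PCA(n_components=16).fit_transform(X_train)
print(X_train_small.shape)  # (1600, 16)
```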

And you need software engineers, and you need people with great scientific thinking, to do that. But it's also the most expensive and time-consuming part. I thought most people would say building the algorithm would be the most expensive, because you need real mathematical geniuses to do that. It's actually not the most expensive, right?

For some problems it might be, for level five autonomy it might be. But I still think data's probably the more important one there too.

The key skills of a machine learning team

Okay. So what skills make up a machine learning team? What do you need on your team in order to do machine learning?

You need math. You need a set of people who have really robust mathematical skills. And those skills are essentially probability and statistics... I know they're not the same, but I collapse them... linear algebra, and calculus.

If you come to a team meeting on my team, most of the conversations contain the words "vector," "projection," "gradient"... this is just how we talk. You need people with strong mathematical backgrounds.

You also need people with strong empirical science backgrounds. Why do you need people with strong empirical science backgrounds? Because of what I said before: to gather that data, you have to know how to sample. You have to know how to sample fairly.

You have to make sure that you're not sampling in a skewed way, so that when you build your model it doesn't apply incorrectly to the deployed situation. Okay. So you need to be able to sample.

You need to be able to evaluate statistically: not just statistical significance, but things like statistical power. So you need people who are robust scientific thinkers. The way I say it is they have to be really good at hypothesis generation and hypothesis testing. I know, it sounds like we're just engineers, but really, if you need to make this work, you have to have scientific thinking on your team. And you need software engineers, but you already know that.
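
For a concrete taste of the statistical power he mentions, here is a sketch using statsmodels' power analysis. The effect size and thresholds are illustrative choices, not recommendations.

```python
# Power analysis: before an experiment, ask how many samples you need
# to detect an effect, not just whether a result clears a significance
# threshold afterwards.
from statsmodels.stats.power import tt_ind_solve_power

# Samples per group needed to detect a small effect (Cohen's d = 0.2)
# at alpha = 0.05 with 80% power.
n = tt_ind_solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(n)  # roughly 393 samples per group
```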

How do you build an AI/ML team and what are the different models?

Okay. So how might you get this combination of skills? Well, you could do the classic thing and try your best to hire a bunch of PhDs in machine learning. How many people think that's going to be robustly successful?

It's very hard because it takes a really long time to produce these people and everybody wants them.

So that's one way to do it, and that's the classic model. It's very expensive and very time-consuming, and we need to move beyond it. So part of what we spent a lot of time thinking about at Dropbox is how we can move beyond that very tightly constrained model.

Well, you can combine these PhDs with people who have strong bachelor's and master's degrees in machine learning. Now, note what I said: strong bachelor's or master's in machine learning. So I'm moving a little bit away from the classic model in that story. But those bachelor's and master's folks usually can code really well, and they're usually good enough at the math and good enough at the science that the senior people can mentor them.

You could do what other teams have tried... in my opinion, it's a mistake... which is to do a division of labor. You could hire people who know math and science and let them build models. They're usually called data scientists. And then you can let the engineers ship those models. That seems like an awesome way to get started, because the data scientists don't have to have coding backgrounds, and there are lots of software engineers who actually want to work in this field, so they seem really eager to do it.

The fundamental problem there: it can work, but you've got to be very careful, because it often creates a caste system. The modelers are seen as really smart, doing cool, sexy things, and the engineers are just shipping models. So they came to learn about machine learning, but they end up just shipping a blob of data called a model, and they know nothing about it.

So if you wanted to build a team that did this division of labor, be very cautious. I've not really seen it work, but Madhura promises to tell me I'm wrong about almost everything I'm saying in the panel discussion, so we'll see...

And finally, to help fill out this team makeup, you can build or buy a platform.

Okay. So what do I mean by "buy a platform"? We're building something called a machine learning platform. Databricks will sell you a machine learning platform. There are a lot of companies going out of business selling machine learning platforms... or you could build your own machine learning platform using third-party APIs from Google, Microsoft, or Amazon.

And the beauty of doing that is you're outsourcing the math. What do I mean by that? Other people are building those algorithms. But what did I say the most expensive part is? Yeah, data. Somehow getting the data. So you can't outsource the data: getting the data, sampling the data, figuring out what features are going to work. That's still an extremely important part; the algorithm is just one small piece of it. So it's great to build a system that wraps algorithms in an API so that anybody can use them, but you still need the scientific thinking.
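
One way to picture "outsourcing the math but not the data" is a thin wrapper where the algorithm is pluggable but feature extraction stays in-house. Everything below is hypothetical scaffolding, not any vendor's actual API.

```python
# Hypothetical platform boundary: outsourced math, in-house features.
from typing import Protocol


class Model(Protocol):
    """Whatever the platform or vendor gives you: features in, score out."""
    def predict(self, features: list[float]) -> float: ...


def extract_features(raw_record: dict) -> list[float]:
    # This part cannot be outsourced: it encodes what *your* data means.
    # Hypothetical features for a file-relevance task.
    return [
        raw_record.get("title_match", 0.0),
        raw_record.get("days_since_opened", 0.0),
    ]


def score(record: dict, model: Model) -> float:
    # The platform boundary: in-house features, outsourced math.
    return model.predict(extract_features(record))


class ConstantModel:
    """Stand-in for a vendor model; real math would live here."""
    def predict(self, features: list[float]) -> float:
        return 0.5


print(score({"title_match": 0.9}, ConstantModel()))  # 0.5
```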

It's also much easier to ship, right? If you are going to use that split model, maybe you can hire data scientists and you don't need the software engineers, because you could build a platform that will help those data scientists ship. So building a platform has a lot of great benefits.

But as I said, there's no outsourcing the science. So if I had to give one piece of advice about starting AI in your company, one of the first people I would hire is a really great data scientist, even if they can't code. Just so they're the one who's going to start training you and helping you think about how to gather data, how the modeling is going to work, what you're going to need, whether that feature that you want to build is even modelable in the first place. And that scientific knowledge has to permeate the entirety of your org.

If you want to build AI products... I promised I would stop using that word... I meant to say machine learning... if you want to build machine learning products, then you really need people who have strong scientific thinking. You need the math, you need the coding, but those two are more outsourceable. The strong scientific thinking is not, because that's your data, and nope... there's no API that you can throw that data at and it'll magically understand it. If there were, I would sell it.

You could start an AI Academy. One thing that was super successful when I was at LinkedIn, and at other companies for sure, is taking the few PhDs you have who really strongly know this stuff and miss being in the classroom, and giving them the opportunity to deliver that scientific knowledge to generalist software engineers who actually want to learn it.

So the AI academies are very successful. They have to be backed up with really strong science and math folks, and you have to give them really strong tools for shipping. But if you can do that, training people in-house... or hiring Galvanize (you're welcome, Galvanize) to come train your people; they'll happily do that... works really well.

What to do if you're an engineering manager with no AI/ML skills or experience

Now... what if you, as the manager, don't have any of these skills?

There are going to be many managers who are handed either ML teams or ML tasks... what should you do if you're one of those people? If you're a manager, a leader, who's been handed these ML tasks? There's no easy answer here.

The short answer is you've got to learn it. You're not going to be able to manage that team effectively if you don't learn machine learning. Now, you don't have to learn it as deeply as the experts.

But would any of you manage a software engineering team if you didn't know how to code? No. You have to be able to understand what they're doing, what their work life is like, what choices they have to make. So if you don't know machine learning but you're managing a machine learning team, then I strongly recommend you get the skills.

Here are some resources that I find really useful. If you've not taken an online course from Andrew Ng... and I say this as an ex-professor who believed he was a very good teacher... do yourself a favor and take one. It is the best set of lectures I have ever seen about machine learning. Be prepared to brush up on your linear algebra, your probability, and your calculus.

Fast.ai is something that my team likes a lot. It works really well and takes a somewhat more hands-on approach. Be prepared to brush up on your probability and statistics, your linear algebra, and your calculus.

Now, do you have to know it well enough to be able to do it? No, but you have to know it well enough to understand what the phrase "gradient descent" means. You absolutely have to understand that. If you can't, you're not going to be able to talk to your team in any effective way... you have to know more than that, of course... and you won't be able to represent them in any planning or prioritization meeting.
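
Since he singles out that phrase, here is a minimal illustration of what gradient descent means: repeatedly step opposite the gradient to minimize a loss. The toy loss is ours, not from the talk; real models do the same thing over millions of parameters.

```python
# Gradient descent on a toy loss f(w) = (w - 3)^2,
# whose gradient is f'(w) = 2 * (w - 3).
w = 0.0             # initial guess
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)          # slope of the loss at w
    w -= learning_rate * gradient   # step downhill

print(w)  # converges toward 3.0, the minimum of the loss
```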

I recommend that if you're going to be managing a machine learning team, take a real course. Go to San Jose State, San Francisco State, Stanford, Berkeley; take an actual course. The grade is irrelevant. Take it pass/fail, who cares? What matters is that you struggle through going from data, to modeling, to evaluation, to modeling, to evaluation, to modeling, to evaluation, to shipping.

Another really great way is to do a Kaggle competition, depending on how self-motivated you are. These are machine learning competitions, and many people put it on their resume if they won one, because it helps us hire them. But there are also tools there to help you learn how to do things.

And finally, books. I don't think there are any good books that are for a popular audience but teach you machine learning deeply enough. If someone knows one, please tell me. But this book, "Hands-On Machine Learning with Scikit-Learn" and something else you can discern... is okay. But it's only okay if you're willing to struggle through the math; if you're not willing to struggle through the math, then it's going to be meaningless to you. So if you're comfortable struggling through the math on your own, then it's actually a great resource.

So the takeaway is you need three sets of skills. I know I'm running into question time; I'll be really fast. You need three sets of skills: math, science, and coding. And if you want to manage a team that does machine learning, you need to know those three... math, science, and coding... well enough to understand the choices and trade-offs your team has to make.

Q&A

Questions? I'm all done.

Patrick: We're going to jump into the Q&A segment from Craig's talk.

How deep does your understanding of AI/ML have to be in order to lead effectively?

Patrick: First question: how deep do you have to get into AI/ML in order to lead effectively?

Craig Martell: Take a class. You really have to. I've seen people try... you can hope that you can do it, you could say, "I'm an awesome manager, I've been managing teams for a really long time." I strongly believe it doesn't work, in the same way it wouldn't work if you took somebody who didn't know software engineering and had them manage a software engineering team. It's just not going to work.

How do you estimate project timelines for AI/ML teams?

Patrick: How do you estimate project timelines when AI/ML is involved?

Craig Martell: I have a joking answer, which is: whatever it is, multiply it by five. But here's the real answer...

If you can get a well-spec'd input/output... this is the input that you're going to give the machine learning team, and this is the output that we expect... and if you agree that's actually a function that's creatable (because lots of times that spec will be a function that you don't believe is creatable), then here's a rough rule of thumb.

It's going to take an entire quarter to gather data... an entire quarter, sometimes more... to gather the data you need to train the algorithm, to label that data, and to make sure that you have separable classes that are learnable. That's where the science comes in. Gathering the data is going to take about a quarter.

Building the 0.9 version that you're willing to ship is going to take about a quarter. My boss really doesn't like that one. And here's the best part: it's going to suck. The first version you ship is going to suck. It's going to be a teeny bit better than random crap, but you have to ship it, because you need actual online data for the next quarter to get it to be good enough that you're proud of it.

So the rule of thumb is three quarters before it's actually a useful product. Or multiply by five... not three quarters times five, not 15 quarters... but it's not two sprints, is my point.

I'm gonna put it this way: if you're starting from nothing, it's going to be about three quarters. So it's really going to be about a year before you're unbelievably proud of it and you think it's in a stable state.

What are the biggest mistakes engineering leaders make managing AI/ML teams?

Patrick: What are the biggest mistakes engineering leaders make in managing AI/ML teams?

Craig Martell: I think that I captured that one. It's not understanding what they do.

The biggest mistake they make is that they go to prioritization meetings with their peers or their bosses and make promises that either severely undervalue what the team can do or severely overestimate what the team can deliver.

So you have to really understand what actual things are buildable and what are not. It's not immediately intuitive. Some things that seem really easy to you are actually absurdly hard. And some things that seem really hard are actually pretty easy for us. And so you really have to dive in with your team to understand their capability.

How do you manage expectations in an organization that's in the early days of AI/ML development?

Patrick: How do you manage expectations in an org that's still in the early days of AI/ML development?

Craig Martell: Be straight-up blunt and tell them the three-quarter argument. Just be straight-up blunt and say it's not going to be faster than that. And when they tell you to ship it faster than that, tell them no, because you're not going to ship it faster than that. There are a few exceptions... I don't want to pretend they don't exist... but you're not going to ship it faster than three quarters. And if you do, more power to you. Great. But just set expectations early and often that it's going to take a long time before anything of any value gets out the door. And it's going to suck. Three quarters to suck. Let's be clear about that.

No... two quarters to suck, three quarters to be okay.

What are sources of technical debt unique to AI/ML systems?

Patrick: What are some of the sources of technical debt unique to AI/ML systems?

Craig Martell: That is an awesome question!

So, you know how companies have this "not built here" problem, where they want to build it themselves and they don't want to go buy some other solution? Every single machine learning PhD wants to build their own shit. Every one of them. And they don't want to platformize at all! It's like pulling teeth to get them to want to be part of a platform.

So the technical debt is: you're going to let them do that, because you don't want to slow down that already slow three-quarter shipping timetable. You're going to let them write their own data ingestion pipeline. You're going to let them write their own modeling, whether they write the model themselves or import it from somebody they went to grad school with.

They're going to do all of that themselves. They're going to put it into a big table, which can be part of your normal infrastructure, and at runtime it's gonna be some table lookup. That part's fine; the table lookup for machine learning is going to be fine. Some things aren't going to be that, but a lot of them are going to be table lookups. But you're going to be stuck with that person quitting and having no idea how that flow works.

And that's just going to be a fact that you need to mitigate early. And the way you mitigate it isn't to force them to use some platform that they detest. The way you mitigate it is making sure they document the heck out of it and having someone else on that project with them so that they know how to run it.

And then, when you've gotten some things out the door together with that team, you can get buy-in to build the right platform. But that's the biggest technical debt that's specific to these teams. You may all disagree with me... the panel may strongly disagree with me, and I'm more than open to that.

How do machine learning teams interface with product teams?

Patrick: How does a machine learning team interact with product teams?

Craig Martell: One thing you want to do in this area is think about the machine learning team as having two partners: product engineering and product management. And the way they should interact with those two partners is that there should be a very clear API between product engineering and the ML engineers. That API will take in the input that was agreed upon in the spec, and it will spit out the results.

Okay. That part's clear, but there must be a firm contract. And that contract is: nobody can rearrange the order of things after they leave the API. Why? Any intuition as to why? No rearranging my results.

What good is machine learning relevance if I give you an ordered list of results... say, search results... and you decide to reorder them? That's fine, but then you can fire me, because I'm not doing anything. If you're going to take my results and reorder them, just randomly reorder stuff; that's fine. But we order things according to a metric. And that's number two: your API with the product team is a metric that you believe you can move and product believes moves their goals.

So we can't move revenue directly, but we can move engagement on certain things. And so we build the results of that API to that metric, and we iterate back and forth: we change results, we check the metric; we change results, we check the metric. So that doesn't quite answer the question.

I think I answered that earlier, but the way you interact with product development is that way. You have a clear spec on an API, a very clear metric with your product partners, and clear agreements on whose job it is to figure out whether that metric is moving in the right direction.
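
Here is a hedged sketch of what such a contract might look like in code. The names and the toy scoring function are hypothetical, but it captures the two rules: spec'd input/output, and no reordering after results leave the API.

```python
# Hypothetical ML/product contract: ordered results out, order preserved.
from dataclasses import dataclass


@dataclass(frozen=True)
class RankedResult:
    item_id: str
    score: float  # relevance the model assigned, tied to the agreed metric


def rank(query: str, candidates: list[str]) -> list[RankedResult]:
    """ML side of the API: agreed-upon input in, ordered results out."""
    # Stand-in scoring (character overlap); a real model would go here.
    scored = [RankedResult(c, float(len(set(query) & set(c))))
              for c in candidates]
    return sorted(scored, key=lambda r: r.score, reverse=True)


def render(results: list[RankedResult]) -> list[str]:
    """Product side: display the results, preserving the model's order."""
    return [r.item_id for r in results]


print(render(rank("report", ["cat.png", "q3_report.pdf"])))
# ['q3_report.pdf', 'cat.png']
```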

AI/ML resources for executive engineering leaders

Patrick: Do you have any good resources to learn AI machine learning specific for leaders?

Craig Martell: So that's a subtly different question. If you mean execs, Andrew Ng has... I forget what it's called... AI For Everyone. That's fine for execs. It will allow the execs to understand why you're making the decisions you're making.

But if you're the manager of the team, that's nowhere near sufficient. If you're the manager of the team, you have to learn the fundamentals of machine learning.

When's the right time to invest in AI/ML?

Patrick: When is the right time to invest in AI/ML?

Craig Martell: So let me see what you mean by that. I think you mean: where in the lifecycle of your product, or where in the lifecycle of your company, should you invest in AI? And that depends on what your company really is. If you envision your company as delivering machine learning results, then you're already too late: you should have the machine learning people in the room at the beginning so they can help you figure out the right data, the right products, what's doable, what's not doable. If you're not doing machine learning at the core, then invest whenever it is that you get to it.

Can you apply the Pareto Principle (80/20 rule) to AI/ML development?

Patrick: Can you apply the Pareto principle, or the 80/20 rule, to AI/ML development?

Craig Martell: I'm not going to answer that question, but I'll answer a slightly different question, which will help. You have to decide whether you care about precision or recall. Precision is: "If I say it's true, it's likely to be true; this is relevant to you." Recall is: "I've given you all the relevant things." Okay.

Often I can give you all the relevant things by showing you a bunch of crap, too. That's high recall, low precision.

High precision: I might give you five things, but there are 2,000 that I didn't show you. That's high precision, low recall. So you have to decide whether you want precision or recall.
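
To make the trade-off concrete, here is a small sketch computing both metrics with scikit-learn on toy labels (the numbers are invented for illustration).

```python
# Precision: of what I showed you, how much was relevant.
# Recall: of everything relevant, how much did I show you.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 4 relevant items out of 10
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]  # the model flagged 3 items

# 2 of the 3 flagged items are relevant: precision = 2/3.
print(precision_score(y_true, y_pred))  # 0.666...
# 2 of the 4 relevant items were found: recall = 2/4.
print(recall_score(y_true, y_pred))     # 0.5
```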

If you're at 75-80% for precision, that's shippable; that's okay. Your customers are probably going to be happy with that. What that means, though, is that 20 or 25 out of 100 things won't be useful to them. So if you're okay with that product experience, where 75-80% of the things are useful to your customers and the rest are not, that's fine.

Usually it's about 80%. People can handle "20% of my results are not good, but 80% of them are." So that's good. And when you reach a plateau, you have to decide the value versus the cost... that is, the value of what you're going to do, and whether you want to invest the cost.

Often, getting over a plateau is just one person going off to explore for a quarter to see what happens. Other times, getting over a plateau means doing significantly more robust, harder things, which means gathering more data, building more infrastructure, and hiring more people. So you have to decide that for yourself. But if it's at 80%, ship it and move on to another problem.
