ELC

AI/ML Start-up Trends

with Anna Patterson

February 1, 2022

ABOUT ANNA PATTERSON

Anna is the Founder & Managing Partner at Gradient Ventures, overseeing the fund’s global activities.

Anna is an accomplished leader in the field of artificial intelligence and a serial entrepreneur with a long history at Google. Prior to starting Gradient Ventures, Anna was Google’s Vice President of Engineering in AI, integrating AI into products across Google. She also serves on the Board of Directors at Square, Inc.

Early in her career at Google, she helped launch and scale Android to over a billion phones, launched Google Play, and led the search, infrastructure, and recommendations horizontals. Anna was the principal architect and inventor of TeraGoogle, Google’s search serving system, which increased the index size over 10X at the time of launch. She also helped lead search ranking efforts through Google’s IPO to determine the top ten search results.

Anna co-founded Cuil, a clustering-based search engine, and wrote Recall.archive.org, the first keyword-based search engine and the largest index of the Internet Archive corpus. She wrote “Why writing your own search engine is hard” in the ACM Queue detailing this experience. Prior to that, Anna co-founded and co-authored a search engine Xift.

Recognized for her technical contributions as well as her commitment to championing women in tech, Anna was awarded the Technical Leadership ABIE Award in 2016. Anna received her PhD in Computer Science from the University of Illinois Urbana-Champaign. She then became a Research Scientist in Artificial Intelligence at Stanford University, where she worked with Carolyn Talcott and one of the founders of AI, John McCarthy. For her undergrad, she double-majored in Electrical Engineering and Computer Science at Washington University in St. Louis.

Anna resides in the Bay Area, where she wrangles her 4 kids, 2 horses and her Irish husband.

When you set out your plan, you can't miss all of your sales targets and make all of your hiring targets. Those kinds of things have to be in line.

What people do is they just say, "Here's my plan. I'm going to march towards the plan." And it was super optimistic on the sales front and on the revenue front, and then maybe more realistic and achievable on the hiring front. And so they still kind of march ahead with the plan.

I think that you need to constantly reevaluate where you are, what direction you're going in, and whether the growth is appropriate, or even whether the plan was appropriate...

<cite>- Anna Patterson</cite>


Check out our friends and sponsor, Jellyfish. Jellyfish helps you align engineering work with business priorities and enables you to make better strategic decisions.

Learn more at Jellyfish.co/elc


Are you an eng leader interested in taking the leap to start your own company? Check out our brand new podcast series, Engineering Founders - Where we explore the transition from eng leader to founder!

Subscribe on your preferred podcast platform HERE!


SHOW NOTES:

  • Anna’s background scaling complex systems (4:00)
  • Emerging trends and opportunities in AI/ML (9:19)
  • The biggest fallacy in AI/ML right now (15:03)
  • The pendulum swing between model-first and data-first (16:14)
  • What’s after deep learning? (18:14)
  • Machine learning and source code (20:06)
  • What will be the most valuable companies with ML as the core value proposition? (25:04)
  • How to spot emerging trends in the AI/ML space (27:43)
  • Typical mistakes AI/ML companies & founders make (31:16)
  • How product-market fit is different for AI/ML companies (34:30)
  • Differences in scaling between traditional software and AI/ML (35:20)
  • How to test and validate ideas in the early-stages of an AI/ML company (37:49)
  • Rapid-Fire Questions (39:38)

LINKS AND RESOURCES

  • Gradient Ventures (Website)
  • Streamlit.io (Website) - collaborative Python-based app-sharing platform
  • Building Your AI A-Team (Link) - Anna and Adrien Treuille’s talk from the ELC 2020 Summit discussing how managing an AI team is different from traditional engineering teams & how to think about the collaboration between AI and engineering when scaling

TRANSCRIPT

Patrick Gallagher: We're launching a brand new podcast series called Engineering Founders, where we explore the transition from engineering leader to founder and share the stories, pivotal moments, and critical perspective shifts to help engineering leaders take the leap to start their own company. And this is the first episode in that series!

We have some really fun conversations coming out with a couple of really great founders and co-founders. All of them, former engineering leaders. And we're going to be releasing episodes a couple of times a month. Check it out. It's called Engineering Founders.

For our first episode in the series, we had a conversation with Anna Patterson, Managing Partner at Gradient Ventures, discussing emerging trends in the AI and machine learning space and some of the unique dynamics of building early-stage AI/ML companies.

We cover how AI and machine learning systems have evolved. And Anna shares tons of fascinating emerging trends and opportunities.

Plus we get into how to spot emerging trends in the industry. Typical mistakes that AI and ML companies and founders make, how product market fit is different, how scaling is different versus traditional software companies and how to test and validate ideas in the early stages of your AI ML company.

Let me introduce you to Anna.

Anna Patterson is the Managing Partner at Gradient Ventures, overseeing the fund's global activities.

She's an accomplished AI leader and serial entrepreneur with a long history at Google. She was Google's VP of Engineering in AI, integrating AI into products across Google.

She's made massive contributions to the world of AI and search.

Among a few of them...

She helped launch and scale Android to over a billion phones, launched Google Play, and led the search, infrastructure, and recommendations horizontals.

Anna co-founded Cuil, a clustering-based search engine, and wrote Recall.archive.org, the first keyword-based search engine and the largest index of the Internet Archive corpus.

This episode's also special because we're joined by longtime listener Theo Gervet, Machine Learning Lead at Relyance AI, as a special guest co-host to give Jerry and me more of an insider's perspective and help us go more in-depth into the AI/ML trends. If you like it and want more guest co-hosts featured, reach out and let us know!

Enjoy our conversation, and first episode of Engineering Founders, with Anna Patterson.

Anna’s background scaling complex systems

Patrick Gallagher: We're so excited to have you here. First off, just want to say thank you and welcome to the Engineering Leadership podcast.

Anna Patterson: Thank you for having me. I'm excited.

Patrick Gallagher: I also want to give a special welcome to Theo joining us - Machine Learning Lead at Relyance AI. He's joining us as a guest co-host. He's a long-time listener of the podcast and a PhD candidate in machine learning. When Jerry and I get out of our depth, he's here to help us make sure that we're diving into the right thing. So Theo, welcome. Thank you for joining us as a guest co-host today.

Theo Gervet: Thank you - very honored to be here.

Patrick Gallagher: Perfect. There's a lot to get into, and there's a lot that makes this conversation really special. You have an incredibly unique perspective when it comes to trends in AI/ML for a couple reasons. Not only are you the Managing Partner at Gradient Ventures - and I know a big focus you all have there is on AI-first startups - but you've also built and scaled incredibly complex systems in search, AI, and mobile infrastructure at Google.

Patrick Gallagher: Can you tell us a little bit about some of your career and some of the hard moments or the fun moments and how the things that you've worked on have now evolved?

Anna Patterson: Sure. Everybody tells a founding story sort of like it's linear and neat. But early travails: I started a company right after graduate school, and we hit the dot-com crash. And so I hired my 12 best friends and then fired them. And so that was rough.

Patrick Gallagher: Oh my gosh..

Anna Patterson: -and then we moved into mobile. Then of course the telco crash was even worse than the dot-com crash. So after killing two industries, I found myself unemployed and was volunteering at the Internet Archive. And that's where I worked on Recall.archive.org. It was a phrase-based search engine over the Wayback infrastructure, and it was history-based, so you could see what search would look like back in time. You could search for Clinton from, like, 1996 and prior, and it would be all about Bill Clinton. And then later it became more about Hillary Clinton. So you could really see how search has changed over time.

I found that to be a really fun project and, like a lot of engineers, I got really into it. I couldn't stop, even though I was doing it as a volunteer. And then when it launched in 2003, it started getting a fair amount of traffic.

So back in the day, 5 million queries a day seemed amazing. I went from being a volunteer to people asking for the technology. And that's how I wound up at Google. I launched TeraGoogle on that infrastructure, which was a total rewrite, but when it launched, it was 50 times cheaper per search per document.

And when something like that happens, you will be asked to be in charge of a lot of other random projects, which I really enjoyed. And so that was my first stint at Google.

Then I started another company, Cuil, which was about different representations - clustering representations - trying to get deep into the web. I also launched Cpedia, which was like an automatic Wikipedia page on any topic. And then that whole team wound up at Android, and it was a completely wild ride. We were there from 3 million phones to a billion daily active, and it was incredibly rewarding: search bringing information to people, and mobile phones bringing information to people in remote places, plus a safety factor. People feel safer with a cell phone, but actually, statistically, they are safer with a cell phone.

And then after that, we were starting up an AI group at Google, and I was taking product pitches from various groups about how to make AI come to life in their products. Our thinking was, "Oh, who here has a PhD in AI?" There were a few of us at Google back then; now, of course, there are a lot.

Pretty soon I said, "Oh, we should talk to startups to make these ideas a reality." And the second I started talking to entrepreneurs again, I was like, I just have to do this for a job. So we started an early-stage seed fund, Gradient Ventures. We do seed investments in a lot of AI companies. So that's pretty exciting.

Patrick Gallagher: I believe it. We're here to talk about all of the wild trends that are going on here. So I can only imagine the excitement of all the folks that you get to talk with. Theo, since you know, we're talking a little bit about Anna's background and career. Are there any elements of Anna's story that really resonate with you or things that came up for you?

Theo Gervet: I find it fascinating that you've been attracted to startups from the get-go: you tried something, it failed, you went to Google, and then did it again a couple of times, and now you're back to startups at Gradient Ventures. I had somewhat of a similar path of doing the PhD and being super attracted to startups, but I was not as bold as you in trying it from the get-go. So I ended up joining something that already worked. But it definitely resonates that when you join a startup, you have to touch a bit of everything.

Anna Patterson: It was funny, I was on a panel... this was obviously pre-COVID. And I said, when I started my first company, I had $30 in the bank and I was heckled by somebody in the audience. They heckled me and said, "$30 in the bank? You should have joined a company!" And I said, "I think that's the definition of an entrepreneur!"

I figured whatever I was going to pay myself was more than what I made as a grad student.

Patrick Gallagher: So one quick connection between all four of us that I think is really interesting: we all have interesting higher-education ties. I know that you're an alum of Washington University in St. Louis; I used to work for Wash U. Jerry got his master's at the University of Illinois, so there's that connection. And Theo is in the middle of a PhD program in ML right now. So there's this really interesting kind of connection between all of us.

But I think the hunger of a grad student, I think translates to the hunger of an entrepreneur very directly, because there is a deep, deep hunger, I think with that.

Anna Patterson: Sometimes. Literally. Yeah.

Patrick Gallagher: Yeah, my fiance is in grad school right now. And so we are definitely on the grad school budget. So I totally understand that.

Anna Patterson: Yeah.

Patrick Gallagher: So we're here to talk about emerging trends, and I thank you for sharing the story of your background and all of the things that you've done and seen so far, because I think that helps set a really good context for our conversation now. And so if we could open up now, like...

Emerging Trends in AI/ML

Patrick Gallagher: What are some of the trends or the emerging opportunities in the AI/ML space that you're really excited about right now? What is going on in the world? And what are the things, that are really drawing your attention?

Anna Patterson: I think AI/ML is really stepping more to the forefront. When I look back at the trends that I've seen, we underestimated the amount of change that the web was going to bring to the world. And once again with mobile - I think we underestimated the change from mobile phones. And I think right now we're underestimating the change from AI.

Some of the trends we see are generative AI, which is very interesting. People are using it. In the news, you see all the things about creating fake videos, but there are also interesting applications, like creating datasets.

And so, you know, we see some startups saying, okay, maybe this health data is too sensitive. So we're going to ask, "What kind of properties do you want in this dataset?" and generate a fake dataset that has the same properties as the real dataset. So it allows for learning and also some anonymity. I think it's pretty amazing, the different kinds of applications you're seeing with AI - and you're seeing it grow up a bit, from an academic pastime into real shipping systems. Obviously, Google has been shipping huge AI systems and models for a long time.
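The synthetic-data idea Anna describes can be sketched in a few lines. This is a hypothetical illustration (the column meanings and numbers are invented, not from any company she mentions): fit simple statistics of a sensitive dataset, then sample a synthetic dataset that preserves those properties without containing any real records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real (sensitive) health data: 1000 patients, 3 measurements.
real = rng.normal(loc=[120.0, 80.0, 98.6], scale=[15.0, 10.0, 0.7], size=(1000, 3))

# The "properties" we want to preserve: mean vector and covariance matrix.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Synthetic records share those statistics but include no real patient rows.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print(np.allclose(synthetic.mean(axis=0), mu, atol=2.0))  # means match closely
```

Real systems use far richer generative models (GANs, diffusion models, differential privacy guarantees), but the principle is the same: learn the distribution, then sample from it.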

Patrick Gallagher: I think the point that you made about we're underestimating the change pretty significantly, I think is so true. I know one of the things that really fascinated me at Gradient is the advisory team that you all have built because I noticed like Astro Teller and Ray Kurzweil are two advisors there. And I've attended a couple of Singularity University programs.

And so I know the concept of humans having a really hard time predicting and understanding the actual implications of exponential technology is a true challenge that we all face. From your perspective, when you're thinking about some of the exponential trends or exponential growth of AI, are there certain areas of technology or I guess our society that you see AI having a bigger impact that people are underestimating?

Anna Patterson: One of the ways you read about in the news is dovetailing with the future of work. There are a number of tasks that AI is poised to help with - and interestingly, a lot of them are knowledge-worker tasks. In the old sci-fi stories, it was all about robots helping us, and robots are proving to be very difficult.

I joke around, "How do you stop a robot army? You close the door". Because a lot of robots can't open a doorknob. So, that's the sci-fi version.

But you know, there's AI writing text, there's AI analyzing and writing legal documents. And so some of this repetitive, but creative material is being done by AI. We've seen AI composers, and it's very interesting.

And so I think you're going to see augmenting of some of these deeper tasks. There's a real shortage of some of these skills, and I see AI helping kind of across the board, in places where you don't really consider it.

Patrick Gallagher: Just to relate one specific example - and Theo, I know you have so many follow-up questions here, so I want to make sure we dive into those. I was having a... spirited debate with my fiance a couple of nights ago about AI. But one thing that came up yesterday was a very interesting application. She's in the international development space.

She's working on a consulting project around negotiation skills, and the people she's working with said, "You can work with our data science department to essentially design research." So you don't have to do the research; you just design the question, and then you'll get quantitative things to back up your theories around conflict resolution. And she turns to me and goes, "Patrick, this is the future - for me."

And I was like, this is what we've been talking about and arguing about for the last couple of months: AI task augmentation for real-world applications. Before that, I'd never thought about how you'd pair that with an international development scope and how it could impact the things that people do.

And so I thought that was a really interesting recent example that popped up of that, which I thought was really exciting because her and I talk about that stuff a lot. Theo, I know you have your follow up questions - jump on in.

Theo Gervet: That's very interesting. One follow-up question about AI for the future of work. It made me think of a company called Cresta where they have this pretty deep idea, I think, of learning from the best at every skill to teach everyone else.

So they're starting with sales and contact centers because there's always a ton of data and a provable ROI. But I think it's a much broader idea that could be applied across all of society. Did you have other domains where you think we could learn from the best humans to teach others?

Anna Patterson: Yeah. I mean, even in art, you know. If you're a junior animator, I think you can learn through a machine from more senior animators: what is their speed? What is their brush stroke? How do they use these tools? Because a lot of those are digital now.

I mean, they're not actually coloring the film anymore. And so, you know, it's a whole toolset that I think an AI could teach you.

Patrick Gallagher: Anna, one question. So we're talking about some of the trends, and I know Theo has a couple of other specific trends he wants to jump in and ask you about. But I was wondering if we could flip it a little bit and talk about what's the biggest fallacy in AI/ML right now.

The biggest fallacy in AI/ML right now

Anna Patterson: I think as companies, small and large, want to try a machine learning model, their fallacy is that the data is just lying there. So they say, "I want to use my data. So I'm going to go in there and create insights!"

You actually have to know - not to try to get too technical - but you have to actually know what you want to predict. And then you have to go back and see if you have the data to predict that.

And, a lot of times you don't. So you might have treasure troves of data, but then when you come to actually design what it is that you want to predict, you're missing something. And so you have to actually go back and retool your software to save that piece of information so that it can become a feature in your predictions.

You know, the idea that you have all the data to make all the predictions you need, and it's just lying there waiting for you? I find a little bit funny, so I think that's one of the biggest fallacies.
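As a minimal sketch of the workflow Anna describes (all of the column and target names here are hypothetical, purely for illustration), you can make the prediction target explicit first and then check whether your logged data actually contains the signals you need:

```python
# What the software currently logs (a hypothetical event schema).
logged_columns = {"user_id", "timestamp", "page_views", "plan_type"}

# Decide what to predict FIRST, then list the signals believed to drive it.
target = "churned_next_month"
needed_features = {"page_views", "plan_type", "support_tickets", "last_login_gap"}

# Signals we believe matter but never saved: these require going back and
# retooling the software to log them before they can become features.
missing = needed_features - logged_columns
print(f"to predict {target!r}, start logging: {sorted(missing)}")
```

The treasure trove of data only helps if this set comes back empty; otherwise the gap has to be closed in the logging pipeline before any modeling starts.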

Theo Gervet: Oh, that's fascinating. I think that ties back into what you said before about the shift from people writing academic papers to shipping systems.

The pendulum swing between model-first and data-first

Theo Gervet: So what would you say are the trends in MLOps? Because one thing that ties into what you just said is that some people are talking about the shift from model-first to data-first.

Anna Patterson: Yeah, I think that pendulum is gonna swing back and forth. I think people wanted to start making models. Then they realized that we don't have the right data or we have to clean the data. So it was data prep, data cleaning, data warehousing.

And then on the other side, it's deploying models. It's model management, it's whether the models are drifting. There's a whole science around what happens once you've deployed a model: how do you know it's still working? Especially if there's some online learning component going on, your model could start misbehaving.

We've seen a number of those examples in the real world as well. I think the pendulum swings from people talking about data, to talking about models, to talking about monitoring. And obviously, they're all important - you usually can't make a model without good data. So I think all of them are worth studying.
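A toy version of the drift monitoring Anna mentions might look like the following. This is a simplified sketch (real systems use richer statistics such as population-stability indexes or KS tests); it just flags when a live feature's mean has moved far from its training-time mean:

```python
import numpy as np

# Simulated distributions: what a feature looked like at training time,
# versus what the deployed model is now seeing in production.
rng = np.random.default_rng(1)
train_scores = rng.normal(0.0, 1.0, size=5000)
live_scores = rng.normal(0.6, 1.0, size=5000)  # the world shifted by 0.6

def drifted(train, live, z_threshold=4.0):
    """Crude mean-shift test: flag drift if the live mean is many standard
    errors away from the training mean."""
    se = np.sqrt(train.var() / len(train) + live.var() / len(live))
    z = abs(live.mean() - train.mean()) / se
    return bool(z > z_threshold)

print(drifted(train_scores, live_scores))  # the 0.6 mean shift gets flagged
```

Once a check like this fires, the usual responses are retraining on fresh data or pausing any online learning component until the cause is understood.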

Theo Gervet: Because it seems like, at least in industry, I've seen a lot more returns from tweaking the data than from tweaking the code and the algorithm, while like 99% of research papers are still about algorithms rather than principled ways to go about tweaking data.

Anna Patterson: Yeah, I've seen that as well. I always say any optimization method kind of takes the data - and imagine you're not even making a complicated model, you're just going to use an optimizer. You have your objective function. Once you have the data, an optimizer, a learner, or whatever comes up with a solution - pretend it's an 80% solution. You can work really, really hard to take it from 80% to 95%. But actually, going back to the data and adding more signals? Usually that's an easier way to get from 80 to 95 than just grinding on the algorithm.

Patrick Gallagher: Theo, you've got a lot of great questions. We'll keep letting you ride.

What's after deep learning?

Theo Gervet: One of the big questions I had for you is: what do you see as the next frontier for deep learning? Maybe beyond NLP and computer vision, which already work pretty well?

Anna Patterson: Yeah, I've seen some really interesting things. One of the things that I go back to, because I kind of joke around that it's investor tourism - because it's kind of so far out there - I saw artificial noses.

So, you can imagine a Glade air freshener where it has perfume and it blows perfume out. Instead, this sucks air over a membrane and the membrane is kind of both alive - cells and also receptors, and they learn patterns. And that simulates smelling.

And so with these smelling devices, you can tune them - you can imagine that that's like a model. And so you tune these smelling devices to smell things. They want to put them in hospitals, tuned for certain escaping bacteria.

You've heard of dogs that can smell breast cancer. So you can imagine these eventually being in your bathroom, and you can imagine them in airports. It's very interesting. I think it's both far away and not that far away. I mean, we'll definitely see it in our lifetimes.

Theo Gervet: That's fascinating. I hadn't thought of this as a learning problem.

Anna Patterson: Yeah.

Theo Gervet: One thing I would love to get your thoughts on, which I've been closer to recently, is machine learning on source code-

Anna Patterson: Yeah.

Theo Gervet: -because I feel source code is really great for deep learning and representation learning, because we have very rich and structured representations like syntax trees, control flow graphs, etc. And software is eating the world, as Marc Andreessen would say, and even more so with crypto.

Anna Patterson: Yeah.

Theo Gervet: Have you thought about this domain?

Machine learning and source code

Anna Patterson: Yeah. I've seen a number of projects attempting to learn over source code. Sometimes what they do is take a sample problem from homework. Imagine that homework has been assigned for three years to 300 students - now you have a whole bunch of examples of code that solves the same thing. So, you know, those are really short.

And then you've seen auto-completion for code: the way Google Docs or Gmail try to complete your sentence as you type, it tries to complete your line of code.

I think that as there's a movement to low-code and no-code, and at the same time to learning repeated syntax and intention, code writing code will become more of a thing. Especially since academically you learn, oh, the halting problem - you can't predict when something is going to halt. And then you get into industry and you're like, what?! This is pretty much straight-line code. I mean, I've got a FOR loop here, I've got some straight-line code, and nothing about this is hard.

So I think that's kind of the disconnect between academia and real-life code. And the fact that real code is much simpler than you thought it was going to be when you were a student really plays into code eventually also being written by machines.

Theo Gervet: It's a fascinating idea. I don't know if you've seen the ML-based pair programming that GitHub released with Codex, based on OpenAI's GPT-3. I think it goes in the direction of auto-generating code. But it also raises a ton of issues around privacy, because you're actually sharing the code that you write in your IDE as you write it. Do you feel we can do something about this?

Anna Patterson: It really depends on who's seeing it. Having a model see it, I think you can have it locally. You can learn it locally. I think, there are things you can do.

So I hate to say I don't worry about it too much. But I think it's overcomable, because maybe the model will come to you, live on your machine, and kind of learn your style. So it'll be like your assistant.

Theo Gervet: That makes sense. One concern, at least when you watch YouTube videos of people presenting Codex, is that developers are sharing their data with a centralized entity. Maybe over time, as it's woven into the workflow, they don't benefit as much from the actual code that they wrote, and GitHub ends up being the one who benefits. Do you think crypto could play a role in decentralizing some of this? I might be getting us into the rabbit hole again.

Anna Patterson: I haven't thought about crypto for that. Obviously crypto is one of those emerging trends that everybody is studying. And I love the arguments that crypto isn't actually coming. Have you seen how much money is in Bitcoin? I mean, it's already happened. So, I kind of find it funny when people say, oh, they're not sure it's really going to take off. I think they haven't looked at the data.

Patrick Gallagher: Just as a side comment, Theo - I know you have a couple more questions - but this probably speaks to Jerry's reaction to a lot of this stuff too. I feel like in some ways we're diving into things that I would have had no visibility on other than this conversation. And a lot of the pictures you paint give a really clear understanding of the use case and the implications of a lot of the different AI/ML applications we're talking about, which I think is really cool.

So I'm sitting over here with my jaw dropped, is all I'm trying to say. We were talking a little about GPT-3. I heard about this potential use case of being able to write down the text of, like, an application or a website that you want to build, and then that text, through an AI algorithm, generates the actual website itself. Is that something that both of you are familiar with? Heard about?

Anna Patterson: I've seen some things that are textual that are still pretty code-like. I've also seen pictures - so you draw boxes, and then you put text in the boxes, and even a box inside a box, so it may be thinking, "Oh, that's going to be check-marked." And so there are these tools that allow you to draw on a piece of paper what you want the website to look like, and then the code gets generated.

Patrick Gallagher: Wow. To me, that seems like... there's that quote, like, technology should seem like magic. I think I totally butchered the quote, but to me, that's magic. This has been just a wide exploration of a lot of different technologies, which has been so fun. I know one of the unique perspectives you have is that you're seeing a lot of these early-stage companies entering into building out these different products and use cases. And I know, Theo, you had a question specific to this. So this would be a great time - jump on in with that one.

What will be the most valuable companies with ML as the core value proposition?

Theo Gervet: From your perspective at Gradient, I was very curious: what do you see as the most valuable companies being built today with ML core to the value proposition? Like, maybe, the Stripe and Twilio of ML companies?

Anna Patterson: I wouldn't be a real VC if I didn't talk about at least one of my companies you know. So Streamlit is an open-source company that is based around the fact that ML engineers are really kind of app builders. You want to explore your data, look at your data and then use it for some purpose.

Maybe you're going to make a dashboard. On Streamlit's website, they have something that at Adrien's previous company took maybe 2,000 lines of code, and with Streamlit you can do it in about eight to 20 lines and investigate self-driving-car, autonomous-driving data.

I think every company needs these kinds of applications, and they kind of need to take a tour of their data. I'm very proud of Streamlit and how they've been able to penetrate that market and build a futuristic tool for futuristic technology.

Patrick Gallagher: I also wanted to give a shout-out to the other side of this, Anna, because I know you and Adrien had a really interesting talk around building out AI teams at our summit last year. So just also a plug for that, because beyond building out the product and finding that fit, I know you both had some really great thoughts about building out your AI team to be successful.

We'll share a link with that there. Theo, you had a follow-up question.

Theo Gervet: I feel Streamlit seems like a great idea to also empower data scientists to communicate with different stakeholders in the organization, which is something people have struggled with. Like, when you're building teams with data engineers, software engineers, and data scientists, they go through the software engineer to actually deploy the thing, when Streamlit would let you communicate your results right away to PMs and other stakeholders.

Anna Patterson: Yeah. And we're seeing folks other than data scientists use it as well. Let's say you're in a marketing department and you want to show the ROI of the last campaign - chances are you have access to the data. So it's been interesting to see that it's more than data scientists building these dashboards and apps.

Patrick Gallagher: Anna, we've covered a lot of different applications of AI/ML. As we're processing this, one of the questions that comes up is that there can be a lot of noise in the space. So this is one of our questions, especially for you, because you're evaluating different companies in this area.

How to spot emerging trends in the AI/ML space

Patrick Gallagher: You probably have the most refined filter. So how do you sort out or spot the different trends in this space? Do you have a certain framework, criteria, or principles you use to help you separate noise from signal with the trends going on in AI?

Anna Patterson: Well, I wish I could say yes to that; it would make me sound really smart. But there are two ways we go about it, and you can think of it as either fishing or hunting. If you see the companies that are flowing through and raising money, you can start clustering them by what trends they represent and what kinds of problems they're attacking.

And so, from that cluster, you can decide, "Oh, this is an emerging trend I want to pay attention to." And usually we can't invest the very first time we see something. The very first time you were pitched, "Oh, your email is going to be automatically written for you," you probably thought, "Oh, that's hard. It won't be useful. It won't work."

I think there's a lot of doubt until you start seeing things happen. You know, it's similar with companies. You know, the very first time you see an idea, it often strikes you as so novel, but if you know a lot about the area and you have a prepared mind, you can kind of jump on it.

But generally, you have to watch trends. And I think we see the very beginning, because in our job I see two people with an idea - sometimes four people with an idea - as founders. So you kind of see the very beginning of these companies and the very beginning of these trends.

You know, we have one company that has a contact lens. The contact lens has a projector that sits right over your pupil, and it's AR if your eyes are open and VR if your eyes are closed. They have the networking to the contact lens, and right now they're working with a group of low-sighted individuals who are not fully blind. What they're doing is augmenting reality using AI - very typical edge detection algorithms - and projecting that into the eye, so that they can see me right now, talking with my hands, as little green lines, and they can see a desk as an edge, et cetera.

It's so...

Patrick Gallagher: The contact lens form factor to me is absolutely magic. I have a friend who is sightless - he's actually a professional blind golfer - and that would be an incredibly life-changing use case for somebody like him.

Anna Patterson: Yeah. Because they can kind of tune up the brightness. These are people who normally see something that's very dark gray, but not pitch-black - they can basically tell whether it's light or dark out, but they can't see anything else. Now they can see these bright green lines to get around in the world. So yeah, that's pretty exciting. I didn't know much about the area myself until investing in the company.
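The "very typical edge detection algorithms" Anna mentions can be sketched in a few lines. This is an illustrative gradient-threshold detector on a toy image, not the company's actual implementation; the scene, threshold, and function names are all hypothetical.

```python
# A minimal sketch of classic edge detection: compute horizontal and
# vertical intensity gradients and flag pixels where the gradient
# magnitude is strong. All names and values here are illustrative.

def detect_edges(image, threshold=1):
    """Return a binary edge map: 1 where the local gradient is strong."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]  # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]  # vertical gradient
            if abs(gx) + abs(gy) >= threshold:      # cheap magnitude test
                edges[y][x] = 1
    return edges

# A 6x6 "scene": a bright block (like the edge of a desk) on a dark background.
scene = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
edge_map = detect_edges(scene)
```

A real system would use something like a Sobel or Canny detector on camera frames, but the principle is the same: the outline of an object, rendered as bright lines, is far cheaper to display and easier to perceive than the full image.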

Patrick Gallagher: What it sounds like, is if you can kind of understand how two different forces are intersecting, and if you have an understanding of each of those individually, you can reasonably see how those intersections might provide unique or novel applications of that technology.

Anna Patterson: Yeah, I'd agree with that.

Patrick Gallagher: So speaking of early companies, I know we had some questions about being an early founder in this space, and Theo, I know you had some questions here. So I wanted to have you jump in, unless you have a follow-up question from what we were talking about before.

Typical mistakes AI/ML companies & founders make

Theo Gervet: So I assume that among all the amazing ideas you see very early, 90%-plus of them end up failing. What would you say are the typical founder mistakes for AI/ML companies in particular?

Anna Patterson: Not to be too pedantic, but I think the 90% rule is not necessarily true today. I think founders are getting more savvy. They're getting more networked with each other and trying to avoid some common mistakes.

I think one of the things I see in AI/ML is that sometimes these folks have come from big companies, where they feel like, "I need 12 people to do anything." That's where grad-school scrappiness of two to three people comes in: a six-person project in grad school would be a crazy, amazing, huge team! Whereas 12 people at a big company that does AI/ML would seem like a very small team.

So I think that sometimes I've seen people overhire, and then the seed amount they raised - which is, of course, bigger than the seed amount people raised two years ago - doesn't last. And you really need to use your seed money to show some forward progress.

And it's super tempting to hire all your best friends. I did it; I ran into that mistake personally. And I do see it out in the wild right now as well.

I'm not sure why it's more prevalent with AI/ML companies. I think people who have a lot of experience with these industrial models sometimes come from the big tech companies, where they're used to big teams.

Patrick Gallagher: I really appreciate the counter-intuitive take that the 90% rule may be outdated. It's a number people default to, so hearing that is really interesting, because I have seen stronger networks and connections - not just among engineering leaders, but among founders as well.

And I think that networking is definitely a trend that is increasing with a lot of different folks in the community space.

Theo Gervet: I think some of this over-hiring might come from the general sense of urgency in the Valley. You raise a bunch of money; you have to grow very fast and be seen as a rocket ship to be able to hire the best people. Is that a pressure you see founders putting on themselves?

Anna Patterson: Yeah, maybe. But one of my quotes, which I tell people is, "When you set out your plan, you can't miss all of your sales targets and make all of your hiring targets." Those kinds of things have to be in line.

And so what people do is they just say, "Here's my plan. I'm going to march towards the plan." And the plan was super optimistic on the sales front and the revenue front, and maybe more realistic and achievable on the hiring front. And so they still kind of march ahead with the plan.

Whereas, you know, a startup is a model in and of itself. I think you need to constantly reevaluate where you are, what direction you're going in, and whether the growth is appropriate - or even whether the plan is appropriate. Maybe who you thought you needed to hire isn't who you need to hire.
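The arithmetic behind "you can't miss all of your sales targets and make all of your hiring targets" can be made concrete with a toy runway model. All figures below are hypothetical, purely to illustrate why the two plans have to stay in line.

```python
# Toy runway model: months of cash left at the current net burn rate.
# Every number here is made up for illustration.

def months_of_runway(cash, headcount, cost_per_head, monthly_revenue):
    """Months until the bank account hits zero at a constant net burn."""
    burn = headcount * cost_per_head - monthly_revenue
    if burn <= 0:
        return float("inf")  # cash-flow positive: runway is unbounded
    return cash / burn

seed = 3_000_000          # hypothetical seed round
cost_per_head = 20_000    # fully loaded monthly cost per employee

# The plan: optimistic revenue AND a full hiring ramp.
plan = months_of_runway(seed, headcount=15,
                        cost_per_head=cost_per_head,
                        monthly_revenue=150_000)

# Reality: hiring targets hit, sales targets missed.
reality = months_of_runway(seed, headcount=15,
                           cost_per_head=cost_per_head,
                           monthly_revenue=30_000)
```

With these numbers the plan implies 20 months of runway, but hitting every hiring target while missing the revenue target cuts that to roughly 11 months - which is exactly why the plan needs constant reevaluation rather than marching ahead.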

How product-market fit is different for AI/ML companies

Theo Gervet: And speaking about sales: what did you find different about finding product-market fit for an AI/ML company compared to a traditional software company?

Anna Patterson: Hm. I'd say early on, one of my worries about AI companies was that they were sort of like Swiss Army knives. The early companies said, "Oh, it can cut. It can open beer cans." But there's not a particularly amazing knife on it. It was, to me, more like a consulting model: "I have this AI knowledge; I can do anything with it." But if you look at the market, more kitchen knives are sold than Swiss Army knives. More scalpels are sold than Swiss Army knives. So actually knowing what you're doing and knowing what market you're going after is usually better than the Swiss Army knife, consultative approach.

Differences in scaling between AI/ML and traditional software

Theo Gervet: So we talked a little bit about product-market fit. What did you see as being different about scaling an AI/ML company compared to traditional software?

Anna Patterson: I think for both of them, you have to be a good listener. Before starting a company, I would highly recommend that everybody brush up on their listening skills, because you need to listen to your co-founders, you need to listen to your employees, and you really have to listen to your customers.

For a traditional company, where you have a product coming to market, you have to listen to whether it's working. And even when it's an AI/ML product you're bringing to market, you really have to close the loop on listening. You have to design your questionnaires well, so they don't ask leading questions like, "You really like this, right?"

So, you know, that's why we built the operating team: to help usually-technical founders bring things to market - how to design their user questionnaires, design thinking, and even sales playbooks for AI/ML engineers.

Theo Gervet: Do you feel this is even more important for AI/ML companies than for traditional software?

Anna Patterson: I don't. I think it's very hard no matter what product you're bringing to market. It could be harder in an AI/ML company because you need to get the objective function really right; otherwise, it doesn't work at all.

But imagine it's something like Streamlit, or it's an API. You have to make sure to get that base layer right. And for that, you need to have good first customers. Choose them correctly. Listen to them well, but kind of not too well. It's the art of selling what you have, while making sure that what you have is what people want. It's a whole flywheel in and of itself.

Patrick Gallagher: I wanted to ask you about that last comment, about making sure it's what people want. For context, our listeners are primarily engineering leaders, and a large number of them are interested in starting their own companies, some of them in the AI/ML space.

What do you recommend for those folks who are exploring starting an AI/ML company - how should they test or validate their ideas in those early stages?

Are there any differences in certain applications for how to test or validate those ideas?

How to test and validate ideas in the early stages of starting an AI/ML company

Anna Patterson: I think, you know, you have to figure out what you're thinking about doing. You have to approach people, or network with people to get to other people they know, to try to talk to those target customers and see if this is really a problem for them.

But a lot of times AI/ML engineers do the opposite. They decide to learn what they need to build by sitting in with folks. They say, "Oh, here's something that I think is broken. Some workflow, I think, is broken. I want to sit with whoever that is." Let's say they're going to look at emergency room logistics or whatever.

"Okay, I'm going to just go get permission to hang out in an emergency room." And then they get to be data-oriented about how to solve the problem.

Rapid-Fire Questions

Patrick Gallagher: Anna, I know we have a few minutes left. We've got some rapid-fire questions for you. What are you reading or listening to right now?

Anna Patterson: I hope you don't mean a book. So right now, as far as listening goes: I took my kids to a Machine Gun Kelly concert a few weeks ago. It was the first post-COVID concert.

And so I had to do my homework and listen to Machine Gun Kelly prior to the concert. It really reminds me of the stuff I listened to when I was younger, when I would be listening to Minor Threat and hacking.

Patrick Gallagher: That's great. I have a strong punk-rock background in the Bay Area, so I can relate to that very deeply. Okay, number two: what is a tool or methodology that's had a big impact on you?

Anna Patterson: Well, I think obviously mobile phones. I knew nothing about them before joining Android, and I got to see the mobile revolution happen from a front-row seat. So, you know, definitely still very impactful today.

Patrick Gallagher: What is a trend that you're seeing or following that's interesting or hasn't quite hit the mainstream yet, that you're paying attention to?

Anna Patterson: I think I, and a lot of other people in tech, are paying attention to crypto. We've talked a lot about AI, but that's another trend that I think is a game-changer in our society, one people should be paying attention to.

Patrick Gallagher: Two more quick ones: your favorite or most powerful question to ask others or to be asked yourself?

Anna Patterson: You know, I have a really recent example, given that your audience is engineering leaders.

An employee asked this the other day and I found it really touching. They said, "How can I be excellent at my job? Or how can I be excellent at my level?" I thought it was a really nice flip of "What do I have to do to get promoted?" right? As engineering leaders, you hear that all the time from employees.

And I thought it was really heartwarming: "How am I excellent at my level?" It made me want to help that person more - I will help you be excellent, you know? I thought that might be some kind of advice.

Patrick Gallagher: That's a powerful reframe. I love that. So last question: is there a quote that you live by or one that really resonates with you right now that you can leave us with as we close off today's episode?

Anna Patterson: Well, as you know, I have four kids. And so, one of the things I think about on those difficult days, which everyone has them, is the quote from Dory, which is "Just keep swimming. Just keep going."

Patrick Gallagher: The wisdom of Finding Nemo rears its head to close off the episode. That's fantastic. Anna, thank you so much for sharing your perspective on all these different trends and helping provide advice to folks as they consider starting their own AI ML company. It really meant a lot.

Theo, thank you for coming to bat with so many thoughtful questions. It was great to have you as a cohost. I think you've done a great job. We'll probably have you back on the show again. And so just want to say thank you both for an incredible conversation.

Outro

Patrick Gallagher: If you like this episode, make sure you subscribe to our new podcast series Engineering Founders! The leap from engineering leader to founder can be terrifying, filled with unknowns and requires a completely different skillset.

We're going to be diving into the stories, the pivotal moments, and the critical insights from former engineering leaders turned founders that helped them take those early leaps to launch their own companies.

Check it out here: https://engineering-founders.simplecast.com/

Thanks for listening to the engineering leadership podcast.
