November 06, 2021
Moral Courage and the Human Factor: SINC Sits Down with Les Correia, Estée Lauder
SINC’s Director of Content Annie Liljegren spoke with Les Correia in September 2021, and has edited this interview for length and clarity.
My guest today is Les Correia, Executive Director, Global Head of Application Security and Special Projects at Estée Lauder.
Les will be featured at our 2021 West Forum December 5-7 in Scottsdale, Arizona, delivering a session entitled “Key Considerations in Building a DevSecOps Program.”
Great to have you here, Les. I’d like to start with a particularly prescient statement you made at the Cyber Security Exchange North America in 2018.
You said: “I don’t think we’ll ever get to this utopian perfection, so we have to instead be good at crisis management.”
Certainly no one’s going to argue with that post-Covid.
Les Correia: My thinking has always been consistent in that space because, in general terms, a human being always finds a way around barriers. That’s why we call them hackers–depending on which side you are on–we’ll always find a way around, even as people, right? If you, say, create a law, people just chip away at the law until they can get through, and then there’s a new law, and so on.
You’re always going to have somebody breaking through, because if you’re breaking through you only have to be right once, and the person who is building those defenses has to be right all the time, so you have to think that way.
And then talking about the last 18-24 months: basically you always have to work under a method that assumes you will be broken into, and start thinking ahead of time–prioritizing what’s important and what isn’t, and focusing on those areas–so that before anything happens you’ve already planned.
You need to know how you’re going to deal with the crisis if something happens now, so you’re not panicking and trying to scramble.
So instead of trying to prioritize a schedule for when you’re going to do it–what is the priority itself? Then you know what is important, what you need to lose sleep over, and what you shouldn’t.
And how did that play out over the last 18 to 24 months, steering through a crisis where it seems everything is a priority?
LC: Yes, it sounds simplistic, but it isn’t–business continuity as opposed to disaster recovery. Disaster recovery is very physical: something happens, and here is what you do. But continuity is making sure your business is continuing regardless of whatever the disaster is.
So one of the tenets of building that continuity program is understanding the risk and the business impact of different processes within the company. If you know how all the key processes, departments, locations, and business units interact, you’ll know what’s important and what isn’t.
For example, if you’re manufacturing, you know what to protect and what happens if something goes wrong, and you’ve already prioritized: Okay, is that more important than selling? Sales is another process; is payroll more important by region, and so forth. If you have those things in place, you can actually focus on the priority of certain things.
It’s a hard thing to do for a company–it requires that mindset and commitment, and most companies struggle with it. But if you’re already on that level, focusing on what’s important, there’s less of a headache because you’ve already got that priority in place. If you don’t know what’s important, you have to weigh things at the spur of the moment.
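The prioritization Les describes–scoring processes like manufacturing, sales, and payroll by business impact before a crisis hits–can be sketched in a few lines. This is purely an illustrative assumption; the process names, impact dimensions, and weights are hypothetical, not anything from Estée Lauder’s program.

```python
# Hypothetical sketch of business-impact prioritization for continuity
# planning: each process gets an impact level (1-5) per dimension, and
# the highest-scoring processes are the ones worth "losing sleep over".

def prioritize(processes, weights):
    """Rank processes by weighted business-impact score, highest first."""
    def score(impacts):
        return sum(weights[dim] * level for dim, level in impacts.items())
    return sorted(processes, key=lambda p: score(p["impact"]), reverse=True)

# Weights reflect what the business values most (illustrative numbers).
weights = {"revenue": 0.5, "safety": 0.3, "compliance": 0.2}
processes = [
    {"name": "manufacturing", "impact": {"revenue": 5, "safety": 4, "compliance": 3}},
    {"name": "sales",         "impact": {"revenue": 4, "safety": 1, "compliance": 2}},
    {"name": "payroll",       "impact": {"revenue": 2, "safety": 1, "compliance": 5}},
]

ranked = prioritize(processes, weights)
print([p["name"] for p in ranked])  # → ['manufacturing', 'sales', 'payroll']
```

The point of doing this ahead of time, as Les says, is that the ranking already exists when the crisis arrives, instead of being improvised at the spur of the moment.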
Now with Covid, the main difference was reacting to access management, and making sure you’ve got good endpoint security, whatever that endpoint is.
Also the other thing was the belief in zero trust and identity, where you marry the two so you have an identity, and you have zero trust associated with that identity. Zero trust is not a new concept, and actually may be a misnomer–more accurately it’s ‘allow access’ or ‘trust those that require it.’
So it means identifying each area that needs more access, not just coming into the environment, and then a free-for-all, but allowing identity to govern. It requires work, and that’s where the maturity, I think, is going to get better.
So if we were headed towards identity-based approaches before, WFH obviously accelerated that, along with this massive shift in endpoint context. How does that impact an identity-based methodology?
LC: You have to wrap it up together because all these second-factor, multi-factors are still associated with an identity.
The big thing in the past used to be that whole networks were segmented, but it was not based on identity. It was based on what’s important and what’s not important. But you link that up with an identity that’s always stuck to you, and add those levels–whether it’s biometrics or behaviors–associated with you, Annie, for example. That’s my kind of thinking anyway.
And the more factors you have the better. Now, you might use biometrics or whatever, but the point is that identity is linked really tightly, so it follows you wherever you go. Of course that means you need to have a good identity system too, which is also maturing.
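The deny-by-default, identity-first posture Les describes–no free-for-all once you’re in the environment, only explicit grants backed by enough verified factors–can be sketched minimally. This is an assumption-laden toy, not any specific product or Estée Lauder’s implementation; the identities, resources, and factor threshold are all hypothetical.

```python
# Minimal zero-trust-style access check: deny by default, and allow only
# when the identity has an explicit grant for the resource ("trust those
# that require it") AND presents enough verified factors.

GRANTS = {  # identity -> resources explicitly allowed (illustrative)
    "annie": {"payroll-db"},
    "les": {"payroll-db", "build-server"},
}

def allow(identity, resource, verified_factors, min_factors=2):
    """Return True only for an explicit grant backed by enough factors."""
    granted = resource in GRANTS.get(identity, set())
    return granted and len(verified_factors) >= min_factors

print(allow("annie", "payroll-db", {"password", "otp"}))    # → True
print(allow("annie", "build-server", {"password", "otp"}))  # → False: no grant
print(allow("les", "build-server", {"password"}))           # → False: one factor
```

Note that segmenting by identity rather than by network zone means the same check travels with the user to any endpoint, which is the property Les is pointing at.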
I’d like to take you back to something you said at the first Global Security Observatory. Here’s the full quote:
“(S)ocial engineering…gaps continue to be our bane. There has been some improvement in tools to address some behavioral issues using AI. Studies are ongoing, taking advantage of quantum computing to support AI utilizing neural networks to match human intelligence.”
The way I read this, it suggests you don’t necessarily think we can teach ourselves to behave differently or correct our own behavioral issues, and that AI will have to address, or at least support that. Would that be fair to say?
Are users simply unable to behave as needed for optimal security without the help of AI, or are there ways we can meaningfully socially engineer our behavior? I realize that’s several questions in one…
LC: No that’s fine, actually this sort of thing has come up before in my other talks…
When computers were being introduced, they said Oh, it’s going to replace human beings and we’ll all be out of a job, and so on–and it didn’t happen; there’s even more work, and there will be even more work. So first of all, that’s a myth.
And then, even with all the focus on phishing, people are still falling for it, people are still doing the clicking. The degree to which certain behaviors are still happening–it’s unbelievable what people are willing to believe.
And that’s why you need these other kinds of input, and artificial intelligence is that. We use that word so loosely and I get annoyed sometimes–machine learning is actually what it is.
But the stuff is only as good as the data that is collected and curated—that’s the fundamental thing. I can have the best algorithms and it means nothing if I don’t have the right data to curate and therefore learn from that data and move forward.
Today, information is coming at speed, which is a huge difference, information that used to take years is coming at speed. And you want something to help you pick up the nuggets of what’s important and what isn’t, at least so you don’t sweat the normalized stuff.
So why not use tools, because today you have tools that can detect what’s right or wrong, to an extent–but you still need a human being to validate that.
You had generations who never touched a computer and then generations now who’ve been operating computers since they were babies, but you’ll never replace the human element because of the human factor.
There will always be new vulnerabilities and people will always fall for them, regardless of which generation you come from.
“You’ll never replace the human element, because of the human factor.”
Seems like that touches back to the point you made earlier: that you have to assume breach, and it’s futile to think you can eradicate foolish human behavior, so you assume people are going to act carelessly sometimes and work instead to mitigate the effects?
LC: Absolutely. If it’s not you there’s going to be someone, and so you’re only as good as your weakest link. You’re always going to have a breach somewhere or other.
And then there are two other key factors: first, selectivity–human beings are so selective. What we tend to do is protect our bank account like crazy and leave our social networks and retail history out there, and you’re exposing yourself, because you’ve taken a picture on your vacation in Italy and then someone knows they can come burgle your house. So with the way that selectivity concept works today, I absolutely think privacy is a joke. We have these regulations and all that bull, but really we give it away for free.
And not only that, but also the correlation concept, because now you can correlate the information. I can find out something about a person here, and then using something on social or elsewhere, I can correlate that information.
In your public remarks and interviews and articles you consistently refer to this idea of ‘moral courage.’ Can you unpack what that idea means to you, whether in the context of your position or your industry or however you want to take it?
LC: Yes, I feel so strongly about this. Let me start with why I think that:
I think it’s more important to have better relationships than to have a super IQ. We always judge someone as being such a great person because they’ve got a crazy IQ and all that, and yet they might be the worst at relationships.
It’s important to have empathy, it’s important to understand cultures, to understand relationships. That’s where you can get things done, it’s a fundamental.
Now, moral courage, or the way I’m thinking about it, is somebody having the strength to speak up–because not everybody has the oomph to say something is right or wrong without consequences. But I feel if you’re a leader, and especially if you’ve got a following, it’s extremely important. If you hear someone being disparaged, don’t laugh it off–say something and have the strength to speak out, even for the sake of your own integrity.
Because those little moments inform larger things, moral courage has to happen individually–with each of us, one at a time.
“Have the strength to speak out, even for the sake of your own integrity…those moments inform larger things.”
It sounds like you see part of the value of that mindset is that it’s not siloed off–it’s interwoven with and informing your strategy and your other relationships and the business itself. It’s not as though you have what’s morally correct over here, and then what’s effective over here.
LC: All of those absolutely, it’s intermingled. And it’s not enough to scramble to react when it’s suddenly fashionable to improve diversity or to hire women—these issues have been going on forever—it’s hard for women in tech. And we need all kinds of diverse thinking to make our world effective.
And most of the time we’re just followers, so I appreciate folks who can speak their mind and challenge things and have that moral courage. And I say moral, because that’s what it is, morally correct. Otherwise it’s the opposite.
People say I’m very outspoken–sometimes they’ve said it to me directly–but I feel that the truth is okay. It’s okay to say.
What’s top of mind for you right now, as far as something you wish the industry was paying more attention to, or something you wish your peers were thinking about in a different way?
LC: A lot to say, but I think the hardest thing is that whole social behavior, social engineering element of security awareness. It’s almost left at the back end of things, and then you say Oh, did you hear so-and-so got hit with ransomware…
And yet, this whole social engineering concept is really about thinking about how we think, and behaviors, and that education is so necessary. But it’s almost looked at as, you know, a session you have to attend for compliance reasons.
But if you do that, if you’re just doing it to pass, you’re never going to learn anything.
So I think the big one is still social engineering and having a proper education. There’s a lot of talk and a lot written on that, but it’s a real concern.