[MUSIC PLAYING] Now, before I dig into this, I want to talk about the reality. The reality is this is largely not productized. What this is is capabilities of Identity Manager and OneLogin as they are today. But it's the art of the possible. It's configurations and customizations that can be made to leverage current capability to solve problems.
I mentioned in the previous session-- how many guys were here in the previous session? Some of you? Yeah, I mentioned in the previous session that a lot of what we're doing is based on the customization capability of Identity Manager and this integration. And it's to solve problems that weren't possible to solve with one platform tool maybe a year ago.
So now, we have these new tools, new options to solve problems. So I want to dig into this problem a little bit. So first of all, this is the rough agenda. We're going to talk about risky behavior and how that can lead to a breach.
I'll talk about our Unified Identity platform, how we identify the attack surface (so what is it we're trying to fix?), detecting risky behaviors, and automated prevention. And I'll show a few screenshots of how the thing actually works.
So is it SOAR? That's the first question. No, it's not SOAR. And the main difference is we have identity tools. And so we're solving problems where we can based on the context and information that we have. And that's all about identity.
So everything we do is based on identity. We're not reaching out beyond the normal identity storage or normal identity actions. We're doing IGA type activity based on behaviors. But it's not a full on SOAR implementation.
So that's the big difference. And I don't want anybody to think, oh, man, Josh just got up there and said One Identity has a SOAR tool. We don't.
I want to talk about, though, what's going on with risky behavior. How many of you guys have heard of the Uber breach that happened last year? I think everybody knows about this.
Without getting into the details of the anatomy of the Uber breach, at the front end, really, what happened was this. A hacker acquired some credentials and then tried to sign into OneLogin. But there was an MFA challenge that was sent to the actual user. And the actual user ignored it, didn't confirm it.
So the hacker wasn't able to log in. So MFA works, right? That should have fixed it 100%. But the hacker didn't give up there. They kept on doing it, kept on doing it, kept trying to log in, log in, log in, log in.
All this time, the real user is logged in. So the real person is logged in. The hacker is logging in again and again and failing the MFA challenge. That should look like a pretty risky behavior pattern, right?
Any of us thinking about that would go, OK, nobody's actually going to do that. What, you're going to be logged in on your laptop, you try to log in on your phone, the phone presents you with the MFA challenge, and you go, yeah, that's not me, like, five times in the next five minutes? It never happens, right?
So the end of the story is the hacker went the social engineering route, masqueraded as an IT person, told the real user, please, confirm that MFA request. The user went, OK, sure. I'll do that. And then we broke MFA with user behavior.
But we could have caught this earlier. So that's an example of some risky behavior patterns. So that's the first example there.
Repeated MFA challenge failures in a short time window, or while the user is currently logged in. That's the Uber breach. Another one is maybe an account's locked because the wrong password was entered successively.
So we lock the account in OneLogin. That's one thing. But that indicates some bad thing might be happening on a back end system. That back end system might be at risk.
Maybe the user has a Safeguard account and has privileged access on Safeguard. We know there's risky things happening at the edge on OneLogin. Maybe we should do something about the access we know about that they also have.
Or the successful password change immediately followed by a forgot password while the user is actively logged in. So if a hacker logs in with stolen creds, changes the user's password, and the real user goes, I can't log in. My password has been changed.
This actually has happened to me where I go try to log in and my password has been changed, not in OneLogin. But it's happened on Google. So it happens that a hacker manages to hack in, change your password. And then now, they own your credentials, and you can't log in.
So these behaviors, we can see them happening in real time in OneLogin. So I often talk about OneLogin as the front door, if you think of your IGA program, your Identity Management program, as a house.
OneLogin is the front door. How do you get into the front door? Can I knock on the door and you just let me in? Do I have to tell you the password and then you let me in?
Do I have that remote garage door opener on my phone? I push the button, and it opens the garage door? That's how I get into the thing?
So OneLogin is the door. Once you're in the house, what rooms you can get into and what you can do when you're in there, that's Identity Manager. It tells you what rooms you can get into and what you can do when you're in there.
And then your safe, that's Safeguard. The most critical stuff, most valuable stuff, you put it in your safe. OneLogin can only monitor what's happening at the front door.
So they only know about that. But they absolutely know about what's happening at the front door. We need to be paying attention.
In my house, we've got this Ring camera. I bet a bunch of you guys have Ring cameras or the same sort of thing. When somebody is knocking on the door, we can look at the camera and see who they are from inside the house. Just like that, Identity Manager is able to look into OneLogin when somebody is coming in the front door and say, who is that? Is that the right person? Do we know what they're doing?
I'm going to ignore them because I know that's a salesman and I don't want to talk to them right now. That's what Identity Manager can do through this integration. So our Unified Identity platform, of course, is what makes this possible.
We've talked a lot about the enhanced OneLogin connector, all of the data and metadata we can pull out of OneLogin into Identity Manager that helps us to make decisions about governance. The behavior-driven governance thing is more like-- we used to call it use it or lose it. And that was the joke.
If you don't use the thing, then we're going to revoke your access because you probably don't need it, especially if it's a high risk access. You have privileged access. You're not ever using it. Why don't we take away your privileged access and take the Safeguard icon off of your OneLogin? That's the use it or lose it.
This is more like abuse it and lose it. You do some bad stuff, we're going to take away your access because you did bad stuff. So step one is we need to identify the attack surface.
What's being attacked? What are the most critical things in your infrastructure? So number one, whatever you got in Safeguard, that is the most critical stuff in your system.
Whatever you've got in your PAM system, whether or not it's Safeguard (we like Safeguard, but you might have some other PAM system), that's the most critical stuff. So that's the number one priority.
So we govern Safeguard access in Identity Manager through our PAG module. So we're able to grant and revoke privileged access through Identity Manager onto Safeguard. Identity Manager also governs all the other systems that might have high-risk access in them as well, outside of PAM systems.
So maybe, you have AD accounts. Maybe, you have Linux root accounts, that kind of thing. Whatever it is, high risk access, you need to identify all of that as potential attack surface.
Then we want to detect risky behaviors. This is that behavior pattern that we talked about. OneLogin event log contains all of this information. On top of that, OneLogin with their AI can detect risky behaviors and then update the user's risk level.
Now, they use that mostly in OneLogin to prompt MFA or extra levels of authentication whenever certain things happen, or to automatically lock a user's account. We're talking about doing more granular things on third-party target systems, not just inside of OneLogin. So we can take those signals as well from OneLogin and say, hey, if the user's account was locked because they tried to log in from the USSR and Brazil at the same time, then we should revoke their privileged access. It's probably a hacker.
By mining that data, we can find the risk patterns as they're occurring. And that front door access that's controlled by OneLogin is governed by Identity Manager. So that's the second thing, automating the threat response.
And then automate the attack prevention. So this is where things get a little more like we're pretending we're a little bit of a SOAR. We're going to mine that event data to detect risky behaviors and then automatically mitigate by temporarily removing high-risk or privileged access from users that are engaging in this risky behavior. And then it'll alert the IT security team of what's going on. So how do we do this?
So identifying the attack surface, well, obviously, PAG. Anything in Safeguard, automatically mark it. We're going to mark this all in Identity Manager with a high risk index.
You guys have used Identity Manager. You know there's a little slider, the risk index. You can just go select all the PAG objects, edit them, and mark them all as high risk. And then bingo, we've identified these as your attack surface.
Also, you can use the risk index on other objects like this. You see this high risk application access. Obviously, that's a high risk application.
And it has a calculated risk index. This one's a system role. But any object in Identity Manager either has a directly assigned risk index or a calculated one.
And so they can either be automatically marked if they're calculated or you set them manually. But you identify this by setting this risk index. And we like these system roles.
I talk about system roles. I say we. I like these system roles. Many of my colleagues are still coming on board with my usage of system roles.
But I like them because it's a great way to bundle access together into one large object that you can govern as one unit. And that one big system role can have its own calculated risk index. So it aggregates the risk of all of the things that are within it and creates one risk index automatically.
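To make that concrete: a system role's calculated risk index aggregates the risk of everything bundled inside it. The aggregation Identity Manager actually applies is configurable, so treat this little Python sketch (using a simple max aggregation and made-up member names) as an illustration of the idea, not the product's formula:

```python
def system_role_risk_index(members):
    """Aggregate one risk index for a system role from its members.

    `members` is a list of (name, risk_index) pairs for everything
    bundled in the role. Taking the maximum is one simple choice:
    the role is as risky as the riskiest thing inside it.
    """
    return max((risk for _, risk in members), default=0.0)

# Hypothetical bundle: one high-risk entitlement drives the whole role high.
role = [("Safeguard admin", 0.9), ("Production DB read", 0.6), ("Wiki edit", 0.1)]
```

So granting or revoking that one role moves all three entitlements together, and its index tracks the riskiest member automatically.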
Then we need to detect risky behaviors. This is where we're coloring outside the lines a little bit. So Identity Manager doesn't have a prepackaged automatic way of doing this.
What it does have prepackaged is the ability to create a script library and to run scripts from Identity Manager. And you can trigger them on a schedule. You can trigger them on things like when you synchronize.
For this, we really want to trigger it frequently. You want to go check for this maybe every five minutes or every one minute. So we're going to use a data-mining script in Identity Manager to go find these risky behavior patterns.
So the script is added in Identity Manager. And then it goes and hits the OneLogin API, the Events API, and searches it for events that match a certain pattern, like a user failed an MFA challenge a certain number of times in a certain amount of time.
And these are examples. Again, it's not a productized solution. You identify what you think is risky. And then you modify our script that we give you to go look for the things that you care about.
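To sketch what that mining might look like: suppose the script has already pulled a batch of events from the OneLogin Events API and wants to flag users with repeated MFA failures in a short window. Everything here is illustrative; the event-type string, field names, and thresholds are placeholders you would adapt to the real event payloads, and it's written in Python for readability rather than as an actual Identity Manager library script:

```python
from datetime import datetime, timedelta

# Placeholder label; real OneLogin events carry numeric event_type_id
# values, so map the ones you care about in your own configuration.
MFA_FAILED = "user_login_challenge_failed"

def find_risky_users(events, window_minutes=5, threshold=3):
    """Return the set of user IDs with `threshold` or more MFA failures
    inside any `window_minutes` span. `events` is a list of dicts with
    a user_id, an event_type, and an ISO-8601 created_at timestamp."""
    window = timedelta(minutes=window_minutes)
    failures = {}  # user_id -> list of failure timestamps
    for e in events:
        if e["event_type"] == MFA_FAILED:
            t = datetime.fromisoformat(e["created_at"])
            failures.setdefault(e["user_id"], []).append(t)

    risky = set()
    for user, times in failures.items():
        times.sort()
        # Slide a window over the sorted failures for this user.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                risky.add(user)
                break
    return risky
```

The same shape works for the other patterns (lockouts, password-change-then-forgot-password); only the event types and the matching condition change.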
Or they failed to reset their expired password, or the user's in a password-reset-required state, et cetera. And then when that risky condition happens, we adorn the user record in Identity Manager. We call this the person account.
We adorn that with some flags that tell us what's happening. We have this security risk flag. You see it up there in the red at the top.
That's a little checkbox you might have seen in Identity Manager if you've ever looked in the Manager tool. It's often not used. It's kind of used when, say, somebody hits the big red button and you walk them out the door; maybe it gets flagged all of a sudden.
But we don't see it used that often. This is a really great use case for it because we will automate flipping that bit, turning on the security risk flag for that user. But I've also added a couple of other things here to give us a little more granularity.
So I have this extension of the person table to add a few new fields. You see the behavior risk level is there. That's a high, medium, or low. Usually, the user is low risk. So we're going to set their risk level to high. And then we have the security risk reason, because we kind of want to know, why did they get put in this state?
Mostly because we're going to automate the restoration of their account whenever we detect a restorative event or a clearing event. So let's say that the problem is the user is in a password reset required state. Once they successfully reset their password, they're no longer in that state. We should put them back to normal.
And so we use this security risk reason, which, if you read it there, says user login challenge. That's just the exact string that came right out of OneLogin. So we just use that information to help us track the reason the user was marked this way.
And then we automate the prevention. So Identity Manager is going to quarantine the user. We're going to automate the revocation of certain access, the high risk access until it's been resolved. And then we're going to give it back to them.
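The quarantine-and-restore cycle described here can be sketched like this. This is a hypothetical in-memory stand-in; the real scripts work against the person record and assignment tables inside Identity Manager, and the 0.7 cutoff is an assumed threshold, not a product default:

```python
HIGH_RISK = 0.7  # assumed risk-index cutoff for "high risk" access

def quarantine(person, reason):
    """Flag the person and move their high-risk access into cold storage."""
    person["security_risk"] = True
    person["behavior_risk_level"] = "high"
    person["security_risk_reason"] = reason  # e.g. text from the OneLogin event
    keep, frozen = [], []
    for ent in person["entitlements"]:
        (frozen if ent["risk_index"] >= HIGH_RISK else keep).append(ent)
    person["entitlements"] = keep
    person["quarantined"] = frozen  # remembered so it can be given back

def restore(person):
    """Clearing event detected: unflag the person and return their access."""
    person["security_risk"] = False
    person["behavior_risk_level"] = "low"
    person["security_risk_reason"] = ""
    person["entitlements"] += person.pop("quarantined", [])
```

Keeping the revoked entitlements on the record itself is what makes the restore automatic: the moment the clearing event (a successful password reset, a passed MFA challenge) shows up, the script can hand back exactly what it took.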
So, scripts: again, we have more than one script involved in this. A script will perform a security quarantine. So when an event happens, the user's security risk flag is set. Then we're going to go remove all of their high-risk access and kind of put it in cold storage. And then we're going to restore that whenever the restoring event has happened. Also, we're going to use our policies, just the out-of-the-box policy capability, to monitor.
Here are users whose security behavior risk level is high. So that gives us a report to the exception approver right the minute that it happens. So as soon as this thing happens, the exception approver gets a notification email, however you notify.
It says, hey, Joe is now high risk. Here's the risk reason. He failed to reset his password, whatever it is. And then when the risky behavior has been resolved, whatever it is--
Maybe it's a manual effort. Maybe the security team has to go and contact this person. Are you having a problem resetting your password? What's the deal?
Maybe there really is a breach in effect, and they need to go clean it up. But once it's all fixed, then they restore the person. And this is where this would have fixed that Uber breach. Because even with the social engineering, when the hacker eventually convinced the user to affirm that MFA challenge, the hacker got in.
And what he really got access to was their Active Directory account and privileged information that was provided by Active Directory. If all of that Active Directory group membership, the high-risk access, was removed automatically when the initial behavior pattern was detected, then even if the hacker made it in, he wouldn't have been able to get to anything. So I relate this to convertibles. I've been driving convertibles for about 25 years now.
And security in a convertible is like this. Don't put anything in the car of value. That's how you do it. For 20 years, I drove a Miata, and I left the windows down all the time, never locked the doors because I don't want somebody to break the window, rip open the top, steal my stuff.
I knew somebody was going to get in. Well, this is where we can know, hey, somebody, they might be getting in. Let's take all the stuff out of the car real quick.
So we're going to take it all out. So when they get in, they're going, man, I got in the car, but there's nothing here for me to steal. And the key to this, one key, is this correlated access. I talk about correlated access all the time.
That's really system roles, which allow us to know what accounts and access a user has. So a OneLogin user might have apps on their OneLogin launchpad. Maybe they have 20, 30, 40, 50 apps, or however many.
But those are associated with access on the back end that might have risk associated with it. So Identity Manager can use system roles to correlate this access across systems, to make sure you don't just take the Safeguard icon off of their launchpad when this happens, but you also revoke the privileges that they're granted through Safeguard and disable their Safeguard account.
That's a different system and different access. And so we need to correlate that together. And so we're using system roles to do that, practically. Again, I'm a big believer in system roles. I'm going to eventually convince all of my colleagues that that's a great thing.
So next up, I'm going to show some screenshots of how this works. Now, full disclosure, this is very prototype. We will be eventually making this available as a solution accelerator much like behavior-driven governance. But it is kind of a LEGO kit.
It's like a do-it-yourself thing. We will give you the tools to go and implement this. But it's kind of up to you to make sure that it's implemented correctly or that it accomplishes your goals. Mostly, it's a proof of the art of the possible and of the benefits that you can have with the platform.
Before I move on to this, are there any questions? Anybody not tracking? Or how about positive feedback? Does it sound valuable? Is it interesting?
We're not very talkative, but we nod a lot. All right, so to make this work, when we finally come out with our solution accelerator in the next maybe few weeks or months, you'll install this transport. And that includes the schema extensions, new database tables, config parameters, and the script.
And then you'll create SOD and company policies in order to monitor this in real time. So to start out, we have this user, Rena Fay. And she's just a normal user, but she has some high risk access. So her identity is active. She's just a normal regular user.
She has some high risk access. That high risk application access we showed before that's got everything known to man, bad access in it. Oh, and she also has production database access, whatever that means. But it's high risk.
So she has this access, so she could potentially be at risk. So then a hacker tries to log into OneLogin as Rena. The system sends an MFA challenge to Rena, just like it would for any of us. The hacker logs in, Rena gets a thing on her phone: is this you? She ignores it. And this results in this event: user login challenge failed.
So OneLogin is like, yeah, they failed to enter the MFA challenge. OneLogin normally just kind of silently just doesn't let that hacker in. That's kind of how it works.
But this might happen a bunch of times. So Identity Manager will detect it and quarantine the user. And you can see that this is a screenshot of the event in OneLogin. That's kind of what it looks like when you drill down to the event.
And it says things like IP address and location and things. So I actually did this on my own laptop demo system; I simulated it there. But you can see how we could detect that this was happening from the wrong geography.
She can't be trying to log in from California while she's also logged in from Texas; you know how it could go. So we detect this risky behavior, and Identity Manager does some stuff.
It updates Rena's identity, sets that user security risk flag, and sets her behavior risk level to high. And because that causes a policy violation, when those things exist, then it alerts the security team. Hey, Rena is now high risk. And then we automatically revoke high risk access from Rena.
So we're taking all of the valuable stuff out of my convertible right here. Take everything out. Somebody is about to break in. They already tore the top open. They're coming in. They're getting the stuff.
So now, you see here Rena, security risk. And see that little switch? That says identity poses a security risk. That's kind of what that means.
And then the policy violation pops up. So the exception approver, or whoever it is on your team that receives these notifications, will get this notification that this thing is happening right now. It's not happening in batch; we don't wait for an hour for it to sync or anything like that. We just tell you right away. And then we revoked her access.
Notice no more high risk access for Rena. Now her high risk access has been put on ice until she can sort this stuff out. And we'll give it back when she sorts it out.
So that's what happens. Now the security team contacts Rena. What's going on? No, that wasn't me. But obviously, she didn't log in.
We investigate, figure out nothing's bad. And a resolution event happens. So she finally resets her password, passes the MFA challenge. We've detected a resolving event.
So Rena logs in for real, properly, resets all of her stuff, properly responds to the MFA challenge. And we receive this event in OneLogin that says Rena Fay successfully verified with the OneLogin authenticator. So she passed the MFA.
And Identity Manager says, all right, you can have your access back. So we turn your security risk flag off, set your behavior risk level back to low. And the access that we revoked is restored.
And all of this revocation, security risk flag, turn it on and off, behavior risk level thing, all of these actions are recorded in the history for the user and in the audit log. So now, we can go back and see, did this thing happen? How often is this happening?
You can even create more policies that go and check: is a certain user having this happen frequently? You can kind of make a heat map if you need to, see who it is, who's logging in the wrong way most frequently. And that's kind of how it all works. Like I said, we're still prototyping this and getting it all sorted out. But thanks, everybody. I appreciate it.
[MUSIC PLAYING]