[MUSIC PLAYING] Hello, and welcome to our third webinar. I'm Helena Carter. Here, we'll be looking at specific use cases that focus on expanding Cloud Identity as a service usage and how to add further resilience to your environment.
[MUSIC PLAYING]
Organizations seek to work closer together with external parties, as this builds further business. But how do you build an environment using remote desktop protocol that's still safe for everyone to collaborate with? Well, to discuss this is Stuart Sharp, Vice President of Product Management at OneLogin.
So, Stuart, welcome. Let's start with asking exactly what remote desktop protocol and secure shell protocol actually mean.
Well, Remote Desktop Protocol was developed by Microsoft to allow users to log into Windows machines remotely. When this was first released-- and that was a long time ago-- these were physical machines. But of course now, they're mainly virtual machines. RDP has traditionally been accessed via a VPN. But these virtual private networks are expensive to buy and to manage, and they add friction for users.
And SSH has been around even longer than RDP. And it's used to access infrastructure, including networking equipment, firewalls, servers, et cetera, whether they're running Windows, Linux, Mac OS, or proprietary operating systems.
So which organizations, then, is this all relevant to when it comes to access management?
Well, that's easy. I think it's just about every organization you can think of. Because any organization that has IT systems, whether those be traditional on-prem data centers or SaaS applications, they will have privileged users who manage those assets and resources. So this really is relevant to everybody.
Because now more than ever, you have to look at taking nothing for granted, and you have to support flexibility in how you manage your systems. That often means that you will have privileged users working from home. Almost every organization, obviously, has had to go through that since the pandemic, if they weren't already moving towards that beforehand. So you need secure remote access for your privileged users to keep your systems running smoothly and to provide the flexibility that your business requires.
And what exactly, then, does OneLogin provide in this situation?
Well, I mentioned earlier that remote desktop, and traditionally SSH, access is often via VPN. So the first thing we do is remove the need for VPNs while still providing secure access to these managed resources.
So OneLogin enforces advanced authentication, which is what you need to remove the VPN. We also have an option to include risk-based adaptive authentication, which uses a number of different metrics to analyze every session-- every attempt to establish a session with these resources generates a risk score, and you can actually change the authentication challenge that a privileged user faces based on that risk score.
It's machine-learning based-- think of it as a very modern way of adding an additional level of security.
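The risk-scoring idea described here can be sketched roughly as follows. Note that the signal names, weights, and thresholds below are purely illustrative assumptions-- this is not OneLogin's actual SmartFactor model, which is machine-learning based rather than rule based.

```python
# Illustrative only: hand-picked signals and weights standing in for a
# machine-learning risk model. Each session attempt yields a score, and
# the score selects the authentication challenge.

def risk_score(signals: dict) -> int:
    """Combine simple session signals into a 0-100 risk score."""
    score = 0
    if signals.get("new_device"):
        score += 40
    if signals.get("new_geography"):
        score += 30
    if signals.get("unusual_hour"):
        score += 15
    if signals.get("known_vpn_or_proxy"):
        score += 15
    return min(score, 100)

def challenge_for(score: int) -> str:
    """Map the risk score to an authentication challenge."""
    if score < 20:
        return "standard_login"
    if score < 60:
        return "push_notification"
    return "phishing_resistant_mfa"  # e.g. WebAuthn or a hardware key

# A login from a new device in a new geography triggers the strongest challenge.
attempt = {"new_device": True, "new_geography": True}
assert challenge_for(risk_score(attempt)) == "phishing_resistant_mfa"
```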
So that's a standard way that OneLogin protects access to any application. What PAM Essentials does on top of it is it provides full session recording for all privileged users who access your managed infrastructure. That's great not just from a security point of view, it's also good for troubleshooting. If something goes wrong, you can go back and see exactly everything that was done during that session. And so it's a great troubleshooting tool as well.
The other thing we provide is credential vaulting. Credentials are never released to a privileged user, and after use they're rotated automatically in the back end. So when that privileged user logs out, they can't go and use those credentials again. If there's a bad actor that somehow managed to obtain those credentials, they wouldn't be able to use them. But by doing credential vaulting, it means the bad actor never even gets visibility of them in the first place.
It's also a great way for granting one-off access to, say, contractors, or even to a third party. Because you know you've got full visibility on everything they do, they know that they can't get away with doing something improper. And also you can grant it and easily remove it just with a few clicks. It's so easy to do.
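The vault-and-rotate flow just described can be modeled with a small sketch. This is a toy illustration of the concept, not OneLogin's implementation: the credential is handed only to the session proxy, never to the user, and is rotated when the session ends.

```python
import secrets

class CredentialVault:
    """Toy model of credential vaulting: the privileged user never sees
    the credential, and it is rotated automatically after each session."""

    def __init__(self):
        self._secrets = {}

    def enroll(self, account: str):
        self._secrets[account] = secrets.token_urlsafe(24)

    def launch_session(self, account: str) -> dict:
        # The session proxy receives the credential and injects it into
        # the RDP/SSH session; the user only sees the session itself.
        return {"account": account, "credential": self._secrets[account]}

    def end_session(self, account: str):
        # Rotate in the back end so the old credential is useless afterwards,
        # even if a bad actor somehow captured it.
        self._secrets[account] = secrets.token_urlsafe(24)

vault = CredentialVault()
vault.enroll("rdp-002-admin")
old = vault.launch_session("rdp-002-admin")["credential"]
vault.end_session("rdp-002-admin")
# After logout, the previous credential no longer matches.
assert vault.launch_session("rdp-002-admin")["credential"] != old
```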
Stuart, thank you so much. Let's take a look at this in action now with the following demonstration.
[VIDEO PLAYBACK]
- In this demonstration, we'll see how OneLogin is used to perform privileged activities on an integrated SaaS application. So privilege is attached to the admin account and not the standard account.
Then we have fast and secure switching from the standard user into the admin account persona, and we can show how some controls are added around this. Then we have the control where the user needs to be logged in to a workstation secured by PAM Essentials to access their admin account. And finally, we can see how users are able to access the SaaS application via that admin account once they're on the PAM Essentials secured workstation.
OK, so let's start the process. So just like before, we're on our trusted device. And we can sign in using our James Smith user. And we can see again the device trust control is in place. And again, we're doing WebAuthn passkey authentication in a passwordless sign-in flow-- in this case, using our phone with the cross-device authentication that passkeys provide.
So then again, we have our message just to tell us what's going on in this demo environment here. And we're brought into the app portal just as before, and we can click on Microsoft 365 app tile to get into Microsoft 365. And we can see, just like before, this has worked all fine.
But in this case, James does not have any admin privileges in Microsoft 365, as you can see here, because he signed on with his standard account. So let's now try and switch into the admin persona. And we can see here at this point now, we are blocked. We're not allowed to do that from this machine.
So we actually need to go into our PAM Essentials environment and find our SaaS administration RDP host, where we can go onto this machine and from there safely switch into our admin persona, with the full session capture running from the PAM Essentials environment. So we can see we have our RDP 002 machine here, which the user has clicked on. And we're now opening an RDP session into that machine, all using the PAM Essentials and OneLogin integration.
So this brings us through an RDP session, via the session proxy, into our RDP host, where we can log in again to administer our SaaS applications. So we can see we're on the machine now, and we can open up the OneLogin environment again here and try to log in using our James Smith account.
So we see at this point, we're prompted for a passwordless flow-- this time using OneLogin Protect, enabled with number matching. And this is because the WebAuthn passkeys method is not available in this particular use case with our RDP session. So we're using OneLogin Protect instead here.
And we can see the user's signed in. So then they're able to click on the app tile to switch into their admin account. And you can see we have a message just to advise them of that. And at this point, they're in their admin persona, and they have the Microsoft 365 tile available, where they can click on it and sign in. And we can see now they have admin privileges and admin rights in the Microsoft 365 environment, where they can go and perform some administrative duties.
All activities performed on the SaaS admin workstation secured with PAM Essentials will have a session capture enabled. And the session capture of any activities performed can be played back at a later date by an auditor.
So now let's look and see what happens if James tries to sign in with his admin account directly from his workstation. We can see he enters his admin user here into the login page. And even though he's on a managed device and he has a certificate, he's going to get access denied because he's not permitted to use the admin account in this way from this machine, outside of the PAM Essentials secured solution.
[END PLAYBACK]
Up next, we're looking at B2B use cases.
[MUSIC PLAYING]
Organizations are often looking to work more closely with external parties in order to build further business. But the question is, how can you grant access to external companies to enter your applications in a safe way? Here to answer that is Stuart Sharp, Vice President of Product Management at OneLogin.
Stuart, nice to see you. First of all, please could you talk us through some examples of B2B use cases?
Yeah, certainly. So business to business-- what we call B2B use cases-- is really an offshoot of the drive to digitalize processes. And when we say B2B, it can equally apply to organizations who aren't businesses, but we're just separating that out from selling to the end consumer.
So it's one organization relating to another. And that can be anywhere within the business process. It could be from sales and marketing to order and fulfillment processes. It can be contractors that you bring in to augment your workforce, or it could be your-- if you sell to businesses, it can be your customers. So all we're doing is talking about organization to organization.
And when you digitalize your processes-- and now more than ever, that means moving to SaaS-based applications-- you want your business partners to use those applications that you have moved your processes to, for the greatest level of efficiency.
And why could such B2B use cases, then, become a challenge to organizations?
The move towards online processes is driven by the need for greater efficiency-- that desire to lower costs and improve productivity. However, when adding these new systems, you are adding a new layer of complexity and management. You have to grant the businesses you work with-- well, or more properly, you have to grant users, employees at the businesses you work with, access to your relevant systems.
Every time a partner has a new employee, they need to contact you, ask you to set up this person in your system to grant them access, and the new employee will have to create a new password for the application. They'll have to know how to access it. For example, just little things like that add friction to the flow of day-to-day business. And that overhead eats away at productivity gains you made in the first place, that you were trying to achieve by digitalizing those processes.
So let's think about finding answers, then. So how does OneLogin solve situations like this?
We allow businesses to create what we call a trust between the two organizations. What this does, in effect, is say: I know that it's my partner-- the other business-- who knows which of their employees should be accessing our sales portal-- to order new parts, for example.
Now, it doesn't make sense for me to manage who those employees should be. So by trusting the other party, you've automatically removed a lot of the friction.
Now, the way we do that is we link to their identity provider. So if we are a business and we have OneLogin as our identity provider, we will create an identity trust with the other party's identity provider. They already manage their identity system. They're already setting up users and saying what roles they have and what applications they should have access to.
That will mean that it will automatically tell us when there's a new user who should have access to our system. And it can be done without us having to do anything at all. There's no manual intervention required.
Now, of course, we will restrict as a business what that other business can do. When they create users, we'll say, well, they will only be a standard user who can create orders in the system, and that's all they can do. So we still have control over what employees from the other company can do, but we let them decide who those employees should be.
And even more importantly, perhaps, they know when an employee is no longer there, and that employee is removed from the system. Otherwise you just have users who retain access to your system after they change jobs, and they'd still be able to log in. So it's much more secure, as well as more efficient.
Now, that's a very specific, I'm going to partner with this other business, and we're going to choose to create a trust between our identity providers. However, there are other B2B use cases that can leverage centralized systems to verify users.
Increasingly, particularly in Europe, there has been a growth of what we call eID, or national ID systems. And these are regional or national identity systems that verify a user when accessing your application. You can use it for primary authentication, so they will be redirected to this ID system, or to authenticate via some other method.
Or, you can add it as a second form of verification. So they might have a username and password set up to access your system via OneLogin, but you want to put in an additional authentication challenge, and you can redirect them to this third party.
You can also use our adaptive risk authentication-- what we call SmartFactor-- to choose when to redirect that person to this third-party system to verify. So if something seems unusual-- they're logging in from a new computer or a new geography that they haven't logged in before-- you may choose to perform this redirect to this eID system to verify a strong verification of their identity.
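The redirect to a third-party eID provider described here is typically a standard OIDC authorization request. A minimal sketch of building that request is below; the issuer endpoint, client ID, and redirect URI are hypothetical placeholders, not real OneLogin or eID values.

```python
import secrets
from urllib.parse import urlencode

def eid_redirect_url(authorize_endpoint: str, client_id: str,
                     redirect_uri: str) -> str:
    """Build a standard OIDC authorization request URL for a third-party
    eID provider, used as a step-up verification of the user's identity."""
    params = {
        "response_type": "code",            # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",
        "state": secrets.token_urlsafe(16),  # CSRF protection
        "nonce": secrets.token_urlsafe(16),  # ID-token replay protection
    }
    return authorize_endpoint + "?" + urlencode(params)

# Hypothetical endpoints, purely for illustration.
url = eid_redirect_url("https://eid.example/authorize",
                       "my-client-id",
                       "https://app.example/callback")
```

On return, the service would exchange the authorization code for an ID token and verify the `nonce` before treating the user as verified.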
Stuart, thank you so much for talking us through all that. Now what we're going to do is turn to a demonstration.
[VIDEO PLAYBACK]
- The OneLogin Bring Your Own MFA feature allows you to integrate with literally any third-party MFA solution that supports OIDC. So with this, you could even integrate with high-trust identity providers, like eID, government ID, bank ID-- anything that supports OIDC. In this example here, for demo purposes, I'm using 1Kosmos BlockID, which is a very strong biometric authentication app.
Jerry Cantrell here is a demo user who is on a passwordless flow, but he's signing in for the first time, so he doesn't have MFA registered yet. Now, when Jerry provides his username and clicks Continue, the system automatically detects that he doesn't have MFA registered yet. And even though he's a passwordless user, as part of an automated onboarding flow, it now asks him for the password-- just this one time-- and then guides him through a registration flow for his MFA.
So I click on Begin Setup. And here on the list of available MFA methods is 1Kosmos BlockID. So I'm going to use that. Click on it. Open the app on my phone over here. Provide my fingerprint for biometric authentication. Scan the QR code. Here you go. Confirm biometric authentication again. And now this is registered as Jerry's MFA method.
Now, he can go to his profile and register additional security factors as a backup. So, for instance, if he doesn't have his phone and he wants to answer security questions, or he wants to use a YubiKey or whatever, he can add those here, depending on what is configured in the system and what is available to him. And the next time that Jerry signs in, he won't get asked for a password any more, but simply needs to confirm with 1Kosmos BlockID.
So again, opening my app, biometric verification, QR code, and here we go.
[END PLAYBACK]
Coming up next, we'll be looking at protecting software as a service administration with PAM Essentials.
[MUSIC PLAYING]
An identity as a service platform provides great functionality. It's also a key component in providing and revoking access for users. But how can you protect the admin side of such an important platform? Who has what rights to make changes? To give us the answers, let's speak to Stuart Sharp, Vice President of Product Management at OneLogin. Stuart, hello.
First of all, can you tell us exactly what PAM Essentials provides?
PAM Essentials is a new module that we've added on to extend the functionality of OneLogin. So OneLogin, like most IDaaS platforms, traditionally performs authentication and lifecycle management-- the creation and provisioning of users into SaaS applications.
PAM Essentials is about taking functionality that has been used traditionally to protect privileged access to infrastructure-- so to networking devices, to servers, to data centers, et cetera. And we do that by session recording.
So everything that a privileged user does on that Windows RDP or SSH session is recorded-- can be played back. We also use credential vaulting. So a privileged user is never actually given the username and password that's used to access the system. It's injected when the session is launched from OneLogin.
So we protect the privileged users when they log in to OneLogin with strong adaptive risk-based authentication. We then monitor the session and secure their credentials when they launch access to the managed infrastructure.
And how does that, then, protect the admin side of identity as a service?
Well, this is where it gets interesting. So privileged access management has traditionally been for, like I said, RDP and SSH access. However, when you're administering identity as a service, just like any SaaS platform, it's not via RDP, it's not via SSH, it's via a web browser. And that's presented a real challenge for many PAM providers up to this point.
What we do, though, is we have a way that you can launch an RDP session to a Windows machine. And we say that all administrative activity into your OneLogin environment itself, or into any other SaaS application, can only be performed from that RDP session-- from that Windows host. In old terminology, you'd think of it as a jump host.
But what the privileged user is doing is saying, OK, I need to make some administrative changes in OneLogin, or in another SaaS application. I'm going to simply launch my RDP session that's been designated for admin activity into OneLogin, or to Salesforce, whatever. And once I'm there, only once I'm on that machine can I launch administrative sessions into those target SaaS applications.
So how does this interact, then, with OneLogin as an access-management solution?
Well, you have a number of options of how you can do it, but I think one that will be most intuitive is that when a privileged user logs into their OneLogin portal, they will see the applications they have access to as a standard user. So let's say, they're a Salesforce administrator and they're a OneLogin administrator.
So when they launch the Salesforce from their OneLogin portal, they'll just go in as a standard user. They can't go in and change admin settings, et cetera. And even for OneLogin, they wouldn't have access to administrative settings. They just have their profile as a normal user.
However, when they go and launch that RDP session for that host that is designated for administrative activity, and they launched their browser within that host and log into OneLogin, they will see a different set of applications. When they launch Salesforce within that protected Windows session, then that will launch them into Salesforce as an administrator. They'll see that their OneLogin portal will automatically grant them access to administrative settings within OneLogin-- the settings that they don't see in their initial OneLogin portal before they launch that Windows session.
So it's a very clear designation. When they log in from their laptop, from their desktop as normal, they're accessing everything as a standard user. They go to their Managed Infrastructure tab. They launch a session to this protected RDP host. And their view within OneLogin from there is all about privileged access management activity.
Stuart, thank you so much for talking us through all that. And now we're going to take a look at a demo.
[MUSIC PLAYING]
[VIDEO PLAYBACK]
- OneLogin with PAM Essentials offers a great way of controlling access to your managed infrastructure, reusing all the elements of user lifecycle automation in OneLogin. Just a quick recap. We have mappings, which are essentially if/then constructs, and allow us to automate decisions based on any information that we have about a user-- like information in their attributes, group membership in a directory, or even information like last login date.
And then based on these mappings, we make decisions like assigning roles. And with roles, we group application access together. And with application access, we can also assign access to the managed infrastructure, which we will see in just a second.
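The if/then mappings and role groupings described here can be modeled as a small sketch. The rule conditions, role names, and application lists below are hypothetical examples, not OneLogin's actual mapping schema.

```python
# Illustrative model: mappings are if/then rules over user attributes that
# assign roles, and roles bundle application (and infrastructure) access.

ROLES = {
    "linux-admins": ["PAM Essentials", "Linux SSH assets"],
    "sales": ["Salesforce"],
}

MAPPINGS = [
    # (condition over the user, role to assign when it matches)
    (lambda u: "Linux Server Admins" in u["groups"], "linux-admins"),
    (lambda u: u["department"] == "Sales", "sales"),
]

def roles_for(user: dict) -> list:
    """Evaluate every mapping; each matching rule assigns its role."""
    return [role for cond, role in MAPPINGS if cond(user)]

def apps_for(user: dict) -> list:
    """Roles group application access together."""
    return [app for role in roles_for(user) for app in ROLES[role]]

# A user synced from the directory with a matching group membership.
rob = {"groups": ["Linux Server Admins"], "department": "IT"}
assert "PAM Essentials" in apps_for(rob)
```

Adding a user to a directory group changes the mapping outcome, which is why group changes in the demo sync straight through to role and access assignments.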
The configuration of PAM Essentials is very straightforward. I begin by adding my networks, or network segments and rolling out agents to those network segments so that PAM Essentials can connect. Then, I add my directories, like my active directory that I've got in my environment here. I add my infrastructure assets. These can be both domain joined or standalone machines. And then I add my accounts in the context of which users will be able to connect to the assets in my controlled environment.
Now, when I've got all these elements added to my configuration, I'm good to go to set up access policies. And the access policies are where it all comes together. When I look at a policy-- and let me go into Edit mode-- basically what I'm doing here is I'm saying, who should be able to access which resources in the context of which privileged account?
And now to assign that, to assign who should be able to do this, I'm reusing the OneLogin roles. So like I said, I reuse the concepts of lifecycle automation in OneLogin to control access to the managed infrastructure.
So this whole concept of managing that access rotates around the concept of roles in OneLogin. So I can keep it very simple and global by simply using the standard roles that are being created when I set up my PAM Essentials integration. Or, if I want to go more granular, all I need to do is set up more policies that are more granular and then create more roles in OneLogin that allow me to assign those granular access policies in PAM Essentials.
To show how this all works when it's all put together, I've created the test user here in my active directory-- Rob Halford. And when I look at my admin console in OneLogin, this user has been sent over, so now he has access to OneLogin. And he should also have access to PAM Essentials.
Now, I'm signing in as Rob from a domain-joined client machine here. Rob is on a passwordless login flow into OneLogin, so he can simply use the MFA method of his choice. In this case, I'm going to use OneLogin Protect. I get a push notification to my smartphone over here, accept the push, provide fingerprint. And now I'm signed in as Rob.
And now I can see, Rob has access to the applications that have been assigned to him. And also, he's got access to the managed infrastructure.
Now, when he clicks on here, he gets prompted for MFA again, because I've applied an app policy for additional security. And this app policy in this configuration specifically enforces YubiKey as a very phishing resistant MFA method, to additionally protect my managed infrastructure. So I put in my YubiKey, press the button here. And now I'm signed into the managed infrastructure.
And here in PAM Essentials, Rob cannot see anything yet, because I haven't assigned a policy to him that would show him any assets he can access via PAM Essentials. So now I'm going to move over to my domain controller and put Rob into the administrator groups for my Windows and Linux server admins. I click Apply. And this syncs over in real time-- so if you want to provide or revoke access through PAM Essentials, this syncs over immediately. So when I refresh the page here now and click on the Applications tab for Rob, I can see that the proper roles have been assigned. And when I go back to my client machine and refresh the page, this has synced over immediately as well. And now Rob has access to these machines.
Now to get privileged access, Rob clicks on the assets that he wants to access and selects the account he wants to access it with. And he's signed into the machine. Now it's just running a few commands here-- just a few things, so we can actually see something later in the session recording in PAM Essentials. So running a database dump here. Let's see if that was successful. Yes. And logging out again.
Going back to the admin UI in PAM Essentials, I can go to the session recordings where I can download all the different sessions. I can download the audit player here. And when I've downloaded one of these sessions and I want to start the player here, look at the session, I can see that everything where something relevant has happened is highlighted in different colors at the bottom on the timeline, so I can quickly jump to those positions where something has happened that I want to see.
[END PLAYBACK]
Next, we'll be taking a look at identity provider migration.
[MUSIC PLAYING]
An identity as a service platform is at the core of an IT setup, but a lot of organizations struggle to envisage an easy migration in case they do want to move to another solution. So is there an easy way to migrate?
Well, Stuart Sharp, Vice President of Product Management at OneLogin, can help us answer this. Stuart, why is a possible migration of IDP a challenge?
It's a challenge because for many organizations, they're already heavily dependent on their identity providers, particularly with the rise of identity as a service. Many of our customers, for example, have not just dozens of applications, but literally hundreds of applications. So any system where you have hundreds of other systems connected to it, you can imagine, where would you even begin to make a migration and upgrade path to something different?
So how can OneLogin help, then, when it comes to migration?
There are several very important ways that OneLogin can make it a very simple journey. First of all, we can integrate with your existing identity provider-- this is called federation.
That means that at any one point in time, you can choose when users are authenticating with your old system, but accessing applications protected in your new one, or vice versa. So you can gradually migrate applications one at a time without changing the user experience. And you can change the user experience at any time, regardless of how many applications you've migrated.
So that really means that you've minimized the impact on the end user, and it's given you time to go through that migration process on the back end of migrating those dozens or maybe even hundreds of applications on the back end.
And let's just have a think about best practices, then. So are there any best practices when it comes to changing identity provider?
Well, I've been doing this for a long time. And I always advise customers, don't start with a rip and replace. Don't think that I'm going to set up the new one and overnight turn off the old one. You don't need to do that, so why go through that pain and inefficiencies of doing it?
So first of all, it's about a gradual migration. Make sure in terms of your project planning, you're giving yourself enough time to gradually migrate off the old system.
Now, if you're talking about going from one IDaaS platform to another, that's fine. You can have a clean cut over. But if you're talking about an on-premise identity provider that's protecting legacy on-prem systems, seriously consider if what you want to do is keep the on-premise identity provider running only to service those legacy applications while you migrate everything else to the cloud.
Now, another thing that you should really think about as well is, if you're going to support on-premise applications, do you want to do that if they're going to be deprecated in 18 months? Probably not worth it. So only look at supporting those on-premise applications that you know, for whatever reason, are going to be around for another 5 to 10 years-- where you just don't have an option to move them into the cloud.
The other thing you want to do is automate the migration process. So OneLogin allows you to configure applications via API using systems like Terraform. So you actually don't have to manually configure each application. And if you're migrating from an identity provider that allows you to extract the application configuration via code-- via API, most likely-- then you can actually create scripts-- and we've got some sample ones that we've uploaded into GitHub-- where it'll automatically do that migration. It will copy the configuration from your existing identity provider and automatically create the applications in OneLogin. And you can migrate 100 applications at the same time it would take you to migrate one.
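The migration scripting just described boils down to a translation step: read the application configuration exported from the old identity provider and emit records for the new one. A minimal sketch is below; the field names, connector catalog, and record shape are hypothetical illustrations, not OneLogin's actual API schema or the published GitHub samples.

```python
import json

# Hypothetical lookup from application name to a target connector ID.
# A real migration would resolve these from the target provider's catalog.
CONNECTOR_IDS = {"ServiceNow": 12345, "Salesforce": 67890}

def to_target_app(exported: dict) -> dict:
    """Translate one exported app record into the shape the target
    provisioning API (or a Terraform provider) could consume.
    A connector_id of None flags the app for manual review,
    e.g. a custom SAML application."""
    return {
        "name": exported["label"],
        "connector_id": CONNECTOR_IDS.get(exported["label"]),
    }

def migrate(export_file: str) -> list:
    """Read the exported application inventory and translate every app,
    so 100 applications take the same effort as one."""
    with open(export_file) as f:
        apps = json.load(f)
    return [to_target_app(a) for a in apps]
```

In practice the emitted records would be fed to the new provider's API or written into Terraform configuration, rather than returned as a list.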
Stuart, thank you so much for giving us your insight there. We're now going to take a look at this in practice.
[MUSIC PLAYING]
[VIDEO PLAYBACK]
- In this recording, we're going to go over how we can migrate applications from an existing Okta environment into OneLogin, and how to do this with Terraform. So we're going to use our example solution, which we've got published here in this public GitHub repository-- automation-onelogin. And if you have a look in the Terraform folder, in the reference environment 01, we can see an example solution description and configuration that we're going to use to do the application onboarding process.
So let's have a look at what's involved and set the scene here. So we can see, we've got our Okta environments here, which we're going to be migrating the applications from. So we've got 14 applications that are set up in this environment here. And we're going to migrate these into OneLogin using Terraform.
So on the OneLogin side, then, we've got two instances. We have our dev instance here, which is our workforce-- [INAUDIBLE] workforce dev. You can see zero applications here at the moment. And then we've got our fictitious production environment, then, see this one workforce PP-- and again, zero applications here.
So we then have our Spacelift environment. We have two stacks set up in Spacelift, representing each of our different OneLogin environments-- our developer environment here and our production environment-- and we can see they're both connected to GitHub repositories here for our Cedar Stone workforce.
And our dev is connected to a test branch, and our production environment is connected to the main branch. So we can see here in our GitHub repository, we've got our main branch. So we've got our private internal repository here for Cedar Stone workforce, which we've connected to Spacelift.
And we can see here, we've copied in some starting Terraform configuration. So we have our main and our test branch, and these will be where we're applying changes to trigger the automation into the inbound OneLogin environment.
We've also got an example Ansible playbook, which we can use to extract applications from the Okta environment and get them into the format of an application inventory file that we can then use to trigger the Terraform process into OneLogin.
So if we have a look in the Ansible folder, in the automation-onelogin repository, you can see here export applications from Okta. And this is the example we're going to be using here. And finally, we just have our Azure Cloud Shell environment here, which we're going to be using to run our Ansible playbook-- and run this against our Okta environment to extract the applications.
And you can see here, I've already cloned that repository. And we have here the export applications from Okta folder with our example playbooks here. So we're good to go and start the process now.
The next step is to export the application list from Okta and create our application inventory file, which we're then going to feed into our Terraform application onboarding process. So you can see here, I have my Linux machine, which has our Ansible playbook cloned. And we've got our playbook open here.
So all we have to do is define the Okta domain here. So you can see I've set the Okta domain for my environment here. And that's pretty much all we need to set in this playbook.
And if we then run ansible-playbook with our export apps YAML file, this will communicate with the Okta API. So we put in our API key at the prompt, and it will now go and talk to the Okta API, extract the applications that are in that environment, and create the application inventory file.
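The playbook itself isn't shown on screen in detail, but a minimal sketch of what such an export playbook might look like is below. The file name, variable names, and output path are assumptions for illustration; the Okta endpoint (`GET /api/v1/apps`, authenticated with an SSWS API token) is the documented Okta Applications API.

```yaml
# export-apps-from-okta.yml -- illustrative sketch, not the exact playbook from the demo.
- name: Export applications from Okta
  hosts: localhost
  gather_facts: false
  vars:
    okta_domain: "example.okta.com"   # set to your Okta domain
  vars_prompt:
    - name: okta_api_key
      prompt: "Okta API token"
      private: true
  tasks:
    - name: Fetch the application list from the Okta API
      ansible.builtin.uri:
        url: "https://{{ okta_domain }}/api/v1/apps"
        headers:
          Authorization: "SSWS {{ okta_api_key }}"
        return_content: true
      register: okta_apps

    - name: Write the application inventory file
      ansible.builtin.copy:
        content: "{{ okta_apps.json | to_nice_json }}"
        dest: "applications_inventory.json"
```

A real version would also need to handle Okta's API pagination and map each Okta app type to a OneLogin connector ID, as the demo's playbook evidently does.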
So we can see there that's completed successfully. If I refresh here now, you can see I've now got an applications inventory JSON file. All of the applications that were in that Okta environment have been exported into a nice JSON file, which we can now use to feed into our Terraform process to OneLogin.
So we can copy the contents of the applications here from our Ansible playbook output and go to the repository for the developer instance of our OneLogin environment-- so our test branch. So that's this one.
And we're going to basically update the configuration here to create some applications. So like we saw, we currently don't have any applications here in the developer instance, so that's fine. So let's start the process now to add them.
So all we need to do is go and update our locals file in our repository here and add the applications into our configuration. So we can just edit the file here. And we can see here we have an OL application object, which is currently blank. So let's add in that JSON listing of all the applications from Okta.
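The exact schema of the locals file isn't shown in full in the recording, so here is a rough sketch of the shape such a file might take. The `ol_applications` name, field names, and connector ID values are all assumptions for illustration:

```hcl
# Illustrative only -- names and values are assumed, not taken from the demo.
locals {
  ol_applications = [
    {
      name         = "Salesforce"
      connector_id = 12345 # OneLogin connector ID, auto-matched by the export playbook
    },
    {
      name         = "Test SAML App"
      connector_id = 0 # not auto-matched; fill in the generic SAML connector ID manually
    }
  ]
}
```

The Terraform configuration would then iterate over this list (e.g. with `for_each`) to create one OneLogin application resource per entry.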
So you can see here each of the applications has automatically been allocated a connector ID-- so OneLogin connector ID for that related application that we had in our Okta environment. So that's good. We just need to check now if there's any custom SAML applications that we need to just double check and add manually.
So we can see ServiceNow, Salesforce, Twitter-- they all look good. OK, so now we have two test applications here where the connector ID hasn't been automatically populated. This one is a test SAML application, so we need to get the SAML connector ID. We can have a look here in our export from Okta example.
And in our vars file here, we can see we've got the OneLogin connector ID for our generic SAML applications set. So we can just copy that and update the SAML app, which is this one. So that's the connector ID for that one. And the other is a custom SCIM provisioning application, so again we can grab that ID from here and set that in our test environment as well.
So now we should have all of the applications having a relevant connector ID, which is great. And we're pretty much now ready to test this process.
So just to say, we've got some configuration we can set here. We can set who's going to be the application owner-- the person responsible for completing the final configuration of the application connector. We can say whether the application is going to be a birthright application or whether it needs to be allocated on request, whether it's visible or hidden, and then which custom attribute is related to that application and which role is going to provide access to it.
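As a rough illustration of those per-application options, an entry in the configuration might carry fields along these lines. All field names and values here are hypothetical, since the demo doesn't show the exact schema:

```hcl
# Hypothetical per-application settings -- field names are illustrative.
locals {
  example_application = {
    name              = "Salesforce"
    connector_id      = 12345            # OneLogin connector for this app
    application_owner = ""               # email of whoever completes the connector setup
    birthright        = false            # false = allocated on request, not to everyone
    visible           = true             # shown or hidden in the user portal
    custom_attribute  = "app_salesforce" # attribute gating access to the app
  }
}
```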
So for now, we'll just leave everything as is and commit the changes. Just to say that all of the custom attributes related to each application have already been set up in the target environments-- we can see them all here in the admin UI. It's not currently possible to do this through the API, so that's a small step to complete before you run the Terraform configuration.
So we've committed that to our repository. And if we have a little look back, you can see the locals file has just been updated. If we look at our history, at the last commit, we can see that our application inventory file has been updated.
It previously was blank, and now we've added all of these applications, so that's great. That will now trigger a Terraform run in our Spacelift environment. If we go to Spacelift here, we can see there was already a run applying-- and actually it has already completed and finished.
So if we have a look at our runs, we can see that it finished there, and there were 210 resources added to the target environment-- our dev environment here. So that all looks great-- the run applied, as you'd expect, with all of those resources being created from the Terraform configuration.
So now let's have a look at our developer environment, and we should see our 14 applications. There we go-- number of applications: 14. Let's just double check that this matches our Okta environment.
So we have 14 there as well, and these applications have now been created. We can now allocate who will complete the configuration of those applications. If we just take a look at one of them, we can see that it's been created. We can have a look at the configuration and see that the custom attribute for that application is this one. And we can see that it hasn't been set as a birthright application, so access needs to be requested by setting this custom attribute to true for each user.
And we can see which roles are related to providing access to this application. We can see here that AA-AWS IAM Identity Center is the role that will grant access to this application.
And we can also see we've got a whole series of mappings that will automate the allocation of access to those applications, as well as delegated administrator access. So let's now update our configuration again, because we didn't set any application owners. We go back to our test branch and again update our locals file-- let's just update the first two applications on the list.
So we have our OneLogin and Reflection applications here. What we're going to do is set the application owner. We have someone here who is going to be responsible for finishing the configuration of those applications-- he's the technical owner for the business unit responsible for them.
So we'll just make an edit-- we need to edit the file here in Git. Let's add James as the application owner for the first two applications. We can do that by populating his email in the application owner field. We leave everything else as it is, and we can commit those changes.
So that's been committed now to that repository. And again, if we look back at Spacelift, we should see a Terraform run has kicked off here to apply these changes. It's initializing now, and it will run a plan to make sure this is all OK and then automatically deploy the changes to the target environment.
So let's see. There we go-- we've got 30 resources to change. These will be applied now, and we should see the additional configuration in our OneLogin environment. We can see here the apply has completed successfully and 30 resources have been changed. So that's great.
So now let's have a look at our applications. Our first two applications were the ones we wanted to look at. Now that we've updated things, we can reapply all the mappings. And we saw that James was going to be the person responsible for configuring those two applications.
So we can see James here-- James's account here. And if we have a look, we can see, he's been given access to those two applications that he's responsible for. So that's great.
And we can see here that the two application access roles have been set. So he has those applications. And we can also see, if we have a look at the delegated admin roles that have been granted, we can see that he has been given a delegated admin role for the OneLogin application and the Reflection application as well.
So if we switch over now to James's view, he should be able to perform delegated administration activities on those applications. We can see here we're signed in as James now.
James has that delegated administration privilege, and he has those two applications that have been migrated here available to him. And he can now go in and complete the configuration of those applications in the OneLogin Admin Console.
So for example, he can go in and update the OAC information for this application here in this example, adding configuration as needs be.
[END PLAYBACK]
I'm delighted to be joined by Kalle Niemi now, Lead Business Consultant at Intragen. Let's kick off, then, by talking tools, hammers, and the value that Intragen can bring.
Thanks, Helena. And as you mentioned, tools are always tools. You need to know where to use them and how to use them. And I think that is the value that Intragen can bring to the table.
We know what tools to use and where. We have tons of experience from different kinds of identity and access management projects all around Europe. Our experts are both very knowledgeable of the business cases and the business domain and also on the technical side of things-- how to implement these solutions.
And often when I look at customer companies, they don't have the resources to undertake, let's say, even the implementation of these solutions, let alone starting to assess where they are and where they want to go. So they do need external help with that.
Also, putting these tools into use is sometimes simple today, sometimes not. But addressing your use cases, and knowing how to address them with those specific tools, is something where you usually want to rely on the expertise of a specialist company.
And that's the thing, isn't it? You're all there to support, to guide and advise throughout your client's whole journey.
Exactly. We want to be there from the start to the end. So starting when the customer is planning to start their identity and access management journey, or maybe they are already on that journey, but they are wondering what to do next. We can provide that advisory from our business consultancy team. Then we can provide help with the technical implementation of solutions with our technical consultancy team.
And then we manage the whole lifecycle. We have a managed service team as well that can cover your regular support needs. But our customers also have access to our business consultancy and technical consultancy throughout the entire life cycle of the solutions they're using. So yes, we want to be there for the entire journey.
And why would companies, then, need external help at all rather than just doing everything by themselves?
It is very hard to find the level of expertise needed in-house, as this is quite a niche topic for many. This expertise is quite often only found at external partners. You need both the business understanding of the use cases-- how to address them, what the actual processes are that you need to address-- and the technical expertise to implement the solutions that will help you address those pain points, and to manage those solutions throughout the life cycle.
And I think it's very hard to find an organization that will have all this in-house. So most organizations will rely on external help throughout the entire journey.
And when it comes to resources, let's think about wants versus needs, then. It's often about knowing not just what your client might want, but what they actually need.
Yes, that's true. Because there are a lot of those wants in a company-- different stakeholders wanting this and that. But it is a matter of prioritizing those wants and hopes and actually working out what you really need as a company-- bringing in the business expertise of knowing what other companies are doing. What are the regulations? What is the very minimum that you need to do?
And then once you've done that, you can probably address some of the wants as well. But the key thing is to distinguish what you need from what you want. And that is also where we provide our advisory to the customers-- we want to challenge them on their journey as well.
And what is your core message then that you might like to leave us with today?
So I think we've seen some great use cases-- different use cases of how to manage access, whether it's external access or remote desktop access. Those are some examples of the different use cases within the identity and access management domain that you need to address somehow-- and you're probably already addressing them somehow.
But if you are still wondering, OK, that looked good in the demo, how can we achieve the same thing, where do we need to start-- then I would recommend getting in contact with us. We can really help you identify where you are now compared to what you saw in the demo, and what steps you need to take to actually get to the position we saw today on those particular use cases. We can really help you dissect that complex topic into the smaller initiatives you need to undertake to be where you want to be.
Kalle, thank you so much for your time and for giving us that extra insight today. Thank you.
And thank you so much for joining these webinars. We really hope you enjoyed watching them. And we look forward to welcoming you again in the next series.