Hello, and welcome to Unite. I'm glad you've chosen to join me for this presentation about secure DevOps. My name is Dan Peterson. I work for One Identity Engineering as the lead architect for the PAM portfolio of products. In this presentation, we are going to talk about how Safeguard can integrate into your existing infrastructure to securely provide Secrets for DevOps use cases, without requiring that all your employees dramatically change the way that they work.
To lay the foundation for this presentation, I want to give a little background on DevOps. Then I want to talk about how Safeguard is a next-generation PAM product that is built for DevOps. We will talk about the REST API, the application-to-application API, and some of the libraries and SDKs that are easily available to you. Then I want to talk about Secrets Broker, and its unique approach to DevOps Secrets.
Let's start at the beginning. What is DevOps? I could ask 100 different people that question, and get 100 different answers that are specific to the DevOps problems they have, and the use cases that they are trying to implement. The broadest definition I can think of for DevOps is this. While the lines have blurred at many companies today, traditionally you have two groups of people involved in providing a service or an application. You have the developers, who write and test the code. And you have the operations teams, who deploy and maintain the code in production.
DevOps includes everything that happens between those two groups of people. DevOps is writing and securely storing the code. DevOps is building the output. DevOps is publishing the output. Now, that's not a complete picture because providing a service or an application is not a discrete one-time event. Systems and software have a lifecycle. It really is a loop.
After the service or application is deployed the first time, there will be new features and functionality that are needed. First, you plan those features, then you write the code to implement them. That code has to be built. It gets tested for quality. When it is ready, it gets released to production.
Deployment requires lots of orchestration for modern applications. There is responsibility to maintain proper operation in production, and you will want to monitor your application to see how the new features are being used. That information is, again, fed back into the planning for the next round of changes.
That loop of activity between development and operations could be visualized as a pipeline. As companies have undergone a digital transformation, and have begun to incorporate more data and technology in the way they do business, there has been a persistent desire to move faster and faster. There is a desire to be more agile, to get new features and functionality in front of the target customer more often and more quickly, closing that feedback loop of innovation.
The goal is to outperform the competition by improving internal tools and external products. For more and more companies, in more and more industries, software and technology are a part of the product. The need to move faster leads to automation. Automation solves two important problems. It executes very quickly, producing output faster than we could ever do manually. And processes that are encoded or scripted as automated tasks are repeatable, which allows us to safely move faster.
The cybersecurity landscape has been changing. Along with advances in technology, companies face more and more threats to the DevOps toolchain. DevOps automation needs access to critical assets and sensitive information. Securely implementing DevOps automation means authentication and authorization. We must authenticate even the non-human actors involved in production pipelines, and ensure proper authorization rules are in place.
Authentication means secrets. We need passwords, private keys, and other secrets for automated processes to access critical assets, and data, and communicate with one another. Before we delve deeper into secrets, let's talk a little more about the evolution of DevOps. What are we protecting, and what are the technologies we need to integrate with?
Source code repositories have been around for ages. Now we find ourselves protecting intermediate repositories of built components, and packages, and image repositories. Our build systems are becoming increasingly complex. Now they are designed for continuous integration, and continuous deployment. As the complexity of our applications has increased, we now need orchestration frameworks and microservice communication technologies.
In order for deployment automation to be repeatable, we now store infrastructure and configuration as code. When our applications are running in-production, they often need access to sensitive production data. In some cases, access to this data may be subject to regulation. In order to properly maintain an application, we need logging and monitoring capabilities.
There has been an explosion of innovation in tools and frameworks used for DevOps. In past presentations on this subject, I have put slides full of logos on the screen, and asked brave attendees to volunteer which of the logos representing DevOps technologies they have never heard of before. The hundreds of new technologies that have come onto the scene have made us more productive, but the rate at which they are being introduced presents a problem for the security team.
Can these new tools work with our existing security controls? They are automated processes that run without human supervision. Are we sure they are all safe? Are we protecting critical assets and sensitive data? Are we still compliant with regulation?
With that introduction, let's turn our attention to Safeguard. Safeguard for Privileged Passwords is a next-generation PAM solution. It was built from the ground-up in the modern era, using modern techniques and a modern architecture. SPP is built for DevOps because SPP is built for automation.
So let's give a little background on the SPP architecture. First let's start with how SPP is built as a secure hardened appliance. The appliance design is important to security because providing the security guarantees that you want in a credential vault requires a closed system with a tightly-controlled operating environment. Also, an appliance provides for a turnkey deployment scenario. You turn it on, and it is ready to go.
No need to install any software or configure any databases. This dramatically improves your time-to-value, and helps you to implement the security controls that you need as quickly as possible. SPP appliances are built to support many different deployment models. It can be delivered as a hardware appliance to install in your own data center. It can be delivered as a virtual appliance to run in your own hypervisor, on ESX or Hyper-V.
It can be delivered as a virtual appliance to run in your own private cloud instance, on AWS or Azure. It can also be delivered as a service through Safeguard on Demand. The closed system of the SPP appliance does not allow for console access. Everything is locked down. The only way in or out, the only way to use the system, is through the Safeguard API. This means the web interface, the command line, and the scripts that we use must all call the Safeguard API.
SPP has an API-first design. This means that 100% of the Safeguard functionality is available through the API. All API clients to the main Safeguard API use JWT Bearer tokens for authorization. On the previous slide, I represented Safeguard as an appliance. But the Safeguard architecture is built for reliability.
Safeguard supports advanced clustering technology for high availability, and disaster recovery. Critical product functions, such as credential retrieval and access request workflow, are available from any appliance in the cluster, along with a shared indelible audit log. The Safeguard API is readable from any appliance in the cluster. Certain operations for defining policy or creating assets must be performed on the primary appliance. The Safeguard API is a REST-style API, which means it uses HTTP methods for CRUD operations.
CRUD operations are create, read, update, and delete. And these are targeted toward entity resources, which are exposed in URLs as plural nouns. For example, to create a user, you POST to the Users URL. In addition, there are many API actions that cannot be adequately expressed with the simple HTTP method verbs. For those, Safeguard uses POST actions with URLs that end in the appropriate verb.
For example, to unlock a user, you POST to a particular user URL ending with the unlock verb. The Safeguard API is my favorite feature of the Safeguard product. It enables you to customize the way you want to use the product. You can automate any task, and you can integrate with any other service. The best part is that the documentation for how to use this Safeguard API is shipped on the SPP appliance as an interactive UI.
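The URL conventions just described can be sketched as simple string composition. This is a minimal illustration, not product code; the host name `sg.example.com` and the `/service/core/v4` base path are assumptions, so check the Open API doc on your own appliance:

```python
# Sketch of the Safeguard core-API URL conventions described above.
# The host and base path are illustrative placeholders.
BASE = "https://sg.example.com/service/core/v4"

def entity_url(resource, entity_id=None):
    """CRUD target: a plural-noun resource, optionally one entity by ID."""
    url = f"{BASE}/{resource}"
    return f"{url}/{entity_id}" if entity_id is not None else url

def action_url(resource, entity_id, verb):
    """POST action for operations that plain HTTP verbs can't express."""
    return f"{BASE}/{resource}/{entity_id}/{verb}"

print(entity_url("Users"))                # POST here to create a user
print(action_url("Users", 42, "Unlock"))  # POST here to unlock user 42
```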
Safeguard API implements the Open API documentation standard, and it serves a web page called Swagger UI that visualizes and provides tools to call the API interactively, to try out anything you want to programmatically. The question I often receive about using the Safeguard API is, what happens with new versions? Customers don't want to make an investment in the API if automation is going to break with an upgrade.
Well first, know that the Safeguard API version is independent of the version of SPP. It is only upgraded as needed. For example, SPP 6.0, a major release, shipped with the same API version as the old 2.X series. The Safeguard API version will never change during the lifetime of an LTS release.
Just some history. V2 was the original version of the API in the first release of SPP to customers. V3 of the API was released in SPP 2.7. V4 of the API was not released until SPP 7.0. You can expect that the current version of the API will be the same for many years, across many releases. The best part about the Safeguard APIs is that both the current version and the previous version of the API are served from the latest release of Safeguard.
So if you want to call the examples on this page against 7.0, they will still work. Release notes and the Open API documentation will warn you about endpoints that may be removed in a future release. When changes are made to the API, they are not dramatic. For example, to call these same APIs listed on this slide with the V4 API, all you have to do is change the version number.
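Because moving between API versions is just a change to the version number in the URL, migrating a script can be a one-line substitution. A small sketch, assuming the `/service/<name>/vN/` path shape shown in this talk:

```python
import re

def with_api_version(url, version):
    """Swap the /vN/ path segment of a Safeguard API URL.
    The URL shape is an assumption based on the examples in this talk."""
    return re.sub(r"/v\d+/", f"/{version}/", url, count=1)

old = "https://sg.example.com/service/core/v3/Users"
print(with_api_version(old, "v4"))  # https://sg.example.com/service/core/v4/Users
```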
The remainder of this presentation is going to be information and demonstrations of tools that you can use to take full advantage of the Safeguard API with SPP. I will also spend some time to introduce some specialized APIs that are built specifically for DevOps, and some of the libraries, SDKs, and add-ons you can use to implement your DevOps use cases.
The first demonstration is going to be on Swagger UI. I mainly use Swagger UI for information on how to call a particular API endpoint using another tool, but it is perfectly acceptable to call the Safeguard API through Swagger UI directly. Just know that when you try it out with the Swagger UI, you are operating on your live data.
In this demo, I'm going to start by showing you how to find the Safeguard API tutorial that is published on GitHub. This GitHub page doesn't contain any code. Rather, it contains documentation and hands-on labs to get you up to speed with the Safeguard API. It is organized into six parts.
You don't need to go through all six parts to be a Safeguard API expert. I recommend going through SPP1 and SPP2, and then select the next part that is most interesting to you. The SPP1 introduction includes a bunch of background on SPP and the API. It includes some detail as to how the URLs are formed, and how to call the various services.
A key piece of information is that the Safeguard API surface area is separated into multiple services. Core is where most of the product functionality resides. Appliance is specific to managing the hardware, or the VM. Notification contains anonymous status endpoints, which are useful in implementing SPP network architecture with load balancers. A2A is specific to application-to-application integration.
Heading back and clicking on SPP2 brings us to the Swagger UI tutorial. This is where we are going to find our first hands-on lab. These hands-on labs give you steps that you can follow to experiment with your own SPP appliance. The first thing it shows you in the hands-on lab is how to find swagger.json, the Open API doc, and the Swagger UI. Let's navigate to the Swagger UI on my test appliance.
We will start with the Notification Service, because it's the smallest and easiest to use. As I mentioned before, the Notification Service has status information that doesn't require authentication. So we can just click Try it out to call this endpoint against my running appliance. Because this endpoint doesn't require any parameters, I can just click Execute.
The output shows me how to perform the same request using curl, and shows the request URL. Below, you see the actual server response from the SPP appliance. I can easily copy the resulting JSON using this clipboard button. If I wanted to, I could copy that response to another application. Since Notification's a read-only API, there isn't much else to talk about here.
Let's go to the Appliance Service. This takes a little longer to load, because it has more endpoints. Let's just pick a simple one, like network interfaces. This API is how you configure network interfaces on SPP. Starting with the get, let's click on Try it out, and hit the Execute button. This time, you can see that I get 401 Unauthorized back from the server.
There is a proper error code as well, and a message from SPP in the body. Lucky for us, Swagger UI provides a way to authenticate and pass JWT authorizations. If I had a JWT Bearer token, I could just paste it below. But since I don't, I will use OAuth2 implicit flow to request one.
Clicking on Authorize sends me to the SPP login page. I use my normal SPP login process. In this case, I use an AD log-in. Then, when I'm redirected back, I am ready to go. I can just close out, and scroll back down. And this time, when I click Execute, I get a 200 OK. And the response body has the current configuration state of my network interfaces.
As you can see, this endpoint returned an array of network interface objects. If I want to get just one, I scroll down to the endpoint with curly brackets in the URL. These curly brackets represent a path parameter that can be used to select the specific object that I want from the list. This pattern is used throughout the Safeguard API. I happen to know the ID of the interface I want is X0.
If I enter that and click Execute, I get just the object I wanted. I'm going to copy that to my clipboard. Then I'm going to go down to the Put operation, which allows me to update the network interface. I click Try it out, and enter X0 for the ID. Then I can highlight the template body, and paste the one from my clipboard. Now I can change just the parts of the object that I want to.
SPP follows this pattern where you can get an entity from the Safeguard API, and change it, and put it back to make an update. For my update, I'm going to set the link duplex to full. You may be wondering, how did I know that I can put the word Full in for that parameter? Well, Swagger has the documentation of accepted values under the schema for each endpoint. If I just look up that field, I can see link duplex, and the accepted enum values.
Each of these properties has data types and default values. Sometimes they're nullable, which means the property can be set to null. I don't actually want to set my network configuration, because it requires brief appliance maintenance for that to work. But if I did click Execute, that configuration would update. The key thing to point out is that even the appliance OS properties, like the IP address, are managed via the Safeguard API. Everything is in the API.
So let's go load the core Swagger. This one takes much longer to load, so I skipped it in the recording. There are lots of endpoints in core. Let's show an example from the user's endpoint. In the same way, if I just try to click Try it out, and then Execute, I get back 401 Unauthorized. Because I switched pages, my authorization was removed. You have to authenticate on each Swagger page.
And I'll just do that in exactly the same way, with an AD login. Now when I come back and click Execute, I get a list of all users back from the Safeguard API. Let's search through, and find a local test user that we can modify. ID 43. So if I take that ID and go down to the specific user endpoint, I can plug it into Swagger, and get just that user.
But what if I want to find it in a different way? What if I only know the name of the user I'm looking for? Well, the Filter parameter allows me to create a query. Instructions on filter syntax are found on GitHub. Let's find the user with the name equal to UserAdmin. Now, this same endpoint that I use to list all users only returns the list of users that match the filter. So rather than scanning through the list in the UI, or programmatically filtering on the client side, I can make the server do the work.
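Server-side filtering like this comes down to composing query-string parameters. Here is a small sketch of building them with proper URL encoding; the `filter` and `fields` parameter names come from the demo, and the filter expression grammar is documented on the Safeguard GitHub pages:

```python
from urllib.parse import urlencode

def users_query(filter_expr=None, fields=None):
    """Build the query string for a list endpoint, letting the server
    do the matching instead of filtering client-side."""
    params = {}
    if filter_expr:
        params["filter"] = filter_expr
    if fields:
        params["fields"] = ",".join(fields)
    return urlencode(params)

print(users_query(filter_expr="Name eq 'UserAdmin'", fields=["Id", "Name"]))
# filter=Name+eq+%27UserAdmin%27&fields=Id%2CName
```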
So let's take the 43 down, and select just the user we want, returned as a single object rather than a JSON list. I'm going to copy that body so that we can modify it. Let's go down to the Put operation to perform the update. I paste in the body, and plug in the ID. I'm going to change the description. When I execute this method, it's actually going to update the object.
You can see my request URL includes the 43. It returned 200. And the body that came back includes my update to the description. So that's how you can use Swagger UI to explore the Safeguard API, and to make changes to the data in a running system. As I mentioned before the demo, I mainly use Swagger UI to find information about how to call the Safeguard API, and then use a different tool to actually call it.
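The get-change-put pattern used in this demo can be sketched as a small helper. This is an illustration, not product code: the URL shape and the `Description` property follow the demo, and `session` is deliberately injectable (anything with requests-style `.get()`/`.put()` methods) so the pattern is easy to test without an appliance:

```python
def update_description(session, base_url, entity, entity_id, description):
    """Read-modify-write sketch: GET an entity, change one field in
    memory, then PUT the whole object back to make the update."""
    url = f"{base_url}/{entity}/{entity_id}"
    obj = session.get(url).json()             # fetch current state
    obj["Description"] = description          # modify the copy in memory
    return session.put(url, json=obj).json()  # send the full object back
```

In real use, `session` would be a `requests.Session` carrying a JWT bearer token in its default headers.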
So what are those tools? Possibly the most important thing you can remember from my presentation is the location of the One Identity GitHub page. That is where you will find tools and integrations for many One Identity products. That is where you will find all of the scripting libraries, SDKs, plug-ins, and open-source add-ons for Safeguard.
These tools make it way easier to use the Safeguard API. They include interfaces that dramatically reduce the complexity of authentication and composition of REST API method invocations. Currently, Safeguard supports three main scripting libraries. Safeguard-PS is a PowerShell module that is the most elaborate library, with hundreds of cmdlets.
Safeguard-Bash provides powerful helper scripts for the Bash scripting environment. And Safeguard-Ansible includes a lookup plug-in and a credential type plug-in to help you use Secrets from Safeguard in Ansible inventories and playbooks. Safeguard also has four SDKs.
SafeguardDotNet and SafeguardJava have been around the longest and are the most mature. They're used in many integrations, both by One Identity and third parties. Safeguard.js is newer. It works for both front-end JavaScript and back-end Node.js projects. PySafeguard and Safeguard-Ansible are brand new.
Wherever possible, these components are distributed through public package repositories. For example, if you want to install Safeguard-PS, you should install it directly from the PowerShell Gallery. Instructions on how to do it are on the Safeguard-PS GitHub page. Safeguard-Ansible can be installed from Galaxy. SafeguardDotNet can be installed from NuGet, SafeguardJava from Maven Central, PySafeguard from PyPI, and Safeguard.js from npm. All of those instructions are found on the individual Safeguard SDK pages.
Before the next demo, I want to introduce a specific part of the Safeguard API called the A2A API. A2A stands for application-to-application. This is the part of the Safeguard API that is meant to be called by automated processes for credential retrieval. It has a few properties that make it better suited to that purpose than the Safeguard access request API endpoints.
First is that the A2A API allows a credential to be retrieved in a single round trip. Automated processes need to be able to retrieve credentials quickly, so the access request endpoints that require multiple calls are not as well-suited. Another property is that the A2A API uses client certificate authentication, and mutual TLS.
Client certificates are the best way to secure a non-human credential. Operating systems and security modules provide special storage and interfaces for using client certificate authentication that are much more secure than using an API key, or another password that would have to be injected. The A2A API supports IP restrictions that allow you to configure that only certain IP addresses and ranges can be used to retrieve certain credentials.
And finally, A2A API uses account obfuscation via an API key so that when you add a request for a credential to a configuration file that must be checked into source control, you aren't giving away information about the asset and the account that the credential corresponds to. The A2A API has two separate features. One is credential retrieval for passwords, SSH keys, and API keys.
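Putting those properties together, an A2A retrieval is a single GET with an API key and a client certificate. Here is a sketch that only composes the pieces of the request; the URL shape, the `type` parameter, and the `Authorization: A2A <key>` header are assumptions modeled on the Safeguard-Bash helpers, so verify them against the A2A Open API doc on your appliance:

```python
def a2a_request(appliance, api_key, cred_type="Password"):
    """Compose the parts of a single-round-trip A2A credential
    retrieval. Nothing here touches the network; it just shows what
    goes into the request."""
    return {
        "url": f"https://{appliance}/service/a2a/v4/Credentials",
        "params": {"type": cred_type},   # e.g. Password, SshKey, ApiKey
        "headers": {"Authorization": f"A2A {api_key}"},
        # In real use you would also present the client certificate for
        # mutual TLS, e.g. requests.get(..., cert=("cert.pem", "key.pem")).
    }

req = a2a_request("sg.example.com", "my-api-key")
print(req["url"])
```

Note that only the obfuscated API key appears in the request, so configuration checked into source control gives away nothing about the underlying asset and account.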
The other is for an access request broker. This is an interesting endpoint that can be used to make access requests on behalf of a Safeguard user. This can be very useful for integrating with a ticketing system. Another cool capability of the Safeguard API is the ability to subscribe to and listen for events over SignalR, which is the Microsoft implementation of real-time web technology. Usually it's WebSockets.
While you can listen for events using the general-purpose Safeguard API, this capability is also available in the A2A API. This is important because, in many integrations, an automated process needs to know the current password, even right after it changes. With other products, this has to be implemented by frequently polling the vault for the password.
Safeguard A2A has a better way. The A2A caller can subscribe to a particular account, and the integrating application will be notified immediately when a credential changes. This helps to avoid outages, and greatly simplifies the implementation logic.
In this demo, I'm going to provide an introduction to Safeguard-PS. This PowerShell module is the most extensive of any of the Safeguard scripting libraries or SDKs. I will start by showing you how to install it. It is easiest just to do that from the command line with the Install-Module cmdlet.
You can see I already have it installed from source, so it didn't get overwritten automatically. But if I had never installed it before, I would get it from the PowerShell Gallery. There are more instructions on installation on the One Identity Safeguard GitHub page. To list all the cmdlets that you get from Safeguard-PS, we have added the Get-SafeguardCommand cmdlet.
I am using PowerShell Core and the new Windows Terminal, which I strongly suggest. Windows Terminal gives me tabs, and PowerShell Core does a good job of command completion. You can see that it is suggesting a parameter to me based on my command history. If I just press Enter without putting in a parameter, it will list all Safeguard cmdlets, and there are lots of them-- too many for us to discuss in this short demo.
Perhaps the most important capability of the Safeguard-PS module is that it facilitates authentication to SPP using the Connect-Safeguard cmdlet. If I just type Connect-Safeguard and give it the DNS name of the test appliance in my cluster, it will then prompt me for the identity provider, the username, and the password.
dan.vas is the name of my Active Directory identity provider, with ID AD1. Then I can use another simple cmdlet to show the user information of the currently logged-on user. You can see the output comes back as a PowerShell object.
This might be surprising to you if you were expecting to see JSON, which is what is actually returned by the Safeguard API. The underlying Invoke-RestMethod cmdlet automatically converts the server response to PowerShell objects, which makes them easy to manipulate. If I wanted to get the JSON back, I could easily convert it using PowerShell. I just pipe the objects from the response to the ConvertTo-Json cmdlet.
Now, let's disconnect from SPP, and let me show you another way to connect. What if you are using 2FA, or modern federated authentication? The Connect-Safeguard commandlet allows you to authenticate via browser in exactly the same way as you would if you were using Azure CLI.
The browser pops up with a standard SPP login experience. And when I have successfully authenticated, I close the browser, and I see my login was successful. I'm going to run the same Get-SafeguardLoggedInUser cmdlet as before, and it gives the same output.
Let me run Get-SafeguardLoggedInUser again, but this time give it the Verbose parameter. The Verbose parameter shows me how the cmdlet works. It shows what URL was used for the me endpoint, and that no extra parameters were specified. This is cool, because almost all Safeguard-PS cmdlets are actually implemented using a single command called Invoke-SafeguardMethod. That means I can compose exactly the same API request using Invoke-SafeguardMethod.
So we specify the core service, the GET method, and the me relative URL, and I get the same response. If I want to, I can specify a different API version, and actually call the older me endpoint from the V3 API. If I show verbose again, we can see how that affects the URL.
So let's do something a little more useful, and call the Get-SafeguardUser cmdlet. When I call it without parameters, I get all the users. I want just one, so I'm going to use, arbitrarily, ID 43. That's still a lot of output, due to all those properties. Some get cmdlets allow you to specify just the fields you want coming back from the API.
I can do that for all users, too. And if I pipe that to the built-in Format-Table cmdlet, it becomes much easier to deal with. Instead of a load of entities, the output comes back almost like a table. What if you wanted CSV, to actually load it as a table? One interesting thing is that I can't convert that to CSV in the same way as I did with JSON.
ConvertTo-Csv gets hung up at a higher level in the PowerShell object model. I'll show an easier way to get CSV later. But first, let's see how the Fields parameter affects my API request using verbose. You can see the fields query parameter is added to the URL. And you can see verbose output for what the parameters were before they were encoded.
I can send that same API request using Invoke-SafeguardMethod. Let me show you how to do that. I'll want to specify the core service, GET, and the Users relative URL. Then, to specify the query parameters, I use the Parameters argument, and specify it as a hashtable literal.
The reason that didn't work is because hashtable literals in PowerShell use an equal sign between the key and the value. So I'll just fix that up, and run it, and you get the same result. The point of all this is that you can implement any API request using Invoke-SafeguardMethod. Now, let's talk about getting the CSV output that we wanted.
Invoke-SafeguardMethod allows you to specify an Accept header. If I set that to text/csv, then I get exactly what I'm looking for in my CSV output, with the top row for the column names. Now, what if I want to search for a user by name? Say, Uber Admin. There is a Find-SafeguardUser cmdlet that allows me to specify a query filter.
Notice the quoting I have to do for the filter string. That first time, I accidentally sent backticks. Let's try that again with single quotes. Now that I have my quoting figured out, I realize that UserName is the old property name from V3. It's now Name. Not there. Maybe I can find it with a space.
I thought I had a user in here with that name. Let's try a display name. If that doesn't work, I guess I'll use a different operator. I have to spell it right, of course. OK, that found something. But it's so much data, it's hard to see what it was.
Let's use the Fields parameter to output just what we are looking for. There is no Uber Admin on this system. Well, let's get Global Admin instead. Let's say I want to modify that user. So instead of just getting the user, I'm going to save it to a variable.
If I just type the variable in, I can print the object that came back. On this object, the description is null. Let's set it to "what is this for?" Oops, I made a typo at the beginning of the line. OK, that just sets the description of my object in memory, but I want to send it back to the server so that it sets the value up there.
I can use Edit-SafeguardUser, and give it my variable as a user object to set it. It can find the ID from my specified object. So if I execute that command, I get a response back, and you can see that my new description is there. But just to prove it, I'll do a get directly against the server. And you can see that the description is there.
Now, let's do something with assets and accounts. If I use Get-SafeguardAssetAccount, it gets all the accounts on SPP, which is overwhelming. So let's get just the fields we want. Oh, the asset name field also changed in V4 to a sub-object. So let's add a dot. Now you can see all these accounts, and the assets they're on.
Let's do something different from what we did with users. Let's execute a password change. I'll pick an account ID, 51, and run the Invoke-SafeguardAssetAccountPasswordChange cmdlet with ID 51. Well, it looks like that got interpreted as a different parameter. So let's be explicit that we are sending in an account ID.
This endpoint works differently than the rest of the Safeguard API. Changing a password is a long-running task, so it doesn't return an immediate result. Instead, there is a progress callback that you can listen to for updates. Safeguard-PS wraps all of this complexity for you, and then prints out all of the output at the end.
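If you are not using a library that wraps this for you, handling a long-running task usually comes down to a polling loop. A generic sketch under stated assumptions: the `State` key and the `Success`/`Failure` state names are illustrative, not the product's exact values, and `poll` stands in for whatever status call your client makes:

```python
import time

def wait_for_task(poll, timeout=60.0, interval=0.5,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll a long-running task (like a password change) until it
    reports a terminal state or the timeout expires."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = poll()
        if status.get("State") in ("Success", "Failure"):
            return status
        sleep(interval)  # wait before asking again
    raise TimeoutError("task did not finish in time")
```

The `clock` and `sleep` parameters are injectable only so the loop can be tested without real waiting.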
So those are just a few examples from Safeguard-PS. Because SPP is API-first, you can do anything via the Safeguard API. And Safeguard-PS probably has a purpose-built cmdlet to do it.
For the next demo, I want to show Safeguard-Bash, which is another scripting library. And I want to use it to demo how scripting against SPP can be event-driven. Safeguard-Bash excels at A2A and event-driven use cases, but you could also use one of our other SDKs, such as SafeguardDotNet, SafeguardJava, or PySafeguard.
I encourage you to go to those GitHub pages for more information on how to use those languages, if that's what you're interested in. To start, I'm going to show you the Safeguard-Bash GitHub page. And I'm going to jump into the source directory to show you that Safeguard-Bash is really just a bunch of helper scripts that work really well together if you add them to your path.
I'm going to highlight these A2A scripts, because we are going to demo A2A use cases. The event-driven use case that we are going to do at the end is a little more advanced, so we have written and published a sample you can follow for yourself. Going into that sample directory, there is a password handler script that is just going to print out the password with color console output. If you were to build an integration for yourself, you would replace this script on your command line with something more meaningful.
If you glance at the source, you will see that this handler script is receiving the password via standard input. This is important, because we don't want to leak password information into the process table. In order to use this sample, I need a simple PKI. I have a root CA generated on June 5 of 2021, with a separate issuer CA. That two-level PKI is represented by PEM-formatted certificates.
I issued a certificate for an A2A user as a PKCS 12 file, but I've converted that to separate PEM-formatted cert and key files. I put all of Safeguard-Bash in my path to run these examples. Now, the first script, I'm going to run requires Certificate Authentication. The purpose of this script is to show which accounts can be retrieved via A2A for this certificate.
This is how you compose the command line, but there is a -h option that you can pass to see the usage. Running the command prompts me for a password, again so the password doesn't end up in the process table. From the result of that script, I can see an API response that lists the accounts I have access to via A2A: the Azure AD Apps asset with the Test App account, and radius.dan.vas with the L local user account. Note the API keys that are used to pull those credentials.
With the A2A API, you don't specify your account requests by account name. So the first thing I'm going to do is pull an API key secret from the Test App account. To compose the command line, I need the target SPP appliance, the client certificate information passed as the PEM key and PEM cert files, and the API key for that account.
Executing that, you can see that I get back an API response. You can see that there are two API keys configured for that account, Test Key and TestApiKeyB. Those are the current client IDs and secrets for those keys. The response to an API key A2A request is a JSON object with all of that information.
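If you were composing this request without the helper scripts, the shape is an HTTP GET with the API key carried in the Authorization header, while the client certificate authenticates you at the TLS layer. The following Python sketch only builds the URL and headers, without sending anything; the endpoint path, API version, and header scheme shown here are my assumptions from the Safeguard SDKs, so verify them against your SPP documentation:

```python
def build_a2a_request(appliance, api_key, cred_type="Password", version="v4"):
    """Compose the URL and headers for an A2A credential retrieval.

    The client certificate and key files are supplied separately to the
    TLS layer; only the API key goes in the Authorization header. The
    path and version here are assumptions -- check your SPP docs.
    """
    url = f"https://{appliance}/service/a2a/{version}/Credentials?type={cred_type}"
    headers = {
        "Authorization": f"A2A {api_key}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_a2a_request("spp.example.com", "MyApiKey==")
print(url)
```

Note that the account never appears in the request; the API key alone identifies which credential you are asking for, which is exactly why the previous script listed the keys.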
Now I'm going to use the get A2A password helper script to get the password for the account on radius.dan.vas. The command line is basically the same as for the previous example, except I want to specify the appropriate API key for the L user account on radius.dan.vas. When I execute that, notice the password comes back wrapped in quotes.
The reason this happens is that the API is sending back a JSON object. That content type is specified in the HTTP request with an Accept header for application/json. To make this more usable, we have a -r option that will give the password in raw form. Now I want to show how to get the SSH private key for the same account using the get A2A private key helper script.
By default, the private key comes back as a string. I can use the -r option again to get the raw output. Using that option with an SSH key also processes the newlines, making the output ready for use, or to be piped to a file.
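The quoting and newline behavior is just standard JSON string decoding, which you can see in a couple of lines of Python; the password and key values below are made up for illustration:

```python
import json

# An A2A response body is a JSON string, so a password arrives wrapped
# in quotes, and an SSH key arrives with escaped "\n" sequences.
password_body = '"MyP@ssw0rd"'
key_body = ('"-----BEGIN OPENSSH PRIVATE KEY-----\\n'
            'AAAA...\\n'
            '-----END OPENSSH PRIVATE KEY-----"')

# json.loads does what the -r (raw) option does: it strips the quotes
# and turns escape sequences into real newline characters.
print(json.loads(password_body))  # prints: MyP@ssw0rd
key = json.loads(key_body)
print(key.count("\n"))            # prints: 2
```

That is why the raw form can be piped straight to a file and used as-is by an SSH client.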
Let's clear the screen, and do the last demo. Remember the event handler script that prints the password, which we showed at the beginning of the demo? Now we are going to use it, along with a helper script called Handle A2A Password Event. What this script does is connect to SPP using a client certificate, immediately retrieve the password, and call the handler. Then it listens for a password change event. If that event is detected, it will retrieve the password again and call the handler.
I will execute the listener in the right-hand terminal. And in the left, I will log in with Safeguard-PS and change the password. After starting the listener, you see the password printed the first time. Moving to the left terminal, I log in.
Let me try that again with the correct password. I call Invoke-SafeguardAssetAccountPasswordChange with the radius L user account. Notice, on the right, I get the new password immediately. Let's change it one more time, just for good measure. There's the new password.
Before we are done, I want to show you some similar functionality that's available in the brand new PySafeguard module. Starting from GitHub again, you can see PySafeguard has its own page. It has some instructions on how to connect using PySafeguard, and it has brief instructions on how to install using pip. In the case of PySafeguard, the event listener functionality requires an optional dependency package for SignalR.
To demo this, we will start from the command line by creating a new Python virtual environment. Let's activate that environment so we can install modules without affecting the rest of the system. Let's pull in PySafeguard. The main dependency for PySafeguard is the requests module.
I'm going to demo from the command line. Let's just import everything from PySafeguard. I'll copy some commands from Notepad to make a connection, to move things along quickly. In this example, I'm using an appliance without a properly configured TLS certificate. You would never do this in production, but it is good to know how to work around that problem when prototyping.
Setting verify equal to false when creating the connection object takes care of disabling TLS certificate verification. Then I connect using the default user and password. The warnings you see are related to ignoring TLS certificate validation. The first thing I'm going to do is just invoke the system time method on the appliance service.
I made a typo by using connect rather than connection, which is the actual name of my object. The invoke method returns a response object, with a status code and other properties. Let's print the actual content from the object. And you see, there is the current UTC time for SPP.
Let's exit out, and install SignalR so I can show an event example. In this part of the demo, I'm going to connect to the full event service, rather than to the A2A event service, as shown previously. Once that SignalR component is installed, we import everything from PySafeguard again.
Let's create a connection object. Now let's define a callback that just prints the result message out to the screen. Now let's connect. My PySafeguard connection is listening. In the left terminal, I'm going to connect using Safeguard-PS, and start a new Safeguard backup.
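A callback like the one in this demo can be sketched in a few lines of Python. The payload fields used here are illustrative assumptions; inspect a real SPP event to see the exact schema before relying on any field names:

```python
import json

def format_event(message):
    """Render one event notification as a line of text.

    The payload shape (Name and Message fields on a JSON object) is an
    assumption for illustration -- check a real SPP event's schema.
    """
    event = json.loads(message) if isinstance(message, str) else message
    return "[{}] {}".format(event.get("Name", "UnknownEvent"),
                            event.get("Message", ""))

def on_event(message):
    # The callback handed to the listener just prints the rendered line.
    print(format_event(message))

# Simulated notification, as the listener might deliver it:
on_event('{"Name": "BackupCompleted", "Message": "Backup finished"}')
```

In a real integration, the callback body is where you would react to the event, for example by re-fetching a credential, as the Safeguard-Bash sample did.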
Every event in SPP is logged and distributed to event listeners, per application permissions. Running the New-SafeguardBackup command will start the backup. The listener gets the event in the right-hand terminal. Taking a backup is a long-running process, so that backup completed event should also show up on the right.
I'm going to go over to the left and check on that backup to see if it completed, because the event didn't appear on the right as I would have expected. It looks like I just needed to scroll down. Let's export or download the backup on the PowerShell side. Downloading a backup, of course, takes some time. But once the bytes are received, the event shows up on the right. Although the backup failed to save, because I was in the root of the C: drive on the left.
And that is the A2A and event-driven demo. A2A is the perfect API for pulling secrets quickly and securely from SPP. These listeners, written in bash, dotnet, and Java, are very robust. They know how to reconnect in the case of an outage. These APIs and SDKs give you a better way of retrieving passwords without constant polling.
We have just released support for an Ansible integration with Safeguard. This integration is built on top of the PySafeguard SDK. Ansible is a very common tool used to automate apps and IT infrastructure. Ansible is actually sponsored by Red Hat, but there is a Community Edition that is very commonly used by many companies and IT departments to simplify deployment and configuration tasks.
Ansible has its own declarative language for expressing desired state as YAML. Ansible automation is organized into playbooks that operate on inventories. Red Hat provides the Ansible automation platform that is a web UI for using Ansible. It used to be called Ansible Tower. There is also a free web UI called AWX, but many people just use the command line.
Safeguard provides two integrations with Ansible. First is the credential lookup plug-in that allows you to load credentials into your Ansible sources-- your playbooks and inventories. The second is the Credential Type plugin that integrates with the Red Hat Ansible automation platform, or with AWX.
In this demo, I want to briefly show you Safeguard-Ansible in action from the command line. Before we get to that, let's start with the Safeguard-Ansible GitHub page. This is where you would go for information on Safeguard-Ansible, including how to install it. The contents of this page are split into two parts.
The first is the Credentials Lookup plugin, and the second is the Credential Type plugin. We are going to show the Lookup plugin in this demo. Our Ansible integration requires PySafeguard as a dependency. This can be easily installed via pip, and then the plugin itself can be installed from Ansible Galaxy.
However, if you don't want to use Galaxy, you can download the plugin directly from GitHub. Below, there are also some usage examples similar to the one I'm going to walk through in this demo. Let's start the demo by running the command to install. I already have the collection installed, so this doesn't take any action in my environment. I'm going to show you how to use Safeguard-Ansible in the inventory file to connect to a target host.
To use this integration, I need to add a variable to my inventory file that contains the necessary information to do the lookup. Really, this is just the connection information for using the A2A service of SPP. That connection information is a client cert for authentication, and a root certificate to validate the TLS connection to SPP. Then, when specifying the host, I can set an API key to the credential that I want to look up.
The actual call to the plugin is highlighted, where I am passing in the parameters. I have the certificate files installed here locally, and the A2A credential retrieval configuration on SPP can be locked down to only allow connections from this Ansible control node. I'm just going to run a simple ping against all hosts listed in my inventory.
This essentially causes Ansible to create an SSH connection to the host I have listed. In this case, it is just radius.dan.vas. But the credential used to make the connection is the password being pulled from SPP by the lookup plugin, and nothing needs to be hardcoded into the Ansible configuration files.
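Putting that together, an inventory might look something like the sketch below. To be clear, the plugin name and parameter list here are placeholders, not the real ones; check the Safeguard-Ansible README for the exact lookup syntax. Only the overall shape, with the A2A connection details in group variables and a per-host lookup by API key, reflects the demo:

```yaml
# Hypothetical inventory sketch -- the lookup plugin name and its
# parameters are placeholders; see the Safeguard-Ansible README for
# the real syntax. No secret is hardcoded anywhere in this file.
all:
  vars:
    spp_appliance: spp.example.com
    spp_client_cert: /etc/ansible/certs/a2a-user.cert.pem
    spp_client_key: /etc/ansible/certs/a2a-user.key.pem
    spp_ca_cert: /etc/ansible/certs/root-ca.cert.pem
  hosts:
    radius.dan.vas:
      ansible_user: l
      # The password is pulled from SPP at run time by API key.
      ansible_password: "{{ lookup('my_safeguard_lookup', api_key='...') }}"
```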
The presentation and demos thus far have highlighted the automation capabilities of SPP, and the tools that are useful when implementing your own DevOps scenarios. I encourage you to visit the One Identity GitHub page to learn more about using the Safeguard API and these powerful automation tools.
To wrap up, let's get back to DevOps. Sometimes there can be a little friction between development and operations. We mentioned earlier the massive number of tools that have been coming out to speed up the development process. It seems like there are more new frameworks and new tools than there are new products to put them in. And whether we like to admit it or not, there is a certain amount of technology fashion in our industry, where some developers really want to be seen using the newest and coolest thing.
Operations, on the other hand, usually has a greater sensitivity to, and responsibility for, security and maintaining the posture of the product. This can lead to some disagreements. Developers will often come along and say, hey, I want to use this great new tool. And because operations is worried about whether they can do so securely, they often have to say no.
On the other hand, operations receives mandates to implement security controls in production, and throughout the DevOps pipeline. So they come along to developers and say, hey, I need you to use this new security tool. Developers don't want to be forced out of the tools that they are familiar with, and/or that they think are more fashionable, so they say, no way.
This can lead to friction that, unfortunately, becomes a security risk. The last thing we want is for these two groups of people to be working around each other, and omitting important security controls. This friction can cause developers to check in an API key with the intent to remove it later, so they don't have to slow down their coding process.
This is why Safeguard offers two alternative approaches to using secrets in DevOps automation integration. Our goal is that you be able to move at the speed of DevOps with the best possible experience, depending on your use case. Earlier we described how the A2A API can be used to pull a password, or an SSH key, or an API key from Safeguard, which is a great strategy for all manner of DevOps and RPA integrations.
However, Secrets Broker for DevOps allows you to set up a different scenario, where Safeguard pushes a secret into the DevOps world. Two advantages to this strategy that I'll mention here are that, one, you can allow your developers to continue using the tooling that they prefer, thus reducing that friction. And two, you can avoid granting direct network access to your PAM vault from the DevOps world.
Secrets Broker for DevOps is an add-on that runs as a service outside of Safeguard. It can be deployed as a container, or as a Windows service. Secrets Broker provides an extensible plugin framework that can be used to push secrets to other DevOps tools and vaults.
The advantage here is that developers are more likely to use, and correctly use, the security tools that they are familiar with. For example, Azure developers building with Azure pipelines can easily make use of Azure Key Vault. Why not put the secret that they need right where they are ready to use it?
Companies that are used to deploying internal applications on Jenkins, with database passwords coming through a credential plugin, can easily integrate that into how they work today. Many of the mature DevOps tools out there have secure secret storage mechanisms already. The problem with many of them is that they don't have a mechanism for securely rotating those secrets. Safeguard can easily do that for them.
In the latest release of Secrets Broker, we have added two brand new plugins-- one for CircleCI, and one for GitHub Secrets, used in GitHub Actions. Those are two examples of platforms that cannot rotate their own secrets.
In this demo, I'm going to show Secrets Broker for DevOps pushing secrets to CircleCI. CircleCI has secure secret storage, but it cannot rotate secrets on a schedule. I'll start with the Secrets Broker interface.
This Secrets Broker is already configured to communicate with an SPP appliance. It has two plug-ins installed-- one for HashiCorp Vault, and one for CircleCI Secrets. If I click on the CircleCI plug-in, I can see the configuration that is used to talk to CircleCI, including the name of the context that Secrets Broker is going to push secrets to.
Below, you can see the managed accounts from the SPP that have been mapped to this plug-in. Notice the account name, Test User One. So closing that out, I'm going to start the Secrets Broker monitor. Then I'm going to go over to CircleCI and show you the current value of the secret.
As you can see, I can only view the last few characters of the secret in the CircleCI interface. They are "1|sR". If I head over to the Safeguard UI, where I have the Test User One account open, I can manually request a password change. You can see that change running in the Safeguard task pane.
When it is finished, I can head back over to CircleCI. Let me just refresh the page here to see what we got in the update. Scrolling down, and now you can see that the secret ends in 5e1.
Thank you for joining this presentation on secure DevOps. I hope you have learned how Safeguard can integrate into any environment, and I hope you will choose to use Safeguard to move at the speed of DevOps. I'd like to close by thanking all of the great partners and sponsors who make Unite possible. Thank you.