Welcome to today's webinar, brought to you by Quest and EnterpriseDB. I'm Stephen Faig, Director of Database Trends and Applications and Unisphere Research. I will be your host for today's broadcast. Our presentation today is titled "Ensuring Performance, Scalability and Security in the Cloud." Before we begin, I want to explain how you can be a part of this broadcast.
There will be a question and answer session. If you have a question during the presentation, just type it into the question box provided and click on the Submit button. We're going to try to get to as many questions as possible later. Plus, all viewers today will be entered for a chance to win a $100 Amazon gift card just for participating.
Now let's introduce our speakers for today. We're excited to have with us Krishna Raman, Vice President, ISM R&D at Quest, and Aaron Sonntag, Senior Software Development Engineer at EnterpriseDB. Now I'm going to pass the event over to Krishna to get us started. Welcome to the broadcast, Krishna.
Thank you. Thank you, Stephen. Thanks for having me here and giving me the opportunity to talk about how we as a company, Quest, navigate the modern cloud landscape, and to give that perspective to the user. Hopefully there are a few points the listeners on this broadcast can walk away with.
So welcome, everyone. Good morning, good afternoon, and good evening, depending on where you are. I live in the DC area, so it's hot here. Let me start with what we're going to talk about today: how we at Quest approach the cloudification of our products, and how we look at it when we try to solve customer problems. We all know the classifications of the cloud: public cloud, private cloud, and hybrid.
By hybrid, we don't mean just on prem or just your private cloud, but a combination of on prem, private cloud, and public cloud. We all know the public cloud is a vast multi-cloud landscape, with big, big companies rallying behind it. Effectively unlimited resources: you can scale as much as you want, there is no limitation.
What you get with the private cloud is more security and more control of the environment you deploy, including control over when you scale up and scale down in terms of the infrastructure itself. You're not going to invest capital in infrastructure beyond a certain point. With hybrid, you still have on-prem desktop applications that users in the enterprise use, combined with your private cloud for certain applications you want to keep control of and run cost-efficiently.
And then you have the public cloud for needs that require unlimited scaling. There are plenty of reasons to go to a public cloud, but sometimes it also needs an unlimited check that you have to keep writing. Now that we're grounded in those concepts, if you really look at it, there are still on-prem applications. Can there be an enterprise with only on-prem applications? No, there cannot.
Those days are gone. After the internet technologies I was part of back in the 90s, the cloud is single-handedly the biggest growth engine, and that's the way business has come alive. It has enabled government, it has enabled private enterprises, it has enabled the retail market, everything in our lives. So on prem alone, I believe, is a no.
But what is also not true is that we're going all in, moving everything to the cloud. Because there are still certain aspects of your applications and your data that you absolutely cannot move out, whether to a private cloud or a public cloud, because the security measures for those are different. So it's also not true that everything has to be in the cloud.
So what do people do? The most common answer you will find is, hey, we have a hybrid deployment. Whether it's storage costs we're limiting, or architecture: if you take an application architecture suited for running as a monolith and move it to a public cloud, you're going to maximize the cost of it.
You can't scale. When we talk about public cloud and scaling, there ought to be architectural differences in how the application is optimized to run on the cloud, which is where microservices, distributed applications, and distributed systems come in. If the application is not optimized and you lift and shift, you're not going to get the benefit of being on the cloud, and it's not going to scale as well as you would want or imagine it to.
There is project and execution time it takes to change over the architecture, so we have to bring in a lot of data points before we do that. It's pretty much a crawl, walk, then run kind of approach. And then you have dependencies on critical data. You cannot have that data exist only on the cloud: what if something were to happen, or connectivity goes away, and it is mission critical, business critical? We need access to that data.
So there are those aspects of it, and then all the risk mitigation and HA and DR scenarios you have to work out too. Predominantly, I believe the most common answer here is hybrid. Now, being part of a company like Quest, where we have a lot of products solving data management problems for our customers, I can tell you we have taken a hybrid approach in how we re-architect and enable the cloud in our desktop applications.
So take the example of what we do as an organization. We have a couple of applications. erwin Data Modeler is one, which has been around for 25 years. It's one of the leading applications people use in the market, pretty well known. But in erwin Data Modeler's older architecture, a user would check out a particular file and then make modifications to the model.
So you pretty much lock that file; it's not a collaborative way to interact. What we have done now is introduce the concept that the collaboration part sits on the cloud. Either we can host it, or it could be on the customer's private cloud. There are common things multiple users want to do, so from the collaboration point of view, we offer the pieces of the software that handle the collaborative part on the cloud, while the users are still using the desktop.
Now, certain enterprises have virtualized desktops that could themselves be on the cloud, but that is different: the application architecture itself enables the cloudification. Another application is Toad Data Point, together with what we call Toad Intelligence Central (TIC), which also offers the collaborative part on the cloud, where we store the data models people have worked on. Think of a source code application where you check stuff in and out; that's the way they operate.
They also use the data and reports people have created: how do you share those with other users? That's the model they use. And then I have one more example, our erwin Data Intelligence application. In this one, we do a lot of processing of metadata, which is the source; we read the data from different sources.
It then computes the lineage of the data: how a customer name maps to a customer number, how the data got marked over a period of time, which table the field you call customer number is connected to, and what the customer name is. It identifies all the lineage, so from a data governance perspective, the data stewards can identify how this data lives in the enterprise.
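The lineage tracing described here amounts to walking a graph of column-to-column mappings. As a rough sketch (the mapping structure and column names below are invented for illustration, not erwin's actual model), a breadth-first traversal finds every downstream column a source field flows into:

```python
from collections import deque

def trace_lineage(mappings, start):
    """Walk column-to-column mappings (source -> list of targets)
    breadth-first, returning every downstream column reachable
    from `start`, in discovery order."""
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        col = queue.popleft()
        for target in mappings.get(col, []):
            if target not in seen:
                seen.add(target)
                order.append(target)
                queue.append(target)
    return order

# Hypothetical mappings: a CRM field feeds staging, which feeds
# the warehouse, which feeds a report.
mappings = {
    "crm.customer_name": ["staging.cust_name"],
    "staging.cust_name": ["warehouse.dim_customer.name"],
    "warehouse.dim_customer.name": ["report.top_customers.name"],
}
print(trace_lineage(mappings, "crm.customer_name"))
```

A data steward asking "where does customer_name live?" is effectively running this traversal over the metadata the tool has harvested.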
In this case, we are going to release Data Intelligence as a cloud offering. In this architecture, we have sort of taken a DevOps approach: reading the bulk of the data and the compute-heavy work of identifying the lineage stays on prem, and we export a concise version of it to the cloud. Now, say you have 500,000 people in the enterprise with access to certain data sets; they can go to the data marketplace and ask, what is there for me to use? What is the data quality?
They can interact with the application at that level, but we have kept the bulk of the data, from a storage perspective, out of the cloud. Given how long it would take for that data to move to the cloud, we are keeping it closer to the customer instead. So that is another approach we have taken with hybrid solutions. And it's not just about these applications and how we architect them; we also do a lot of risk mitigation and security work, because alongside performance and scale, security is an important aspect of the data.
We do SOC 2 certifications every year for every product, and we follow secure coding principles. In our CI/CD we have pipelines that run SAST, DAST, and SCA, so the applications and the code base are tested on a daily basis, check-in by check-in. All of this is automated to make sure we create tickets for the developers to fix issues right then and there.
Basically, it's shifting left toward the developer, and that's how we stay on top of it. Pen testing is another one customers ask us about all the time: what kind of security testing do you do? What kind of secure development lifecycle do you have? What security certifications do you have? We go through all of these to make sure the applications we release are secure, performant, and scalable, while addressing the needs of the cloud as well.
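The shift-left gating described above, where scanner findings become developer tickets on every check-in, reduces to a small decision step in the pipeline. This is an illustrative sketch only; the severity threshold and the finding format are assumptions, not Quest's actual pipeline:

```python
def gate_findings(findings, fail_at="high"):
    """Decide whether a build passes and which findings become tickets.
    `findings` is a list of dicts with 'tool', 'severity', and 'title'
    keys; anything at or above `fail_at` severity fails the build."""
    ranks = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    threshold = ranks[fail_at]
    tickets = [f for f in findings if ranks[f["severity"]] >= threshold]
    build_passes = not tickets
    return build_passes, tickets

# Hypothetical output from SAST/SCA scanners on one check-in.
findings = [
    {"tool": "sast", "severity": "medium", "title": "weak hash in test helper"},
    {"tool": "sca", "severity": "critical", "title": "vulnerable dependency"},
]
passes, tickets = gate_findings(findings)
print(passes, [t["title"] for t in tickets])
```

In a real pipeline the `tickets` list would be posted to an issue tracker and the boolean would fail the CI job, so the developer sees the problem before merge rather than months later.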
That brings me to my last slide: what are the real takeaways? There is a variety of options available to us in the cloud. I see customers sometimes go all in on cloud, then pull themselves back after realizing some of the gaps they have, and move more toward hybrid. This is what we are seeing in the market, and what my own experience teaches me.
And I would say it's on a case-by-case basis that you decide whether you want to be fully on cloud or in a hybrid model, and how you architect your systems. The cloud obviously has plenty of benefits, but we also have to worry about data, security, and privacy; all of that comes into play.
At Quest Software, we continue to take steps that not only solve our customers' issues, but solve them at scale. And we take a very deliberate approach when it comes to the cloud, in how we re-architect and release our applications. Hopefully that gave you a perspective: my own personal perspective, and what we do at Quest. I'll pass this on to Aaron to take it away.
Wonderful. Thanks very much, Krishna. I've got to say, I've been a fan of Quest Software my entire career, and it's a pleasure to be sharing the stage with you. So, EDB Postgres AI: this is the topic I'm trying to cover today, and it is a broad topic for a short amount of time. But thankfully, Krishna hit his time right on the dot, and I will do the same.
The way that EDB is interacting with our customers is we're trying to help them with the mandates that they're facing. So what mandates are you facing in 2024? Is it modernization? Is it optimization? And of course, AI, where does that land and how are we delivering AI for you?
Given the topic is cloud, performance, security, and optimization, I'm going to focus a bit more on the modernization and optimization portions of this mandate, but I'm definitely going to try to land on AI as well. To start off, the idea is that the cloud strategy is whatever works for you: any cloud, anywhere, any development environment. Another way of saying that is poly cloud. If you hear that term, the way I define poly cloud is leveraging a particular cloud for a particular function in the most optimal case that you have.
So: alignment of your use case, the cost, and the runtime. What we want to do is enable you to run in all places. In terms of the cloud, there's a bit of a shift here that you're going to hear me talk about later in the discussion: what we're seeing is definite movement toward the cloud, an acceleration of development, and that sets things in place for AI.
But in terms of how we're helping our customers through this process at EDB, what we're looking at is our feature set. We look at transactional, analytical, and AI as the workloads we're trying to serve. We have a single pane of glass, a unified view, because a key to optimization is having a holistic view of all of your data, your data estate.
There are hybrid clouds: the idea that you're running the same technology stack on premise and in the cloud to the greatest extent possible. That's a key part of being able to extract performance in the public cloud or on prem, because the practices you learn, sometimes the hard way, on premise, you can translate to the cloud and back.
Then we have observability, which is key to understanding your data estate, and enterprise security, of course, which we'll touch on more today. And then hybrid, multi-cloud, public, private, and on prem. We partnered with Quest successfully in the past, which was very exciting, and recently we're partnering with Red Hat and Supermicro to offer an on-prem appliance that combines Kubernetes and Postgres, so you can bring the cloud into your on-prem space.
And then platform tools and services: migration portal, high availability, and of course backup and recovery, which is key for good governance. So what does that look like in practice? In this diagram, I'm showing Postgres implemented in a public cloud Kubernetes offering, say AWS EKS, Azure AKS, or GCP's Kubernetes offering; or in a private cloud, which could be your Kubernetes implementation on prem; or on our upcoming on-premise appliance.
Around any of these systems, we're offering wraparound support and security functionality, plus a set of tools to give you a good view of your estate: CLI tools, a UI, a dashboard, a single pane of glass, a console, all to make sure you're optimizing the footprint of your data estate. And at the bottom there's a note that's pretty critical: a reference to security.
You want to make sure that across these platforms you have a unified approach to security. That includes things like transparent data encryption, a recent rollout in our Postgres offerings, and I'm very excited to see it land across all of them, so you can have that extra layer of encryption and complete control over the keys used for it.
In terms of modernization, I also want to touch on the potential to migrate from Oracle to Postgres. Having come from an Oracle background myself, I would say 80 to 90% of running Oracle databases, probably by count, certainly not by volume, could be converted to other database technologies, assuming you can trust the quality of service you're getting. There's always going to be that top tier, and I understand exactly where that's coming from.
But what I want to highlight here is that we can bring in our mastery of security, our migration portal, and a particular flavor of Postgres, based on the very reliable, very popular open source Postgres database, that adds a very valuable Oracle compatibility layer, easing the development effort it takes to adapt an application from Oracle to Postgres.
The key here is: you've got your Oracle databases on the left, and we want to do two moves at once. When you execute the migration, you can execute it, say, to the public cloud, or to a private cloud on prem. The Migration Toolkit, which I've used quite a bit myself, is an excellent logical replication system that gets the job done, and we have some options, depending on the complexity, that can get you down to zero downtime. Or you can effectively just do a straight export and import of your data.
And there's a migration portal where you can quickly upload your schema and it'll give you a 90% accurate assessment of it, making that migration much easier to tackle, or at least to measure and consider. So that speaks to modernization. But I also want to touch on the AI portion of things a bit.
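The "straight export and import" option mentioned above boils down to reading source rows in batches and committing them to the target one batch at a time. A minimal, database-free sketch of that batching loop follows; the row source and sink are plain Python stand-ins for illustration, not EDB's Migration Toolkit:

```python
def batched(rows, size):
    """Yield lists of at most `size` rows from any iterable, so the
    target side can commit per batch instead of per row."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def copy_table(source_rows, insert_batch, batch_size=1000):
    """Drain `source_rows` into `insert_batch`, one batch at a time,
    and return the number of rows copied."""
    copied = 0
    for batch in batched(source_rows, batch_size):
        insert_batch(batch)  # in a real migration: executemany + commit
        copied += len(batch)
    return copied

target = []
n = copy_table(range(2500), target.extend, batch_size=1000)
print(n, len(target))
```

Batching is what keeps a bulk copy both fast (fewer round trips) and restartable (you know which batch last committed); logical replication layers change-capture on top of this same idea to close the gap toward zero downtime.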
In terms of AI, I think the interesting part from our perspective is: where does the data hook into it? We're very familiar with the idea that billions upon billions, or even trillions, of words are used to build an LLM. We're not building our own LLMs here, but how do we leverage them, and how do our relational databases plug into these systems? I want to touch on that a little here.
But first, to talk about AI and how Postgres aligns with AI delivery in practical terms, it's good to understand how it aligns with analytical workloads, because the transition to the data lakehouse, as a recent development, is key to understanding how structured and unstructured data are going to come together to serve both modern analytics processes and generative AI.
Here, you have a conventional data lake on one side. To bring some analysis together, first off, in the center bottom, EDB is offering a lakehouse implementation in Postgres, and that will help store structured and unstructured data together, adding structure to your unstructured data, let's say. You may have your own systems as well. And for analytics processing, we also want to acknowledge that it's really important to include transactional data.
Normally, you'd be addressing transactional data through a daily ETL, and that's still quite possible, but we have other mechanisms to accomplish it, and our cloud native technologies make that easier, whether in the public cloud or even on prem. You bring these things together so analytics can run at the point in time you need it, rather than on a system that sits idle when your ETL or reporting is not running. And if you have a burst of folks looking to gather business insights from the data you're providing, the compute can be spun up and then torn down.
Then, from the AI perspective, and this is what's most interesting to me from personal practice, playing with LLMs locally in my spare time: take an LLM such as the one behind ChatGPT. There's just no way any of us has the compute power at our fingertips to build an LLM that effective. What we do instead is take an LLM that has already been built and use embeddings, which are vectorized data that can be stored in Postgres effectively.
The LLM can take advantage of the vectorized data, and it changes the way the LLM responds: it responds within the scope of the data you provided. I'm really excited about this pathway we've built up here. And of course, you can also index the vector data set so it can be used effectively from applications. The way applications use vectors differs slightly from prompt-based LLM interaction, in that indexing gives applications a fast similarity-search advantage, whereas the LLM works through the embedding approach.
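The embedding workflow outlined here, store vectorized data, then retrieve the pieces most similar to a query so the LLM answers within that scope, can be illustrated without Postgres at all. In production you would use pgvector's indexed similarity operators; below, plain Python lists and cosine similarity stand in for the database, and the tiny three-dimensional "embeddings" are made up for the example:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=1):
    """Return the k stored documents whose embeddings are closest
    to the query embedding."""
    return sorted(store, key=lambda d: cosine(query, d["embedding"]),
                  reverse=True)[:k]

# Toy document store; real embeddings have hundreds of dimensions.
store = [
    {"text": "refund policy", "embedding": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "embedding": [0.0, 0.8, 0.2]},
    {"text": "privacy notice", "embedding": [0.1, 0.1, 0.9]},
]
query = [0.85, 0.15, 0.05]  # embedding of the user's question
print([d["text"] for d in top_k(query, store, k=1)])
```

The retrieved text is then folded into the prompt, which is why the model "responds within the scope of the data you provided": it never retrains, it just sees the relevant rows at query time.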
Finally, I just want to highlight EDB Postgres for the AI generation. I'm very excited about where we're going and what we're implementing most recently. We've got over 700 employees and 20 years of engagement in the Postgres codebase. I'd be very happy to talk with you all more, and I'm looking forward to the discussion we're about to have. Thank you very much. Back to you, Stephen.
Thank you, Aaron. It's great to have you and Krishna with us today. I'm excited to have this discussion. So, first question, and Aaron, I'm going to let you go first: cloud optimization has become increasingly popular as more organizations have migrated workloads to the cloud and are now seeking ways to improve performance and cost efficiency. What are key strategies and techniques you think all organizations should consider?
I've definitely come to an understanding over the past year, grappling with AI and structured data and how these things are going to come together, and I really believe in the idea that we've got to give ourselves a good picture of our data estate: understand where all your data is, so you can look for duplications of data, duplications of schema, workloads, underutilized compute. When you're dealing with hundreds or even thousands of database instances, having a good map is important.
What I'm particularly interested in is going down below the compute level to understanding the schemas involved, so you can make real decisions about where you need to merge or split or move data that should, or shouldn't, be in the cloud. Again, going back to the idea that you're balancing between them these days.
Understood. Krishna, would love to hear your thoughts on this. You know what, Krishna, I think you might still be on mute.
I apologize.
No worries.
I'll come at it from an engineering perspective, a head-of-engineering perspective, because in my career, my SaaS products, 3,000 to 4,000 customer SaaS products, have moved from private cloud to public cloud in various scenarios. There are multiple data points I would like to identify. One is, how much spend capacity do you have? And then, as I talked about earlier with lift-and-shift workloads: how optimized is this application to run in the public cloud?
That cost optimization is extremely important, because you're going to pay per VM over time. You don't want to be running 100 VMs for an application just because that's what your peak load is going to be; then you've already failed in that approach. So how do we auto-scale, how do we re-architect the applications just enough to provide the auto-scaling ability in the cloud?
You don't have to completely re-architect the application, but you have to think about sessions. You need to think about whether this workload, this microservice, can run by itself; even if you're not involving Kubernetes and the like, even as a process, can I decouple the coupled systems to run as units? Then I can auto-scale each unit based on the number of transactions and put it behind a load balancer.
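Scaling a decoupled unit "based on the number of transactions" behind a load balancer comes down to a small control decision: size replicas to demand, but clamp within a floor and a ceiling. The capacity numbers and bounds below are illustrative assumptions, not a Quest recommendation:

```python
import math

def desired_replicas(tps, tps_per_replica, min_replicas=2, max_replicas=20):
    """Size a service to current transactions per second, clamped so we
    never drop below an HA floor or run up an unbounded cloud bill."""
    needed = math.ceil(tps / tps_per_replica) if tps > 0 else 0
    return max(min_replicas, min(max_replicas, needed))

# Quiet period: stay at the HA floor. Peak: scale out, but under the cap.
print(desired_replicas(50, tps_per_replica=100))    # -> 2
print(desired_replicas(3500, tps_per_replica=100))  # -> 20 (capped from 35)
```

This is the same trade-off as the "100 VMs sized for peak" anti-pattern: without the demand signal you pay for the ceiling all the time, and without the cap the public cloud happily lets you pay for anything.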
You have to think of various strategies like that. It's not just, I want to go to the public cloud because I have unlimited money; I don't think that's optimal. The final thing I would offer is the skill sets of the people we have. Five or six years ago, when I moved such workloads to the public cloud, my team was not fully equipped for all of this, so I worked with an external vendor who brought the expertise in, and my team collaborated with them, because those folks were the subject matter experts on the architecture, the platform, and the use cases.
The external agency came in and helped, and we worked together to hit the end goal. So you have to be thinking about various things, and it really depends on how much cost you're trying to optimize. You also need monitoring software and tool sets that give you visibility into transactions, database capability, and all of that.
Yeah, I like your point, Krishna, especially on the resources part. One of the recommendations I have is to challenge yourself to consider homogenizing your tech stack. Very often, you get splintered across five or six database technologies, and it's really hard to maintain expertise, stay secure, and tune performance that way. These are very high-level things I'm talking about, but I really think a key to getting to a highly optimized, highly performant system is homogenizing your technology stack so you've got just a couple of database technologies.
Another key is to look for database technologies in your stack that have feature parity between the public cloud and the on-premise implementation. For example, EDB Postgres and our cloud services platform give you about 98% of all the configuration available in Postgres, whereas typically AWS's implementation is minimal: they've got about 60% of the options available for tuning. So a key part of tuning performance is having a feature-parity stack: modernize your stack and make sure it has feature parity from on prem into the cloud.
That's a great point, Aaron. I was going to ask, Krishna, when you were speaking to the extent that organizations can be successful with cloud optimization, and everyone's different, that it isn't necessarily a one size fits all approach, but we're talking in generalities here. What percentage of this mix is just having the right technology and tools and what percentage is having the right people with the right skills and strategies?
I would say 50/50; you can't do one without the other. That background, that knowledge of what exactly to do, is super important. Sometimes we over-index on going for blue-sky solutions, which just lengthens your project, eats up all your costs, and way overcomplicates things. You just need to know how to tune this architecture, this application, to run on the cloud optimized just enough, and then let it use the resources the public cloud, or any cloud technology, even a private cloud, can offer. So I would say they're equally important.
Understood. Aaron, is that what you would think as well?
Yeah, I was already thinking about skilling up. I apologize, I was just listening to you, Krishna, and thinking about what kind of skills DBAs should be trying to acquire in the market. One thing I wanted to make sure I landed here is that containerized databases are, in general, a great fit for performance: there's no measurable or appreciable performance impact in a containerized database for the vast majority of relational database implementations.
It's going to be in the handouts: I include a link to a page on enterprisedb.com with a quick ramp-up to implement Kubernetes-based Postgres, cloud native Postgres, and its Kubernetes operator. I encourage you to go try it out, because speaking of skilling up and dealing with the public cloud, it's a great way to understand what's going on under the covers, whether you're choosing an opaque database solution like Flex or going with EDB Postgres, which is transparent; you'll know how it's running by going through this process.
Understood. And that link will be on the archived version of this event. So that should be posted by tomorrow afternoon. Once it's up, folks will be able to access that. So I think that is a great segue, Aaron, into our next question. And Krishna, I'll let you have at this first. As more and more DBAs work in cloud environments, what are the most important skills for them to be successful?
Yeah, it's a very interesting question, Stephen, because in my previous job I was part of a SaaS company, a SaaS ops company. What I typically found is that in big IT enterprises there are still DBA functions, but on the software company side, because of the tooling and all of the help we get, these DBA functions sometimes don't exist anymore, which I believe is the wrong choice if your application is a transactional database application.
Because what happens is, I looked between the cloud ops, the SaaS ops, and the dev team, and what I would find, interestingly enough, is that you will have customer issues. You have APMs, and our Foglight for Databases product goes further and gives you information and perspective that traditional APMs don't, because it offers performance insights into the database itself. Lacking such tooling, we would have customer issues, and the SREs we had did not exactly know how to optimize queries and such. We didn't have the tools.
So it would get thrown over to the dev team, and in the dev team there is no true DBA function; your organization is typically gated by the one or two really senior people who know how to do it, so those tasks collect on them. That's when it occurred to me: oh boy, I think I want site reliability engineering and the DBA function to come together in that space. I'm just taking one example: for the cloud ops and SaaS ops teams, what if they knew not just how to use the tools to manage the systems, but how the databases worked?
DBAs have intrinsic knowledge of how databases function. So my mind went to: we should make some of these DBAs into SREs as well, because then they have both the application infrastructure and the database function. If you are a traditional DBA, then in terms of cloudification, getting some of the cloud-based certifications, whether AWS or Google certifications, it doesn't matter which, and getting yourself more equipped and skilled up on applications, distributed systems, and cloud knowledge will hugely complement your skills. That is an asset to an organization; DBAs can become even more of an asset. And I love the DBAs.
Makes perfect sense, Krishna, and I feel like you're exactly on point there. Another way of framing it, I would say: as a DBA, you know Unix, Linux, or Solaris, because that's the place your database runs in, so you need to know how to run commands, to trace, et cetera. That is why I recommend spending some time learning Kubernetes, at least implementing an RDBMS in that stack, so you get a feel for that operating environment. Because effectively, whether you're running it yourself, it's in an on-prem OpenShift environment, it's in the cloud, or it's RDS, that's what's going on in the back end. The better you understand the limitations and behaviors, the better off you'll be.
Sorry, I just want to mention something else. You mentioned that Quest Toad has a data modeling function, and that made me think about the data lakehouse, a kind of newer concept, about a year or so old at this point. We're implementing our own data lakehouse approach as well, but I really encourage folks to look beyond the ACID database we hold close to our hearts and look at the data lakehouse, because Structured Query Language, SQL, is coming back.
For the open source table format, you can look at Iceberg from Apache; it's really interesting stuff. And data modeling is how you're going to get the data from an unstructured system into a structured system, let's say, or from a data warehouse, which is very inefficient but very powerful, into a lakehouse for later analytics.
Sure. Here's an additional question. At least historically, there have been a lot of database administrators out there, and not all DBAs are created equal. There are different types of DBAs, and there are also different kinds of specialties and focuses for database administrators. Are the days of just being a SQL Server DBA or an Oracle DBA, et cetera, coming to an end? Do you think there will always be specialized DBA functions for certain types of databases or database brands? Or, with the trends we're currently seeing in the continued adoption of cloud and automation, do you think DBAs are going to take on additional responsibilities and become, I don't know if I would say jacks of all trades, but be pulled in different directions? Krishna, I'll let you start, and then Aaron, I'd love your thoughts as well.
Yeah, that's a good question. To use the terminology like Aaron used, poly cloud, there are polyglots, if you will, people who are familiar with multiple languages and are able to code in all of them. In the same way, if you look at the landscape, a lot of these specialized roles have become more vendor agnostic. It's no longer a question of enterprises sticking only to Oracle databases. They've all become application-oriented: what is the application, and based on that, which database am I going to use, a structured database or an unstructured database? So the usage has become pretty broad.
So I would say the cue here is this: if you're a specialized DBA only for Oracle, I don't want to say it's the end of that, because there are big enterprises using just Oracle databases. That is not what I'm saying. But I believe, for the betterment of yourself and of the organization and the company, the more hats we can wear, the better, because it's all about translation of skill sets. From Oracle to Postgres, an Oracle DBA can learn the intricacies of the Postgres database much more easily and speedily.
So we are becoming vendor agnostic. And more general purpose DBAs, I think, will be the need, as opposed to a very specialized DBA for one particular technology.
Yeah, well--
I hope that answers your question.
One thing-- yeah, it makes me think, Krishna, it's like: Structured Query Language is dead; long live Structured Query Language. I mean, the fact is that when I was in the DBA space running Oracle, Hadoop came on the scene and everybody said, NoSQL, unstructured data, this is it. And so I ignored it. And to your point, I think this is where the DBAs actually have a good opportunity to wrap their hands around the whole data estate. And EDB is helping with this. We have solutions for doing this at a deep level.
But however you accomplish it, I think from a career perspective, a skills perspective, leaning into the data is key. And it's going to be valuable all the way into-- even to the AI revolution that we're living through and experiencing today. So look at vectorized data in your RDBMS. It's legit.
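As a rough illustration of what "vectorized data in your RDBMS" enables, here is a minimal similarity search over stored embeddings. In Postgres this is what an extension like pgvector provides natively; this sketch just does the cosine ranking in plain Python, and the documents and vectors are made up for illustration.

```python
import math

# Each row carries an embedding; similarity search ranks rows by
# cosine similarity to a query vector. Vectors here are tiny toys;
# real embeddings have hundreds or thousands of dimensions.
docs = {
    "backup policy": [0.9, 0.1, 0.0],
    "index tuning": [0.1, 0.8, 0.2],
    "query planner": [0.0, 0.7, 0.6],
}

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Pretend this is the embedding of the question "help with slow queries".
query = [0.2, 0.9, 0.1]

ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked[0])
```

The design point is that the ranking lives next to the data: when the database itself can index and order by vector distance, the application never has to ship the whole table out to do the search.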
And in fact-- sorry, Stephen, let me just make one more point here that came to my mind while Aaron was talking. We used to have all these DBA roles that were very specific to the things they did. We need to think all things data, basically. AI is not AI without data behind it. There are data engineers. There are DBAs. So then it's, how do you be a data steward in a company? You have all these data sets that people use; how can we make them available for others, like the data marketplace from our [INAUDIBLE] product does?
There are so many things happening around data in an organization. So DBAs are not necessarily limited. They have options now. I would frame it in a more positive way: they have many more options, as in, you know what, I've done this for a long time, and now I would like to focus on this aspect of the data as opposed to just looking at the database. I believe they're empowered to make those decisions, and they have more options now than before.
That makes a lot of sense. And yeah, I don't necessarily think that, all of a sudden, specialized DBA roles will fall off a cliff. You can go on LinkedIn; there are lots of them out there. There are still COBOL programmers out there. The mainframe isn't dead either.
But yeah, there are certainly options for folks, whether it's getting more into data architecture, data stewardship, or data engineering. I think this is a good segue into our next question. Aaron, you brought this up before, so I'm going to let you have at it first: how are current advancements in AI impacting cloud adoption and needs? And feel free to expand upon that as well.
Actually, I have a pretty succinct answer for this one. The way I see AI and cloud adoption is that this is cloud native technology's strength. And again, it doesn't necessarily have to be in the cloud. It could be on prem: if you're leveraging Kubernetes and you've got your database running there, it allows you to experiment and iterate much faster. This has always been the strength of cloud, and it will continue to be the strength for AI. So it's not necessarily a new statement. But since so much of this is in flux, experiments are key, and iteration is how you win.
So let me add on to what Aaron was saying. I believe that because of cloud, AI adoption is greater, because you have all these facilities at hand. For example, a friend and I were having an argument about whether one university's fees were greater than another's. No more Google search. I went to ChatGPT and said, compare the University of Virginia versus the University of Maryland. And it gave me such a structured response.
The conversation stops because you have the data in hand. And that's a very practical day-to-day example; now think of the workforce and the companies that are trying to implement AI features. A company like us has to be super careful. As an enterprise software company, when we enable features in the product, we have to involve our legal folks to have the conversation about all these LLMs. If we are going to use something like OpenAI, is our data getting shared? Is the model getting trained on it? That's why Microsoft has come up with all of these statements they make, to make adoption easier. They know that people like us are going to hit roadblocks, and we need to be careful in what we do.
Compliance, security.
Yeah. And it's not always cost effective to buy and run an LLM for your company, depending on the size of the company and what you want to do. There are so many aspects that come into it. We are, again, crawling and walking before we can run. But you can already see startups trying to innovate and get solutions out. They are a little less focused on the kinds of things I have to be focused on when Quest releases products. Because they are at the edge of innovation, the state of the art, they can ignore some of these aspects.
And because we're trying to bring mature products to market, we can get hit with liability like that. But the startup and innovation ecosystem can adopt cloud and AI easily and innovate faster in some ways. When their product hits the maturity curve and enterprises start using it, they're going to hit what we hit. But you can already see what we are doing right now, and we are getting great feedback from our customers: for Toad for Oracle, we released a feature called AI Explain.
So you have all these big queries that DBAs wrote 25 years ago. Newer people have come in, and they have to go understand what a report does and how the query was structured. You just hit a button, AI Explain, and it summarizes it for you, as opposed to me having to drink six coffees to decode what the other person has written. It summarizes effectively. But the key thing is how we are enabling it. We provide the customers with flexibility: you can choose to buy a $0 AI SKU, or not.
If you buy it, you are enabled to use this feature. And then we have a common platform we've come up with where a per-tenant admin can enable it on a per-user basis. Because we sell software on a subscription model, they can come in and enable it user by user: I want Stephen to have access, but not Aaron. So we are enabling these technologies very carefully, but you can really see that cloud is accelerating the adoption.
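The per-tenant, per-user gating Krishna describes can be sketched as a small feature-flag check. The class and method names below are illustrative, not Quest's actual platform API: the tenant must hold the SKU, and the admin then opts individual users in.

```python
from dataclasses import dataclass, field

# Illustrative sketch of two-level feature gating: the tenant buys the
# AI SKU, then the tenant admin enables the feature user by user.
@dataclass
class Tenant:
    ai_sku_purchased: bool = False
    ai_enabled_users: set = field(default_factory=set)

    def enable_ai(self, user: str) -> None:
        # Entitlement check comes first: no SKU, no enablement.
        if not self.ai_sku_purchased:
            raise PermissionError("AI SKU not purchased for this tenant")
        self.ai_enabled_users.add(user)

    def can_use_ai(self, user: str) -> bool:
        # Both gates must be open for the feature to light up.
        return self.ai_sku_purchased and user in self.ai_enabled_users

acme = Tenant(ai_sku_purchased=True)
acme.enable_ai("stephen")
print(acme.can_use_ai("stephen"), acme.can_use_ai("aaron"))
```

Keeping the SKU check and the per-user check separate is the design point: legal and procurement govern the first gate, while the tenant admin controls the second, which matches the careful rollout described above.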
Yeah, and it seems like it works vice versa too. AI needs the cloud; not everyone is going to build their own data center to do this. So AI should be accelerating cloud adoption, or at least putting more workloads into the cloud, I would think. That's a really interesting point. Aaron, did you want to add anything else on this one?
All right. Then let's move to our next question. I'm going to skip to one that's near and dear to my heart that I'd love your input on. Aaron, I'll let you go first. What do you consider the top data management challenges faced by cloud users today? Is it security, performance, governance, integration, or overall complexity, just to throw some terms out there?
Yeah. So the first thing that I see as the biggest challenge for mature organizations is the multiplication of data stores. The public cloud just opens the gates and speeds things up. So again, one of the key focuses for your organization in 2024 really needs to be getting a handle on your data estate, or getting a better handle on it. I mean, obviously, we've all got a good grip on things.
We get challenged with cost and questions like that, but I want things to be more optimized. So aside from the data estate, I just want to mention, obviously, security. And I have a little bit of my own wisdom to share from a security perspective. Thinking from a database and data management perspective, as you enter the public cloud, I say, follow your IT organization. Follow their implementation, or push them into it.
You want the coverage from your IT organization. You want security professionals involved. You should expect a risk assessment spreadsheet, and add rows to that spreadsheet, to the annoyance of the security professionals. You want to go with your company and your IT professionals into the public cloud. Treating it as an extension of your data center is the best practice that I see.
That's smart advice. Krishna, curious to hear your thoughts. I know it's tough to pick one.
Yeah. So I would say data privacy. Data privacy and security. We are all sitting on an influx of data, sensitive information. Is that data anonymized? Does the right person have the right access to all this data? Do we have data duplication? And then you have GDPR and all this compliance work that you need to do. So it's a pretty complex world if you look at it.
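One small, concrete piece of the anonymization question raised here is pseudonymization: replacing a direct identifier with a salted hash so analysts can still join records on a stable token without seeing the raw value. This is a minimal sketch, not a complete privacy program; the salt handling and column names are assumptions, and a real pipeline would manage the salt as a secret and cover far more than one field.

```python
import hashlib

# Assumption for this sketch: the salt is stored as a secret outside
# the data store, and is rotated per dataset.
SALT = b"rotate-me-per-dataset"

def pseudonymize(value: str) -> str:
    # Salted SHA-256, truncated to a compact, join-stable token.
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

rows = [{"email": "pat@example.com", "spend": 120}]
anon = [
    {"user_id": pseudonymize(r["email"]), "spend": r["spend"]}
    for r in rows
]
print(anon[0])
```

Note that pseudonymization alone does not satisfy GDPR anonymization; the data remains personal data while the salt exists, which is exactly why the governance and access-control questions above still apply.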
But I think there are frameworks, strategies, and software tools that you can use to automate and do more with less. You don't need more people; I believe you need smart solutions implemented in the enterprise to deal with that. That's what I would say.
Understood. Next question: how significant are vendor lock-in concerns right now, and do you see multi-cloud adoption continuing to grow as a strategy? Let's go with you first, Krishna.
So vendor lock-in. Vendor lock-in always happens, with different technologies. But I believe the bigger organizations are probably not going with just AWS or just Google. In fact, we keep talking to customers, and the reason I know this is because we have a tool called SharePlex; Aaron is probably aware of SharePlex. We have inquiries where customers have data residing in Amazon and they want to keep it current on Google.
Hey, can I use SharePlex to keep the data synchronized across clouds? It's not just migrating workloads; in this case, they want to keep the data synchronized across clouds. That's one little example of how you should look at multi-cloud. In some really mission critical applications, I kid you not, they probably have HA/DR setups spanning clouds. Of course, Amazon has its own region-based offerings for all of this. But there's always more that you need than that, because you're going to have one use case, one corner case, that might still break.
So in that case, what do we do? We do things where it is poly cloud rather than just one public cloud that we rest on. And it could be a combination of an organization's own private cloud and a public cloud. So you have footprints in multiple places.
You know what, Aaron, I think you might still be on mute.
I apologize. Thank you very much. So one thing I just want to key in on with the multi-cloud piece: when you enter the public cloud, it is very tempting to commit to a particular public cloud and leverage all of its tools. And I totally understand that. But the transportability of your skill set and your generalizability is very, very valuable. I mean, I want to speak to your wallet.
Cross-CSP capability builds credibility and gives you a good negotiating position with the public cloud, which is difficult to get against Amazon. And then transportability: the skills that you're building in the cloud actually need to translate into on prem implementations. That's why we're building this on prem implementation with Red Hat and Supermicro, so that we can let you bring those cloud practices down.
And I also want to encourage you to take on prem practices to the cloud. I don't see many customers implementing multi-tenant or shared use of a database in the cloud the way we would on bare metal on prem. So from my perspective, multi-cloud is part of a strategy that says: I'm going to balance what I'm doing, narrow the tech stack, and implement technologies that feed off of each other and off the expertise I'm building, both in the public cloud and on prem. I can't hear you, Stephen. Sorry about that. It looks like you're muted.
Oh boy. I did it. All three of us. I was just going to say, Aaron, real quick, we have one question from an attendee I'd like to pose to you, and then I think we'll wrap up. The question is: is this Postgres architecture a kind of RAG implementation, or would you say it's something different?
I was trying to form up a reply, and I'm glad you asked it. It's interesting. It's both; we're covering both. The on-demand context for a prompt is something I would say is supported from the Postgres lakehouse, and that's the path where you would see that integration. And then the vector side is more of an embedding that would shape the nature of the prompt and, let's say, limit the behaviors and responses from it. Both of those are paths that start with, or are fed through, your data estate, your Postgres database. Good question.
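The "on-demand context" half of that answer can be sketched in a few lines: retrieve the most relevant document from the data estate, then use it to shape the prompt. This is a toy standing in for a real vector search against Postgres; the retrieval here is naive keyword overlap, and the documents and function names are made up for illustration.

```python
# Tiny stand-in for documents living in the database.
documents = {
    "pricing": "Our pricing tiers are free, team, and enterprise.",
    "backup": "Backups run nightly and are retained for 30 days.",
}

def retrieve(question: str) -> str:
    # Naive retrieval: pick the document sharing the most words with
    # the question. A real system would rank by embedding distance.
    q = set(question.lower().split())
    return max(
        documents.values(),
        key=lambda d: len(q & set(d.lower().split())),
    )

def build_prompt(question: str) -> str:
    # The retrieved context constrains what the model may answer from.
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

prompt = build_prompt("How long are backups retained?")
print(prompt)
```

Swapping the keyword overlap for a vector similarity query over embeddings stored in Postgres gives you the second half of the answer: the same database feeds both the retrieval and the embedding side of the pipeline.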
Real quick before we break: Krishna, if there's one thing you'd like our attendees to walk away with today, what would that be? And this will give you, Aaron, some time to think about your response.
So the one thing I would want is this: when you think of cloud, think about the platform-as-a-service functions that you can use in the cloud, which is the better usage of cloud. And then you need to be thinking about performance and scale, which go without saying, and the security and privacy aspects of the data that you're managing.
How are we providing security for the data? How do we put out all the fires around the privacy concerns in how we manage it? If you keep that at the front and center, the rest plays out. The data strategy is the important part: how are you going to handle it?
Understood. Aaron, how about you?
Yeah. This is coming from a cloud architect. But I would say, as an industry view, now that the fever of the cloud is dying down and we're being more intentional about what we're doing in the public cloud and why, I really want to underscore the transportability of your data, your practices, and your technology stack. So in your strategy, especially as you start pushing into AI, seek solutions which are open source or open source adjacent, solutions which are transportable. From a poly cloud perspective, it's OK to go to a particular public cloud because you need something that they've got.
But I definitely want to impress upon folks, when you're talking about optimization and security: homogenize your stack, treat it very intentionally, and demand that the public cloud experience be the same high quality of service that you experience and build on prem. That's what I want you to take away, please.
Fantastic. Well, I would like to give a huge thank you to our speakers today for coming on board and sharing their insights and expertise. Once again, Krishna Raman, Vice President, ISM R&D at Quest. And Aaron Sonntag, Senior Software Development Engineer at EnterpriseDB. If you in our audience would like to review this presentation or send it to a colleague, you can use the same exact URL that you used for today's live event.
This will be archived; it'll probably be up on DBTA.com tomorrow afternoon, and you will receive an email once the archive is posted. Again, if you would like a PDF of the deck, you can go to the Handouts section once the archive is live. Also, just for participating in today's event, you could win the $100 Amazon gift card. The winner will be announced on June 28, and we'll let you know if you're the lucky viewer. Thank you, everyone, for joining us today, and we hope to see you again soon.
Thanks for having us.