Cloud, FinOps, and ITAM – industry trends & best practices

Webinar

Ron Brill, Anglepoint’s president and ISO’s ITAM standards committee chair, discusses the impact of Cloud and FinOps on ITAM, and shares some best practices from our clients.

By the end of the webinar, you will have a high-level understanding of:

  • The hybrid cloud
  • Cloud migrations
  • Cloud governance
  • FinOps
  • The role of ITAM in all the above
  • And more

Our Presenter

Ron Brill, President, Anglepoint

Webinar Transcript

Ron Brill:

All right, welcome everyone. So today we’re going to be talking about the role of ITAM in cloud and in cloud financial management. It’s pretty clear from the registration questions that cloud cost optimization and FinOps are a key challenge and a high priority for many of you.

As you can imagine, we could easily spend a whole week on these topics. So what we're aiming for instead, in the one hour we have today, is a high-level overview, touching on just some of the key considerations. With hundreds of people registered for the webinar, it'd really be inefficient, as Braden said, to make it interactive.

So again, we're going to be taking questions through the Q&A channel at the end. Before we start, a quick introduction: my name is Ron Brill. I'm the president of Anglepoint, and outside of Anglepoint, I chair the ISO committee for IT Asset Management standards, also known as Working Group 21, which is the committee that owns the ISO 19770 family of standards.

In addition, I'm vice chair of the board of trustees at the ITAM Forum, a global nonprofit based in London that's working to promote the ITAM industry, particularly through ISO organizational certifications, which we hope to see available soon. And for those of you who don't know Anglepoint, we're a global software asset management managed service provider.

Anglepoint has software licensing expertise for hundreds of software vendors, and we have over 160 consultants around the world serving many of the Fortune 500. For the last couple of years, Gartner has recognized Anglepoint as a Leader in their Magic Quadrant for software asset management providers.

And that Gartner report is available for download on our website.

All right, so let's get started. Before we actually dive in, just to provide some quick context: all of IT infrastructure nowadays is really hybrid, meaning that it includes on-prem, software as a service (SaaS), and infrastructure or platform as a service (IaaS or PaaS), also known recently as cloud infrastructure and platform services, or CIPS.

This hybrid infrastructure is already the IT reality today, and all components, the three you see here and others that are not on this chart, such as mobile devices, make up a single infrastructure that supports the business.

And because it's a single infrastructure, ITAM can't decide to simply ignore any part of it, not if ITAM wants to remain relevant to the business. If an IT asset is paid for, deployed, or used by the organization, it generally needs to be in scope for ITAM, regardless of whether it's hardware or software, a product or a service, physical or virtual, on-prem or in the cloud, and so on.

During the rest of our time today, we're focused only on CIPS, but it's important to keep this broader context in mind. So, the world of cloud infrastructure and platform services is quite broad and complex. But from the limited perspective of ITAM, it may be helpful to look at it as having two main phases.

First, there's the cloud migration, and here ITAM should be involved in both the planning and execution. In planning, ITAM should be involved in prioritizing applications for migration, which we'll talk about in a bit. And once migrations are being executed and workloads are actually being lifted, ITAM should, at the very least, be aware it is happening, because any migration has, of course, multiple implications for IT.

Once a migration has been done and the application is already in the cloud, it needs to be governed. Typically, that would be done by a group called the Cloud Center of Excellence, CoE, or a similar name, which may be part of DevOps or IT infrastructure and operations. In some organizations, cloud governance is all the CoE does.

In other organizations, the CoE has other, I would say more operational, cloud-related responsibilities. The high-level governance process really includes three phases, right? You first assess the risks and the organization's tolerance for those risks, then you develop a policy to mitigate the risks, and then you develop processes to implement that policy.

And there are multiple domains that can fall under cloud governance, or under a typical CoE, and each organization calls them by different names. However, five common domains that we see are cost management, which we'll spend most of our time on today, security, identity and access management, resource consistency, and deployment acceleration.

And of these five domains, cost management is of course the most relevant to ITAM. But ITAM does have a role in security, resource consistency, and others that we won't have time to get into today. Also, just as a side comment, I personally believe the CoE is a concept that will start disappearing in the next few years, right?

Cloud used to be the new shiny thing maybe 10 years ago, so creating a cloud center of excellence made sense initially. However, now cloud is really business as usual, right? It's fully part of any infrastructure, and it really makes more sense to have a single governance function for a single infrastructure.

So, let's focus on cloud migrations first. What does it mean to migrate an application to the cloud? We should first understand that cloud is not one size fits all, but rather a spectrum of options. As you can see here, when you choose to migrate to the cloud, you need to also decide where on the spectrum you want to migrate to.

On the leftmost side is infrastructure as a service, meaning having infrastructure that's essentially similar to what you have in your own data center, with the only difference being that the servers are hosted virtually by AWS or Azure instead of being on-prem. As we go from the left side to the right side, the solutions become more cloud native and provide additional business, operational, and cost benefits.

And towards the right we have some of the more recent technologies, such as serverless and function as a service. And of course, the ultimate cloud-native solution, which is SaaS. As you can imagine, the different points on the spectrum require different levels of effort to modify your on-prem application before it can be migrated.

Such technical modifications are called refactoring, and Gartner and others use the five Rs framework to indicate the different levels of refactoring options. The two ends of the spectrum are maybe the most straightforward. Either you move your application as is, without any major refactoring, which is called rehost, or "lift and shift" in layman's terms, or else you can take this opportunity to completely replace the application, for example by moving to software as a service.

And in between these two extremes, you have three options representing increasing levels of refactoring effort. As you can see, higher levels of refactoring are required the more cloud native you want to go, but the benefit potential also increases with that increased effort.

The one exception, perhaps, is replace, or moving to SaaS, where you essentially abandon your current application instead of migrating it, so the effort is a bit reduced, as you can see. How do we determine which applications should be migrated to the cloud first, and what refactoring needs to be done to them?

Gartner recommends analyzing all applications using a two-by-two matrix. The two axes are technical fit and business impact, and together they create four combinations, or quadrants. As you can see here, each of your on-prem applications would fall within one of these quadrants.
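
To make the idea concrete, here is a toy sketch of that quadrant mapping. It is not from the webinar itself: the 0-10 scores, the threshold, and the application names are made-up assumptions for illustration.

```python
# Toy sketch of the two-by-two prioritization matrix.
# Assumption: each app has already been scored 0-10 on both axes.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    technical_fit: float    # horizontal axis
    business_impact: float  # vertical axis

def quadrant(app: App, threshold: float = 5.0) -> str:
    """Map an application onto one of the four quadrants."""
    high_fit = app.technical_fit >= threshold
    high_impact = app.business_impact >= threshold
    if high_impact and high_fit:
        return "top right: migrate first"
    if high_impact:
        return "top left: refactor, then migrate"
    if high_fit:
        return "bottom right: migrate later"
    return "bottom left: revisit or leave on-prem"

for app in [App("HR portal", 8, 8), App("ERP", 3, 9), App("Legacy batch", 7, 2)]:
    print(f"{app.name}: {quadrant(app)}")
```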

Multiple elements go into the assessment of technical fit, which is the horizontal axis here, and some of these elements cannot be accomplished without ITAM. One example is software licensing: moving traditional on-prem software to the cloud is called bring your own license, or BYOL.

Many older software contracts don't allow you to use the software in the cloud at all. Or else the software publisher may require different use rights, or even different licensing metrics, which could materially impact the cost involved. To make matters even more complex, there may be differences in licensing depending on the cloud provider you're moving to.

For example, the same Oracle product may be licensed one way when it's on-prem, a second way on Azure, and a third way on AWS. The same goes for the assessment of business impact, which is the vertical axis here. Again, multiple elements go into this assessment as well, some of which may require input from ITAM.

One example is OpEx versus CapEx. In the on-prem data center world, investments in equipment were capital expenditures, meaning they were recorded as an asset on the books and depreciated over their expected useful life. In the cloud world, nearly all expenditures are operational expenditures, meaning they're recognized as an expense when incurred.

And this is a big difference with potentially significant financial consequences. Also, I'd mention that data center equipment and licenses that still have a residual book value will need to be written off, because they're no longer needed after a migration to the cloud. That may have a negative financial impact during the year you're completing such a migration.

So, once you've completed both the technical fit and the business impact assessments, you can plot all your applications on this two-by-two matrix. The top right quadrant represents the applications you want to migrate first. These are the applications that require minimal refactoring and where migration will have the highest positive business impact.

The next phase is the top left quadrant. These are the applications that have high business impact but will require some, or sometimes extensive, refactoring. And note that business impact is really more important than technical fit, right? Business impact is what's primarily driving migration decisions.

This is why the top left quadrant goes before the bottom right quadrant. One other thing to keep in mind about this prioritization process is that it really needs to be iterative, right? It's not a one-time exercise, and you really need to revisit it periodically. There are always changes: to business needs, new applications, changes to existing applications, new technology offerings and new price points from the cloud providers, and many other factors.

The key message here is that ITAM really needs to be aware of this process and needs to be involved in both its planning and execution. All right. Now that we've touched on cloud migration just a bit, let's focus on the infrastructure and platform services environment.

So, what are cloud infrastructure and platform services? These are the services provided by the likes of Amazon AWS, Microsoft Azure, Google Cloud Platform, IBM, Alibaba, Oracle, and many others. If we're looking at it only from the perspective of cost, there are two components to CIPS.

There's the virtual infrastructure, which is where we pay for things like compute power, storage, and so on, and there's software. Software, in turn, can come from one of two sources, and we're going to ignore internally developed software for now. Software can be proprietary to the cloud provider: for example, each of the major cloud providers offers their own version of a managed database service.

The other source of software is, of course, traditional software publishers; think Oracle, IBM. Publisher software can itself come from two sources. Either you have subscribed to the software through a marketplace run by your cloud provider, functioning essentially as a reseller, where you pay for the software as part of your monthly cloud bill with no direct contract with the publisher,

or it's software that you've licensed from the publisher directly and then deployed in the cloud, which is the BYOL, bring your own license, scenario we discussed. It's also helpful to understand orders of magnitude here. Gartner published the following quote in a recent paper that I saw last month, saying that the cost of virtual infrastructure, meaning the money you pay AWS or Azure, which is the first box we looked at here, is for some organizations larger than the total cost of all software licensing and SaaS combined.

Or in other words, larger than the entire scope of traditional ITAM. And this helps us understand the importance of optimizing this particular component, and the importance of this component being in scope for ITAM, again, if ITAM wants to remain relevant to the business.

Anyone who's been working with cloud at any large scale knows that cloud costs have a natural tendency to get out of control. Cloud costs are hard to predict and always keep going up. And cloud pricing is very complex and constantly changing, with literally hundreds of new services, new features, and new pricing options being introduced every year by every cloud provider.

Monthly cloud bills can get to hundreds of millions of lines for larger organizations, and more importantly, in many cases there's not a good control process over cloud costs. And in fact, the nature of cloud, where practically unlimited resources can be provisioned within seconds by engineers bypassing procurement, doesn't make it any easier.

And you can see some statistics here that Microsoft has shared, all essentially telling us that this really is a big problem for many organizations. So, cloud financial management was created to address all these challenges. We've mentioned that DevOps or IT infrastructure and operations oftentimes controls the cloud, and the CoE, the Cloud Center of Excellence, oftentimes controls cloud governance.

But all these functions typically lack the mindset, the skill set, and the tool set to actually manage cloud costs, to actually manage the cost aspect. So FinOps has evolved to fill this gap. And it was named FinOps to make it easier for DevOps people to relate to it.

And that's truly the origin of the name. Though, as someone coming from the finance side myself, I think the name is more than a bit misleading. But in any case, that's the name. The FinOps methodology is detailed in a book called Cloud FinOps, which is actually a pretty good book for an entry-level overview.

FinOps is promoted today by the FinOps Foundation, which is part of the Linux Foundation. They run a FinOps certification called FinOps Certified Practitioner, FOCP, which is a certification that I personally hold, as do many others at Anglepoint and many in the industry. If you're interested in FinOps, or in cloud financial management in general, the FinOps Foundation website and their user community, including an active Slack channel, could be a really good starting point for you.

And participation is free for end user organizations.

The FinOps Approach: Inform, Optimize, Operate

So what is the FinOps approach? At a very high level, at the heart of the FinOps methodology is a continuous cycle made of three phases: inform, optimize, and operate. During the inform phase, you are essentially showing your engineering teams and others what they're spending and why.

And you do this by delivering business-relevant dashboards and alerts about cloud usage and costs. This information then enables and empowers everyone to communicate at the same level, and engineering to essentially take ownership of cloud costs. During the optimize phase, you identify ways to maximize return on investment by reducing pricing as well as reducing consumption.

And finally, during the operate phase, you execute the optimizations that you have identified, typically by creating ongoing automation. You also work with the business to enable better go-forward decision making. Perhaps the most important aspect of the FinOps cycle is that it's continuous and iterative, right?

In the steady state, you’re always working on all phases. So, let’s look a bit closer at each one of them.

The FinOps Inform Phase

The first phase of the FinOps cycle is inform, which is all about timely, relevant, complete, and accurate information. Nothing can really happen without that. The first part is to establish a common language or taxonomy within the organization for discussing cloud costs. One example is reflecting cloud costs in terms of unit economics or in terms of common business metrics that everyone can understand.

And we've talked about the complexity of cloud, with monthly bills getting to hundreds of millions of lines. Fortunately, cloud providers allow you to classify usage and costs using tags, so that you can create more meaningful dashboards from all that information.

So, tags, also called labels by some providers, are essentially metadata that you can assign to a cloud resource. For example, you can assign an environment name, an application name, or a cost center. Tags are the main tool used by FinOps.

They can be changed at any point, and because they're essentially metadata, changing tags has no impact on the configuration or operation of the instance. However, they're not effective retroactively, meaning that all prior history of the instance before the tag was assigned cannot be captured and is essentially lost forever.

Therefore, you really want to ensure that tags are assigned when the instance is first provisioned, and in fact, you want to prevent instances from being provisioned if they lack the proper tags. Tags can also be used and reported on across cloud providers. So you can have, for example, a single cost report where some of the instances being reported on are in Azure while others are in AWS, and third-party tools can really help you create such multi-cloud reporting.

The concept of tagging is really critical to achieving any level of control in the cloud, and you should implement a comprehensive tagging strategy and taxonomy. Ideally, you should aim for tagging coverage of over 99%. This may sound high, particularly for people coming from the SAM side, but it is doable, and in fact it's critical that you achieve that level of coverage.
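
As a concrete illustration of assigning tags at provision time, here is a minimal sketch assuming AWS and the boto3 library; the tag keys (CostCenter, Application, Environment) are just an example taxonomy, not a standard, and the AMI ID is hypothetical.

```python
# Minimal sketch: tag an EC2 instance at creation, assuming AWS + boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    # Tagging at creation means no untagged history is ever lost.
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "CostCenter", "Value": "CC-1234"},
            {"Key": "Application", "Value": "order-service"},
            {"Key": "Environment", "Value": "dev"},
        ],
    }],
)
```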

Once you have tags, you can start creating dashboards that are relevant to the stakeholders. Dashboards should also include analysis, not just data: for example, trending, budget versus actual, high and low spenders, internal and external benchmarks, anomalies, and so on. And finally, another important concept of FinOps is showback, which is essentially like chargeback, just without the actual charge.

The intention here is to show the engineering teams what their costs and usage are so they can optimize them. The idea is that if the engineering teams had relevant information available to them, they would optimize usage as much as they optimize parameters such as security, performance, and availability every day, right?

Engineering, and engineers in general, right? They're good at optimizing things. That's what engineers do. You just need to give them the right information and the right incentives.
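
To give a flavor of what a basic showback query might look like, here is a minimal sketch assuming AWS Cost Explorer via boto3 and the hypothetical CostCenter tag from the earlier example; the date range is arbitrary.

```python
# Minimal showback sketch: group one month's bill by cost-center tag.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "CostCenter"}],
)

for group in result["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "CostCenter$CC-1234"
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{tag_value}: ${cost:,.2f}")
```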

The FinOps Optimize Phase

All right. In the optimize phase, we're essentially looking to maximize our return on investment in cloud. We're looking to maximize the ratio here, as opposed to solely focusing on minimizing the denominator. In fact, cloud costs are expected to gradually go up for nearly all organizations.

And that's normal. Any incremental spend that results in an even greater return is a good business decision, and it should be encouraged. Investment, or cloud cost, is calculated essentially as price times quantity, P times Q. Fairly straightforward. To optimize costs,

we ideally need to optimize both price and quantity. So, let's start with quantity, or consumption optimization, first. These are all the activities that engineering teams, and only engineering teams, can do to reduce excessive consumption, which is why it's shown as decentralized here. There's really no way to do this centrally.

It could be enabled centrally to an extent, and that's the role of the FinOps team, but it cannot really be done centrally. There are many activities that fall under this category of consumption optimization, but we'll touch on maybe just three examples. The first thing you want to do is to eliminate resources that are not being used at all.

Many organizations have orphan machines that may have been created in the past by engineers who may or may not be with the company anymore, where these instances are no longer needed, but someone just forgot to turn them off. You can identify those typically by looking for idle compute resources, unused storage, and so on, as in the sketch below.
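
Here is one hedged illustration of looking for idle compute, assuming AWS CloudWatch via boto3; the 14-day window and 5% CPU threshold are arbitrary assumptions, not recommendations.

```python
# Minimal sketch: flag instances whose CPU stayed low for two weeks.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def looks_idle(instance_id: str, days: int = 14, cpu_threshold: float = 5.0) -> bool:
    """Return True if the instance's daily average CPU never exceeded the threshold."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=days),
        EndTime=datetime.now(timezone.utc),
        Period=86400,  # one datapoint per day
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    if not datapoints:
        return True  # no recorded activity at all
    return max(dp["Average"] for dp in datapoints) < cpu_threshold
```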

And sometimes the best strategy is just to turn them off for maybe 90 days and see if anyone complains before permanently removing those instances. Smart scheduling, or turning things off temporarily, meaning scheduling things off, is similar to turning the lights off when you leave a room.

Smart scheduling is really key to effective FinOps. For example, for test and dev boxes, if you schedule them to run only eight hours per day, five days per week, instead of 24/7, you save about 75% of the cost; you can do the math, as in the quick check below.
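
Here is that quick check, using the example's own numbers:

```python
# Back-of-the-envelope check of the scheduling example above.
always_on_hours = 24 * 7  # 168 hours per week, running 24/7
scheduled_hours = 8 * 5   # 40 hours per week: 8 hours a day, 5 days a week

savings = 1 - scheduled_hours / always_on_hours
print(f"Weekly savings: {savings:.0%}")  # -> Weekly savings: 76%
```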

Right-sizing is about avoiding over-allocation of capacity, meaning over-provisioning or over-speccing with respect to things like processors, memory, storage, and so on. For the most part, autoscaling is considered the holy grail here. Autoscaling is where you leverage automation, and it could be tools that came with the cloud provider, it could be third-party tools, it could even be your own code, to continuously right-size the instance. So here's an example of autoscaling.

As you can see in this chart, the projected demand is the yellow line, but the actual demand is the orange line. In the traditional on-prem data center world, the dark blue line would represent points in time where you would make capital expenditures to upgrade the hardware. And of course, this may sometimes result in you not being fast enough, which is the orange area here, representing things like revenue loss or customer issues.

The second problem in the traditional model is that there's no effective way to scale back down. With autoscaling, though, you have automation that is monitoring your environment and can analyze utilization more frequently and adjust up or down according to some preset parameters.

And this results in much more optimized capacity, represented by the light blue line here.
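
For a flavor of what those preset parameters can look like, here is a minimal sketch of a target-tracking autoscaling policy, assuming AWS and boto3; the Auto Scaling group name 'web-asg' and the 50% CPU target are hypothetical examples.

```python
# Minimal sketch: target-tracking policy for an existing Auto Scaling group.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="keep-cpu-near-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # scale in or out to hold ~50% average CPU
    },
)
```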

Quantity, or consumption, optimizations must be decentralized and carried out by the various engineering teams within the organization. There's really no other way. The engineering teams are the ones who have control, and this is really as it should be. And that's where the inform phase, which we covered previously, is so important, right?

It's really all about collaboration and empowerment, right? Educating the engineering teams, giving them relevant and timely information to make informed decisions, then holding them accountable to create that sense of ownership, and constantly challenging them to optimize consumption.

So, let's look at pricing optimizations. These optimizations can only be done centrally. And it should come as no surprise that the first thing you would look at in order to get better pricing is contract negotiations, right? If you're able to commit to certain minimum levels of spend, you can get better discounts.

The same goes for time commitments, right? For example, a three-year commitment as opposed to a one-year commitment. And there are also many negotiable items in the form of support, training, services, and others. Cloud salespeople have all kinds of discounts and options at their disposal, which they can offer, particularly if it means bringing over incremental workloads and locking you in for a longer period.

The default mode for consuming cloud resources is on demand, or pay as you go: you can consume as much as you want, whenever you want. This model is really great for the end user, but not so great for the cloud provider, because they have little predictability, right? And so it's no surprise that on demand is the most expensive way to consume cloud resources.

And if you only did lift and shift and no other optimization, that's exactly what you will pay. Now, if the end user is willing to trade some of their flexibility for lower prices, then they can get much better deals. Another word for this is really commitment, right?

Finding the right level of commitment is the art of FinOps. You don't want to overcommit, which is paying for more than you end up using, essentially shelfware in the cloud, and you don't want to undercommit, which is essentially paying much higher prices than you could have had based on your actual usage.

It's really all about finding that right balance between overcommitment and undercommitment. And of course, it's best to commit what you can at the time of contract negotiations. That's always ideal. But the problem is you often don't know much about your true future needs at that point in time.

Fortunately, you can adjust your commitment continuously during the contract term as well. One way to do that is to reserve capacity, or to buy reserved instances. Think of reserved instances as non-refundable coupons that you can buy in advance for a lower price and that you can utilize against usage prior to the coupon's expiration date.

And this is a great deal for the cloud provider, because they get paid in advance whether you actually use the reserved instance or not. And if you have areas in your environment where consumption is steady and predictable, then this may be a really good deal for you too. In fact, using reserved capacity, you can lower your cost by up to 70% or so in some cases, as compared to on-demand, pay-as-you-go pricing.
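
Here is a toy break-even comparison illustrating the overcommitment and undercommitment trade-off; the hourly rates are made up, since real on-demand and reserved pricing varies by provider, region, and instance type.

```python
# Toy break-even: reserved capacity is paid for whether used or not.
ON_DEMAND_RATE = 0.10  # $/hour, pay as you go (hypothetical)
RESERVED_RATE = 0.04   # $/hour effective, one-year commitment (hypothetical)
HOURS_PER_YEAR = 8760

def annual_costs(utilization: float) -> tuple[float, float]:
    """Cost of one instance-year at a given utilization (0.0 to 1.0)."""
    on_demand = ON_DEMAND_RATE * HOURS_PER_YEAR * utilization
    reserved = RESERVED_RATE * HOURS_PER_YEAR  # fixed, regardless of usage
    return on_demand, reserved

for utilization in (0.2, 0.4, 0.6, 1.0):
    od, ri = annual_costs(utilization)
    better = "reserved" if ri < od else "on-demand"
    print(f"{utilization:.0%} utilized: on-demand ${od:,.0f} vs reserved ${ri:,.0f} -> {better}")
```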

Reserved instances are really one of the main ways that FinOps can achieve pricing optimization. Now, naturally, everyone wants flexibility and nobody likes to commit, but once you realize how much avoiding commitment actually costs you in the real world, you'll be looking to commit anywhere you can.

Choosing the right provider for the right workload is also key, particularly because nearly all our clients, for example, use more than one cloud provider. And of course, there are some engineering considerations here, which we're not going to get into, but there are also pricing considerations.

Just one example: if you have Azure Hybrid Benefit (AHB) in your Microsoft agreement and you meet the requirements, running Microsoft SQL workloads on Azure may be significantly cheaper than running the same workloads on AWS. Again, this is just one consideration, and there are many more examples that we don't have time to get into, but you get a flavor for the type of activities and optimization opportunities that you would look for in the optimize phase.

The FinOps Operate Phase

The operate phase is more technical in nature, and we're not going to spend as much time on it today. At a high level, it has two main steps, or objectives. The first is execution of the optimizations that were identified in the optimize phase. Typically, that needs to be done through automation, using tools that are available from your cloud provider, using a third-party tool, or even using code that you've developed internally.

We have many clients who do that very successfully. But here you're not only executing on the specific optimization opportunities that were identified previously; you're also setting up automation for ongoing optimization where that's possible, such as the autoscaling we discussed before.

And one concept I'll mention here is MDCO, or metrics-driven cost optimization. MDCO is about creating automation activities based on certain predetermined rules and thresholds. Such automation cannot really be created in one day; it needs constant tweaking and adjustment.
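
As a hedged sketch of what such predetermined rules might look like, here is a toy MDCO rule set; the metric names and thresholds are invented for illustration.

```python
# Toy metrics-driven cost optimization (MDCO) rules: invented thresholds.
def mdco_actions(metrics: dict) -> list[str]:
    """Return the optimization actions triggered by the current metrics."""
    actions = []
    if metrics["daily_cost"] > metrics["daily_budget"] * 1.2:
        actions.append("alert: spend is more than 20% over budget")
    if metrics["avg_cpu"] < 10.0:
        actions.append("downsize: average CPU below 10%")
    if metrics["untagged_share"] > 0.01:
        actions.append("block: tagging coverage fell below 99%")
    return actions

print(mdco_actions({
    "daily_cost": 1300.0, "daily_budget": 1000.0,
    "avg_cpu": 7.5, "untagged_share": 0.02,
}))
```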

You want to start very small, focusing on some low-hanging fruit, and build from there. The second objective in the operate phase is really helping the organization make better-informed decisions for the future. When you first get started with FinOps, you're naturally in damage control mode, right?

You must focus on the lowest-hanging fruit, meaning the biggest optimization opportunities in the current state. Now, that said, optimizing after the fact is never ideal, particularly given the nature of cloud assets, meaning that they can have a very short life, in some cases down to hours and minutes, right?

So in this kind of world, if new instances are being deployed in a non-optimized way, the organization is leaving a lot of money on the table. The holy grail here is therefore to empower engineering to make better-informed decisions to begin with. And that's really the key and the ultimate objective.

All right, so this was a light-speed overview of the FinOps methodology. Again, this is a week's worth of discussion. But now let's talk about how this all relates to ITAM.

FinOps & ITAM

If you talk to some FinOps teams, you will see that they do genuinely believe that nothing of importance really exists outside the AWS or Azure environment.

That's their entire universe. And here are some examples where FinOps would typically have blind spots, or elements that are not on their radar. The dots on the chart here are meant to show things being outside the range of the radar. First of all, and maybe before we dive in, I would just say that there are no two FinOps teams and no two FinOps approaches that are identical, and no two companies are doing it in exactly the same way, but the following areas are typical based on what we see.

FinOps Blind Spots

So first of all, FinOps teams typically ignore on-prem. They completely ignore on-prem; that goes without saying. They typically address assets that are already in the cloud and appearing on your monthly cloud bill. So our previous discussion about cloud migrations is typically outside the scope, or outside the expertise, of FinOps. FinOps teams also typically ignore software and SaaS.

Again, if it's not on your AWS or Azure bill, it does not exist for them. But even marketplace software that is on your monthly cloud bill, as we discussed, is typically out of scope for FinOps. They just ignore it for the most part. And finally, of course, FinOps would ignore other IT costs such as labor, services, consulting, and so on.

So we can see that FinOps alone, again, the way such functions commonly operate nowadays, cannot and does not replace ITAM. It's actually fairly limited in scope. It's as if ITAM would only do, say, Microsoft licensing, ignoring all other publishers. That's nice, but what about the rest?

And again, whatever FinOps functions do around optimizing AWS and Azure spend, they do well. But that's insufficient to address the overall needs of the organization. Now let's do the reverse exercise and look at ITAM, right? As with FinOps, no two ITAM functions are really the same as far as scope, capabilities, maturity, and so on.

ITAM Blind Spots

But the following seems typical of many ITAM functions at large organizations today. Where does ITAM typically have gaps or blind spots? Here are just a few examples that come to mind. ITAM functions often have limited visibility into the cloud. If a SAM tool agent happens to have been correctly deployed to a cloud instance, then ITAM will have visibility into that instance.

But in many other cases they will not. And this is a problem, because ITAM can't do its current job today without visibility into all the cloud instances where products of the software publishers they're working on may be deployed. Also, increasingly, software license agreements specify terms and conditions related to the publisher's software as BYOL, bring your own license, in the cloud, as well as general spend commitments, such as an Azure commit as part of a Microsoft agreement.

It really is impossible for ITAM to just ignore the cloud, but sometimes they do just that. Also, ITAM functions often have limited agility. ITAM typically does certain things quite well, but is typically less adaptable to new technologies throughout the organization, such as containers, IoT, edge computing, and many others.

ITAM functions often operate in a relative silo, with insufficient collaboration or integration with other IT functions. And again, as we discussed, collaboration is really an important principle in FinOps, but for ITAM that's oftentimes still a gap. ITAM functions are also not used to operating in real time.

Many ITAM programs still do things such as annual or quarterly license reconciliations, right? And concepts like live dashboards showing near real-time data are really not commonly found in ITAM. All of this may have been adequate five or 10 years ago, but with the continuing evolution of the hybrid digital infrastructure, this is becoming increasingly insufficient for ITAM.

And finally, ITAM functions often have limited visibility into IT costs outside traditional software publisher spend. And certainly, they have limited visibility with respect to cloud costs and cloud data. So we can see that ITAM alone is not so well equipped to tackle the dynamic world of the hybrid digital infrastructure and to remain relevant to the business in the years to come. So, what is the solution? The solution is ITAM and FinOps working together.

ITAM & FinOps Working Together

ITAM and FinOps share the same basic objectives: maximizing return on investment in IT assets, and enabling other IT functions to do their jobs and make better-informed decisions. The objectives are really identical for these two functions. And ITAM and FinOps almost perfectly address each other's blind spots, right?

If you compare our last two slides, you can see that it's almost a match made in heaven. They perfectly address each other's blind spots. And finally, I'll say, if you think about it from the CIO's perspective, at the end of the day there is a single infrastructure that should be managed, a single infrastructure, right? Any function that focuses only on-prem or only on the cloud has no right to exist in the long term.

It will not be able to provide the organization with that required single pane of glass into the infrastructure. So if I'm a CIO, I will not tolerate these two functions being separate. Now, in ITAM, the real value is generated through publisher-specific licensing expertise.

That's where you identify the cost savings and risk mitigation opportunities, using people who have dedicated their careers to specializing in a single publisher and keeping up to speed on the latest licensing and pricing rules and so on.

There's an equivalency to that in the cloud, right? You need people who are experts in AWS, people who are experts in Azure, people who are experts in GCP and whatever other cloud providers you may use, people who will keep track of all the changing pricing plans and options that are available and continuously drive optimization in your environment.

You need true Azure licensing expertise, whether internal or external, just as much as you need Oracle, IBM, and SAP licensing expertise, right? It's really no different. So how do we get started? As you can imagine, a lot really depends on the specifics of your current organization and situation.

Is there currently a FinOps function or not? And so on. But we've found that the following high-level approach works well for many clients. At a high level, there are one-time activities around education, assessment, and program development, which then transition into a rollout of the FinOps program, where the three phases we discussed previously, inform, optimize, and operate, are continuously delivered.

Again, this could be done either internally or as a managed service. The first step is to educate your team on the cloud providers being used by your organization, for example AWS or Azure, or both. Each of these providers offers multiple relevant certifications that the ITAM team should take.

And then, and probably only then, the team should be educated on FinOps as well, ideally by getting the FinOps certification. The next step is to perform an assessment of current ITAM capabilities and current FinOps capabilities throughout the organization, regardless of where those capabilities may be located.

This includes an assessment of people and competencies, tools that may be deployed and what level of coverage they provide, processes, the quality of data available, and so on.

The third step is to develop a rollout plan: designing a governance structure that's aligned to the needs of the organization and would support the various stakeholders, and planning around team competencies, tools that may be required, and processes. At this stage, you also look into tool selection and conduct a tool RFP if required.

Also at this stage, you start prioritizing certain low-hanging-fruit opportunities as part of that future rollout. The actual rollout phases then correspond to the FinOps methodology, as we mentioned. First, you establish cooperation with the various stakeholders and develop a tagging strategy and taxonomy to address everyone's needs.

Then you develop and implement that strategy, typically by using automation, meaning your tool of choice. Once data starts coming in, you can start developing dashboards. And then, once you have data that's good enough to share, you can move on to enabling optimization to happen, whether through the engineering teams for consumption optimization or centrally for pricing optimization.

The key here is really crawl, walk, run, right? You want to focus on the low-hanging fruit first. You want to develop your competencies and establish trust. That's absolutely key. And then build from there, right? So crawl, walk, run is a good guiding principle here. And then finally, you move to set up ongoing automation for certain optimization activities, for example autoscaling.

And you establish business processes to help the organization make better-informed decisions. Again, this is only a sample high-level roadmap for getting started with FinOps, and the details are of course going to look very different for different organizations.

If, for example, the organization already has a FinOps function, that function may remain separate from ITAM for political or other reasons. In my mind, that can only happen temporarily; I think the long-term vision is that those functions will be integrated one way or another. But if they're separate for now, then at the very least ITAM should look to establish close daily collaboration with that FinOps function. There will also be a different set of activities around things such as mutual access to tools and data, alignment of processes, certain joint deliverables, and so on, that we're not going to get into today.

All right. With that, I want to thank you all and open it up for questions in the few minutes we have left. Again, we opted for breadth over depth today in the short time we had, and we barely scratched the surface. We'll take some questions now, and if I'm not able to get to your question, or if you'd like to follow up separately, please feel free to reach out to me anytime; you have my email here. And I would love to connect with you all on LinkedIn regardless. With that, I'll turn it over to Braden.

Questions & Answers about FinOps and ITAM

Moderator:

Great. Yeah. Just a quick reminder to everyone, if you do have a question, go ahead and throw that in the Q&A panel.

And while we give everyone a minute to put their questions in: Ron, a few of our respondents, about 22% as we saw, are saying that FinOps is not a priority right now in their organizations. Based on your presentation, it sounds like maybe it should be, or at least should be considered.

Do you have any recommendations for someone who may want to help others within their organizations realize the value and importance of FinOps?

Ron Brill:

Yeah, absolutely. I mean, you saw the quotes from Gartner: organizations spend more money with the likes of AWS and Azure than they spend with all software publishers and SaaS providers combined, right?

So, if you as an organization are leaving this as an area that’s not optimized, you’re leaving a ton of money on the table. A ton of money on the table. And so this is definitely an area you want to look at, particularly as your cloud costs are continuing to increase, which I’m sure is the situation for most organizations on the call today.

And I know a lot of organizations are doing activities around cloud cost optimization. They may not call it FinOps; they may not even have heard the term FinOps, but they're doing things around optimizing cloud costs, right? So I think that may explain some of that percentage. But again, this is something that, if you're not looking at it today, you will need to look at in the not-so-distant future, so this should be high on your radar.

Moderator:

Thank you. So we had one question come through here. It says: being a SAM SME, how would you suggest we move ahead? To start, should we begin with cloud training?

Ron Brill:

Yeah, absolutely. I think the first step is cloud training. Each one of the cloud providers, AWS, Azure, Google, and others, has multiple certification tracks.

Some of these are more for engineers, but others are more for business consultants, and those might be more relevant to start with, right? Understanding price plans, understanding how things work, understanding monthly cloud bills, and so on. So the first step is really to get trained up, and ideally certified, on the major cloud providers that your organization uses.

And once you have that background, you should then take the FinOps certification as well. I highly recommend it. It's fairly entry level, but it does give you a good background. And then connecting with the FinOps Foundation and their user community can really help you elevate yourself from there.

I would say that FinOps is the most exciting area in ITAM and SAM today, right? So if you're a SAM professional and you want to futureproof your career, FinOps is the way to go. I absolutely believe you will not regret investing in getting trained on it.

Moderator:

Excellent, excellent, thank you. Another question that we have here is where would you place enterprise SaaS cost management for user-based solutions like Salesforce or Microsoft 365 in terms of priority?

Ron Brill:

I would place them very high on the priority list, right? We have many clients who now spend more money with SaaS providers than they spend on traditional on-prem licenses.

In fact, most software publishers are SaaS providers already to some degree, right? There are very few software publishers left that are purely on-prem. Even if you look at Microsoft, with all their offerings and Office 365 and so forth, or IBM, which is moving more and more things to subscription-based models, right?

So SaaS is going to be a big part of the future for many organizations. It's already most of their spend with software publishers, and there are a lot of activities that need to be done there. If you're not looking at this today, you should definitely start. There are multiple activities that could be done.

It's not the topic of the webinar today, but maybe we'll have a webinar in the future on the approach to optimizing SaaS and the role of ITAM in that. But it's definitely something that should be high on the priority list as well.

Moderator:

Wonderful. Wonderful. Thank you. All right. Any last advice or any last thoughts that you’d like to share with us, Ron?

Ron Brill:

No. Again, I think the future seems fairly clear: FinOps is already a big part of ITAM and will be even more so in the future, and those two functions need to be combined, right? The cloud is too big a part of the infrastructure for ITAM to continue to ignore, if that's in fact what you've been doing, right?

For ITAM to continue to be relevant, it really needs to focus on this and embrace it. And again, in the short term, there may be different functions in some organizations that focus on these. I believe, as I said, that those functions are going to be combined in the future. It makes no sense for them to remain separate.

But in the short term they may be. Again, this is big, and this is really the future. I would say it's not just about the cost savings, either, right? FinOps has attributes that are really like new blood, so to speak, right? New ways of doing things that ITAM should embrace regardless of optimizing cloud spend, right?

Things like collaboration, things like real-time reporting. All these attributes are the bread and butter of FinOps today, but ITAM has been slow to adopt them, and ITAM will need to accelerate, even for managing the traditional software publishers and the traditional activities of ITAM, right?

So I think there are multiple reasons for ITAM to be aware of FinOps and to fully embrace it.

Moderator:

Absolutely. Absolutely. Thank you. Thanks for your time today, Ron. We really appreciate it. And thanks to everyone who participated and attended. As a reminder, we will be sharing the recording with you all in a follow-up email.

And if you have other questions, feel free to reach out to us at info@anglepoint.com, or you can reach out to Ron directly through his contact information there on the screen.

Ron Brill:

Thank you. Okay, thanks everyone.