One of the most common obstacles I encounter at the start of a cost transparency initiative is concern about data quality. It’s an area rife with misconceptions, and those misconceptions can stop IT from even starting down the road to improved cost efficiency. The most common mistake I see is the belief that a lack of data is what causes the problem. Usually, the issues start with having multiple data sources.
There’s an old saying that if you have a watch, you always know what time it is – but if you have two watches, you’re never quite sure. In my experience, large IT departments usually have a lot of watches. In other words, they have a number of systems of record, which don’t always agree with each other.
As a result, most managers have had bad experiences trying to pull together reports and spreadsheets in the past. They’ve sat in meetings where conflicting Excel information prevented a decision from being made. Maybe they tried to build a chargeback process from scratch, but stopped when it took up too much of their time. It’s easy to settle into the belief that because the data isn’t good enough, a disciplined approach to strategic cost management isn’t possible.
As a result, a CIO or IT Finance Director can encounter heartfelt resistance to plans to make cost transparency a reality. There are two things to remember at this point. First, if you let the current state of your data be the problem, you’ll never get started. You need to plan forward to what you can achieve as your data improves over time. Second, you should engage with the concerns that cause the resistance. Here are three meeting topics that should get the project moving:
- Explain WHY a healthy financial management process is important
- Show HOW their data contributes to the process
- Detail WHAT steps they need to take to improve data quality over time
Everybody understands that financial discipline is part of running a business. Somehow we often manage to forget this at the very moment our finance team asks us to devote some time to producing a report. For development or operational teams, requests from finance can appear to be a drain on resources. Therefore, it’s important to explain why a healthy financial management process is good for them, too.
There are two common outcomes from implementing an effective ITFM process:
Savings: A 5–10% reduction in costs is very achievable. Under-utilized servers, mis-aligned storage tiers, poorly negotiated telecoms bills, and multiple overlapping applications all present savings opportunities that can be identified and driven home by a solid financial process. Savings can relieve the pressure of a tight budget, or be re-invested in new or improved IT services.
Agility: Most IT departments are not able to deliver the breadth and depth of services they would like. ITFM helps organizations move faster by making good decisions easier – customers often tell us that after implementing Upland’s ComSci, they reduced the time to produce financial reports by weeks or even months. That is time that can be used to plan your next move instead.
Finance is often a black art to technologists. An enterprise architect might be able to design a data center or plan an ERP upgrade, yet struggle to build a solid business case. On one occasion, I saw an IT department engage an external boutique consultancy to assess cost savings from server consolidation, only for the analysis to be thrown out by the accounting team because the cost assumptions (based on price lists) didn’t match what was on the books (based on negotiated purchases and depreciation policies). These communication gaps between experts can carry a hard cost.
Understand your cost drivers
One of the great strengths of building a visual cost model is that it helps the technology experts collaborate with their finance counterparts to really understand the cost drivers in their organization. They can see how much a GB of Tier 1 storage impacts an application, how much it costs to give an employee a tablet with a wireless data subscription, or how much it costs to run a particular enterprise application in the data center. A set of clear reports and a graphical model can reveal just how their own particular system relates to the corporate financials.
On several occasions it’s been a genuine pleasure to see people engage with their own data in a way they’ve never seen before. Without a good cost model, for instance, it is impossible to compare the fully loaded cost of running an application in the corporate server farm versus renting compute capacity from Amazon. This new understanding makes it much easier to identify savings opportunities, and to present new proposals with a strong financial business case.
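To make the idea of a “fully loaded” cost concrete, here is a minimal sketch of the kind of comparison a cost model enables. Every figure below is invented purely for illustration – real models draw these components from the general ledger, not from guesses.

```python
# Hypothetical sketch: compare the fully loaded monthly cost of an
# on-premise application against an assumed cloud rental rate.
# All dollar amounts are illustrative, not benchmarks.
def fully_loaded_monthly(hardware_amortized, power_cooling, floor_space,
                         admin_labor, software_licenses):
    """Sum the cost components that a simple price-list view misses."""
    return (hardware_amortized + power_cooling + floor_space
            + admin_labor + software_licenses)

on_prem = fully_loaded_monthly(
    hardware_amortized=400,   # server purchase spread over 36 months
    power_cooling=120,
    floor_space=80,
    admin_labor=300,
    software_licenses=150,
)
cloud = 900  # assumed monthly rate for equivalent rented capacity

print(f"on-prem: ${on_prem}/mo, cloud: ${cloud}/mo")
```

The point is not the arithmetic but the completeness: comparing only the hardware line against the cloud rate would point to the opposite conclusion.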
Assuming everyone buys into the potential of improved cost transparency – what practical steps can you take? There isn’t a quick and easy answer to this, of course. Data quality is something that improves over time.
The most common data issue we see at ComSci is incorrect Cost Center Assignment. For example, HR might say that I (Jonathan) am in Cost Center 1234, but the CMDB has me listed under Cost Center 5678, while the Wireless Support Team puts me in Cost Center 91011. These data conflicts arise because, as organizations grow and implement or acquire new systems, moves and changes aren’t consistently applied across all the affected systems.
This presents an obvious challenge to billing my department correctly for the services I use. Similar data gaps will be found between a number of different data sources, and if billing disputes are left unresolved, they can become a source of organizational tension, too.
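The cost center example above can be sketched in a few lines of code. This is a hypothetical illustration – the system names and cost center values mirror the example, and real reconciliation would run against live feeds rather than hard-coded dictionaries.

```python
# Hypothetical sketch: flag employees whose cost center assignment
# disagrees across source systems. Data mirrors the example above.
from collections import defaultdict

# Each source system's view of employee -> cost center
sources = {
    "HR":       {"jonathan": "1234", "maria": "2200"},
    "CMDB":     {"jonathan": "5678", "maria": "2200"},
    "Wireless": {"jonathan": "91011"},
}

def find_conflicts(sources):
    """Return {employee: {system: cost_center}} where systems disagree."""
    by_employee = defaultdict(dict)
    for system, assignments in sources.items():
        for employee, cost_center in assignments.items():
            by_employee[employee][system] = cost_center
    return {
        emp: systems
        for emp, systems in by_employee.items()
        if len(set(systems.values())) > 1
    }

for emp, systems in sorted(find_conflicts(sources).items()):
    print(f"{emp}: {systems}")
```

Here only "jonathan" is flagged, since "maria" carries the same cost center everywhere she appears. A report of exactly this shape is what turns vague complaints about data quality into a concrete work list.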
Implement a unified process
Here at Upland, we’re proud that in addition to providing customers with software, we provide a full monthly service to run the IT financial process. This gives our customers the single most important solution to the data quality challenge: a unified process across all internal platforms. This ensures a single source system of record is identified for each domain, and that each system has the proper controls in place to consume, manage and identify gaps in that data.
The heart of this process is building in controls that surface data anomalies for our clients. Reports flag data issues, which the IT department can use to address the discrepancies in their internal systems. In the example above, the first incremental improvement might be to update the CMDB and the Wireless Billing system to use HR as the single source of truth for employee details.
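That “single source of truth” step can also be sketched in code. The following is an assumed illustration, not ComSci’s implementation: HR is treated as authoritative, and the sketch emits the corrections each downstream system would need to apply.

```python
# Hypothetical sketch: treat HR as the single source of truth and
# generate the corrections each downstream system needs to apply.
hr = {"jonathan": "1234", "maria": "2200"}   # authoritative assignments
downstream = {
    "CMDB":     {"jonathan": "5678", "maria": "2200"},
    "Wireless": {"jonathan": "91011"},
}

def corrections(truth, systems):
    """List (system, employee, current, correct) tuples needing a fix."""
    fixes = []
    for system, assignments in systems.items():
        for employee, cost_center in assignments.items():
            correct = truth.get(employee)
            if correct is not None and cost_center != correct:
                fixes.append((system, employee, cost_center, correct))
    return fixes

for fix in corrections(hr, downstream):
    print("update %s: %s %s -> %s" % fix)
```

In practice these corrections would feed a change process in each system rather than being applied blindly, but the principle is the same: one designated system of record per domain, and a mechanical way to bring the others into line.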
Ultimately, the solution to improving your data quality lies in a better understanding of your current systems: so start today, start with a unified process – and let the data guide you to an efficient, accurate outcome.