In the first part of this blog post, we integrated motivations and best practices for using collaboration tools into a measurement framework, following the first three steps of the process.
The fourth and final step is probably the trickiest one: finding suitable key performance indicators (KPIs) for the identified development areas.
Step 4: Finding suitable KPIs
While KPIs are generally regarded as a useful tool, research has identified a number of typical problems with their use. So before going into detail, let us take a look at some common KPI pitfalls to avoid when applying them to your collaboration tool development areas:
First, KPIs are often arbitrary in nature. Looking back at the aforementioned example of page views in an enterprise wiki: is more necessarily better? Or couldn’t an increase in page views also indicate that people follow longer click paths as they struggle to find relevant information?
Moreover, employees tend to perceive KPIs as a means of justification towards management, which incentivizes counterproductive behaviour.
In his book “Key Performance Indicators: Developing, Implementing, and Using Winning KPIs”, David Parmenter gives the example of a city train service: A key measure was on-time arrival, and every delay meant strong penalties for the train drivers. Consequently, train drivers learned to stop at stations just long enough to trigger the green signal and then continue their journey without opening the doors for passengers. This allowed them to restore their on-time score, but obviously had disastrous effects on customer satisfaction. Over time, the train company realized that delays were not caused by train drivers’ incompetence, but rather depended on various factors, such as signal failures or track maintenance. As a consequence, better-targeted KPIs like “number of signal failures not fixed within n minutes” or “planned maintenance that has not been executed” were defined.
This example underlines why it is important to follow a holistic measurement approach as we have developed in part one, i.e. focusing on a set of key factors from within different development areas.
With these learnings in mind, we will develop a set of exemplary measures for three development areas: The “Management support score” from the organizational framework conditions development area, the “Collaboration tool support performance score” from the collaboration platform framework conditions development area, and the “Collaboration tool total cost of ownership” from the business impact development area:
Author’s illustration, based on Kühnapfel (2014): Balanced Scorecards im Vertrieb, p. 4
Management support score
Regarding the organizational framework conditions, management support for collaboration tool efforts and management acting as a role model are important success factors. As the key element is the support perceived by employees, an employee survey can give valuable insights.
In this context, online surveys bring several advantages: They are inexpensive, allow questioning a large sample in a short time, and enjoy high user acceptance. Moreover, data quality tends to be better than in offline studies, and perceived anonymity is higher.
Against this background, we suggest an online survey with a representative sample of employees, asking them about the perceived management support and behaviour with regard to collaboration tools.
The questionnaire might look like this:
Based on the respondents’ answers, a “management support and role model score” can be calculated. If the 1 – 5 scale is mapped onto a 1 – 100 scale, the outcome is easy to understand for all stakeholders.
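To illustrate the arithmetic, here is a minimal Python sketch. It assumes a simple linear mapping of the average 1 – 5 rating onto a 0 – 100 range; the exact transformation (and the survey data below) are illustrative choices, not a prescription:

```python
from statistics import mean

def support_score(responses):
    """Map the mean of 1-5 Likert responses linearly onto a 0-100 scale.

    One plausible mapping: 1 -> 0, 5 -> 100. Other transformations
    (e.g. mapping onto 1-100) work just as well, as long as they are
    applied consistently across survey waves.
    """
    avg = mean(responses)                  # average rating across respondents
    return round((avg - 1) / 4 * 100, 1)

# Hypothetical answers to one survey question (1 = strongly disagree,
# 5 = strongly agree):
print(support_score([4, 5, 3, 4, 5, 4]))   # → 79.2
```

Averaging per question first and then across questions keeps the score robust if individual questions have different response counts.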
Collaboration tool support performance score
The performance of IT services is typically tracked through service level agreements (SLAs). An aspect that is often covered through SLAs is resolution time, i.e. “requests should be resolved within n hours”, in many cases depending on the respective severity level.
With the goal of making collaboration tools as business relevant as possible, users need responsive and competent support they can rely on in case of problems. Therefore, a “collaboration platform support performance score” can serve as a potential measure. This metric is calculated as the share of requests that were resolved within a defined resolution time. If requesters are additionally required to mark requests as “successfully resolved”, the metric captures both the effectiveness and the efficiency of support.
State-of-the-art ITSM tools like Jira Service Management usually feature SLA tracking and reporting out of the box.
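Even without such a tool, the underlying calculation is simple enough to sketch in a few lines of Python. The ticket fields and SLA targets below are hypothetical placeholders; a real implementation would read them from the ITSM system:

```python
from dataclasses import dataclass

@dataclass
class Request:
    severity: str               # severity level assigned to the ticket
    resolution_hours: float     # time until the ticket was closed
    confirmed_resolved: bool    # requester marked it "successfully resolved"

# Hypothetical SLA resolution targets per severity level, in hours:
SLA_TARGETS = {"critical": 4, "high": 8, "normal": 24}

def support_performance_score(requests):
    """Share of requests (in percent) resolved within the SLA target
    *and* confirmed as successfully resolved by the requester."""
    met = sum(
        1 for r in requests
        if r.confirmed_resolved
        and r.resolution_hours <= SLA_TARGETS[r.severity]
    )
    return round(100 * met / len(requests), 1)

tickets = [
    Request("critical", 3.0, True),
    Request("high", 10.0, True),     # missed the 8 h target
    Request("normal", 20.0, False),  # in time, but not confirmed resolved
    Request("normal", 12.0, True),
]
print(support_performance_score(tickets))  # → 50.0
```

Requiring the requester’s confirmation is what ties the score to effectiveness, not just speed: a ticket closed quickly but not actually solving the problem does not count.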
Collaboration tool total cost of ownership
In the context of a collaboration tool’s business impact, this exemplary KPI aims at capturing the total costs of operating the tool, based on the Total Cost of Ownership (TCO) approach.
In the context of information systems, it is important to distinguish between direct costs (e.g. license costs) and indirect costs (e.g. productivity loss for employees due to required training time). Combining these two aspects, the TCO approach aims to capture the total costs over the entire lifecycle of, in our case, a collaboration tool.
Although there is no standardized TCO approach in business practice, we can recommend Ellram’s eight-stage approach as a basis for TCO evaluation.
After assessing the TCO, the collaboration tool cost per employee can serve as a KPI to estimate a company’s financial commitment in a collaborative context.
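To illustrate the arithmetic, here is a minimal sketch with hypothetical cost positions; the actual categories and figures would come out of a TCO analysis such as Ellram’s:

```python
# Hypothetical annual cost positions in EUR -- placeholders only,
# not real figures from the post.
direct_costs = {
    "licenses": 120_000,
    "hosting_and_operations": 30_000,
    "support_staff": 80_000,
}
indirect_costs = {
    "training_time": 45_000,       # productivity loss during training
    "peer_support": 25_000,        # informal end-user support effort
}

employees = 1_500

# TCO combines direct and indirect costs over the considered period:
tco = sum(direct_costs.values()) + sum(indirect_costs.values())
cost_per_employee = tco / employees

print(f"TCO: {tco} EUR, cost per employee: {cost_per_employee:.2f} EUR")
```

Dividing by headcount makes the figure comparable across company sizes and over time, which is what makes it usable as a KPI rather than a one-off cost estimate.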
Critically evaluating this KPI, it must be noted that unforeseen cost factors, e.g. sudden changes in licensing costs, might not be captured. Moreover, the KPI appears rather open to manipulation, for instance by influencing which indirect costs are considered and how they are measured.
While the developed measurement framework and KPIs are only examples, we hope that this blog post may serve as an inspiration and potential starting point for your approach to measuring the success of your collaboration tools.
What are your thoughts and experiences with this topic? Do you have any questions? We are looking forward to your comments below!