Other options included using Reichheld's "Promoter to Detractor" ratios. This was actually attempted before I gave in to the inevitable (I couldn't get the third party to use a 10-point scale and the concept of promoters). The consistency provided by using percentages quickly became expected by the workforce. I still believe in the value of the promoter-to-detractor analysis, but for the purposes of the report card, I added percentage satisfied to the measurements for our particular audience.
FREDERICK REICHHELD'S PROMOTER-TO-DETRACTOR SCALE
According to Reichheld's The Ultimate Question (Harvard Business Press, 2006), the most important question you should ask is: "Would you recommend us to a friend or colleague?" Using a 10-point scale for the answers, with 1 being "definitely not" and 10 being "definitely yes," if a respondent gives you a 9 or 10, she is considered a "promoter" and will actively encourage others to use your service (or buy your product). If the respondent gives you a score of 6 or less, she is a "detractor." Detractors will actively discourage others from using your service (or buying your product). Scores of 7 and 8 are neutral, meaning that you can't predict if they will promote or detract from your reputation. You need a ratio of two promoters for every detractor to achieve growth (neutrals are not counted). The higher the ratio, the better your word-of-mouth advertising, and the more you grow.
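To make the arithmetic concrete, here is a minimal sketch of the classification and ratio just described, assuming responses arrive as plain integers from 1 to 10 (the function and variable names are mine, not Reichheld's):

```python
# Sketch: classify 10-point responses to "the ultimate question" and
# compute the promoter-to-detractor ratio.

def promoter_detractor_ratio(scores):
    """scores: iterable of integers 1-10."""
    promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
    detractors = sum(1 for s in scores if s <= 6)  # 6 and below
    # 7s and 8s are neutral and are not counted at all
    if detractors == 0:
        return float("inf") if promoters else 0.0
    return promoters / detractors

responses = [10, 9, 9, 7, 8, 6, 10, 9, 3, 9]
print(promoter_detractor_ratio(responses))  # 6 promoters / 2 detractors = 3.0
```

A ratio at or above 2.0 is the growth threshold Reichheld describes; anything below it suggests word of mouth is working against you.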
Figure 9-7 shows the ratio of promoters to detractors. The ratio is negative if you have more detractors than promoters; a healthy result is a high ratio. For example, the Service Desk had over 50 promoters for every detractor.

Figure 9-7. Ratio of promoters to detractors for overall customer satisfaction
The ratios were derived by mapping our 5-point scale onto Reichheld's 10-point one: 1 through 3 equated to 1 through 6, and the 4s equated to 7s and 8s. I stopped using the terms "promoter" and "detractor" since we weren't asking the proper question. It was more meaningful to simply compare the "Highly Satisfied" (5s) to those who could not say they were satisfied (3s or less). This in itself was useful, but still not as clear as I would have liked.
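A small sketch of that mapping and the resulting comparison, under the assumption that raw scores are plain integers on the 5-point scale (the bucket names are mine):

```python
# Map our 5-point scale onto Reichheld's 10-point one:
# 1-3 -> 1-6 ("not satisfied"), 4 -> 7-8 (neutral), 5 -> 9-10 ("highly satisfied").

def bucket(score):
    if score <= 3:
        return "not satisfied"    # Reichheld's detractor range
    if score == 4:
        return "neutral"          # not counted in the comparison
    return "highly satisfied"     # Reichheld's promoter range

def highly_to_not_ratio(scores):
    highly = sum(1 for s in scores if bucket(s) == "highly satisfied")
    not_sat = sum(1 for s in scores if bucket(s) == "not satisfied")
    return highly / not_sat if not_sat else float("inf")

print(highly_to_not_ratio([5, 5, 4, 3, 5, 2, 4, 5]))  # 4 highly / 2 not = 2.0
```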
"What about 4s?" was a common question that I received when I revealed the data this way. Explaining that 4s were "truly neutral" didn't sway those who insisted that 4s belonged with the "satisfieds" and that a 3 was neutral. My correlating the values to Reichheld's scale didn't win them over either, in large part due to fear that the scores wouldn't look as good as they could.
Using the ratio of highly satisfied to not satisfied may seem logical to you (it does to me). But I found that this wasn't the norm for customer satisfaction reporting. The third party had been collecting this data for over a year, and they always reported it as an average score, like 4.7 (out of 5). It seemed as if the best way to show the results would be to use a Likert Scale.
I looked at all Service Desk reports for the past three years. The first year showed an average score of 4.7 for the year. The following year was 4.76, and a 4.8 for the most recent year. Besides a slight upward trend, I couldn't figure out what the data meant. Was it good or bad? Well, the third party provided benchmarks for our industry and for all users of their service. So now we could see that we were above average in the case of our scores. But I still felt a little lost. I didn't see how 4.8, 4.9, or 4.58 differed in any meaningful way. If we were 5.0, I could know that all scores were fives. This would mean every respondent was highly satisfied with our services. But as soon as the average score fell below that, I couldn't say what it meant. Even when I added in the total number of respondents, it didn't help. Figure 9-8 shows why it was hard to interpret.

Figure 9-8. Customer satisfaction average
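Part of the trouble is that very different distributions can produce the same average. A quick illustration, with made-up numbers rather than the Service Desk's data:

```python
# Two made-up distributions of one hundred 5-point scores with the same mean:
from statistics import mean

mostly_fives_some_ones = [5] * 90 + [1] * 10  # 10% of customers very unhappy
solid_fours_and_fives = [5] * 60 + [4] * 40   # nobody unhappy at all

print(mean(mostly_fives_some_ones))  # 4.6
print(mean(solid_fours_and_fives))   # 4.6 -- identical average, different story
```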
So, the average score lacked meaning, and the ratio of the highly satisfied to the not satisfied was a little confusing. A third choice was to use the percentage satisfied. I could understand quickly that a certain percentage of our customers (those who used our service) were either satisfied (4 or 5) or not satisfied (1, 2, or 3). The calculation is sketched below.
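A minimal sketch, assuming 5-point scores arrive in a plain list (the names are mine, not the third party's report format):

```python
# Percentage satisfied on a 5-point scale: 4s and 5s count as satisfied;
# 1s, 2s, and 3s (including "neutral" 3s) count as not satisfied.

def percent_satisfied(scores):
    satisfied = sum(1 for s in scores if s >= 4)
    return 100.0 * satisfied / len(scores)

scores = [5, 4, 4, 3, 5, 2, 4, 5, 3, 5]
print(f"{percent_satisfied(scores):.0f}% satisfied")  # 70% satisfied
```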
Even with this I received arguments about the definition of "not satisfied." I had managers who wanted 3s not to be counted, since they thought that on a 5-point scale 3s were neutral. I had to explain that "neutral" meant the respondent, while not "dissatisfied," couldn't say "satisfied" either. The chart wasn't comparing satisfied to "dissatisfied," but satisfied to "not satisfied." Notice that the same managers who wanted to include neutral scores in the first ratio (5s to 1-3s) wanted to drop neutral scores if they thought it would improve the results. Figure 9-9 shows why "percentage satisfied" was a simpler way to communicate the percentage of satisfied customers.

An annual survey was part of our original plan, so we had no reservations when it came to customer satisfaction. Looking only at feedback from trouble-resolution surveys would give us a skewed view of our customers' overall satisfaction with our services. We would never hear from customers who wouldn't call our Service Desk because they didn't like our services. Or we could miss those who liked our services, but either didn't choose to fill out a survey or just hadn't used the service in the current year. Basically, we wanted to hear from customers who hadn't called into the Service Desk. We wanted to hear from the rest of our customer base through additional measures.
The answer was an annual customer satisfaction survey. We sent a survey that not only asked the basic questions about satisfaction with our services, but also which services were seen as most important to the customer. This helped with the other services we included in the report card. We also asked the "who is your preferred source for trouble resolution" question, which we used for Usage. This annual survey provided many useful measures besides customer satisfaction. Table 9-8 shows the first breakout of data for this category.
For the annual survey we were able to show the same measure, percentage satisfied, but pulled from a different context. This may not seem too important, but it was very useful to allow for different viewpoints. One telling result was that the scores from the annual survey were considerably "lower" than those for trouble resolution. This flew in the face of what the department expected (and it held true for all of the services). The staff incorrectly predicted that the scores for trouble resolution would be worse than those for the annual surveys. They figured that customers filling out the trouble-resolution surveys were predisposed to be unhappy since they had a "problem." Assuming we were resolving the customers' troubles, this was good. The resolution survey scores were significantly higher than the annual survey's. This led to further investigation to determine why there was such a drastic difference (and one that went against the predictions). The investigation wasn't intended to improve the annual survey numbers, but to gain understanding and, from that understanding, possible ideas for improvement.
One conclusion was that the resolution was done so well (fast, accurate, and with a high rate of success) that the customers were pleased at the time they answered the resolution surveys. Conversely, the annual survey simply reflected ambivalence at the time. It wasn't that the annual survey numbers weren't good. But they were low in comparison to the very good scores received for trouble resolutions.
Another conclusion was that some of the responses to the annual survey (especially a considerable amount of the low scores) were given by respondents who hadn't used the IT services, especially in the case of the service desk. Since the IT organization had a poor reputation from a few years prior, when service delivery was way below par, the respondents were rating the IT department based on this poor reputation. This is akin to the perception of Japanese goods in the mid-20th century. If you said "made in Japan," it meant that the item was junk. If it broke easily, didn't work, or failed to work more often than it did work, you would say, "it must have been made in Japan."
Today, that reputation has been essentially reversed. Now "Made in Japan" describes the height of quality. Japanese-made cars are more respected for quality than American-made ones. (I won't get into the story of how an American helped Japan achieve this turnaround because he couldn't get our own industry to listen.) The point is that the Japanese had to overcome a negative reputation. It wasn't as simple as making higher-quality products. Their potential customers had to be convinced to give them a try. Those who only knew Japanese products as the answer to a joke had to be won over. The same was true for a good portion of our customer base.
Unfortunately, those detractors Reichheld discusses can seriously damage your reputation. If a good portion of your customers are actively disparaging your services and products, you will need to counteract that. Hoping and waiting for them to come around is a dangerous path to travel. You may find yourself out of business well before the customers realize that their perception is outdated and that the reality is that you were providing a healthy service.
The investigation pointed to the need for a change in customer perception more than a change in the service, processes, or products.
The Higher Education TechQual+ Project: An Example

The Higher Education TechQual+ project is a great example of how an annual survey can help provide not only satisfaction data, but also insights into what the customer sees as important. Tim Chester is the CIO at the University of Georgia, and for the last six years his pet project has been the development of the TechQual+ Project. The purpose of the project is to assess what faculty, students, and staff want from IT in higher education. It is primarily a way for an IT organization to find out its customers' perceptions of its services.
The TechQual+ project's goal is to find a "common language" for IT practitioners and IT users. This is part of what makes Chester's efforts special. But the first brick in the project's foundation is "that the end user perspective" is the key to the "definition of performance for IT organizations." In other words, the customer's perspective defines the performance of the IT organization and its programs.
Chester writes, "With end-user-focused data in hand, one can easily understand failures in service delivery as one-time events or as systemic problems in IT."1 In the Protocol Guide for TechQual+, Tim Chester explains that the tool's key purpose is to allow IT leaders to respond to the requests of both administrators and end users, who repeatedly request evidence of successful outcomes, giving IT organizations a tool for communicating their performance. He goes on to explain, "[For] IT organizations, delivering quality services is vital to the establishment of appreciation, respect, and trustworthiness."
This project lists the most crucial inputs for its purpose as valid and reliable effectiveness measures of IT services. Chester also believes that while standardized performance measures are needed, the higher education IT industry is still far off from filling this need.
TechQual+ attempts to provide measures of how well an IT organization meets its customers' expectations, a means of comparing results between institutions, and an easy-to-use survey tool for producing the data. One of the defining points of the project is that TechQual+ defines outcomes "from an end-user point of view." Chester understands the need for more than a customer satisfaction survey and uses his tool to capture the customers' viewpoints on any and all facets of what the Answer Key identifies as Product/Service Health.
__________________
1. techqual.org
This project fits in with what I've presented in this book. It is a great way to "ask the customer." It captures not only the customer's evaluation of how well a service is provided but what the customers' expectations are. Where I have relied on the service provider to interpret the customers' expectations, the TechQual+ survey gathers expectations directly from the customer. It is definitely worth looking into.
TechQual+'s approach is based on evaluating the following three measures:

The minimum acceptable level of service (Minimum Expectations)

The desired level of service (Desired Expectations)

How well the customer feels the service meets these expectations (Perceived Performance)

The results of these measures are used to develop a "Zone of Tolerance," an "Adequacy Gap Score," and a "Superiority Gap Score," described as follows (a short computational sketch follows the definitions):

The Zone of Tolerance: The range between minimum and desired expectations (what the Report Card calls simply "Meets Expectations")
The Adequacy Gap Score: The difference between the ”perceived” performance and the minimum expectation
The Superiority Gap Score: The difference between the desired and perceived performance
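As a minimal sketch of these three calculations, assume a respondent rates minimum expectation, desired expectation, and perceived performance on the same numeric scale (the function name and return format below are illustrative, not TechQual+'s actual output):

```python
# Sketch of the three TechQual+ calculations for one service.

def techqual_gaps(minimum, desired, perceived):
    zone_of_tolerance = (minimum, desired)  # the "Meets Expectations" range
    adequacy_gap = perceived - minimum      # positive: at least adequate
    superiority_gap = perceived - desired   # positive: exceeds desired level
    return zone_of_tolerance, adequacy_gap, superiority_gap

zone, adequacy, superiority = techqual_gaps(minimum=5, desired=8, perceived=7)
print(zone, adequacy, superiority)  # (5, 8) 2 -1: adequate, short of desired
```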
You should see how these "scores" correlate to the Report Card's scores. If you look at the charts offered for each measure in the Report Card, you could determine the Zone of Tolerance (the range of Meets Expectations) and those values that represent a positive or negative Adequacy or Superiority gap score.
The beauty of the TechQual+ Project is that the results reflect not only the customers' satisfaction (through a survey instrument) but also their expectations (also through a survey). It is an excellent feedback tool. I highly recommend using the tool (it's free) or incorporating the concepts offered by it in your own survey instruments. When used in conjunction with your objective measures, it gives a fuller picture of the health of your service. You can use TechQual+ or another survey tool for the Customer Satisfaction part of the Report Card. While it is labeled "Customer Satisfaction," you'll see that the questions you can ask in the survey are not restricted to this area. You can (and should) ask for feedback on the importance of your services. It can be especially useful for getting input on the range of expectations.
Two major areas of difference between the Report Card and TechQual+ should be obvious. The Report Card attempts to use objective measures collected in other ways besides the survey; triangulation demands that you use different collection methods and different sources. The Report Card, while also using expectations, treats "Superior" (exceeding expectations) performance as an anomaly.
The conclusion? The TechQual+ Project (and other survey-based innovative tools) should be looked into, especially as a solution for the Customer Satisfaction measures and for gathering information on the expectations for all of the measures.

Expectations
You may have noticed that the charts throughout this chapter are "meaningful." Part of this is the inclusion of the expectations for each measured characteristic. Go back and look at the Customer Satisfaction chart again. Notice it doesn't have expectations. I left them out for two reasons. The first is that we didn't have them when we started, but we could still produce the basic charts you've seen so far. Secondly, as mentioned earlier, many times the data can help you determine what is "normal." When we look at "normal" coupled with the service provider's knowledge of the reported data, we can come up with good estimates for the expected values, for at least the near term. This makes the chart (Figure 9-10) easier to read.
Figure 9-10. Customer satisfaction, percentage satisfied, with the values for the last year

A little easier to read. At a glance we can see how the percentage fared vs. the previous year. We can also see if we have upward or downward trends (three data points in succession that move in the same direction). Without expectations, though, you won't know if the data is "good," "bad," or "indifferent." So before we get to expectations, this chart already tells us to look at Aug-Oct 2010. What was happening? What was causing the steady incline? Was it something we needed to look at more closely?
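Spotting those three-point runs is easy to automate. A small sketch (the monthly values below are made up for illustration, not the chart's actual data):

```python
# Flag runs of three data points that move steadily up or down, like the
# Aug-Oct 2010 incline noted above.

def find_trends(values, points=3):
    """Return (start_index, direction) for each strictly rising or
    strictly falling window of `points` consecutive values."""
    trends = []
    for i in range(len(values) - points + 1):
        window = values[i:i + points]
        if all(a < b for a, b in zip(window, window[1:])):
            trends.append((i, "up"))
        elif all(a > b for a, b in zip(window, window[1:])):
            trends.append((i, "down"))
    return trends

monthly_pct_satisfied = [82, 80, 81, 84, 88, 91, 90, 89, 88, 90]
print(find_trends(monthly_pct_satisfied))
# [(1, 'up'), (2, 'up'), (3, 'up'), (5, 'down'), (6, 'down')]
```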
One of the most important steps we had to take was to develop expectations. As explained in the chapter on expectations, you can't always ask all of your customers; depending on the size of your customer base, expectations can range widely. SLAs help, but they don't always reflect the customers' expectations. Sometimes they only reflect agreed-upon requirements.