
Figure 10-13 Ratio of Promoters to Detractors

Even where we "looked good," this representation (see Figure 10-13) didn't look good. And the funny thing was, I believe this was a good way to show how good they were.

After two years of battling this argument, I acquiesced and found a different way to represent that data. I still believe the promoter-to-detractor ratio is a good one (and perhaps the best one) for telling the story. There is an established standard of what is good that can be used as a starting point; Reichheld uses that standard to determine high quality. That said, the few departments that could conceptualize the promoter-to-detractor measure invariably raised that benchmark way above this standard. One client I worked with wanted a 90 to 1 ratio. As a provider of fitness classes, they felt highly satisfying their customers was their paramount duty, and they expected that out of one hundred students, they receive 90 promoters for every detractor. (They happily changed their 5-point scale to a 10-point scale.)

We ended up with a new way to represent the data. We showed the percentage "satisfied" (Figure 10-14). This was the number of 4s and 5s compared to the total number of responses. The third-party vendor of the surveys had no problem providing the data (ours and for our industry) in this manner. They actually produced their reports in numerous formats.

Figure 10-14 Percentage of satisfied customers

Some still didn't like this view of the data. It still takes some interpretation (4s and 5s are lumped together, and 1s, 2s, and 3s aren't actually shown at all). Interestingly enough, I think this is more a knowledge issue than it is a fault in any of the representations of the data. Most of those who didn't like the promoter-to-detractor ratio (or argued against percentage satisfied) didn't complain about the loss of granularity. In each of these newer views, we lose granularity, as shown in Table 10-7.
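To make the two representations concrete, here is a minimal sketch computing both views from the same set of survey responses. The response counts and the promoter/detractor cutoffs (5s as promoters, 1s and 2s as detractors) are hypothetical assumptions for illustration, not values from the text.

```python
# Hypothetical response counts on a 5-point scale
# (1 = very dissatisfied, 5 = very satisfied).
counts = {1: 4, 2: 6, 3: 20, 4: 45, 5: 25}

total = sum(counts.values())

# Promoter-to-detractor ratio: treating 5s as promoters and 1s-2s as
# detractors is an assumed cutoff, not the author's definition.
promoters = counts[5]
detractors = counts[1] + counts[2]
ratio = promoters / detractors

# Percentage satisfied: the share of 4s and 5s, as described in the text.
pct_satisfied = 100 * (counts[4] + counts[5]) / total

print(f"promoter-to-detractor ratio: {ratio:.1f} to 1")
print(f"percent satisfied: {pct_satisfied:.0f}%")
```

Note how both views discard granularity: the ratio ignores the 3s and 4s entirely, and the percentage collapses 4s and 5s into one bucket, which is exactly the trade-off the text discusses.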

Weights and Measures

With a complete set of measures, we could combine them together into a "metric." To be a Report Card, we needed to roll the data up as well as provide a deeper dive into the anomalies.

Each metric was built through triangulation; each was made up of three categories of information (Delivery, Usage, and Customer Satisfaction). Information was built from as many measures as the service provider saw fit.

Each measure was evaluated on its ability to meet expectations.

Each measure was from the customers' points of view and fit under the rules for Service/Product Health.

Flexibility

Each measure was selected by the service provider (our Service Desk department). Each data set was built into a measure per the service provider's choice. Expectations, while representative of the customers' wants and needs, were defined by the service provider and could be adjusted to reflect what the service provider wanted the customers' perception to be. For example, if the customer was happy with an abandoned rate lower than what the Service Desk thought was adequate, the expectations could be set higher.

Another important area of flexibility for the service provider was the use of weights to apportion importance to the measures within a category. Since Delivery, one of the three Effectiveness areas of measure, was made up of multiple measures (Availability, Speed, and Accuracy), the service provider could weight these sub-categories differently.

With an organization just beginning to use metrics, weighting these factors was not a trivial task. We made a recommendation; our analysis suggested that speed was the most important to the customer, so we assigned the following weights: Availability: 35

Speed: 50

Accuracy: 15
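The weighted rollup the list above implies can be sketched in a few lines. The weights are the ones recommended in the text; the numeric grades for each sub-category are hypothetical, and the 0-to-10 grade scale is an assumption drawn from the arithmetic later in the chapter.

```python
# Delivery weights suggested in the text (percentages summing to 100).
weights = {"Availability": 35, "Speed": 50, "Accuracy": 15}

# Hypothetical numeric grades for each sub-category on an assumed 0-10 scale.
grades = {"Availability": 10, "Speed": 5, "Accuracy": 10}

assert sum(weights.values()) == 100, "weights must apportion 100 percent"

# Weighted average: each sub-category contributes in proportion to its weight.
delivery_grade = sum(weights[m] * grades[m] for m in weights) / 100
print(delivery_grade)
```

Changing the weights dict is all it takes to model a service provider that values, say, Accuracy over Speed, which is the flexibility the text describes.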

These weights could be changed in any manner the service provider chose. The key to this (and the entire Report Card) was the ownership of the metric. Since the service provider "owned" the data, measures, information, and the metric itself, the weights belonged to it as well. One of the purposes of this chapter was to help you understand the need for accuracy, but more so the need for honesty. As David Allen, the author of Getting Things Done (Viking, 2001), has said, "You can lie to everyone else, but you can't fool yourself." If the service provider is to use the metric the right way, leadership can never abuse or misuse it. We had the greatest trust in this department's willingness to "tell it like it is." The department didn't have to "learn" to hear bad news; they had to want to hear it, if that was the way the story unfolded.

Weighing the factors can be an easy way to chase the data and make the measures tell the story you hoped for rather than what it is. One thing we do to avoid this is to set the weights before looking at the data. Then, if we need to change them later, we have to articulate why. This is also a great use of the annual survey: we can ask the customer what factors of measure are the most important. Simply ask if Time to Resolve is more important than Time to Respond, or if getting put on hold for 30 seconds is more troublesome than having to call back more than once to fix the same problem. If you are unsure, weight these factors equally.

Along with weighting the component measures, you can weight the three categories (Delivery, Usage, and Customer Satisfaction). These weights should be clearly communicated along with the Report Card.

Let's look at how we roll up the performance data.

Rolling Up Data into a Grade

It's time to put it all together (the categories for effectiveness, triangulation, expectations, and the translation grid) to create a final "grade." This includes a view that communicates quickly and clearly to the customers, managers, and leadership.

You'll need the translation grid (see Figure 10-15) as before, but with neutral coloring so that it is less enticing to consider "exceeds" as inherently good.

Figure 10-15 Translation Grid

The values I'm using do not reflect the information shared earlier; I wanted simpler values to demonstrate the measures. Table 10-8 shows all of the measures, their expectations, their actual values, and the translation of that value into a "letter grade." These can be programmed into a spreadsheet that computes the grade for you.

In Table 10-8, the "grade" (shown in the Result column) has already been translated to a letter value: if the actual measure exceeded expectations, it earned an E; if it met expectations, an M; and if it failed to meet expectations, an O.
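The E/M/O translation can be expressed as a small helper. This is a minimal sketch, assuming that exactly meeting the expectation counts as an M and that some measures (like abandon rate) are "lower is better"; the chapter's translation grid governs the real boundaries.

```python
def letter_grade(actual, expectation, higher_is_better=True):
    """Translate an actual value against its expectation into E/M/O.

    E = exceeds expectations, M = meets expectations,
    O = fails to meet expectations (an Opportunity for improvement).
    Treating exact equality as "meets" is an assumption.
    """
    if not higher_is_better:
        # Flip the comparison for measures where smaller values are better.
        actual, expectation = -actual, -expectation
    if actual > expectation:
        return "E"
    if actual == expectation:
        return "M"
    return "O"

print(letter_grade(97, 95))                          # higher-is-better measure
print(letter_grade(30, 25, higher_is_better=False))  # e.g. abandon rate too high
```

A spreadsheet IF formula does the same job, which is why the text notes these grades can be computed automatically.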

Within each item, the grade would be the result of an average using the translation grid. As mentioned earlier, you can weight the measures within a category. For example, abandoned calls that were less than 30 seconds could be given a weight of 85 percent, while the total number abandoned is weighted 15 percent. Another example: overall satisfaction could be given a weight (50 percent) equal to the other three satisfaction questions combined. We'll go with those two weighting choices, and all others will be of equal weight within their own information category. Table 10-9 shows the next step in the process of rolling the grades up toward a final Report Card. Note: a double asterisk beside the grade denotes an O grade at a lower level.

Let's look at the two weighted measures. Without the weighting, the total grade would simply average out to a grade of 7.5. If we rounded up, then this would make it an E. But, since we always choose to err on the side of excellence, we don't round up. You have to fully achieve the grade to get credit for the letter grade. If the weighting were switched (Abandoned Total = 85 and Calls Abandoned in Less Than 30 Seconds worth 15 percent of the grade), you'd have an overall E, since the 10 for abandoneds would give you an 8.5 before you even looked at the abandoned calls in less than 30 seconds.
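The arithmetic above can be checked with a short sketch. The numeric values E = 10 and M = 5 are an assumption inferred from the 7.5 unweighted average the text cites, not a scale the chapter states explicitly.

```python
# Assumed numeric grades implied by the text's arithmetic: E = 10, M = 5.
abandoned_lt_30s = 5    # M: calls abandoned in less than 30 seconds
abandoned_total = 10    # E: total number of abandoned calls

def weighted(w_lt30, w_total):
    """Weighted grade from two percentage weights summing to 100."""
    return (w_lt30 * abandoned_lt_30s + w_total * abandoned_total) / 100

# Plain average, never rounded up to the next letter grade.
unweighted = (abandoned_lt_30s + abandoned_total) / 2

print(unweighted)         # plain average of the two measures
print(weighted(85, 15))   # the weighting chosen in the text
print(weighted(15, 85))   # the switched weighting
```

Note that the switched weighting contributes 0.85 x 10 = 8.5 from the total-abandoned measure alone, which is the point the text makes about the grade being an E before the second measure is even considered.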

In the satisfaction ratings, we find that the grade is an E even though there is an M. If, instead of weighting overall satisfaction at 50 percent, we gave all of the questions equal weight, the grade would simply average out, giving us an 8.75. Still an E, but a lower grade.

Notice that the Speed: Time to Respond came out as an M, Meets Expectations, but I added the asterisk to signify there was an O hidden beneath. The same is done for a grade with an E hidden beneath. It helps the viewer of the Report Card quickly note whether she should look deeper into the information. Buried Es and Os are anomalies that need to be identified.
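The flagging convention can be sketched as a small annotation helper. The text only states that a double asterisk marks a buried O; using a single asterisk for a buried E is my assumption here, as is the rule that a buried O takes precedence when both are present.

```python
def annotate(rollup_grade, child_grades):
    """Append ** if an O is buried beneath a rollup grade, * if an E is.

    The single-asterisk convention and the O-over-E precedence are
    assumptions extrapolated from the text for illustration.
    """
    if "O" in child_grades and rollup_grade != "O":
        return rollup_grade + "**"
    if "E" in child_grades and rollup_grade != "E":
        return rollup_grade + "*"
    return rollup_grade

print(annotate("M", ["E", "O"]))  # buried O flagged with **
print(annotate("M", ["E", "M"]))  # buried E flagged with *
```

The point of the flag is navigational: a viewer scanning the Report Card knows exactly which rolled-up grades hide anomalies worth drilling into.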

Let's continue with these results. If we go with the weighting offered for Delivery (Speed at 50, Availability at 35, and Accuracy at 15), we get the next level of grades for delivery, as shown in Table 10-10.

Let's continue on in this manner. We again have decisions to make: are Delivery, Usage, and Customer Satisfaction of equal value? This is only necessary because we are attempting to roll the data up to a single grade. In my organization, we stopped at this level, choosing to keep these three key information categories separate, even across different services. So if we were to roll up three support services (Service Desk, second- and third-level support), we'd show a roll up in the Delivery overall, Usage overall, and finally Customer Satisfaction overall, carrying this basic triangulation as far as we needed to go. If you want to roll it into a final grade (GPA), the only question left is the weighting. For the purposes of this example, we'll give each category equal weight.
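The final equal-weight rollup is a one-liner. The equal category weights come from the example in the text; the numeric category grades below are hypothetical, on the same assumed 0-10 scale used earlier.

```python
# Hypothetical category grades on an assumed 0-10 scale.
category_grades = {"Delivery": 5, "Usage": 5, "Customer Satisfaction": 10}

# Equal weights across the three categories, per the example in the text.
weights = {name: 1 / len(category_grades) for name in category_grades}

# The final "GPA" is the weighted sum; no rounding up toward a better letter.
gpa = sum(weights[c] * category_grades[c] for c in category_grades)
print(round(gpa, 2))
```

Swapping in unequal weights (say, Customer Satisfaction at 50 percent) only requires changing the weights dict, which is why the text treats the weighting as the lone remaining decision.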

So the final Report Card for the Service Desk, based on the weights for each category of information, would be a grade of M. This can be interpreted easily to mean that the Service Desk is meeting expectations overall, with some anomalies that should be investigated.

If you looked at the grades as a GPA, you could roll the categories into a single grade, but looking at each category separately tells more: Delivery: O is an Opportunity for Improvement; investigate the causes for the anomalies. Usage: M means no further investigation is required in this area.