Managing ‘Cherry Picking’ Successfully

CSAT Central is a new, dynamic provider of customer satisfaction (CSAT) solutions. We deliver real-time insight across voice, web, email and social channels via multichannel wallboards and measurable, comparable reporting.

Since starting out last year, we’ve been asked a fair few (to put it mildly) questions about ‘cherry picking’, such as: “How can we eliminate it from the CSAT process?”

For those who aren’t entirely sure what ‘cherry picking’ is, here’s a quick definition:

  • Cherry picking: where contact centre agents pick and choose which customer calls they transfer into an automated customer satisfaction survey. The common fear is that if the transfer process is entirely controlled by the agent, the agent will only transfer ‘happy’ customers and the resultant surveys will be skewed with very positive scores. The subsequent reporting and management information is then almost useless, as it only reflects customers who were already delighted.

The fears around cherry picking are well founded. Although high CSAT scores may look good in a management report, most, if not all, businesses want to use surveys to produce an accurate measure of customer satisfaction across their base. Additionally, feedback from unhappy customers can be used to identify issues the organisation needs to address, and is arguably even more valuable.

So… the question remains, how do we eliminate cherry picking from the CSAT process?

If your CSAT provider claims that they can eliminate cherry picking, don’t believe them (it’s a myth)! They will, no doubt, talk about taking the transfer process away from the agent and ‘automating’ the transfer instead. Big mistake! In doing so, survey uptake will drop through the floor. The fact of the matter is, a warm agent transfer should result in over 90% of offered surveys being completed, whereas the very best uptake we’ve seen from an automated survey process is in single figures.

The two main methods of automated transfer are:
  1. Automated Surveys via Pre-Call Opt-in: an automated message at the start of the call asks whether the caller wishes to participate in a survey at the end of the call. If the caller presses 1 (for yes), there is an attempt to initiate a survey once the call finishes. As you can imagine, the resultant uptake is very low.
  2. Automated Transfer (usually a defined percentage of calls): your CSAT provider works with you to transfer a defined percentage of calls into the CSAT survey. Once again, because there is no warm introduction to the survey from the agent, uptake and completion rates are very low.

So… if these two automated methods do not work effectively, how do we manage cherry picking successfully?

Well… the answer is not automation at network level, but deployment at a local, agent level. Almost every contact centre we talk to has a local agent desktop application that handles scripting and workflow. Clearly, we can’t be sure exactly which local system agents use, but invariably there is the opportunity to build in a (trackable) CSAT transfer script. For example, as the call nears completion, we add a script to enable a warm transfer, e.g. ‘Thank you for your call today [customer name], please could you help us improve our service by completing a short survey?’ Implementing this warm transfer results in extremely high uptake and completion rates, often well above 90%, and tracking means that agents who are not engaging with the transfer script (i.e. dodging dissatisfied callers) can be identified and dealt with appropriately.
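
To make the tracking idea concrete, here is a minimal, self-contained Python sketch. It is not any particular desktop tool’s API; the class and field names are illustrative assumptions. The point it shows is simple: log every time the transfer script is shown alongside whether the agent actually completed the transfer, and script-dodging becomes visible in reporting.

    # Minimal sketch (illustrative names, not a vendor API) of trackable
    # warm-transfer logging: every prompted call records whether the agent
    # followed through, so "script dodging" shows up per agent.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class TransferRecord:
        agent_id: str
        call_id: str
        script_shown: bool
        transferred: bool


    @dataclass
    class TransferTracker:
        records: List[TransferRecord] = field(default_factory=list)

        def log(self, agent_id: str, call_id: str,
                script_shown: bool, transferred: bool) -> None:
            self.records.append(
                TransferRecord(agent_id, call_id, script_shown, transferred))

        def transfer_rate(self, agent_id: str) -> float:
            """Share of shown transfer scripts the agent actually acted on."""
            shown = [r for r in self.records
                     if r.agent_id == agent_id and r.script_shown]
            if not shown:
                return 0.0
            return sum(r.transferred for r in shown) / len(shown)


    # Example usage: agent A1 transfers both prompted calls, A2 skips one.
    tracker = TransferTracker()
    tracker.log("A1", "c100", script_shown=True, transferred=True)
    tracker.log("A1", "c101", script_shown=True, transferred=True)
    tracker.log("A2", "c102", script_shown=True, transferred=False)
    tracker.log("A2", "c103", script_shown=True, transferred=True)

    print(tracker.transfer_rate("A1"))  # 1.0
    print(tracker.transfer_rate("A2"))  # 0.5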

The crucial part of deploying a local, warm transfer process is to ensure surveys are offered to a random sub-set of callers. For example, you could invoke the CSAT transfer script on every 5th call, which would result in 20% of callers being invited to the survey. The huge benefit here is that you have a much higher number of surveys to analyse and, most crucially, that you have eliminated cherry picking. The net result is a broad spread of surveys, which should include both very satisfied and very dissatisfied customers, producing the sort of customer feedback that is most useful to your organisation. It also provides a CSAT benchmark you can measure and compare across departments and agents over time.
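
If it helps, here is how the every-5th-call rule might look as a simple counter-based check. The function and constant names are hypothetical, and a real deployment would hang this off the telephony or scripting platform rather than a module-level counter, but the arithmetic is the same: one call in five, roughly 20% of callers invited.

    # Hypothetical sketch of the "every Nth call" sampling rule.
    # Sampling every 5th call invites roughly 20% of callers, and the
    # counter-based rule takes the choice of who is offered the survey
    # out of the agent's hands.

    from itertools import count

    SURVEY_EVERY_NTH_CALL = 5  # 1 in 5 calls -> ~20% of callers invited

    _call_counter = count(1)

    def should_offer_survey() -> bool:
        """Return True when the current call falls on the sampling interval."""
        return next(_call_counter) % SURVEY_EVERY_NTH_CALL == 0

    # Example: over 20 calls, exactly 4 (20%) trigger the transfer script.
    offers = [should_offer_survey() for _ in range(20)]
    print(sum(offers), "of", len(offers), "calls offered the survey")  # 4 of 20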

We just want to sneak in one other recommendation too (because we’re nice like that ;). Agents are generally tracked and compared on the ‘total’ score of each survey. When you consider that most surveys include questions about first call resolution, or indeed Net Promoter Score (NPS), you can understand an agent’s frustration when their perceived ‘average’ doesn’t accurately reflect their individual performance. Therefore, our final recommendation is to track and compare agent performance using a stand-alone metric that differentiates between the caller’s overall customer experience and the agent’s call-handling performance. This is what we do, because absolving agents of factors unrelated to their individual performance creates a level playing field, which enhances agent engagement by giving a true reflection of their efforts!
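
As a rough illustration of that stand-alone metric, the sketch below (with made-up question names rather than a real survey schema) averages agent-specific answers separately from experience-wide answers such as first call resolution and NPS, so a poor overall experience does not drag down the agent’s own score.

    # Illustrative sketch: split a survey into a stand-alone agent
    # call-handling score and a separate overall-experience score.
    # Question names are assumptions, not a real survey schema.

    AGENT_QUESTIONS = {"agent_politeness", "agent_knowledge", "agent_clarity"}
    EXPERIENCE_QUESTIONS = {"first_call_resolution", "nps", "overall_satisfaction"}

    def split_scores(responses: dict) -> dict:
        """Average agent-specific and experience-wide answers separately."""
        def avg(keys):
            values = [responses[k] for k in keys if k in responses]
            return sum(values) / len(values) if values else None

        return {
            "agent_score": avg(AGENT_QUESTIONS),
            "experience_score": avg(EXPERIENCE_QUESTIONS),
        }

    # Example: strong agent answers despite a low NPS and unresolved issue.
    survey = {"agent_politeness": 9, "agent_knowledge": 8, "agent_clarity": 9,
              "first_call_resolution": 4, "nps": 3, "overall_satisfaction": 5}
    print(split_scores(survey))  # agent_score ~8.67, experience_score 4.0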

I very much hope this blog has busted a few myths about cherry picking, and remember: automated solutions to cherry picking just don’t cut it. We work with every one of our clients individually to deploy a much more effective local solution to this issue. Find out more about our products and see if you qualify for a free trial by clicking here.