In simple terms, they should be about understanding how well you are meeting the needs of your customers, so that you can continuously improve how you deal with them. The logic is simple: satisfied customers keep coming back and tell their friends about their good experience, while dissatisfied customers don’t come back and tell even more of their friends to avoid you like the plague. That is a simple, hard-nosed message about how satisfaction surveys and customer feedback can drive revenues and profitability.
In reality, most automotive customer satisfaction surveys have steered a long way from that simple ideal. They have become wrapped up in numerous business metrics, audits, incentive payments and benchmarking, which may have little to do with understanding genuine customer satisfaction.
Whilst the mantra of ‘What gets measured gets managed’ is true, most surveys have extended it to ‘What gets incentivised gets managed’, with rewards and penalties tied to the results of the surveys.
The unintended consequence of putting money on the scores – as the entire industry knows all too well – is that people ‘game’ the system. They manage the scores, not the underlying issues, finding ways to inflate scores artificially so that incentive payments are triggered. That is hardly surprising, given that these payments can amount to tens of thousands of pounds for some dealers. In many cases, the money starts life by being held back from the standard dealer margin, breeding immediate resentment among dealers that ‘their’ money is being withheld by the measurement process.
There are many ways to game the system, most of which will be familiar to people in the business: simple instructions to the customer about how to fill in the survey; pre-filtered lists of ‘positively minded customers’ submitted for surveying; right through to bogus customer email addresses, so that surveys are actually completed by dealer personnel.
Here’s the view from the sales floor of a dealer in the US:
“At [my brand] the surveys are graded on a 1000 point scale through JD Power and associates. My regional average is a 966 (96.6% positive) at the moment. In order to get above this level I must get nearly all 10′s on the survey, just one 9 or 8 may be able to be over a 966 also, but any more and I’m cooked.
The catch is that only 1/3 of the survey is about the sales experience, the other 2/3 are about the experience in the finance office and how the dealership facilities looked, but they are all the sales persons grade. At my dealership any survey below a 966 results in a $100 deduction from your paycheck.
Keep in mind that 95% of the new cars I sell earn me a “mini” or minimum commission of $150 before taxes. That means that if someone absolutely LOVED me but thought that my coffee was sub par and put a 5 for “refreshments” on the survey that I effectively make about $20 for the sale and my time, I am definitely reprimanded by my managers and am one step closer to being replaced.
I don’t understand how [my brand] or any brand is supposed to improve when the survey penalizes the sales person monetarily. I absolutely cannot afford to risk losing $100 after taxes on a $150 commission so everyone here just sets up ghost email accounts and has the surveys sent there. Then after [the company] sends it along I log in with my phone (after restarting it which cycles the mobile IP address), make sure I’m not on WiFi, and give myself a perfect survey. I do this even for customers that love me and send me many referrals.”
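To see the salesperson’s bind concretely, here is a minimal sketch of the arithmetic, assuming the 1000-point score is simply the per-question average scaled by 100 – an assumption for illustration, not JD Power’s actual (and more complex) weighting:

```python
def scaled_score(ratings):
    """Scale a list of 1-10 question ratings to a 0-1000 score.

    Assumes a plain average multiplied by 100 -- an illustrative
    simplification, not JD Power's real formula.
    """
    return round(sum(ratings) / len(ratings) * 100)

REGIONAL_AVERAGE = 966  # the threshold quoted above

# A perfect ten-question survey clears the bar comfortably...
print(scaled_score([10] * 10))           # 1000
# ...a single 9 still clears it...
print(scaled_score([10] * 9 + [9]))      # 990
# ...but a handful of 9s drops below the regional average.
print(scaled_score([10] * 6 + [9] * 4))  # 960
```

Under this simple model, a customer who is merely ‘very satisfied’ (all 9s, scoring 900) misses the 966 bar outright – which is why salespeople describe the target as ‘all 10s or nothing’.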
Some manufacturer management and field teams are also remunerated on the satisfaction scores, so a tacit collusion develops in ever-rising scores. In other cases, senior management are remunerated against external syndicated benchmark surveys such as the NCBS and IACS in the UK and the JD Power surveys in the US, which provides a motivation to maximise scores in those national surveys too, by fair means or foul. Whilst there is limited ability to influence the data samples, some of the survey companies themselves provide ‘mimic’ surveys to help manufacturers predict their likely scores (with varying levels of accuracy), and the manufacturers’ own customer satisfaction surveys increasingly copy the questions used in the syndicated surveys to help them understand where and how to optimise those results.
In many cases, those syndicated survey questions do not reflect best practice for customer satisfaction, nor necessarily the company’s own strategic aims, resulting in targets and rewards being focussed in the wrong areas.
In our experience, there is rarely any robust analysis linking the rating metrics to real business performance. A common example is the blitz on ensuring high levels of test drives for new vehicle buyers. Whilst test drives are undoubtedly extremely effective in converting sales, not every customer needs or wants one. Forcing such customers through the test drive process provides no business benefit, yet absorbs dealer sales team time and demonstrator vehicles.
In other cases, target metrics are simply unattainable. One example is a manufacturer that set a minimum threshold of 92% for customer follow-up after service transactions. In tests, around 12% of customers fail to recall that they have actually been contacted by the dealer after a service – so the measured follow-up rate can never rise much above 88%, and a 92% target can never be met.
One of the most common top priorities for after sales metrics is the rate of return for follow-up work. Sometimes rework is inevitable, and it can often be arranged at minimal inconvenience to customers. My own experience of this was laughable. At the end of the day, when the dealer phoned to confirm that the service had been completed and payment could be taken before delivering the car back to me, they asked if there was anything else they could help with. It occurred to me that the automatic tailgate opener (waving your foot under the rear bumper) was not working. There was insufficient time to look at the problem that day, but rather than return the car to me – which would have meant taking it back in later, risking it being classified as a return visit and incurring the associated penalty for ‘re-work’ – they wanted to keep it in the workshop overnight. It would have been extremely inconvenient for me not to get the car back that day, so we agreed to leave the tailgate until the next service. Crazy! Here is evidence of metrics driven by financial rewards working against genuine customer satisfaction.
The impact of this focus on managing the scores for the rewards is that the whole scoring system becomes discredited in the eyes of customers, dealers and, in reality, the manufacturers driving the whole process. It is often made clear to customers that the dealer will not get paid by the manufacturer unless scores of 10/10 are given. Examples such as one from a US GM dealership might be extreme, but there is widespread evidence of such customer persuasion.
Artificially skewing results in this way leads to extraordinarily high satisfaction rates that are not matched by equivalent real improvements in customer retention and advocacy, and so become meaningless.
Customers are now starting to play the game themselves. Having been ‘educated’ by dealers into the importance of high scores, customers realise that they have their own leverage, and are starting to blackmail dealers: do this, “otherwise I will mark you low on the survey”. The escalation of that is shown in this extraordinary example from a Ford US dealership:
“Customer service surveys at car dealerships must be serious, serious business. That’s the only conclusion I can draw from Bob’s story about being bullied by the Ford dealership where he bought his Fiesta. They called him up to say that if he planned to rate his (unsatisfactory) service experience as anything but satisfactory, he would be hurting the dealership and practically stealing money out of employees’ pockets and yanking food out of their kids’ mouths. If he didn’t say nice things, the service manager insinuated, the dealership might decide not to service his car at all.”
The ultimate irony is that where we have scrutinised customer feedback for customer comments about this type of survey abuse, the most prolific dealers tend to have the lowest profitability.
So what is to be done?
Moving away from payments on scores to focus attention on delivering great customer service will be difficult. Everyone is stuck on the incentive merry-go-round and it is difficult to make a break.
One of the most successful approaches we have seen for shifting away from rewarding the scores is to reward the follow-up communications that help to build the relationship between dealers and their customers. Rather than skewing the results before they are collected, this is about closing the loop after the survey has been carried out. First, make the survey customer friendly, so that it focusses on customer feedback rather than a dealer audit; that gives the customer scope to say openly how they felt about their experience. Then, following up on any issues allows problems to be nipped in the bud, as well as giving an opportunity to thank customers for positive feedback.
This type of approach has demonstrated continuous rises in Net Promoter Score ratings – without any financial reward on those scores – at the same time as substantially reducing goodwill payments and calls to the manufacturer’s customer services department. In this programme, dealers now receive just the feedback comments without any scores for the closed loop follow-up process. Scores are only circulated at the end of each month, well after the follow-up communications are completed.
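For reference, the Net Promoter Score mentioned above is computed from a single 0–10 ‘would you recommend us?’ question: the percentage of promoters (9–10) minus the percentage of detractors (0–6), with passives (7–8) counted in the base but in neither bucket. A minimal sketch of that standard calculation:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) dilute the result but belong to neither bucket,
    so NPS ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 6, 10]))  # 3 promoters, 1 detractor of 6 -> 33
```

Note that, unlike the all-or-nothing 10/10 targets described earlier, NPS tolerates merely ‘satisfied’ 8s without penalty – one reason it is harder to game by leaning on customers for perfect marks.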
We have many instances of dealers saying that the verbatim comments from customers are the most valuable part of any feedback programme: they help dealers understand what actually needs to be done to improve customer satisfaction.