Over the past few days, I have been discussing a drop in rated customer satisfaction tickets with our customer success manager at Zendesk. While it is not unusual for only a low percentage of tickets to be rated, CS was first alerted when we received zero rated tickets within a short timeframe. The cause could have been any number of things: a problem with my ‘Request Customer Satisfaction Rating’ automation, a CS agent manually closing tickets instead of just solving them (which would make the satisfaction survey’s link go dead), our emails landing in the requester’s spam folder for some reason, or even our recent workflow changes, which drastically reduced the time to solve tickets (are customers less likely to respond to satisfaction surveys when their problem or transaction is resolved quickly?).
Thankfully, there are several things one can do within Zendesk to help troubleshoot issues. The first statistic I wanted to check was how many times the automation I built had fired this week. Looking for any suspiciously low numbers, I was glad to see that it had been activated 602 times within the past seven days; we receive far more tickets than that, but I have it configured not to trigger on internal communications or on our e-commerce channels.
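As a rough sanity check, those trigger conditions can be sketched in a few lines of Python. This is purely illustrative: the `Ticket` fields, channel names, and sample records below are assumptions, not Zendesk's actual data model.

```python
from dataclasses import dataclass

# Hypothetical ticket records; the field names are illustrative only.
@dataclass
class Ticket:
    id: int
    channel: str      # e.g. "email", "ecommerce"
    internal: bool    # internal communication flag

def expected_survey_triggers(tickets):
    """Count tickets that should fire the satisfaction survey automation:
    everything except internal communications and e-commerce channels."""
    return sum(
        1 for t in tickets
        if not t.internal and t.channel != "ecommerce"
    )

tickets = [
    Ticket(1, "email", False),      # counts toward the trigger total
    Ticket(2, "ecommerce", False),  # excluded: e-commerce channel
    Ticket(3, "email", True),       # excluded: internal communication
]
print(expected_survey_triggers(tickets))  # 1
```

A check like this explains why the activation count (602) can be well below the total ticket volume without anything being wrong.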
Next, I inspected the automation itself, looking for any incorrect settings, such as when the automation fires, what it applies to, and how it responds to the customer. Since automations are time-based, the request customer satisfaction survey only fires 24 calendar hours after a ticket is marked as solved, at which point it sends an email to the requester; after four business days, the ticket is then closed. Long story short, there were no issues with the automation.
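That timing can be sketched as below. This is a minimal illustration of the calendar-hours versus business-days distinction, not Zendesk's internal implementation, and the sample timestamps are assumptions for the example.

```python
from datetime import datetime, timedelta

def survey_send_time(solved_at: datetime) -> datetime:
    """The survey email goes out 24 calendar hours after solve."""
    return solved_at + timedelta(hours=24)

def close_time(solved_at: datetime) -> datetime:
    """The ticket closes four business days (Mon-Fri) after solve."""
    t = solved_at
    remaining = 4
    while remaining:
        t += timedelta(days=1)
        if t.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return t

solved = datetime(2024, 5, 24, 9, 0)  # a Friday (hypothetical date)
print(survey_send_time(solved))  # 2024-05-25 09:00:00
print(close_time(solved))        # 2024-05-30 09:00:00 (weekend skipped)
```

Note that the survey uses calendar hours (so a Friday solve still emails on Saturday), while the close timer skips weekends.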
Finally, I turned to Explore, Zendesk’s data and analytics tool, for help. Since I had not found any issues with the survey settings, I needed data to back up my assessment that everything was performing as it should.
After doing some digging, I didn’t see any area of concern regarding surveys. Below, the survey data is shown for each month this year. We didn’t enable surveys until March, so January and February are blank. As you can see, around 12% of tickets are rated each month.
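The percentage Explore reports amounts to rated tickets divided by solved tickets per month. A minimal sketch of that calculation, using made-up records rather than Explore's actual schema:

```python
from collections import defaultdict

def rated_percentage_by_month(records):
    """records: iterable of (month, was_rated) pairs, one per solved ticket.
    Returns {month: percent of solved tickets that received a rating}."""
    solved = defaultdict(int)
    rated = defaultdict(int)
    for month, was_rated in records:
        solved[month] += 1
        rated[month] += was_rated
    return {m: round(100 * rated[m] / solved[m], 1) for m in solved}

# Hypothetical sample: 1 rated ticket out of 8 solved in March.
records = [("Mar", True)] + [("Mar", False)] * 7
print(rated_percentage_by_month(records))  # {'Mar': 12.5}
```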
I did manage to spot the issue CS was talking about: on May 27th, shown below, we received no rated tickets. While it was strange to not get any rated tickets for that date, the problem quickly went away, and the overall percentage of rated tickets for May was still around 12%.
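The pattern CS noticed, a day with solved tickets but no ratings, is easy to flag programmatically. A small sketch, with the daily counts and dates below being hypothetical:

```python
def zero_rated_days(daily_counts):
    """daily_counts: {date: (solved, rated)}. Flag days where tickets
    were solved but no satisfaction ratings came back."""
    return [d for d, (solved, rated) in daily_counts.items()
            if solved > 0 and rated == 0]

counts = {
    "2024-05-26": (40, 5),
    "2024-05-27": (35, 0),  # solved tickets, zero ratings
    "2024-05-28": (42, 6),
}
print(zero_rated_days(counts))  # ['2024-05-27']
```

Isolated single-day dips like this are normal noise when only ~12% of tickets are rated; a multi-day run of zeros would be a stronger signal of a broken automation.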
To summarize: after CS suspected the system I built was failing, I was able to confidently state that everything was operating as it should and that the drop in rated tickets was simply a real-world fluctuation. Maybe we didn’t solve many tickets on May 27th, maybe customers didn’t care to respond to our surveys, or it could have been that the date fell on a Monday. Whatever the reason, I can’t stress enough how much I love data and graphs; they make the most complicated issues seem simple.