Not sure if I am just looking at this wrong or what… I don’t think our CSAT scoring is calculating correctly. We ask 3 questions on our survey: 2 of them are on a 5-point scale and 1 is a Yes/No question. Based on this image, if I add the total score and divide by the maximum score, this agent should have a 93%, but the CSAT is calculated at 3.75 out of 5, or 75%. I am sure I am missing something but I cannot wrap my head around it LOL. One of the tickets only answered 2 questions and not the 3rd, so not sure if this is impacting it, but the scoring doesn’t look right for any of my agents. Thoughts?
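[Editor’s note: for concreteness, here is a sketch of the “total actual over total possible” calculation being described. The per-question scores are an assumption reconstructed from the tickets discussed later in the thread (two 5-point questions plus one 1-point Yes/No question; one ticket’s last question unanswered).]

```python
# (score, max) pairs per answered question, per ticket; these numbers
# are an assumption based on the tickets discussed in this thread.
tickets = [
    [(5, 5), (5, 5), (1, 1)],  # all answers at maximum
    [(4, 5), (4, 5)],          # Yes/No question left unanswered
    [(5, 5), (5, 5), (1, 1)],
]
pairs = [p for t in tickets for p in t]
actual = sum(score for score, _ in pairs)   # 30
possible = sum(mx for _, mx in pairs)       # 32
print(f"{actual}/{possible} = {actual / possible:.1%}")  # 30/32 = 93.8%
```

Either way of counting the unanswered question, this method lands around 93%, nowhere near the reported 75%.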
Is the score the best out of 5? Or out of 4? I just don’t know what the 3.75 score is rated on???
Hi Shannon.
Hope you’re doing great.
The best score is based on what you define in your CSAT, in your first question.
For instance, we also have a 5-point question, and it is set up counting from 0 to 4, so in our case 4 is the max.
Without seeing your setup, it seems you may be doing something similar, with 4 as the top.
From what I have noticed, only fully answered surveys are counted.
You also need to make sure the time period you are filtering your report by matches the period used for the CSAT score.
Those are the things I’d suggest double-checking. I admit the calculations were a little hard for me to grasp at first, but they do work out.
But, I also use my own KPIs.
Regards,
Elvis, as always, you gave a much better answer than Support did. So this was very helpful! I am still a bit confused, though, because I looked at the question, and the first question is a 5-point scale. If I do the math, though, 3.75 out of 4 (as I described above) would be 93%, as I calculated manually. So it does seem to be a 4-point scale, but what you described makes it a 5-point scale. Thoughts?
Glad I could help a little.
mmm… Interesting.
Just double-checking: I notice the screenshot shows your CSAT is active, but it is a Copy. And there are case numbers ranging from 438, through 3328, up to 4289…
Are all of them using the same CSAT (your active Copy)? Maybe some of them are from another one.
I also recall you once mentioned that you needed to submit a support case to request a refresh of some data calculations; I’m not sure whether it was Dashboards or Analytics, but I think it was Analytics.
Anyway, are you getting this from a Curated report? If so, may I ask which one in order to compare?
Regards,
I built it myself as the canned one wasn’t showing a collective score. Here are the filters for the report:
Hi @shannon.mejia
2 thoughts -- (1) I wonder if using Average is affecting the output?
(2) The other possibility is that the CSAT calc is doing something mad like:
There are 3 tickets with CSAT scores across 8 answered questions. 6 of the 8 questions got the max score, so 6/8 = 75%.
INC-3328 = 5/5 + 5/5 + 1/1 = 100%
SR-438 = 4/5 + 4/5 + null = 80% (or it should be, but Q3 may have been scored as 0)
SR-4289 = 5/5 + 5/5 + 1/1 = 100%
Total: 10 + 10 + 8 = 28, and 28/30 ≈ 93%
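[Editor’s note: Bryn’s hypothesis (2), that the widget counts the fraction of answered questions receiving their maximum score, can be sketched like this, using the ticket numbers given above.]

```python
# Sketch of the hypothesis: CSAT = share of answered questions
# that received the maximum possible score.
tickets = {
    "INC-3328": [(5, 5), (5, 5), (1, 1)],  # (score, max) pairs
    "SR-438":   [(4, 5), (4, 5)],          # Q3 unanswered
    "SR-4289":  [(5, 5), (5, 5), (1, 1)],
}
answers = [pair for qs in tickets.values() for pair in qs]
at_max = sum(1 for score, mx in answers if score == mx)
print(f"{at_max}/{len(answers)} = {at_max / len(answers):.0%}")  # 6/8 = 75%
```

This reproduces the 75% Shannon is seeing, which is why it is a plausible candidate for what the widget is doing.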
Hi @eeha0120 → can I ask why you are multiplying by 4 here? There are only 3 tickets and only 3 questions.
Back to Elvis’ question about using the same CSAT -- do you have more than 1 CSAT? Add the Survey Name field to your output to check; then you can see whether the weighting is the same.
Since there is such a large number gap (438 to 3328), it does suggest that ticket is quite old and possibly running on an earlier version of the existing CSAT?
Typically when I do CSAT reporting I put [TICKET CREATE DATE] = “THIS YEAR”
HTH
Bryn
@BrynCYDEF (1) What should I use instead of Average? I thought that was how to get an overall score for the department.
(2) I think that is more likely, because it does show a score for each ticket as actual and possible. So that may be what is happening.
As far as the different surveys go: I only have 2 tickets on the “non-copy” survey. I changed the survey very early at our Go Live (we went live May 30, 2023). So just ignore that the name of the survey includes “copy”… I just didn’t go back and rename it after I made the change.
Hi Bryn.
Thanks for your wonderful help here. @shannon.mejia: Bryn is amazing at Analytics! ;-)
Going to your inquiry, Bryn: I was just picturing that, somehow, in Shannon’s instance, the max was set to 4 instead of 5, so I was converting the percentage result to a number on a 4-max scale (a simple rule of three). I apologize to both of you for any confusion, as I didn’t fully write out what I was thinking.
Just double-checking: we are assuming that neither of those 2 tickets is among the ones shown here, right?
But since Survey Name is set to Not Empty, I second Bryn’s suggestion to filter on your specific survey, so you won’t have mixed-up calculations.
Regards,
Thank you guys so much for your help on this!!
OK, so I changed the Survey Name filter to the specific survey we use. Same results. We have only been live with FS since May, so I didn’t add a specific time frame because I want everything thus far.
Looking at another agent, based on actual vs. potential, he should be at 100%. He has had 5 surveys so far and they are all 11/11. So why is he showing 3.67???
Hi again. Anytime.
Would you try using Group By with Associated Agent Name and Question?
@eeha0120 Elvis, do you mean group by question on the CSAT Avg chart by Agent? When I did that it prompts for “Bucket by”. It doesn’t make sense….
Hi.
Apologies for the delay in responding. I got caught up in work and other stuff.
In your screenshot you’re grouping by Question Score. Is there a particular reason you want to use that? The suggestion was to group by Question in order to check the output.
Going back to your inquiry: when you group by Question Score, Analytics lets you create groups based on ranges. You can create groups from 1-1, 1-3, 1-5, 2-4, or whatever fits your needs. These are called Buckets.
Regards,
@eeha0120 Question score is the only option for “group by” related to “question”:
Ohhh.
Interesting.
I hadn’t noticed that, since I was testing by modifying an existing widget from a curated report.
And I couldn’t even clone the widget!!!
I had to clone the whole report and then work with the widget.
Just for testing purposes, would you try it as well?
I cloned the curated report “Employee Satisfaction”, went to the tab “Survey Results and Score” (the last one), then the widget “Average Survey Score By Question” (first in the lower row).
{I’m answering, but I can’t stop thinking: “Why would FW create a widget in a report that we can’t create ourselves?”
I’m appreciating @afautley’s phrase more and more: “release the chains and set us free!”}
Going back after that aside (hehe, apologies for it): since this widget already has the Group by Question, you can add Associated Agent Name and see how it goes.
Yes, I can do exactly what you did and have “Question” in this widget. It is very strange that it isn’t available in the other widget… it is the same metric!!!
This still doesn’t help my calculation though LOL. The score is still 3.67 out of 5 for my people who should be 5 out of 5.
I tried to contact FS Support, but they were not very helpful. I may open a ticket and get it escalated, but that isn’t very helpful either…
Hi. I see. Would you mind sharing the result? Just for checking. Or privately if you can’t publish it. I’m just trying to figure it out.
Regards,
And… looking at this, the first person, for example, has 5, 5, 1 for the question scores, which would be 100%, or 5 out of 5, but he is showing 3.67 out of 5 (or whatever it is out of, but not 100%). Makes no sense!!!
Hi.
Thanks for sharing.
Agree: it makes no sense at all. The computation should be based only on the first question: the average of the first-question scores across all the tickets where he/she received a survey response.
If all of those are 5, then 3.67 is definitely something odd.
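[Editor’s note: one hypothesis that reproduces both odd values exactly — only a guess from the numbers reported in this thread, not confirmed product behaviour — is that the widget averages the raw question scores without rescaling the 1-point Yes/No question to the 5-point scale.]

```python
# Raw question scores as reported in this thread (an assumption):
# 3 tickets answered 5,5,1 / 4,4 / 5,5,1 -> 8 answered questions.
shannon_answers = [5, 5, 1, 4, 4, 5, 5, 1]
# One "perfect" 11/11 survey (5 + 5 + 1) from the agent showing 3.67.
perfect_survey = [5, 5, 1]

print(round(sum(shannon_answers) / len(shannon_answers), 2))  # 3.75
print(round(sum(perfect_survey) / len(perfect_survey), 2))    # 3.67
```

If that is what is happening, both the 3.75 and the 3.67 are artifacts of mixing a 1-point question into a 5-point average, not a data error.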
Too bad Support has not been able to help you. Let’s try a different approach: @alyssia.correa @Kamakshi V, would you mind getting an experienced backend / L2 / L3 support engineer to look at this odd issue for our friend Shannon?
@alyssia.correa @Kamakshi V I opened another ticket on this and linked to this thread in the ticket. The ticket is 14962507. When I chatted in for support, I was not given much help. If you can help, that would be great. I have to report these numbers up to my upper management, and a 3.67 out of 5 does not look good, especially when it should actually be 5 out of 5 for many of my agents.
Thanks!
Hello @suvashini.balashanmugam
Is this something you can help with?
Hello @shannon.mejia - thanks for letting me know! I will share this with the team and try and get you a response asap!
Hello @shannon.mejia - I had a chat with the agent handling your ticket (#14962507), and this requires further troubleshooting from our end to determine whether this is expected behaviour or a product defect. We will update this thread based on our findings.
Thanks @suvashini.balashanmugam !
It looks like the CSAT score calculation might be off. Could you check whether the weight of each question and the handling of incomplete responses are set correctly? Specifically, ensure that the scoring formula matches how you’re calculating scores and normalizes across the different question types.
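[Editor’s note: as a sketch of the kind of normalization suggested above — an illustration, not Freshservice’s actual formula — each answered question can be converted to a fraction of its own maximum before averaging, then rescaled to the 5-point display scale.]

```python
# Hypothetical normalization: score each answered question as a
# fraction of its own maximum, average the fractions, rescale to 5.
def csat(answers):
    """answers: list of (score, max_score) for answered questions only."""
    fractions = [score / mx for score, mx in answers]
    return 5 * sum(fractions) / len(fractions)

print(csat([(5, 5), (5, 5), (1, 1)]))  # perfect survey -> 5.0
print(csat([(4, 5), (4, 5)]))          # Q3 unanswered  -> 4.0
```

With this approach, an all-maximum survey reports 5 out of 5 regardless of question type, and skipped questions neither penalize nor inflate the score.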