
Roll out reporting with data confidence



Photo by Choong Deng Xiang on Unsplash

When you lead a data team that is putting a new analytics or reporting system in place, you face many challenges. Frustratingly, the biggest one is often not technical but the attitude of your users.


They see a number or chart that "doesn't make sense" and it becomes "the system is wrong", even though the real issue is that the incoming data quality is poor. All your work is for naught, and you in turn blame the users because they “just don’t get it”.


This actually is your team's fault, but not necessarily because of a flaw in the system design or your technical approach. Rather, it is a flaw in your people approach: you did a poor job of managing expectations in your user base.


The key is to realize that trust is not a judgment, it is a feeling.


At some level, people think of computer systems as perfect, and when they are not, that must be because something is wrong with the way the system was designed. Or, more accurately, because the team building it made a mistake.


Intellectually, they may know that an error could originate outside the reporting system itself, back in the data entry phase. But no matter what they rationally think, what they feel at a gut level is that the system is the sole source of the dashboards and charts, and thus any inaccuracy is evidence of a system flaw, not a data issue.


They unconsciously (or sometimes consciously) think of analytics as solely a creation of the data team, and as a result any discrepancy or inconsistency must be due to mistakes made by you and your staff.


You can talk all day about how the quality and accuracy of the dashboards are a combination of the system's processing and calculations and the quality of the data coming in, but users don't feel that way. They may say they agree with the idea of “garbage in, garbage out”, but what they focus on is the “garbage out” part.


To get around this, you must manage their expectations.


Easier said than done. Managing expectations in your user base is hard because it is fundamentally contradictory: on one hand, you need to brace users for bad data; on the other, you need them to believe the system is good enough to be worth engaging with.


It is almost impossible to recover if you set the expectation that everything works, and then, when they use the real system, it turns out they can't trust the numbers it shows them.


On the other hand, if you are too pessimistic about the amount of work they will need to do to make the system usable, it raises the question of what benefit it is really giving them.


You risk users wanting to stick with the devil they know, the broken reporting they already deal with, rather than put in the work and time to fix things for the new system.


After all, isn't it the job of the data team to develop the system? And your users get irritated that you are expecting them to fix the data just so *your* system works properly.


So how to square this circle?


You need to be realistic about the challenges while at the same time giving your end-user stakeholders a path to a future where those challenges have been addressed.


This is where the “data confidence graph” below comes in. It helps set expectations for how the data journey will go once the system is implemented, describing a realistic path for how users’ trust in the system will evolve.


It starts with the initial drop when users see how bad their data quality is, then shows a rise in confidence as those issues get ironed out. More importantly, it also illustrates how knowing how crappy your data is is itself progress that can lead to building more confidence.

[Data confidence graph: confidence in the data drops as hidden data quality issues surface, then rises as they are fixed, while certainty about the data increases throughout.]

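If the chart image does not come through, here is a minimal sketch in Python with matplotlib that reproduces the shape the text describes. The curve formulas, milestone positions, and labels are illustrative assumptions, not measurements; they simply encode the journey laid out below.

import numpy as np
import matplotlib.pyplot as plt

# Timeline of the rollout, from "before the system" to "issues fixed".
t = np.linspace(0, 1, 200)

# Confidence dips sharply as hidden data quality issues surface,
# then recovers as fixes land. The exact shape is illustrative.
confidence = 0.5 - 0.45 * np.exp(-((t - 0.35) ** 2) / 0.02) + 0.45 * t

# Certainty about the data rises throughout: first we become certain
# about how bad the data is, then about how good it is.
certainty = 0.2 + 0.75 * t

fig, ax = plt.subplots()
ax.plot(t, confidence, label="Confidence in the data")
ax.plot(t, certainty, linestyle="--", label="Certainty about the data")
ax.set_xticks([0.0, 0.35, 1.0])
ax.set_xticklabels(["Before system", "Issues surface", "Issues fixed"])
ax.set_yticks([])
ax.set_title("Data confidence graph")
ax.legend()
plt.show()

The key feature to notice is that the two lines diverge in the middle: confidence bottoms out exactly where certainty is climbing, which is the point the next few paragraphs walk through.
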
Before we have an actual reporting system, there is usually a lot of manual effort involved in getting data, and as a result the reporting is very simplistic and reactive. At this point, your users are doing their best, but there are a lot of unknowns.


Once the reporting system project starts, data quality issues that were previously hidden by the difficulty of accessing the data begin to be uncovered. This is where things normally start to go awry: even though the system did not create those issues, they were never before displayed in a way that highlighted them.


At this point, users start to have a much better idea of what the data quality issues are. This is where you often lose them: you set the expectation that the system would improve things, but the feeling is that it has just made things worse. There is seemingly a mountain of stuff to fix with no end in sight.


To keep this natural dip from turning into despair, you must emphasize that merely realizing how poor their data quality is counts as progress. They go from a vague sense that things are not quite right, or worse, complete ignorance that anything could be wrong, to knowing the particulars of what can be trusted and what is inaccurate.


As a result, though our confidence in the data starts to plummet, our certainty about the data actually starts to increase. It’s just that we become more certain about how bad our data is.


However, things are not actually worse; they are actually better, they just feel worse. But if you don't prepare users for this dip in confidence, it becomes the final destination rather than merely a temporary stop along the way.


The next phase is when data quality issues start to get fixed. Your understanding of the data and its quality lets you know immediately what you can trust and what to take with a grain of salt, and also which fixes to prioritize in order to improve data quality.


This is the upswing of the graph. Our certainty about data quality remains about the same, but now it is certainty about how good our data is rather than how bad it is. This is the end goal we want our users to know is ahead of them while they are mired in the middle, and it shows them why they should engage with a system that still “feels” inaccurate.


By setting out this journey ahead of time, you allow your users to meet the data where it is rather than having their expectations and reality wildly diverge. People can handle bad news; it is unexpected bad news that is stressful.



