Efficient vs inefficient market research data processing
I am sometimes shocked to see how inefficiently some market research data processing tasks are carried out. Every business has some inefficiencies where the wrong people are being asked to perform a task, the wrong tools are being used or the wrong approach has been adopted. The ‘price’ of these inefficiencies can be anything from a few pounds or a few hours right through to a task taking ten times – or, perhaps, even more – longer than it needs to.
Types of projects that often go wrong
- Big (and, in fact, small) tracking studies
- Big sets of data
- Large volumes of tables/charts/reports
- Complex tasks
- Repetitive requirements/outputs
- Merging of data sets
- Repetitive data sets, such as diaries, trip-based data records, eating occasions etc.
- Unusual data file formats (either as input or required as outputs)
My own learning curve
I remember, many years ago when I was learning to program, spending about 3 days programming a data management and analysis task in Visual Basic. I was delighted when my program worked perfectly and produced correct results – I awaited a pat on the back. Sadly, someone looked at my program code and showed me in about 20 minutes how I could have written the program more efficiently. He had taken 20 minutes to do what I had achieved in 3 days. I suddenly realised that inefficiency was not a matter of a 5% or 10% difference; it could be massive.
DP inefficiencies can inflate costs dramatically
The fact of the matter is that badly handled data processing tasks can increase staff time – and, of course, costs – hugely. Badly handled data processing or programming tasks carry far heavier penalties for the wrong approach than most business tasks. This certainly applies to my industry, market research, where there are large numbers of variables that may need to be analysed in depth.
One of the reasons that data processing tasks go over budget is that the wrong people are being asked to carry out a task. This may be due to a lack of training in handling something more complex than a person or team has handled before, or it could be that the task needs an expert. I have seen staff struggle over a complex task (and often fail) for hours on end when an expert could solve the problem in minutes. People put into a position beyond their knowledge may well struggle on until they succeed, but this is generally not a good use of their time.
Of equal importance is the issue of trying to use the wrong software to carry out a data processing task. Many software packages have a way of getting around a problem, but if this means jumping through too many hoops, it will not only mean that the task is carried out inefficiently; it may also mean that it is prone to error or difficult to amend when changes come along. I could name more than one company that, in my opinion, has persisted with an unsuitable product rather than spend less than £1,000 on a suitable one, thereby saving £10,000 in staff costs.
There are two types of data processing tasks where I see bad approaches taken. The first, unsurprisingly, is where there is a complex requirement outside a company or team’s experience. However, equally, I have seen relatively simple but large, often repetitive, tasks handled ineffectively. Something that is big probably needs the right tools to make the task easy.
Perhaps the worst problem is ‘digging holes’. I have witnessed companies embark on major projects that need specialist knowledge to handle efficiently. They soldier on and succeed in getting correct results out for a while, but as the task carries on, it grows and grows to the point where it becomes impossible to manage. To make this scenario worse, all the work that has gone into surviving the first few months becomes a nightmare to unravel. Starting again is the only sensible decision, but that’s not easy to tell a client who is expecting correct results – and now!
Prototyping is often seen as wasted effort
One thing that is often underestimated is the benefit of prototyping. In the engineering world, it is standard practice to build prototypes, but in market research data processing I have rarely seen prototypes built to test processes for major projects. Why not? Is market research so time-dependent, or is it simply seen as better to risk a process failing?
Tracking studies need thought
Perhaps the most common type of project that is mishandled is the research tracking study. Clients often promise that the questionnaire and analysis required for a tracking study will not change – or, if it does, not much. The reality is that for most projects, the questionnaire and the analysis will change more frequently than expected. This can cause problems. If different software systems are being used to collect and analyse the data, a major data management problem may also arise.
Not many software packages handle tracking studies well
The fact of the matter is that few software packages are geared to handling changing questionnaires, changing data maps and different analysis over the course of a tracking study. Add to this the fact that there may be data calculations, weighting, changing variables and more, all of which depend on the other changes cited. Most software packages are not able to handle these problems. Or, if they can, it is by clumsy recoding processes, which are time-consuming to carry out and highly prone to error.
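To make the recoding problem concrete, here is a minimal sketch of the kind of mapping that has to be maintained by hand when a tracking study’s code frame changes between waves. All wave names, codes and brand labels below are invented for illustration; real projects may have hundreds of such variables.

```python
# Hypothetical sketch: reconciling changing answer codes across waves
# of a tracking study by mapping each wave's codes to one master frame.
# Wave names, codes and brand labels are invented for illustration.

MASTER = {1: "Brand A", 2: "Brand B", 3: "Brand C"}

# Per-wave maps: wave-specific code -> master code.
# Waves may add, drop or reorder codes, so each wave needs its own map.
WAVE_MAPS = {
    "2023-01": {1: 1, 2: 2},          # Brand C not yet asked
    "2023-02": {1: 2, 2: 1, 3: 3},    # codes reordered, Brand C added
}

def to_master(wave: str, code: int) -> int:
    """Translate a wave-specific code to the master code frame."""
    try:
        return WAVE_MAPS[wave][code]
    except KeyError:
        raise ValueError(f"Unmapped code {code} in wave {wave}")

# A few (wave, raw code) records, translated to consistent labels.
records = [("2023-01", 2), ("2023-02", 1), ("2023-02", 3)]
for wave, code in records:
    print(wave, MASTER[to_master(wave, code)])
```

The point is not the code itself but the maintenance burden: every questionnaire change forces an edit to these maps, and a single wrong entry silently corrupts the trend data, which is why doing this by ad-hoc recoding is so error-prone.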
An example of where you need the right tools
I was asked to help a client who had a tracking study in the mobile phone market where the questionnaire changed every month. Added to this, the brands, sub-brands and tariffs for each sub-brand changed every month. The project also required external data to be merged. The project worked on a monthly cycle such that the key person working on the project had to start the next month’s data processing as soon as he had finished the previous month’s.
A tracking study going off the rails
As the project grew, it became clear that the 30 days’ work per month was creeping up to 40 days and delivery was becoming unachievable. In this case, MRDCL was able to store all the changing controls in Excel templates so that the whole project could be automated – this included everything from code lists and tables to the changes to brands, sub-brands and tariffs. The task became more of a clerical task than a data processing one. More to the point, several people could easily understand the process and help.
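The general pattern here is worth illustrating, even in simplified form. The sketch below is not how MRDCL itself works – its mechanism is its own – but a generic illustration of the idea: the month’s brands, sub-brands and tariffs live in an external control sheet, and the run specifications are generated from it, so a monthly update becomes a clerical edit rather than reprogramming. All names and the table syntax are invented; a CSV string stands in for the Excel template.

```python
# Generic sketch (not MRDCL's actual mechanism) of a control-sheet-driven
# monthly run: the changing brands/sub-brands/tariffs are listed in an
# external template and the run's table specs are generated from it.
import csv
import io

# Stands in for one month's Excel control sheet; all names are invented.
control_sheet = """brand,sub_brand,tariff
AcmeMobile,AcmePrepay,PAYG-10
AcmeMobile,AcmeContract,SIM-20
ZedTel,ZedBasic,PAYG-5
"""

def build_table_specs(sheet_text: str) -> list[str]:
    """Turn each control row into a (made-up) table specification line."""
    specs = []
    for row in csv.DictReader(io.StringIO(sheet_text)):
        specs.append(
            f"TABLE awareness BY {row['brand']}/{row['sub_brand']} "
            f"({row['tariff']})"
        )
    return specs

for spec in build_table_specs(control_sheet):
    print(spec)
```

Because the monthly changes are data rather than code, anyone who can edit a spreadsheet can keep the project running, which is exactly the shift from “data processing task” to “clerical task” described above.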
Finding the right people to help
What is the learning from this example? Where things become difficult, repetitive or cumbersome, it is time to think about which input is not functioning well. Is it the staff? Is it the expertise? Is it the software? Is it, possibly, the hardware? Or is it several of these things? In the example above, one month’s restructuring of the project by an expert meant that it could be handled with 2 days’ work per month rather than 30.
What projects are most likely to cause data processing headaches?
I’ve already mentioned tracking studies as being potentially problematic, but it is hard to make a general statement. The best I can offer is this: if you have a project that is outside your comfort zone or different to anything you have handled (easily) before, you should put more effort into checking your planned route. It is easy to be caught out by projects that are conceptually quite easy but have many steps or minor complications; these can soon stack up into a bigger problem.
Feel that you have a problem project or projects?
If you feel that you have a problem project (or projects), ask us for help. If it’s not something we are skilled in, we will tell you and try to advise you on what type of help you need. Going on blindly doesn’t often work!
Please feel free to contact me for any advice at firstname.lastname@example.org