Everything you need to know about producing market research cross-tabulations
Tabulations and Survey Analysis
If you collect data using a questionnaire or are about to, you will need to think about what you are going to do with the data you collect. If your survey is more of a poll with just one or two questions, your analysis and reporting will probably go no further than producing percentages or counts for each response in the survey.
Products like Microsoft Excel and Google Sheets will probably be all you need to produce your analyses. Alternatively, you may find that the software platform you used for data collection provides enough analysis tools. For surveys with several or many questions, more thought is often needed. Your goal is to tell the story that comes from the data and to provide insights.
Once you have collected your data, there are five steps that you may need to complete:
The path from data analysis to producing insights might seem a logical one, but there are a lot of considerations to make when deciding on your approach. If you don’t know where to start, it might be worth looking at all the paths you may follow.
It often makes sense to give some thought to your final goal. What you are hoping to deliver might range from basic analysis right through to complex analysis and online reporting portals, so it is worth considering what you want to deliver to the recipient of your data. There are at least 12 ways that you might deliver data from surveys. These can be summarised as:
If this is your first project, you will almost certainly need to consider what you can offer. We have some good advice if this is your first survey project or your first major project. In such cases, you may want to consider one of the lower-cost products which do everything from start to finish, especially if you only plan to run one or a small number of projects each year.
The five steps from starting data analysis through to reporting on your survey break down as follows:
It’s usually a good idea to get some topline counts on your data, so that you have a picture or an overview of your data. This usually takes the form of an output showing the percentage giving each response to each question or, in the case of numeric fields, showing the range of values and the mean score. You can get these counts from Microsoft Excel or Google Sheets, usually from the data collection software you have used, or from specialist market research software.
It’s a step you can bypass if you are already familiar with your data or plan to spend more time analysing through cross-tabulations (see below).
Tables are the most common way to obtain analysis from your survey. They are also referred to as crosstabs, crosstabulations, tabs, matrices or banner analyses. They provide detailed information about your survey data so that you can see whether there are, for example, differences between the behaviour or opinions of males versus females – or, perhaps, between respondents from different regions. They are rich in data, although they have the disadvantage of being slow to work through if there are a lot of questions in your survey.
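As a rough illustration of what a cross-tabulation is (not tied to any particular package), the following sketch uses Python's pandas library with a small, made-up set of respondents; the column names are purely for illustration.

```python
import pandas as pd

# Hypothetical respondent-level survey data: one row per respondent
data = pd.DataFrame({
    "gender":  ["Male", "Female", "Male", "Female", "Female", "Male"],
    "region":  ["North", "North", "South", "South", "North", "South"],
    "opinion": ["Agree", "Agree", "Disagree", "Agree", "Disagree", "Agree"],
})

# Cross-tabulate opinion by gender: counts, then column percentages
counts = pd.crosstab(data["opinion"], data["gender"])
col_pct = pd.crosstab(data["opinion"], data["gender"], normalize="columns") * 100

print(counts)
print(col_pct.round(1))
```

The first table gives counts and the second column percentages – the same layout a tabulation package would produce, with gender acting as the 'banner'.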
It is common practice to produce statistics on market research data. The two most common statistics used are mean scores and significance tests. Mean scores are a way of showing an average rating or the average amount for a value. This can provide an easy-to-use measure for rating scales, for example, where very good is scored as 5, quite good as 4, down to very poor being scored as 1. A mean score of 4.5 would show, in this example, that respondents on average fell halfway between very good and quite good. Mean scores also provide useful information from survey questions that contain numeric fields. As an illustration, you might find that employees travel an average of 5.5 kilometres to their place of work. Significance tests allow you to see at a glance whether the difference between two sub-samples is statistically significant. There are many other statistical options available. Again, for simpler projects, this step may be skipped.
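As a simple sketch of the mean score calculation described above (the responses and score mapping below are illustrative only):

```python
import pandas as pd

# Hypothetical responses on a 5-point rating scale
ratings = pd.Series(["Very good", "Quite good", "Very good", "Neither", "Quite poor"])

# Map each response to its score: very good = 5 ... very poor = 1
score_map = {"Very good": 5, "Quite good": 4, "Neither": 3, "Quite poor": 2, "Very poor": 1}
scores = ratings.map(score_map)

mean_score = scores.mean()  # missing or unmapped responses are excluded from the base
print(f"Mean score: {mean_score:.2f}")
```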
When running a survey, it is normal to report the key findings. This may be in the form of a short summary report or a detailed report. Sometimes, it is important to report findings for each outlet or region and to compare findings with comparable outlets or other regions. Again, this step may be skipped if the sole goal is to provide analysed data rather than research insights.
In-depth analysis can take many forms, from complex tabulations and multivariate analyses to giving users of the survey data access to the data for re-analysis. It might also include the need to provide key findings (often called KPIs – key performance indicators), so that survey data can be merged with other business data. Increasingly, this type of analysis is carried out through online dashboards where users of the data can explore it and drill down to look at and compare sub-samples of the data.
After looking at topline counts on your survey data, producing crosstabulations is the most common next step.
If you are planning to produce survey analysis from your first research project, it is probably a good idea to find a tool that is easy to use, even if it limits the depth or complexity of the analysis that you produce. If your survey is small (say, up to 20 questions), you might find Excel or Google Sheets to be the right tool to use. There are two good reasons – it’s free and it’s easy to use.
Firstly, everything that applies to Excel in what follows also applies to Google Sheets. Excel has an excellent feature called pivot tables that allows you to produce crosstabulations based either on codes or on texts. If your data is available in spreadsheet form, it is easy to make crosstabulations from it. Excel has the added advantage that it is easy to convert the figures you produce in your pivot tables into charts.
Excel, however, does have limitations – not in terms of the number of questions or the amount of data (at least, not until you exceed about one million cases) – but it soon becomes cumbersome for certain things you might want to do. This is not wholly surprising as Excel is not designed for the market research industry – it is for general business use. Here are some examples of where Excel will struggle – where, for example, you want to:
If your first project is the first of several or many, it will, of course, be a good idea to find something that offers more than Excel. Excel starts to become cumbersome if you have a series of projects to analyse, particularly if the more advanced requirements are numerous or are needed regularly. At this point, you will realise that you need a survey tool that is more focused on market research data. Some of the free or low-cost online survey platforms have some useful tools to help you, but again you might outgrow these quite quickly.
In most cases, it is worthwhile thinking ahead to consider your whole analysis and reporting needs. It’s important that the path from survey analysis cross-tabulations to final reporting or data delivery is as smooth as possible. It is our view that survey analysis is changing. For many years, cross-tabulations and reporting have been considered separate tasks, but this is changing rapidly.
The traditional separation of survey analysis and reporting arose because data processing experts produced cross-tabulations and researchers produced reports, often in PowerPoint. This moved forward in the 2010s, when data processing experts often managed the data, leaving researchers to produce all or some of their own cross-tabulations to link (or copy/paste) into their PowerPoint presentations. The next step in the evolution of data analysis is to make the link from data to online dashboards. Therefore, thinking ahead about the deliverables is becoming more important. What are the deliverables?
At this point, you will need to decide the level of complexity of the analysis you plan to carry out and the level of software that you need.
Let’s look at the advantages and disadvantages of each type of software product:
Type of software | Advantages | Disadvantages |
---|---|---|
General Purpose Software | • Usually free to use • Easy to use • Easy to get from results to basic charts | • Limited analysis options • May not handle MR data well • Unsuitable for larger projects |
MR software using GUI (graphical user interface) | • Usually easy to use • Most MR needs handled • Good for irregular use | • Slow to produce repetitive requirements • May struggle to handle bigger projects • May offer poor productivity |
MR software using scripting language | • Able to produce complex analysis • High productivity if used by skilled staff • Efficient at handling repetitive tasks | • Steep learning curve for trainees • Unsuitable for infrequent users • Likely to be more expensive |
If you are purchasing crosstabulation software, it is important to consider your current and future needs and the skills that you either have or plan to have available. There are many considerations, which will be influenced by short-term and long-term goals. As a company that offers all types of software, we feel we have an important role in advising on the right type of software.
One of your first considerations will be the type of software you choose. There are three basic types:
In most cases, if you are running several surveys, you are likely to outgrow general purpose tools such as Excel quite quickly unless you are only conducting small surveys of 1-10 questions. Choosing which type of market research software to use may be a difficult decision. Some companies benefit from having both GUI and scripting language MR software. This can work well when you have a range of types of analysis work or differently skilled staff.
The number of GUI software programs for market research analysis significantly outweighs the number of scripting or hybrid systems. The main differences will usually be:
Type | Advantages | Disadvantages |
---|---|---|
GUI | • Low training • Moderately skilled staff • Lower cost | • Limited functionality • Slow to specify complex needs • May outgrow |
Scripting | • High level of functionality • Shortcuts for repetitive needs • Shortcuts for complex needs | • Skilled staff needed • Higher cost • May be difficult to share projects |
Hybrid | • May offer best of both worlds | • May fail to offer the best of either world |
The above table will not apply to all GUI software or all scripting software. Some software systems will fall short on the advantages or negate the effect of the disadvantages. However, the decision of which type of software to use is a fundamental one and one that needs to be considered at an early stage. It almost goes without saying that if an easy-to-use, low-cost GUI software product fulfils all your needs, there is no need to consider anything more complex. The exceptions to this may be where you have big surveys, long questionnaires, large volumes of analysis or particular types of surveys, such as tracking studies, multi-country studies or projects with multiple reports. In practice, low-cost GUI products may be able to handle these types of project, but they may take staff several times longer to produce results than staff using more advanced scripting languages. As for hybrid systems that support both GUI and scripting, it is hard to generalise, as most will be more inclined towards one type or the other and can fall short in both respects.
It’s important to consider the staff who will be using the software and the projects that they will need to work on. Learning a scripting language is not something that everyone will be adept at, whereas most computer-literate people with a basic knowledge of market research terminology will be capable of using most GUI software packages. You must also consider the size and complexity of the projects they will need to work on. Perceptions of what is a big project can vary from 50 questions to 5,000 questions and from 1,000 respondents to 500,000 respondents, so making sure your needs are accommodated by whatever software you choose is a must.
MRDCL is one of a small number of products that are available in market research which are driven predominantly by a scripting language. The alternatives to MRDCL are Quantum, Uncle, Ruby and Merlin – we have produced a comparison of these products which mostly originate from the 1990s or earlier. We believe that MRDCL’s developments have focused on productivity. This is in response to rising labour costs around the world which mean that production efficiency is far more important in the 2020s than it was in the 1990s when most of the other products in this category were introduced. MRDCL is aimed at DP professionals who will spend most of their time using the product. As with any ‘technical’ product, learning to use MRDCL takes place in steps – learning the basics followed by understanding all the tools available. At this point, most users find creative ways to make real productivity gains. The target market is not casual users in most cases (See comparison of scripting vs GUI).
In contrast to MRDCL, QPSMR is for less regular users of the software or users handling more straightforward surveys and analysis. It uses a graphical user interface and is generally easy to pick up. Unlike MRDCL, which handles analysis only, QPSMR handles data collection for paper, CAPI and CATI surveys. It uses the same engine as MRDCL to produce tables, so it is easy to pass a project between QPSMR and MRDCL.
Snap overlaps with QPSMR but its focus is on online data collection, analysis and online reporting. It does not extend to online dashboards (see CYS Platform). Again, Snap uses a graphical user interface and is easy to use. However, it covers a lot of ground from data collection, particularly the design of online surveys, as well as analysis and smart reporting. Although it is easy to learn, there is a lot to learn if one person covers all aspects of the software.
The CYS Platform is an online product (often called SaaS) that allows you to collect data, analyse it and provide the data in online dashboards. The analysis tools are easy to use, although complex requirements can be prepared as well. The usual goal when using CYS is to provide data in online dashboards for colleagues or customers to explore. The platform also works well if the main analysis has been carried out in any of the three products above (MRDCL, QPSMR, Snap).
Some of our customers like to use MRDCL and QPSMR together. This means that data can be collected using QPSMR and then analysed in QPSMR or MRDCL, depending on the complexity of the analysis required. The decision as to whether to use MRDCL or QPSMR may depend on such things as the complexity of the analysis and whether a tool like MRDCL is more beneficial because the data is complex or is part of a tracking study, for example.
In some cases, it can make sense to use MRDCL, QPSMR and Snap together. Moving projects between the three products is easy to do. Snap has some reporting tools which can help with the production of multiple reports, for example.
We do not want the wrong people to try to use MRDCL. The evaluation of MRDCL is a particularly important part of the process, more so than with most other products, in our opinion. For MRDCL to be the right product, it is worth considering the barriers to getting the most out of this premium product. You should be prepared for steady, consistent progress on your route to getting huge productivity gains if MRDCL seems to be the right product for you. We prefer to tackle the issue of the progress you can expect before you start to use MRDCL and have discussed this at length with customers. Our step-by-step guides and video library will get you to where we think you should be with steady progress. Understanding the right way to learn MRDCL and make maximum use of its potential is key to your success with MRDCL.
Data comes in many different forms. Market research has some of its own forms as well as using some business-wide forms. Some formats contain metadata. Metadata refers to the survey specifications (question and response texts) as opposed to the data associated with survey respondents. The most common formats are:
Type/format | Description | Used in MR only/Contains metadata (texts) | Software used to view files |
---|---|---|---|
CSV/Excel | Data is stored in fields usually with a header row containing the variable name. | Used in many businesses. Survey texts not stored. | Excel, Google Sheets. CSV files can be viewed in any text editor such as Notepad. |
ASCII | Data stored usually uses codes that represent responses to questions. | Used in some businesses, but common in MR. Survey texts not stored. | Any text editor, e.g. Notepad. |
Column binary | Data stored using codes. Several column binary forms exist (mostly redundant). | Mainly found in MR. Survey texts not stored. | Specialist data editor needed. |
SPSS format | Data is stored in fields, but it is encoded in a proprietary format. | SPSS is a statistics product widely used in MR. Survey texts are usually stored. | SPSS or SPSS-compatible software needed. |
Triple-S format | Data stored to MR-wide standard using XML and ASCII files. | Used in MR only. Survey texts are stored. | XML files viewable in an XML editor. ASCII files as above. |
SQL Server Data | Data stored in fields. | Used in many businesses. Texts may or may not be stored. | Data can be extracted with the right tools and credentials. |
Proprietary formats | Varies | Data may focus on MR or other sector. Texts may or may not be available. | Need specialist or compatible software. |
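To illustrate the practical difference between a data-only format and one that carries metadata, here is a sketch in Python: a CSV is read with pandas (data only), while an SPSS .sav file read with the third-party pyreadstat library returns both the respondent data and the stored question and value labels. The file names are hypothetical.

```python
import pandas as pd
import pyreadstat  # third-party library for reading SPSS/SAS/Stata files

# CSV: data only; the header row gives variable names, but no question texts
csv_df = pd.read_csv("survey_data.csv")

# SPSS .sav: data plus metadata (variable labels and value labels)
sav_df, meta = pyreadstat.read_sav("survey_data.sav")
print(meta.column_names_to_labels)   # question texts keyed by variable name
print(meta.variable_value_labels)    # code-to-text mappings per variable
```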
As time goes on, the expectation is that data can be made available wherever it is needed. This applies as much to market research data as to any other data. There is a need to be able to move data between different types of specialist market research software as well as platforms or systems used in the wider business world. Data portability is something that will increase in importance rather than be an issue that subsides over time. The need to use more specialist software products or platforms will increase in line with the expectations of clients and users of research data.
Triple-S is a data interchange format that was developed in the early 1990s as a way of moving survey metadata between different software products. Arguably, it was ahead of its time, in that the need to move data between different products and platforms has increased significantly over the last 10 years. As more specialist software products arrive on the market, it is important that those handling market research data can move it easily to the product of their choice. Any market research product that is not Triple-S compliant is, in our view, best ignored.
Just as it is important to be able to output data in Triple-S format, it is important that data can be imported from Triple-S format. If the software you are using cannot import Triple-S data, you are likely to face a lot of re-specification of question texts as well as, possibly, data conversions from one format to your own format. Again, being able ‘to talk’ to other software products is a necessity.
Whilst we do not find the format in which SPSS stores a project’s metadata to be the most convenient, many software systems have made SPSS a standard data interchange format. There’s good reason for this. SPSS is used widely in government and education around the world and has statistical tools which are widely used but uncommon in most MR software packages. SPSS is also part of a platform that some companies use for data mining and business information, so although it doesn’t store market research data particularly tidily, market research data is often needed in SPSS. Again, having an import and export to and from SPSS is a necessity. SPSS offers APIs to connect to its data storage methods – APIs are ways that programs make themselves accessible to other products, although linking to each API has to be programmed separately.
It’s appropriate to make a comment about the quality of imports and exports to and from Triple-S and SPSS. Firstly, there are some exports to Triple-S and SPSS which have flaws that make the transfer difficult or impossible. The programs importing such data will either fail or ignore questions/variables/responses that they cannot read. These problems can soon become more than minor irritations. Similarly, when importing from Triple-S or SPSS, it is important that the import produces a ready-to-use project file in the software you are using. SPSS, for example, usually treats a multi-response question as a series of single yes/no questions. If these are imported into the recipient software product as several yes/no questions, it may make analysis or reporting either impossible or time-consuming to implement. Therefore, a good quality, ready-to-use import is crucial for good productivity.
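As a sketch of the kind of tidy-up a poor import can force on you, this illustrative example recombines a multi-response question that has arrived as separate yes/no columns into a single multi-response variable; the column names are assumptions.

```python
import pandas as pd

# Hypothetical import: one yes/no (1/0) column per brand for a multi-response question
df = pd.DataFrame({
    "q5_brand_a": [1, 0, 1],
    "q5_brand_b": [0, 1, 1],
    "q5_brand_c": [0, 0, 1],
})

brand_cols = ["q5_brand_a", "q5_brand_b", "q5_brand_c"]

# Rebuild a single multi-response variable: a list of selected brands per respondent
df["q5_multi"] = df[brand_cols].apply(
    lambda row: [col.replace("q5_", "") for col in brand_cols if row[col] == 1],
    axis=1,
)
print(df["q5_multi"])
```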
Researchers often need to weight market research data so that it matches known demographics. Good research software should allow four types of weighting: factor weighting, target weighting, rim weighting and volumetric/quantity weighting. There is also occasional multi-stage weighting, although this is rarely used these days. Each is different and each is important for making the most of research data. However, some caution is needed – find out more about weighting data here. Let’s first discuss these four types of weighting.
Factor weighting is the simplest type of weighting as there is no calculation work to do. Each respondent or type of respondent is given a pre-determined weight factor. This may be provided by a researcher, a client or by someone who has already worked on the data. It will be a set of two or more instructions to apply different values to each type of respondent. For example, 16-34 year olds in the north may have a weight of 1.2 while 16-34 year olds in the south may have a weight of 0.75. Likewise, there will be factors for other age groups. In your analysis, respondents will not each count as one; they will take the value of the assigned factor. This is more clearly explained in the next section on target weighting.
Target weighting is most easily explained by a practical example. Let’s say that when you have collected your survey data you find you have 60 men and 40 women, but your sample should have been a 50-50 ratio of men to women. You could apply target weighting to achieve this: you would apply a value of 50/60 (0.83333) to each male and 50/40 (1.25) to each female. In your analysis, rather than each respondent counting as one, each male would count as 0.83333 while each female would count as 1.25.
Your targets can be more complex than the simple example above. You could weight your data to age within income level within region. If there are three age groups, three income levels and four regions, this would mean that each respondent would be weighted to one of 36 targets (3 ages x 3 income levels x 4 regions). To apply this type of target weighting, you would need targets for each age group within each income level within each region. This is often referred to as target weighting to interlocking cells.
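A minimal sketch of target weighting, based on the 60/40 example above (the column names are illustrative); the same approach extends to interlocking cells by grouping on more than one variable at once.

```python
import pandas as pd

# Hypothetical sample: 60 men and 40 women
df = pd.DataFrame({"gender": ["Male"] * 60 + ["Female"] * 40})

# Target counts the sample should match (50/50 in this example)
targets = {"Male": 50, "Female": 50}

# Weight = target count / achieved count for each group
achieved = df["gender"].value_counts()
df["weight"] = df["gender"].map(lambda g: targets[g] / achieved[g])

print(df.groupby("gender")["weight"].first())  # Male 0.8333..., Female 1.25
print(df["weight"].sum())                      # weighted total = 100
```

The weighted totals now sum to the targets, while the number of actual respondents is unchanged.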
Rim weighting is similar to target weighting except that the targets for interlocking cells are not used or available. Rim weighting is used where, for example, you have targets for three age groups, targets for three income levels and targets for four regions, but not the interlocking cells. In other words, you have a total of 10 independent targets (three for age, three for income level and four for region). Even more care is needed when using rim weighting. We have a white paper on this topic which you can read. We also have a free rim weighting calculator available if you do not have software that can make these calculations.
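Rim weighting is usually implemented as iterative proportional fitting (sometimes called raking): the weights are adjusted to hit each set of marginal targets in turn, and the passes are repeated until the weights settle. The sketch below is illustrative only, with made-up data and targets.

```python
import pandas as pd

# Hypothetical respondents with two rim variables
df = pd.DataFrame({
    "age":    ["16-34", "35-54", "55+", "16-34", "35-54", "55+", "16-34", "35-54"],
    "region": ["North", "North", "North", "South", "South", "South", "North", "South"],
})

# Independent (non-interlocking) targets for each rim, each summing to the same total
targets = {
    "age":    {"16-34": 3.0, "35-54": 3.0, "55+": 2.0},
    "region": {"North": 4.0, "South": 4.0},
}

df["weight"] = 1.0
for _ in range(50):  # a fixed number of passes keeps the sketch simple
    for var, tgt in targets.items():
        current = df.groupby(var)["weight"].sum()
        df["weight"] *= df[var].map(lambda v: tgt[v] / current[v])

print(df.groupby("age")["weight"].sum())     # matches the age targets
print(df.groupby("region")["weight"].sum())  # matches the region targets
```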
Finally, there are volumetric or quantity-weighted figures. These are completely different and can be used in conjunction with any of the three types of weighting above. Volumetrics are analyses where the data is not respondent based but is scaled by some other quantity. For example, you may have three respondents. The first may be male and spends $20. The second may be female and spends $15. The third may be female and spends $40. From this data, you could produce an expenditure-based, quantity-weighted analysis. The analysis would show males spending a total of $20 and females spending a total of $55. This type of weighting is completely unconnected to the other types of weighting, where you are re-balancing the sample.
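A sketch of the spend example above, showing that a volumetric analysis simply sums the quantity instead of counting respondents:

```python
import pandas as pd

# The three respondents from the example: gender and amount spent
df = pd.DataFrame({
    "gender": ["Male", "Female", "Female"],
    "spend":  [20, 15, 40],
})

# Respondent-based analysis: counts
print(df["gender"].value_counts())          # Male 1, Female 2

# Volumetric (quantity-weighted) analysis: total spend per gender
print(df.groupby("gender")["spend"].sum())  # Male 20, Female 55
```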
For factor, target or rim weighting, it is important to check the effective sample size and be aware of the dangers of weighting data. Rim weighting can be particularly dangerous if there is a high correlation between the variables used or you have too many rims. Our white paper explains how you might be ‘stretching data’ too far if you are not careful, producing spurious results. Effective sample size tells you the unweighted sample size that would produce the same level of accuracy – in other words, the accuracy of analysis you could achieve with correct sampling. If your effective sample size is substantially lower than your actual sample size, you are almost certainly applying some big weights to some records and some small weights to others, and the validity of your data may be in question.
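The effective sample size is commonly calculated with Kish's formula, n_eff = (Σw)² / Σw², where w are the respondent weights. A minimal sketch:

```python
import numpy as np

def effective_sample_size(weights):
    """Kish's effective sample size: (sum of weights)^2 / sum of squared weights."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Equal weights: effective sample size equals the actual sample size
print(effective_sample_size([1.0] * 100))              # 100.0

# A wide spread of weights reduces the effective sample size
print(effective_sample_size([0.2] * 50 + [1.8] * 50))  # approx. 61
```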
Sample adjustments by weighting can be a good solution or a bad solution. The salient points are discussed in depth. The worst thing you can do is to apply any form of respondent weighting and assume that it is solving a problem. The effective sample size and the spread of factors generated should be systematically checked. If the software you are using cannot do this, our free effective sample size calculator can help you.
There are several types of data that can pose problems when conducting data analysis.
While the examples below are not an exhaustive list, consideration of each of these can be important when choosing which software works best for you. Our analysis products, MRDCL, QPSMR and Snap all have different capabilities. MRDCL, as a premium product, handles all the examples that follow.
For each topic, we have noted whether QPSMR and Snap can handle these more complex tasks. Let’s look at a few examples:
Hierarchical data comes in two forms. The first is a respondent-based hierarchy and the second is a data structure-based hierarchy. Respondent-based hierarchies are where parts of the survey are answered by or for different people. For example, a doctor may answer some questions about himself or his practice and then discuss details of, say, 5 patients. Similarly, there may be questions about a household with a questionnaire also completed by each adult aged 16+ in the household. These types of surveys need specialist software like MRDCL to analyse them. Data structure-based hierarchies are surveys where there are repeated sections. Examples of this are TV viewing over two weeks with information completed for each day, eating out occasions or purchases of each beauty product over a two-week period. These tend to be occasion-based and are often referred to as ‘loops’ in the questionnaire. These types of data usually need advanced software like MRDCL to analyse. Many software packages will either be unable to analyse such data or will be very cumbersome when extracting the desired analysis.
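As an illustration of why loop data needs care, the sketch below reshapes a made-up ‘wide’ loop (one set of columns per occasion) into one row per occasion so that occasion-level tables can be produced; the column names are assumptions and this is not how any particular package stores hierarchical data.

```python
import pandas as pd

# Hypothetical 'wide' loop layout: one row per respondent, repeated columns per occasion
df = pd.DataFrame({
    "resp_id": [1, 2],
    "venue1":  ["Cafe", "Restaurant"],
    "spend1":  [12.0, 30.0],
    "venue2":  ["Fast food", None],
    "spend2":  [8.0, None],
})

# Reshape to one row per respondent-occasion (the lower level of the hierarchy)
long = pd.wide_to_long(df, stubnames=["venue", "spend"], i="resp_id", j="occasion")
long = long.dropna(subset=["venue"]).reset_index()

# Occasion-level table: total spend by venue across all occasions
print(long.groupby("venue")["spend"].sum())
```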
Some surveys produce a large amount of data for each respondent. This can be particularly true for hierarchical data – for example, where you have a survey collecting data for 24 two-hour periods per day for 2 weeks. This type of data is common for TV viewing data, meal occasions, doctor/patient studies and product tests. This can make for big data records that need suitable software that can handle that volume of data. Software like MRDCL can handle data records of any size, QPSMR has huge limits, while Snap allows 64,000 fields of data. Many software packages will not be able to handle such huge records.
Some software packages can handle up to 1,000 or 5,000 records easily but start to struggle with large volumes – maybe in excess of 100,000 – particularly when there are a lot of questions or data fields. The algorithms that software packages use to work on large data sets vary enormously. Some will slow down significantly if they are used for large volumes; others will handle data more quickly per record as the sample size increases. This can be an important consideration if you are planning to handle large data sets. MRDCL and QPSMR will handle these large files without difficulty, while Snap has a limit of 100,000 responses.
Most market research software packages will have some capability for producing mean scores and handling numeric data, but control over how numbers are processed can vary from little or none to fully featured. For example, does the software allow you to choose how to process a blank when it is a quantity? Can you choose whether it is treated as zero or unknown/null? Arithmetic is often needed on tables so that you can get differences and mean differences. MRDCL has all the tools you need for these complexities. With QPSMR, most of these are possible, but you may need to use some script to achieve some results. Snap has a good range of tools but may have some limitations. Again, lesser software will not have these features. It is standard practice to provide mean score calculations on any numeric data in a survey as well as to apply scores to rating scales – for example, very good may be scored as 5, quite good as 4, down to very poor scored as 1.
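A small illustration of why the treatment of blanks matters for numeric questions (the figures are made up): treating a blank as zero pulls the mean down, while treating it as unknown excludes it from the base.

```python
import numpy as np
import pandas as pd

# Hypothetical numeric question with one blank (missing) answer
spend = pd.Series([10.0, 20.0, np.nan, 30.0])

mean_excluding_blanks = spend.mean()          # blank treated as unknown -> 20.0 (base of 3)
mean_blanks_as_zero = spend.fillna(0).mean()  # blank treated as zero    -> 15.0 (base of 4)

print(mean_excluding_blanks, mean_blanks_as_zero)
```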
There is a frequent need to produce statistical data on research analysis. Most tabulation systems will have options to generate a mean score (average), standard deviation, standard error and error variance. Some will offer minimum, maximum, median and mode (modal value). MRDCL, QPSMR and Snap all contain these features. There has been increasing use of significance tests, T-tests and Z-tests in market research in recent years. MRDCL, QPSMR and Snap all provide tools for these tests. None of our software packages contains multivariate statistics such as cluster analysis, factor analysis, correspondence analysis, etc. Such tests are generally found in specialist software tools such as SPSS, SAS, etc. Generally, the specialist multivariate software packages have very limited data analysis and tabulation facilities compared to MRDCL, QPSMR and Snap.
Following on from the previous topic, MRDCL and QPSMR have some more advanced tools for handling variants of the standard significance calculations. These include features to calculate significance using T-tests or Z-tests as well as using special formulae for overlapping samples. Tests can also be restricted to sub-samples above a minimum size, as required.
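As a sketch of the kind of test involved, here is a two-proportion Z-test comparing the percentage giving a particular answer in two independent sub-samples, using the statsmodels library; the figures are hypothetical, and the overlapping-sample variants mentioned above need different formulae.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical: 120 of 300 men vs 180 of 350 women gave a particular answer
successes = [120, 180]
sample_sizes = [300, 350]

z_stat, p_value = proportions_ztest(successes, sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a significant difference
```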
In some cases, the final destination of survey data is to produce tabulated data that can then be inspected and reported on as required. Increasingly, there is a need for tabulated data to be used as part of a process. The final destination may be something simple, such as producing one or more charts in Excel, Google Sheets or PowerPoint, for example. On the other hand, the final destination may be some other form of reporting or output. This can include presenting parts of the data in an online dashboard or generating automated PDF reports.
All our products have a range of tools to automate these processes and to include MRDCL, QPSMR or Snap as part of a more extensive set of procedures.
Snap has its own set of tools to produce a wide range of chart styles. Snap also has tools for smart reporting. This means that one or more reports can be automatically generated and be conditional on the data. Smart reporting is ideal, for example, for customer feedback projects where the content of each report may be different and dependent on the results. Smart reports can contain text, tables of figures, charts, lists and infographics.
MRDCL will soon have its own charting engine, but for now, you will need to take the tabulated data and produce charts in your preferred tool – for example, Excel, Google Sheets, PowerPoint or Google Slides. There are two ways to do this:
A core part of many market research agencies’ business is the management of tracking studies. These present challenges for data analysis systems. Typically, most aspects of a tracking study get easier as a project progresses, but for data analysts and those reporting on the data, tracking studies can cause an increasing number of challenges. Tracking studies seldom stay constant. Often, there is a need to add or remove questions as the project evolves. Further, the codes for a question may change as brands and sub-brands change. In short, projects often become more complex for analysis and reporting. MRDCL excels in this area as it is variable-based and has a complete set of tools to manage tracking studies efficiently and make automation and reporting easier. In turn, these advanced tools make it easy to incorporate MRDCL within other processes and procedures. These can minimise the time that staff need to spend on managing tracking studies.
There are occasions where you need to produce multiple runs of tables. MRDCL has all the tools you need to be able to do this. This may be something you need to do for tracking studies – for example, you may want monthly, quarterly and annual reports – but it may also apply to other types of projects. You may have a multi-country run where you need a set of tables for each country as well as tables for groups of countries, such as South East Asia. What’s more, MRDCL will allow you to produce different analyses for each set of tabulations from one control file. Having one control file is important: having a series of control files would mean that you have to replicate minor changes in every control file, which is both time-consuming and error-prone. MRDCL has tools that easily allow you to specify differences between sets of tabulations in ONE control file. For example, if your tables for one country need different regions from another country’s and, perhaps, no regional analysis is needed for a third country, this can all be controlled in one file.
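The one-control-file idea can be illustrated outside MRDCL with a small, hypothetical sketch: a single specification drives every country’s run, with per-country overrides, so a change made in one place applies to every set of tables. Everything below (column names, countries, specification layout) is made up for illustration.

```python
import pandas as pd

# Hypothetical combined data set for three countries
data = pd.DataFrame({
    "country": ["Thailand", "Thailand", "Singapore", "Vietnam"],
    "gender":  ["Male", "Female", "Male", "Female"],
    "region":  ["North", "South", "Central", "North"],
    "q1":      ["Yes", "No", "Yes", "Yes"],
    "q2":      ["A", "B", "A", "C"],
})

# One 'control file': a base specification plus per-country differences
base_spec = {"banner": "gender", "rows": ["q1", "q2"]}
overrides = {
    "Thailand":  {"extra_rows": ["region"]},  # regional analysis required
    "Vietnam":   {"extra_rows": ["region"]},
    "Singapore": {},                          # no regional analysis
}

def run_tables(country):
    """Produce one set of cross-tabulations for a country from the shared spec."""
    rows = base_spec["rows"] + overrides[country].get("extra_rows", [])
    subset = data[data["country"] == country]
    return {row: pd.crosstab(subset[row], subset[base_spec["banner"]]) for row in rows}

all_tables = {country: run_tables(country) for country in overrides}
print(all_tables["Thailand"]["region"])
```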
One of the most under-considered matters when buying software is the question ‘how productive can I be when I use this software?’ One of the critical benefits of MRDCL is the productivity it offers. However, MRDCL is for data processing professionals and needs skilled and regular users to make the most of the productivity features it provides. MRDCL is a powerful scripting language for handling market research and most other commercial data. It allows users to produce cross-tabulations, which may be simple or complex. Understanding the difference between scripting languages and other more interactive software is crucial when choosing software. What follows are some of the main benefits that MRDCL offers:
MRDCL has a unique technique known as EPS (Excel Productivity Scripting). This technique allows users to build templates of their own design to automate tasks. Typically, these are repetitive or laborious tasks or complex requirements that you need from time to time. The templates, designed in Excel, can contain lists or instructions that MRDCL can process. Once you have built a template, unskilled staff can enter instructions or lists without needing to be trained in using MRDCL. This facility has the additional benefit that you can share projects as well as reduce errors. Projects also become more ‘open’ and understandable to others, thus making project sharing and project handovers far easier. If you want to understand more about why EPS can change your productivity, there is a series of videos. This video is an excellent place to start.
Like other powerful languages, it is important to use MRDCL efficiently. A powerful scripting language gives you scope to handle things both highly efficiently and inefficiently. We are always keen to improve the productivity of our users. We achieve this by holding webinars – 5 tips webinar and 4 secrets webinar, writing blog articles, providing information sheets, critiquing scripts and providing one-to-one sessions.
MRDCL has a wide range of tools for writing your own subroutines and functions so that you can re-use commonly needed requirements from project to project and share amongst colleagues.
If you think MRDCL might be the right product for you, there are some important considerations to make. Many of our users have switched from Quantum. Therefore, what follows focuses on switching from Quantum. However, it will be equally applicable if you are changing from another scripting language. Many of our customers have switched from Quantum as it has fallen from its position as market leader in the mid-to-late 1990s. Since that time, it has been largely unsupported and, in terms of functionality, undeveloped since 1997 or thereabouts. It has recently had a minor upgrade so that it runs under Windows. As far as is understood, there are no new, modern features and no emphasis on modern processing and productivity.
If you are switching from a non-scripting language or taking on a scripting language for the first time, it is important to have the right staff in place and to allow an appropriate amount of time for learning. Although the benefits can be huge, it is important to understand the process of moving to MRDCL. We prefer to tell you what a transition to MRDCL will be like – the good things you can expect and the bumps you will encounter on the way.
The first consideration is how easy it is to switch to MRDCL. MRDCL is a very flexible language, which has positives and negatives. The negative is that there are a lot of possibilities and more to learn. The positive of choice and the power to find highly efficient solutions outweighs this negative. Such flexibility is not possible in an older language like Quantum. A secondary problem we find is that, because Quantum is overly structured, users converting to MRDCL expect to work within a narrow framework. It is easy not to explore or discover some of the more productivity-rich features that are available in MRDCL. Some users converting to MRDCL treat the transfer more like a language translation rather than embracing the more advanced techniques that can be utilised.
In some ways, converting a big Quantum project to MRDCL can be more concerning. As Quantum was last updated in a previous technology era, it does not work in a modern way by connecting to a range of other platforms. Data connectivity is something that has developed increasingly over the last 15 years and was barely considered back in the 1990s. There are routes to achieve conversions to MRDCL that work quite well, but you cannot expect this to be painless for complex projects. We don’t want to sugar-coat this but, to balance the point, it is, we believe, fair to point out that any conversions will probably become more difficult in the future. As technology takes further steps forward, it will leave Quantum even further behind.
As in other markets, there are alternatives to MRDCL. Our focus for the past ten years has been on productivity, and we feel we have been highly successful in that respect. Our target for the next five years, at least, is to connect MRDCL to other platforms to add to its power. We will be developing a bridge to The CYS Platform using its API in 2020 so that you can feed data from MRDCL in real time to the CYS online dashboarding platform. No other scripting language, to our knowledge, is making these giant steps. In addition to this, MRDCL will have a charting module by 2021, which will allow users to automate charts, reports and presentations in Microsoft Office or Google products.
No appraisal of MRDCL would be complete without looking at its pricing. It is more expensive than some other tabulation engines. However, we believe its consistent development and focus on productivity means that it offers excellent value. We would expect that every user with the right staffing and the right types of work will see financial benefits in their second year of using MRDCL or sooner. We would hope that the licence ‘pays for itself’ by improving productivity. Ask us why if you are still not sure!