In an ideal world, every survey you do will begin with a perfectly set up quota frame structure. Reality, however, can be a different story. Even after the most considered planning, a need for changes sometimes only becomes apparent once survey fieldwork has got underway. Nfield gives you the ability to edit quota frames even after fieldwork has begun, while retaining valid responses already collected.
Just because you can edit active quota frames doesn’t mean you should stop striving to get them right from the start; that remains the smoothest ride. However, mistakes and unforeseen circumstances still happen. The most common scenarios which require quota frames to be edited are:
Spellings
Quota variable and item names (e.g. Leisure Activity Type – Dance / Football / Volleyball / Tennis) need to be spelled exactly the same as they are in the survey script. It sounds simple enough, but mismatches can still happen if:
Additional brand names
In surveys which gather information about competing brands, the number of brands specified is usually kept to a minimum to keep costs down. However, as the survey progresses, the selection may prove insufficient. For example, Netflix is surveying rivals Videoland and Amazon Prime Video; when Disney Plus starts emerging as a strong competitor, it needs to be added.
Inappropriate structure
After a survey has got underway, it may become apparent that the quota frame is too complex, requiring responses from specific groups that are too difficult to achieve. For example, region-by-region targets may have to be abandoned in favor of a broader geographic grouping.
You can access quota on two pages: the Quota tab under Setup Survey, and the Quota tab under Monitoring Survey. Let’s see how it works.
Setting up the quota frame is now also available after fieldwork has started. You can go back to the previous steps to define quota variables, and to order and nest them, the same way as in the first-time setup.
Quota frame editing works with a concept of versions. When the quota frame is changed, it is stored as a new version, which needs to be published to take effect. Successful interviews are attributed to the quota frame version that was active at the time, meaning successful interviews completed before the change are counted in version 1.
Version 1:
Video Streaming Providers | Target | Successful Interviews
---|---|---
Netflix | 40 | 30
Videoland | 40 | 30
Amazon Prime Video | 40 | 30
After changing the quota frame (e.g. adding Disney Plus), the quota frame is saved as a new version (version 2). Depending on your scenario and the purpose of the changes, you may want to adjust the targets. In this case of adding a new brand that needs to catch up with the existing brands, the target of each existing brand in the new version is set to its version 1 Target minus its version 1 Successful Interviews.
Version 2:
Video Streaming Providers | Target | Successful Interviews
---|---|---
Netflix | 10 | 0
Videoland | 10 | 0
Amazon Prime Video | 10 | 0
Disney Plus | 40 | 0
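The version-2 adjustment described above (new target = version 1 target minus version 1 successful interviews, with new brands getting a full target) can be sketched as a small calculation. This is an illustrative sketch, not Nfield functionality; the function name and data shapes are hypothetical.

```python
def carry_over_targets(v1_targets, v1_completes, new_brands, new_brand_target):
    """Compute version-2 targets: existing brands keep only their
    remaining shortfall, newly added brands start with a full target."""
    v2 = {brand: v1_targets[brand] - v1_completes.get(brand, 0)
          for brand in v1_targets}
    for brand in new_brands:
        v2[brand] = new_brand_target
    return v2

v1_targets = {"Netflix": 40, "Videoland": 40, "Amazon Prime Video": 40}
v1_completes = {"Netflix": 30, "Videoland": 30, "Amazon Prime Video": 30}

v2 = carry_over_targets(v1_targets, v1_completes, ["Disney Plus"], 40)
# Each existing brand keeps a target of 10; Disney Plus starts at 40.
```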
As a general rule of thumb, check all quota targets to make sure they are updated as you wish, and remember to update the survey script before publishing.
This page allows you to edit quota targets and monitor quota progress. New here is a timestamp for each quota frame version. By default, the page shows the latest quota frame version and the successful interviews added since its last publish. You can also select other versions to view their quota progress.
Editing the quota frame during fieldwork is a powerful feature that makes adapting to changes easier, but changes should be carefully managed. We hope it benefits you. If you have any feedback on this feature, or challenges you face, please feel free to contact us.
To uphold the quality of your CAPI and Online research, you need to ensure survey results aren’t contaminated by ‘bad’ interviews. This calls for identifying and disqualifying responses which show evidence of being falsified.
Simple pointers to unreal answers and/or respondents include short interview duration, short time spent on each question, short or meaningless open-ended answers and lack of cohesion between answers to related questions. Another giveaway in CAPI fieldwork could be when a particular interviewer’s work is completed via the shortest-possible route.
Methods for identifying these pointers in Nfield CAPI interviews are explained in CAPI Quality Control: Audit Trail helps you eliminate falsified fieldwork and Nfield’s Quality Control Options for Rock-Solid CAPI Data.
In addition to identifying suspicious interviews from interview completion data, you can also adopt a practice of contacting certain respondents to verify their input.
Every company or survey project has its own criteria for whether an interview should be classed as ‘bad’ and therefore disqualified from results.
Samples from panel providers are generally trustworthy, as panel providers deploy their own mechanisms for maintaining an honest and active respondent pool. But if your survey is conducted outside the confines of a panel, there is a higher chance of contributions from respondents who are just making mischief, or chasing promised rewards, and whose answers are not genuine.
To determine where to set your tolerance, you need to get to know your own typical response patterns for the criteria described above (under “Keeping it Real”). When answers fall outside of these, it’s time to raise a red flag.
Just as bad apples need to be disposed of to avoid the whole crop becoming unusable, ‘bad’ interviews need to be disqualified from survey results in order to come away with genuinely useful insights.
This is easily done via Nfield Manager, for both Nfield Online and Nfield CAPI.
Once you reject an interview, it is automatically removed from all counts related to successful completes and corresponding quota cells. So disqualification of ‘bad’ interviews may mean continuing your survey a bit longer to reach all the required targets.
In Nfield Manager, go to the Quality Control tab. Here you’ll find a Qualification Control overview, as well as each individual interview’s validation status.
To disqualify an untrusted interview from survey results, simply select the interview ID and click Reject.
As an alternative to rejection, you can carry out positive selection by approving individual interviews. And if you aren’t sure whether to approve or reject an interview, you can un-verify it, indicating a requirement to conduct further checking.
Clicking “Reset” will restore all validation states to “Not checked”. This will include any interviews which had been set to “Rejected”, thereby re-including them in corresponding quota counts as successful completes.
If you want to automate identification and disqualification of untrusted interviews, this can be achieved by integrating appropriate tools with Nfield via the API. See the API – Developer’s guide to learn more about the Nfield API.
You can download a record of all rejected interviews in the same way as you do for other interview records.
If you have any questions or comments, please do not hesitate to contact us.
It is estimated that half of the world’s population uses two or more languages in daily life. This is no surprise to us, as 91.6% of our customers are based in countries where the primary language is not their native one. With so many people having multilingual backgrounds, it’s a good idea to offer survey respondents a choice of languages, with the option to switch part way through.
This is because respondents sometimes choose to begin answering a survey in the local language, even when it’s not their native one, but start to struggle if terminology gets more specific. This could happen if questioning goes into detail about things such as medical issues.
Respondents can often deal with this using Google Translate, but that’s not ideal. Enabling language switching within the survey itself provides a far easier and more reliable experience, as you can control terminology and phrasing by embedding approved translations.
Nfield supports multiple languages with ease, allowing respondents to choose their preferred language at the start of the questionnaire. In the case of known individual preferences, these are automatically offered.
Thanks to a recently introduced feature, Nfield also now lets respondents switch language at any point during the survey. It’s as easy as turning a dial!
To create a multi-language survey in Nfield, you simply append the translations to the main script. The language switching feature is enabled by specifying the language codes. So, if the questionnaire is offered in English (“eng”) and Chinese (“chi”), the command should be added like this:

*UIOPTIONS "languages=eng,chi"

When you have more translations, their language codes should be added to the list above, separated by commas.
The Nfield survey will automatically add a drop-down box on each page, allowing respondents to switch at any time without disrupting their completion of the survey.
If you have any questions about Nfield’s support of multiple languages within a survey, please feel free to drop us a line.
In our documentation there is information about the NIPO CATI Client configuration, but in this news item you will find a quick overview on how to set up your CATI system in such a way that you can have your interviewers connecting from any location. Very useful if you want to give your interviewers the flexibility to work from home, or to give a partnering firm access to your CATI system so you can do projects together while working on the same server.
So, what are the steps to set up CATI@Home? To start with the obvious, your CATI@Home interviewers need an internet connection to be able to connect to the NIPO CATI Master. Since they will connect through the internet, the connection protocol for the NIPO CATI Client needs to be set to PureTcpIp. We advise you to place your CATI Master behind a firewall in which you open the specific port of the NIPO CATI Master. In a default configuration this port is 8001. The port the NIPO CATI Master listens on can be changed via CATI Manager › File › Configure › Master.
Settings on the CATI Master machine
Remote CATI Clients cannot simply connect to the CATI Master; additional verification is required. The CATI Master keeps a list of so-called StationKeys for which it allows incoming remote connections. Connections are recognized as remote based on the IP addresses of the NIPO CATI Master and NIPO CATI Client. The StationKey of the incoming connection needs to match a known StationKey value on the CATI Master.
In order to arrange this additional verification you will need to add some registry settings to the CATI Master. These registry settings are all string values to be added under \HKLM\SOFTWARE\NIPO\CatiMaster (or, on a 64-bit operating system, under \HKLM\SOFTWARE\Wow6432Node\NIPO\CatiMaster).
Add the following registry settings:
StationKeyDatabase=NipoFieldworkSystem
StationKeyTable=Interviewer
StationKeyTableKey=AuthenticationKey
StationKeyTableStation=AuthenticationStation
StationKeyTableStatus=AuthenticationStatus
StationKeyTableRemoteTelnr=RemoteTelnr
Restart the NIPO CATI/Web Master
After adding the registry settings and checking the port number of the NIPO CATI Master, you will need to restart the NIPO CATI/Web Master service to activate the new settings.
Prepare a remote CATI connection in NIPO FMS Client
Open the NIPO FMS Client and go to the interviewer you want to enable to connect remotely. In the ‘Details’ view you may need to change the view to actually show the Authentication fields. Right-click on the page and from the popup dialog select ‘Fields…’. This brings up a dialog showing all fields of the interviewer table, but only the ticked ones are shown on the interviewer’s ‘Details’ page. Make sure to tick the fields for AuthenticationKey (or the field name of the field you added to the Interviewer table), AuthenticationStation and AuthenticationStatus. When a dialer is used, tick RemoteTelnr as well. Then click ‘OK’ and the fields are shown on the interviewer’s Details page.
Configure the remote CATI Client application
The remote CATI Client machine uses the normal CATI Client software, plus the value of the StationKey. You have to decide for yourself the best way to get the software onto the interviewer’s home computer; you could, for instance, prepare a client installation including the shortcut, make a self-extracting zip file, and email/ftp that to your interviewers.
There are two ways to specify the key on the NIPO CATI Client: either by adding a parameter to the shortcut (-KEY xxxxxx), or by adding an entry Settings=xxxxxx to the [OdQes] section of the niposys.ini file (where xxxxxx represents the actual AuthenticationKey as specified for the interviewer). Since this key value can be a long string, it is preferable to add the Settings=xxxxxx entry to the niposys.ini file. Note that when the key is used on the command line, it must be the first argument.
The shortcut to the CATI Client then looks like this:

"C:\your path here\OdQesu.exe" /N 100.100.100.100:8001 /P PureTcpIp

Of course, 100.100.100.100 should be the IP address of your NIPO CATI Master, and 8001 is the port number as defined above. You may also use the other normal parameters, such as /SIZE, depending on your preference.
Test the connection
By itself, the above should enable access for the remote interviewer, so go ahead and try to connect. After successfully contacting the CATI Master, check the interviewer details in the NIPO FMS Client and you will see the computer name in the AuthenticationStation field. The workstation is now successfully configured for CATI@Home use!
Remark: Even though we use the interviewer table to store the StationKey values, there is no specific link between the interviewer number and the StationKey itself. The StationKey is only a control mechanism to determine whether a certain remote NIPO CATI Client is allowed to connect. Once the connection is established, the interviewer could theoretically still log on with any interviewer number. It is, however, good practice to use StationKeys as described above, because each StationKey will then only be used by one interviewer, which makes management of the StationKeys much easier.
Good quality CAPI research relies on trustworthy interviews. You need to be sure fieldwork integrity isn’t compromised by interviewers taking shortcuts or submitting false responses. Nfield CAPI now builds on standard verification checks with an Audit Trail / Logging feature that highlights suspicious input patterns. By analyzing statistical details, it enables you to identify when interviewers are likely to be cheating the system, and to take appropriate action.
Traditional quality control components such as field validation, logic checking, ordering and randomization are, of course, all supported in Nfield. For face-to-face CAPI interviews, additional features such as GPS locations, photographs and audio recordings can also be activated to provide evidence of interviews taking place. (More information about these can be found at Nfield’s quality control options for rock-solid CAPI data.)
All this gets you off to a great start, but still leaves room for cheating. To further close off the possibility of falsified responses, you need to look more closely at response input patterns. Nfield CAPI’s Audit Trail function enables you to spot the tell-tale signs of dishonest interviewer behavior, such as:
The data for annotated items 1, 2 and 3 can be found in the Nfield downloads:
Nfield CAPI’s new Audit Trail feature records how long an interviewer stayed on a page and which button they clicked to move on. The results are provided in a CSV file. The example below shows how this might look for a single interview.
You can compare each result line against your reasonable expectations and set benchmarks to flag up interviews that fall outside these parameters.
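As a sketch of such benchmarking, the snippet below flags pages answered faster than a plausible minimum. The column names (InterviewId, Page, DurationSeconds) are assumptions for illustration; adjust them to match the columns in the actual Audit Trail export.

```python
import csv
import io

def flag_fast_pages(csv_text, min_seconds=2.0):
    """Return (interview id, page, duration) rows where the time
    spent on a page falls below a plausible minimum."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["InterviewId"], row["Page"], float(row["DurationSeconds"]))
            for row in reader
            if float(row["DurationSeconds"]) < min_seconds]

# Hypothetical export for one interview.
sample = """InterviewId,Page,DurationSeconds
12,Q1,8.4
12,Q2,0.9
12,Q3,1.2
"""
suspicious = flag_fast_pages(sample)
# Q2 and Q3 were answered in under two seconds: worth a closer look.
```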
The best method for reviewing this is to build a report (in Microsoft Power BI or another reporting system) to show suspicious interviews. Incorporating a drill-down function gives you an easy way to view more information.
The two illustrations below show how your reports might look.
Here, the results are sorted to show interview durations in descending order.
Armed with these insights, you can better fortify the quality of your CAPI surveys by eliminating falsified responses. Not only can you challenge interviewers whose input appears suspicious, and remove fake responses from your results, you can also use this feature to deter dishonest interviewer behavior from taking place at all.
And in case you are wondering, this Audit Trail feature will also be introduced for Nfield Online in the near future.
Nfield survey solutions can be customized to a very high degree. So much so, that it would be impossible to present the enormous array of varieties via a universal dashboard. Full customization of questioning and appearance can still be achieved through scripting and theming. Because we realize many of our customers don’t have (enough of) the necessary expertise to do this in-house, NIPO has teamed up with DataExpert to offer advanced personalization of Nfield survey questionnaires.
Here at DataExpert, our team will be delighted to support your use of Nfield through every aspect of its function, all the way from survey programming to data visualization. Our deep understanding of NIPO solutions, combined with our success-oriented mindset, ensures our work is always focused on bringing you the results you need.
We’re highly experienced in the optimization and automation of complete projects, and are also happy to coach you through making internal project processes more efficient. We can also provide advice on the relevant tools.
DataExpert provides a wide range of services, giving you the convenience and efficiency of getting all your Nfield customization done in one place.
DataExpert can support you in:
DataExpert can solve challenging data processing tasks, process market research datasets and make cross tables using all industry standard methods: weightings, computations, top2 and bottom2 boxes, nets, means, averages, etc.
In addition to traditional market research data processing and commonly used technologies, we are also specialists in languages such as R and Python. By combining these tools and techniques, we’re able to solve complex analytics tasks.
We’ve got experienced visualization experts who are dedicated to clear presentation. We’ve got data processors who delight in generating report-friendly data. And we’ve got talented graphic designers who are bursting with creativity. Put these all together, and you’ve got the dream team for communicating complex data in a clear and easy-to-understand way.
DataExpert can provide:
• Manual and automated PowerPoint reporting
• Online dashboards in various powerful technologies
• Standard and templated structures for quick reporting
• Migration of existing projects to different tools
• Mockups and final design elements for projects
Using various technologies and applications, we can perform data analysis and present actionable information which helps executives, managers and other corporate end users make informed business decisions. We’re fully equipped to convert raw data into meaningful insights with eye-catching reports and dashboards.
Your custom survey designs will always be created with user-friendliness, survey performance, design, reproducibility and effectiveness firmly in mind. We can build comprehensive systems with back-end and front-end development as well as full user interface design and implementation. Here at DataExpert, we’re always up for new challenges and finding innovative ways to bring your ideas to life.
If you’ve decided you want to change the technology you use for data collection, processing or visualization, we’ll be your experienced and reliable project migration partner, taking good care of your content, form and user experience along the way.
Some examples of DataExpert’s work:
Don’t hesitate to contact our team at sales@nipo.com for more information.
Have you ever concluded a survey to discover it’s generated many more completes than specified in your quota targets? This is called Quota Overshoot, and it happens when a lot of respondents fill in a survey at the same time. Quota Overshoot doesn’t reduce a survey’s quality, but it does result in higher-than-necessary expenditure on reward, sample and interview costs.
The good news is Nfield’s Max Overshoot feature enables you to control the number of excess completes, and so avoid unnecessary costs.
To understand how Max Overshoot works, you first need to understand why Quota Overshoot happens. The one-word technical answer to this is ‘concurrency’.
Concurrency is when two or more respondents are filling in a survey at the same time. For the most part, this is exactly what you need to get your survey completed in the shortest possible time. But if multiple respondents remain active when just one more complete is needed, you’ll end up with more completes than the quota, because all the interviews already underway at the moment of the last required completion will also continue to completion.
Nfield’s Max Overshoot feature gives you control over excess completes by limiting concurrency as your survey nears completion. You can set it to reflect your preferred balance between speed and excess respondent costs.
To explain how this balance shifts, let’s play the Plane Game. Or in these COVID-19 times, imagine playing it!
THE GOAL:
Get 20 paper planes made, each by a different person. Of these, 10 have to be made by males and the other 10 by females. (Want to know how to make a “world record paper plane”? This video shows you!)
THE RULES:
Each plane-maker has to sit at a separate desk while performing the task. When finished, they vacate the desk for a new plane-maker to take their place. This all happens inside a room that players can only enter when a free desk is available. Players who’ve completed their planes stay in the room. Players who decide to give up have to leave the room. When you’ve reached the target number of planes, no more people can come in.
Everyone in the room at the moment the last required plane is completed gets a voucher worth $8 as a reward, even if they’re still finishing their plane.
SCENARIO 1:
You invite a lot of people (let’s say 100) to participate and provide the same number of desks. This way, everyone is making planes at the same time, with nobody having to wait their turn.
This will produce a really fast result. But because you had 100 people in the room, and none of them had given up at the point the 10+10 target was reached, you have to pay them all. That’s a bill of $800.
SCENARIO 2:
You have two rooms, each with just 3 desks. One room is only for the males, the other only for the females. Both rooms operate at the same time. Once 10 male planes have been completed, the male game ends. The same applies for the female game.
This limits each category’s concurrency and, as a result, the number of participants in both rooms combined when the targets are reached will only be 24. An excess of two males and two females. The total reward payout will be just $192. But the game will have taken much longer to complete.
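The payout arithmetic of the two scenarios can be checked in a couple of lines (this is just the metaphor’s numbers, nothing Nfield-specific):

```python
REWARD = 8  # dollars per person still in the room at the end

# Scenario 1: 100 desks, so all 100 players are in the room
# when the 10+10 target is reached.
scenario_1 = 100 * REWARD

# Scenario 2: 3 desks per room. When a room's target of 10 is
# reached, the 10 finishers are in the room and 2 players are
# still at the other desks, so 12 people per room get paid.
per_room = 10 + 2
scenario_2 = 2 * per_room * REWARD

print(scenario_1, scenario_2)  # 800 192
```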
Of course this Plane Game is a metaphor for respondents completing surveys, with the two scenarios illustrating extreme ends of the cost vs speed spectrum. As a market researcher, you’ll probably be looking for a happy medium. Nfield achieves this via a formula which changes the number of “desks” as each quota target comes closer to completion. All you have to do is set your desired Max Overshoot number per quota target.
If we illustrate this according to the Plane Game, per quota it looks like:
One new participant is allowed to start after each new complete, but only if the number of successful completes is still below the quota target. (Nfield doesn’t allow new interviews to be started once a quota target for successful completes has reached its maximum.)
As completes start to accumulate, the number of desks gets reduced. This limits the number of active participants and, in turn, the number of excess participants at the time each quota target is reached.
Quota target = 10
Max Overshoot = 2
Number of successful completes | Number of desks available | Maximum number of active participants
---|---|---
0 | 12 | 12
1 | 11 | 11
2 | 10 | 10
… | … | …
8 | 4 | 4
9 | 3 | 3
10 | 2 | 2
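The table above follows a simple rule: desks available = quota target + Max Overshoot − successful completes, never below zero. Here is a minimal sketch of that rule as inferred from the table; Nfield’s actual scheduling logic may differ.

```python
def desks_available(target, max_overshoot, completes):
    """Number of 'desks' (maximum concurrent respondents) for a
    quota cell: it shrinks by one with each successful complete,
    capping the eventual overshoot at max_overshoot."""
    return max(target + max_overshoot - completes, 0)

# Reproduce the table for target = 10, Max Overshoot = 2.
table = [(c, desks_available(10, 2, c)) for c in (0, 1, 2, 8, 9, 10)]
# → [(0, 12), (1, 11), (2, 10), (8, 4), (9, 3), (10, 2)]
```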
You can set Max Overshoot to zero to keep costs to an absolute minimum. However, it will take longer to achieve the final complete as, by the end stage, only one respondent can be active at a time, which amplifies the effects of any dropouts.
You can make a judgement call on how seriously this might impact things by looking at your survey’s dropout data in Nfield.
Two important points to note when using Max Overshoot:
Max Overshoot can be a very beneficial feature for controlling costs, providing flexibility to balance cost against speed. However, it makes quota evaluation more complex and can put a heavy load on Nfield processors. We are therefore rolling it out gradually and currently only enable it upon request. To enable Max Overshoot, please contact your account manager. At some point, we will enable this feature for all domains.
If you have any questions about Nfield’s Max Overshoot feature, don’t hesitate to Contact Us.
Quota management is an important part of fieldwork management. It ensures you have a representative demographic distribution and good control of costs (sample fees, respondents’ rewards, Nfield complete fees, etc.). It starts with a simple concept: you set a limit on how many completes (5 males + 5 females = 10 total) should be done to meet your quota requirement. In this article we introduce the quota basics and variations, and how to manage your quota well.
We will use visual illustrations to show the quota concepts available in Nfield:
Let’s start by creating an example for the basics using Minimum Quota.
At the end of fieldwork, when you are missing the one final interview (the 10th respondent) needed for the quota target, there’s a chance you may not get it straightaway (a female in this case, see the image below). It would be a waste to keep interviewing respondents and screening them out (3 males in this example) before the last female respondent is found.
When your quota requirements get more complicated (e.g. a female aged 20-25, living in a small city, who purchased a specific wine in the last year), you would need to screen out even more respondents. To gain more flexibility in the quota design, you can make use of minimum targets.
In this example, we fixed 80% of the required gender distribution and left 20% of the sample free in the choice of gender. Anyone can take the final two spots (20%) of the target. The principle behind this is to fulfil the minimum gender target (so one female spot, in grey, is reserved) and to limit the total by setting a maximum. Once the minimum targets are fulfilled, there is more flexibility to accept the last respondents instead of screening them out.
Benefits of this use of minimum targets are:
Quota requirements can be complicated. You may have different quota cells interlinking with each other, and some quota cells are more important than others. Together with minimum targets, you can keep a good grip on your quota control.
We revise the minimum quota example with smaller minimum targets.
Nfield ensures a minimum of 3 males and 3 females, and leaves 4 spots to be filled by anyone. In extreme cases, the result can be 7 males or 7 females.
Having 70% males may seem too much. To add a control on this, maximum targets can be added on the quota cells.
To understand this, we now add a cap (maximum target) on how many circles can sit in one gender row. The 3 males and 3 females are minimum targets; they must be filled. The total maximum target is 10 (ten circles in total), meaning four free circles/spots are available. However, at most three of them can sit in one row (the male row or the female row). You can then have 3 to 6 males or 3 to 6 females, totaling 10.
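To make the interplay of minimum, maximum and total targets concrete, here is a hypothetical acceptance check in Python. It only illustrates the concept described above; it is not how Nfield’s quota engine is actually implemented.

```python
def may_accept(counts, gender, minimums, maximums, total_max):
    """Decide whether a new complete of this gender may still be
    accepted: the cell maximum and total maximum must not be
    exceeded, and enough room must remain to fill the unmet
    minimums of the other cells."""
    if sum(counts.values()) >= total_max:
        return False
    if counts[gender] >= maximums[gender]:
        return False
    room_left = total_max - sum(counts.values()) - 1
    unmet_elsewhere = sum(max(minimums[g] - counts[g], 0)
                          for g in counts if g != gender)
    return room_left >= unmet_elsewhere

minimums = {"male": 3, "female": 3}
maximums = {"male": 6, "female": 6}
counts = {"male": 6, "female": 3}

seventh_male = may_accept(counts, "male", minimums, maximums, 10)   # False
tenth_female = may_accept(counts, "female", minimums, maximums, 10) # True
```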
Benefits of a mix between minimum and maximum targets are:
This allows the survey to use routing based on the least-filled quota, or to offer selections based on a list ordered by least-filled quota. It helps lagging quotas catch up by giving them an advantage: a higher position in the answer list, or routing to their sections first. The following scenarios may sound familiar.
The how-to is quite simple, using the command *GETLFQLIST (which literally means Get Least-Filled Quota List). It returns the quota list in ascending order of relative filling percentage, based on minimum target values. In the following example, the relative filling percentage is calculated like this: 3 completes on the electric appliance kettle out of a target of 5 gives a relative filling percentage of 60%. The least-filled quota list then shows the order: air conditioner (the least filled), refrigerator, kettle, and finally television. You can then route your survey to show the section on the air conditioner.
The most needed quota item gets answered first, resulting in earlier completion of the quota and saving costs by using less sample.
Least-filled quota can apply to multiple-answer quotas too. You may ask “Which electric appliances did you use yesterday?” or “Which brands have you purchased in the last months?” Respondents can give multiple answers. The questionnaire can then route to show one or more sections for the least-filled of his/her answers (appliances/brands). Or it can show a follow-up question listing his/her answers in ascending order of relative quota filling, with the most needed quota at the top, which aims to steer the respondent’s selection to that item.
In Nfield, you can toggle on the “Multi” button in the Quota page.
Multi quotas should always be at the root (thus the lowest level in the nested quota frame). A multi quota can be standalone, or nested under another quota cell, but it cannot have other quota cells under it. Because one respondent gives multiple answers, the counts would not sum up nicely. In the following example, the minimum target is 10 males and 10 females with a total of 20. These numbers are controlled by the gender quota variable, not by the minimum targets set for each appliance. The minimum targets for each appliance are only used to calculate the relative filling, which is 100% × (number of respondents in that answer ÷ minimum target). Another important note is that multi quotas do not contribute to the quota check itself.
Then the *GETLFQLIST command can be used to return the result.
When the quota status is like the following (same as the example in least-filled quota), the relative quota filling in ascending order is air conditioner (20%, the least filled), refrigerator (57%), kettle (60%), and finally television (67%).
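The ordering in this example can be reproduced in a few lines. Note that *GETLFQLIST’s internals belong to Nfield; this sketch only mirrors the calculation described above, and the counts for refrigerator and television are back-calculated from the example percentages, so they are assumptions.

```python
def least_filled_order(completes, minimum_targets):
    """Sort quota items by relative filling percentage
    (completes / minimum target), ascending: least filled first."""
    return sorted(minimum_targets,
                  key=lambda item: completes[item] / minimum_targets[item])

completes = {"air conditioner": 1, "refrigerator": 4,
             "kettle": 3, "television": 2}
targets = {"air conditioner": 5, "refrigerator": 7,
           "kettle": 5, "television": 3}

order = least_filled_order(completes, targets)
# air conditioner (20%) first, then refrigerator (57%),
# kettle (60%), and television (67%) last.
```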
See this in action:
With the spread of coronavirus (COVID-19) disrupting daily life all over the world, we’ve noticed the changes in human activity being reflected in Nfield surveys. As regions have gone into lockdown and people have been discouraged, or even ordered, to avoid contact with others, CAPI interviewing has become all-but impossible in some places. Where this has been the case, there has been a significant increase in Online surveys to compensate. To illustrate, we’re sharing usage patterns for Nfield CAPI and Online in our China, South Korea, Spain and Vietnam deployments, so you can see how survey execution has changed along the coronavirus timeline.
While we are, naturally, as concerned about the situation as everybody else, we are pleased to see that our customers have been switching between Nfield CAPI and Online without any problems. This is because we developed these two survey channels with the same scripting language and result format. Switching can therefore be done in just a few minutes, with minimal support needed from our helpdesk.
Nfield CAPI vs Online in China
At the time of writing, China remains the country most heavily impacted by coronavirus (COVID-19). This is reflected in a uniquely dramatic shift in survey channel usage. In normal times, CAPI very much dominates China’s survey activity. But with public spaces mostly deserted, and people being reluctant to interact with researchers, face-to-face interviews have almost completely ceased. Meanwhile, Online surveys have increased significantly to fill some, although not all, of the gap.
The correlation between Nfield usage in China and events on the coronavirus timeline clearly confirms how these are linked. A decrease in survey activity before long holidays such as Chinese New Year, which began on 25 January 2020, is common. Our graph shows an expected reduction in CAPI fieldwork leading up to this. Survey activity remained extremely low while the Chinese New Year holiday was extended to 2 February, due to the disease. As people gradually started returning to work in Beijing/Tianjin/Hubei/Sichuan, albeit from home, survey activity resumed on a very small scale. After the first ten days this increased to some extent, but almost exclusively via Online.
Nfield CAPI vs Online in South Korea
As of 5 February, there were fewer than 20 confirmed cases of coronavirus in South Korea, although the gradual increase in neighboring China was starting to cause alarm in other countries. By 7 February we were seeing a drastic decrease in CAPI face-to-face interviewing, while use of Nfield Online grew to twice its normal amount. As widespread infection took hold in South Korea, Online survey usage tailed off again to normal levels. Meanwhile, CAPI diminished greatly, but not completely.
Nfield CAPI vs Online in Spain
Spain’s Nfield usage pattern is very similar to that seen in South Korea, although the early February switch from CAPI to Online happened sooner and more drastically than in South Korea. In Spain, a 3-day Online spike suddenly dropped off again on 13 February, after which there was a reduction in both CAPI and Online. CAPI continued to play a diminished role in Spain’s survey landscape until the last two days of the month.
By 5 March (a week after CAPI all but disappeared from use), the Spanish government advised companies to send workers home to reduce contact. On 6 March, Spain ranked 7th in the world for the number of confirmed cases. We expect to see the impact of these measures in March volume reports.
Nfield CAPI vs Online in Vietnam
Thanks to prompt and decisive governmental action, Vietnam did a very good job of containing the spread of coronavirus and preventing it from getting out of control. Like China, Vietnam had a relatively long new year holiday. However, the Vietnam government declared coronavirus to be an epidemic at a very early stage, on 1 February, when the number of confirmed cases stood at 6. As a result, Vietnam only had 16 reported cases, with the last one declared on 13 February. Usage patterns for both Nfield CAPI and Nfield Online very quickly returned to normal when new cases stopped being reported.
A WHO official named Park told Al Jazeera¹: “The country has activated its response system at the early stage of the outbreak, by intensifying surveillance, enhancing laboratory testing, ensuring infection prevention and control and case management in healthcare facilities, clear risk communication message, and multi-sectoral collaboration.”
Hoping for a speedy recovery
At the time of writing, nobody knows how things will develop with coronavirus. As with the rest of the world, we are very much hoping the disease will be contained, cured and eradicated quickly. In the Netherlands, which is our home base, the first case was confirmed on 27 February. This was relatively late compared to other European countries. In 9 days, the number had risen to 128 cases. Everyone has to remain on high alert. We hope our customers worldwide and teams in the Netherlands, Spain and India are able to stay healthy and strong.
We proudly present our Nfield Top 15 Customers! We would like to take this chance to give them a round of applause and to recognize their project success with Nfield. Conducting projects in Nfield means they have also made security and data compliance a top priority, just as we do.
Fully compliant practices and ISO 27001:2013 certification for our data collection solution Nfield mean you can rest assured when it comes to data security. A strong security policy ensures that your data are safeguarded. Nfield also includes features that help address GDPR controls, enabling you to take care of consent management and other important privacy requirements.
These top 15 customers were selected based on their usage in 2019. And the winners are (in alphabetical order):
NIPO is delighted to announce that its Nfield Online and CAPI software solutions have earned the 7th position in Capterra’s newly released Top 20 Most Popular Survey Software report.
Capterra evaluates software based on product data, validated user reviews and independent research and testing. It also analyses online search activity to generate a list of market leaders who offer the most popular solutions. The resulting assessments therefore represent a solid all-round appraisal.
Nfield’s inclusion in the 2018-19 Top 20 is testimony to years of hard work developing solutions which truly satisfy user needs. This has been achieved through working closely with the Market Research industry to establish these needs, complemented with dedication to formulating the most robust, user-friendly and cost-effective solutions.
See the full Capterra Top 20 Survey Software report
NIPO develops Online, CAPI and CATI survey solutions specifically to serve the needs of professional market researchers. For over 20 years, we have been working closely alongside market research organizations to continually deepen and freshen our insights into their challenges, in order to create truly purposeful solutions.
This unique bond means we have robust practical knowledge of how to efficiently organize survey distribution at any scale, which enables us to serve our customers with exceptionally well-thought-through products, particularly when it comes to tackling large-scale national and global projects. Our unrivalled combination of deep industry understanding and high-level IT expertise means our customers benefit from survey software which is genuinely designed with their success in mind.
With more than 200,000 users around the world, NIPO supports many thousands of market research projects every year.
There are probably many occasions when you want to follow through after a customer interaction by sending them a survey. But even though their details are all neatly logged in your CRM, getting the invitation out takes a lot of effort.
First you’ll need to export the data from your CRM and import it into your data collection system. With a few more clicks, you’ll finally manage to send the email. It’s a manual process that is too laborious to be done in real time, so it ends up happening in daily or even weekly batches. Which means you lose the benefit of acting immediately.
Wouldn’t it be better if all this was automated? So your survey invitation email gets sent at the very moment your CRM is updated with the latest status. No waiting. No time-consuming clicks.
The good news is you can make this happen. The even better news is you won’t have to write a single line of code! Zero programming knowledge required.
It all comes down to integrating your CRM with Nfield. When this is done, updating a customer case record triggers your CRM to send the relevant information to Nfield, from where the instruction to send an appropriate survey invitation is automatically issued.
So how do you achieve this integration? You’ll need a good tool, a bit of curiosity and a clear mind. Maybe some coffee too?
The following guide shows you how to integrate Microsoft Dynamics CRM with Nfield. If you have a different CRM, you will need a different method. That shouldn’t be a problem – just contact us to ask how!
To keep things really simple, we’re using Microsoft Flow. This has a large set of components (known as Connectors) that you use to connect systems together, e.g. your CRM + Nfield. Each connector allows you to set triggers, which alert the relevant connected system when changes occur for which you want actions to be taken. The functionality is pretty powerful. And what’s more, it’s FREE!!
Let’s look at an example of a Dynamics CRM Connector which has 3 triggers and 5 actions.
Trigger – this monitors your CRM for specified events, such as when a record is created/deleted/updated. You just have to associate these with the way you use your CRM. For example, when a new customer is added, you know a new record gets created. So if you want the adding of a new customer to trigger an action (see below), you select “When a record is created” to be the CRM trigger. Or in the scenario of a customer case being resolved, you’d select “When a record is updated” to be the trigger. In this instance, the instruction will also need to be modified with an IF condition to limit the trigger to resolved cases only.
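The trigger-plus-IF-condition logic described above can be sketched in a few lines of Python. The field names `entity` and `status` here are hypothetical; your CRM’s actual event payload will differ, and in Microsoft Flow the IF condition performs this check for you:

```python
# Hypothetical CRM event payloads; in Microsoft Flow, the trigger fires on
# "When a record is updated" and an IF condition performs this check.
def should_send_invitation(event: dict) -> bool:
    # Only a case record that has just been resolved should trigger
    # the survey invitation.
    return event.get("entity") == "case" and event.get("status") == "Resolved"

print(should_send_invitation({"entity": "case", "status": "Resolved"}))  # True
print(should_send_invitation({"entity": "case", "status": "Open"}))      # False
```

Without such a condition, every update to any case record would fire off a survey invitation, which is rarely what you want.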
Action – this is a change to be made in the Dynamics CRM, such as create/delete/update/filter/retrieve one or more records.
In the following example, we’re using the scenario of a customer case being resolved.
The flow begins with setting a trigger to look for case entities which have been updated. This sets off an action to read specific details of any relevant entities. These details (email address, name, reason for contact, etc.) can be used further on in the Flow.
Now you need to make the connection with Nfield. You won’t find an Nfield Connector listed in Microsoft Flow, but Nfield has an API which can be reached by setting up an HTTP connector. HTTP will be offered as an action.
The first step is to log into Nfield by filling in a form, as shown on the screen below. The formats and values you need to enter are all described in Nfield API documentation.
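As a rough sketch of what the HTTP connector does at this step, the sign-in call can be modeled as a POST with a JSON body. The endpoint and field names below are assumptions based on the Nfield API documentation; verify them against the current docs before use:

```python
import json
import urllib.request

# Placeholder credentials; the exact endpoint and field names should be
# verified against the current Nfield API documentation.
payload = json.dumps({
    "Domain": "YourDomain",
    "Username": "YourUser",
    "Password": "YourPassword",
}).encode("utf-8")

request = urllib.request.Request(
    "https://api.nfieldmr.com/v1/SignIn",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending the request is left out here; in Microsoft Flow the HTTP
# connector performs the equivalent call for you.
print(request.method, request.full_url)
```

The token returned by a successful sign-in is then passed along in the headers of the subsequent API calls in your Flow.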
Once this is done, you can then continue to add more steps. In this scenario, we have added respondent data and created an email batch to send an email. As a bonus, we’re also triggering an SMS which will be sent using TextMagic.
Following up with survey respondents by thanking them for their time, maybe with a gift or special offer, is a good way to build favorable relationships. Similarly, alerting relevant people on the commissioning side about any individual rating extremes enables them to take beneficial action. In both cases, speed is essential for positive impact.
Nfield Online users therefore often ask us if there’s an easy way to do this. Is it possible to automatically generate the sending of different emails, so appropriate messages instantly get delivered? Without doing any coding? We’re pleased to say the answer is “yes”!
You just need to set everything up via a simple external application. Let’s show you how.
Before we start, try our demo to experience how it works from the email recipient point-of-view. This example asks you to complete a one-question survey and then enter your email address. After you submit the survey, you’ll receive a thank-you email.
How easy was that?! Of course, if your survey is based on a pre-existing sample, the respondent will not need to enter their email address, as it will already be known.
There are a number of different ways to automatically generate these emails. We’re going to show you how to do it via an online platform called Zapier.
For those of you who are interested in the technicalities, this method uses an external API call in ODIN script. It’s a process which passes certain variables to an external web hook (a web address that receives data), and then sends an email.
So your first job is to build a webhook. Let’s go!
Before building your webhook, you might like to understand what a webhook is and how it works.
A webhook is a URL which is called from your server, when the conditions you have set to execute an instruction are met. The webhook then instructs the next application in your workflow to do its stuff. The webhook thereby acts as a connector between different systems to automate workflow. For our purposes, when your email triggering conditions are met, the Nfield Online server will call the webhook, which then instructs your email client to send the relevant communication.
Here’s how to set up your webhook via Zapier:
1) Create a new Zap and select “Webhooks by Zapier” as your Trigger App.
2) Select “Catch Hook” as the function.
3) Save the URL provided. This will be your webhook to be called from Nfield.
4) Let the Zap know what information you are going to pass to it by setting the parameters. For example, these are likely to be Email, RespondentKey, Score and Name. Pay attention to the symbols which connect them. The first one has to be a question mark, then ampersands. Take a look at this example by opening it in your browser.
{URL copied}?email=xxx@nipo.com&respondentkey=12345&score=5&name=Doris
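A quick way to build (and sanity-check) such a URL is with Python’s `urllib.parse.urlencode`, which takes care of the question mark/ampersand structure for you. The webhook address below is a placeholder for the URL Zapier gave you in step 3:

```python
from urllib.parse import urlencode

# Placeholder; use the catch-hook URL Zapier generated for your Zap.
webhook_url = "https://hooks.zapier.com/hooks/catch/XXXX/YYYY"

params = {
    "email": "xxx@nipo.com",
    "respondentkey": "12345",
    "score": "5",
    "name": "Doris",
}

# urlencode joins the pairs with "&"; the single "?" separates them
# from the URL itself, matching the format shown above.
full_url = webhook_url + "?" + urlencode(params)
print(full_url)
```

Note that `urlencode` percent-encodes special characters (e.g. `@` becomes `%40`); the webhook decodes these transparently on arrival.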
5) Test your Zap. Click “See more samples”. It will grab test results, which you can expand to check if they’re delivering the results you expect. If all is working as expected, select it and continue.
That’s it! Your webhook is ready.
The next thing you need to do is connect the webhook to your email client. We’ll use Gmail for this example. You also do this in Zapier, using the Gmail Zap.
1) When setting up the email Zap, you can insert the parameters passed via the webhook into the various email fields (To, Subject and Body). Take care to compose and format your message nicely. Note that HTML is supported in the Body field.
This is how it will look in your Zap.
In Nfield Online, you’ll need to add an “API Endpoints” (APIS) entry. Don’t let this technical term scare you! We’ve made it all really easy.
1) In the default settings, go to the APIS tab. Add a new Endpoint. Give it a name and enter the webhook URL you were given by Zapier. Add a description and save.
2) In the ODIN script, add the following to trigger this API call. Please download the complete script with this link.
*GETDATA result pair,ask "SendMail0927:email=*? Email;respondentkey=*? RespondentKey;score=*? Score;name=*? Name"
Now you can get a live link and try it out. Good luck!
Contact us to find out more about automating work processes using Nfield. And do share your own examples that may benefit other Nfield users!
Different countries and industries often have their own specific regulations when it comes to data storage. To comply with these, market research companies need to give careful consideration to where their respondent data is stored.
For example, countries such as Russia and Singapore and industries such as finance and healthcare require personal and research data to remain within the country, sometimes even within local premises.
It’s therefore no surprise that data storage location is the primary concern for 53% of IT decision makers when it comes to cloud adoption, according to IDG’s 2016 Cloud Computing Survey.
To satisfy the need for compliant survey data storage, we have been working with Microsoft and other parties to develop a suitable solution for users of Nfield Online and CAPI.
Nfield Online and CAPI surveys are already deployed from four different Microsoft Azure cloud environments – Hong Kong SAR (serving Asia Pacific, except the Chinese mainland), Amsterdam (serving Europe and Africa), Virginia (serving the Americas) and Beijing (serving the Chinese mainland) – to facilitate speedy operation.
To enable data storage compliance alongside this, we have developed the ability to separate survey deployment from storage of respondent data. This means it is now possible, for example, to deploy a survey from the Hong Kong SAR Microsoft Data Center and store the respondent data in the Singapore Microsoft Data Center.
This locally compliant data storage is being achieved through utilizing the Azure cloud Infrastructure and, where this is challenging, setting up local facilities.
Whatever requirements Nfield Online and CAPI users have, we can quickly configure a suitable solution which strikes a balance between a whole raft of considerations, including security, investment, ease of maintenance and system monitoring, ISO 27001:2013 compliance, speed of delivery, customer preference and potential for growth.
Nfield local data storage applies to the following information:
Questionnaires and associated media files used during interviews remain stored within the survey engine in one of Nfield’s four deployments.
Contact us to find out more about local data storage compliance and ask for a quote. Check here to see existing Azure locations. If no Azure location is available where you require one, ask us about other local data storage solutions.
Understanding employee engagement is invaluable for improving a company’s productivity, quality and profitability. In recognition of this, many companies conduct annual employee satisfaction surveys, the results of which can be used for anything from obtaining employee engagement scores to delving deep into underlying details for guiding future strategy.
However, because these important surveys are often outside of the realm of the marketing department, companies don’t always think of turning to professional market research companies to conduct them. Which means a big opportunity is often being missed.
Employee satisfaction surveys carried out in-house, by a department without specialist market research capabilities, are likely to be very limited in terms of the level and quality of insights they produce. This is partly due to use of less complex, non-professional survey solutions and partly due to a lower level of knowledge and expertise when it comes to designing surveys and interpreting the results.
A dedicated external market research company using Nfield Online can deliver insights of far greater depth, quality and tangibility. This is achieved thanks to aspects such as:
If you’re already using Nfield Online, you already have all you need to offer superior quality employee satisfaction surveys alongside everything else you do.
If you haven’t yet discovered Nfield Online, here are a few of the powerful capabilities it offers, for every type of survey:
And these are just the start, because our family of survey software also includes CATI and CAPI, so you can offer the very best service through every survey channel.
Find out more about Nfield Online’s capabilities or contact us for further information.
Request a demo to see how NIPO can help you meet your requirements with our smart survey solutions.