Do you mean the semantic differential? Then there is a solution with the Array question:
Just separate the two aspects of the pair in one subquestion with |.
Of course you can activate the random order.
Pairwise comparisons are normally a different kind of beast.
steve_81 wrote: do you mean the semantic differential?
And by "random" the person seems to mean experimental designs.
LimeSurvey won't support experimental designs or generate questions based on items which have to be compared pairwise.
I would like to see MaxDiff (Best-Worst Scaling) implemented in LimeSurvey. But that won't happen.
When such time-consuming tasks are allowed to be part of a survey, Best-Worst Scaling or a Conjoint flavor is chosen.
Well, there are a few flavors of Conjoint. Pairwise comparisons are common in traditional Conjoint and adaptive Conjoint (ACA). Both are no longer used that often.
steve_81 wrote: Once I dealt with a conjoint analysis, comparing several product combinations, but there wasn't any randomization
Randomization and design plans are different beasts. Let's say controlled randomization.
When showing these pairwise comparison tasks, you want to make sure that:
1.) Every task is asked at different positions in the interview.
2.) No item of a task always appears on the left or right side.
With Conjoint it is often the case that you cannot ask every combination at every position to every respondent.
The software then allows you to create an individual set of tasks for every respondent. Since you don't know how many respondents you will get, the software creates e.g. 300 designs upfront, and the 301st respondent gets the first design again.
That way you can ensure that the position of an item was not responsible for the importance attributed to it.
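The "300 designs upfront, respondent 301 wraps around to design 1" idea can be sketched in a few lines. This is only an illustration of the mechanics, not a real design-of-experiments tool: here the task order and left/right orientation are merely randomized per design, whereas a proper DOE would balance the position and order counts exactly.

```python
import random

def make_designs(items, n_designs=300, seed=42):
    """Pre-generate a pool of pairwise-comparison designs.

    Each design contains every item pair once, with the task order
    shuffled and the left/right orientation flipped at random, so no
    task sits at a fixed position across respondents.
    Illustrative sketch only, not a balanced experimental design.
    """
    rng = random.Random(seed)
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    designs = []
    for _ in range(n_designs):
        # flip left/right orientation per pair
        tasks = [(b, a) if rng.random() < 0.5 else (a, b) for a, b in pairs]
        rng.shuffle(tasks)  # vary where each task appears in the interview
        designs.append(tasks)
    return designs

def design_for(respondent_index, designs):
    # 0-based: respondent index 300 (the 301st) wraps to the first design
    return designs[respondent_index % len(designs)]
```

With 4 items you get 6 tasks per design; the modulo assignment is all the "wrap-around" amounts to.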
This is quite sophisticated stuff, for which companies charge 10,000 USD per year or more. But LimeSurvey is not R.
A few question types to get a look like the one displayed in these demos would be a big step.
The most commonly used MaxDiff tool offers this as one question design. It was the one used when Best-Worst Scaling was introduced. Today I would leave the two columns on the right and save the time for the workaround.
But I haven't used LimeSurvey for MaxDiff, since I would miss the design of experiments. That would cost too much time and work in LimeSurvey.
I can't seem to find any better alternative for Best/Worst, since the Array exclusion filter doesn't work with single-answer options such as radio buttons or dropdowns.
Not sure what you want to achieve. I don't see any problem with having only TWO answers when doing a MaxDiff (AKA Best-Worst Scaling). A classical pairwise task means asking a few to many single-choice questions with two answer items.
lemonlimebitters wrote: but this MaxDiff question type allows for only TWO answers to be selected from entire grid (one for Best and one for Worst case).
The aim with MaxDiff and pairwise tasks is creating a ranking of items (without an enforced scale).
The main work is to create a good design of experiment (DOE).
MaxDiff allows 5-7 items per question. That lets you capture a bit more information than via pairwise questions.
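To show how MaxDiff answers turn into a ranking without a forced rating scale, here is a minimal count-based scoring sketch. The data shape (`shown_items, best, worst` per task) and the function name are my own assumptions; in practice hierarchical Bayes or logit models are the usual choice, counting scores are just the simplest readable version.

```python
from collections import Counter

def maxdiff_counts(responses):
    """Count-based MaxDiff scoring (illustrative sketch only).

    `responses` is a list of (shown_items, best, worst) tuples, one per
    completed task. An item's score is (#chosen best - #chosen worst)
    divided by the number of times it was shown, which yields a ranking
    of items between -1 and +1 without any enforced scale.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in responses:
        shown.update(items)  # every displayed item counts as shown
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}
```

Sorting the returned dict by value gives the item ranking the post is talking about.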