Hi all, I'm hoping to get some feedback from you about the feasibility of a study design that I'm about to start running with my group. We're starting a data collection project where we need a group of workers to take on a series of 3-5 minute HITs spanning about five weeks. It's important for this project that we recruit people who will continue to take new HITs each week for the full period. We're hoping that the people we recruit will be able to spend at least 4 hours each week on our HITs, with a pay rate working out to $15/hr. We're planning to send out emails each time we release a new batch of HITs, increase the possible bonus for participation each subsequent week, and obviously accept all work done in good faith. But even with these measures, we're worried that participation may drop off after the first week or two. So, my two main questions are: (1) Does five weeks sound like too long a time for a select group to keep working on our HITs? (2) Is there anything else we could do to increase the rate of people coming back week after week?
I think it's going to come down to how many HITs you are looking to release a week, and how much each HIT is going to pay. If you make each one worthwhile pay-wise, people will do as many as they can. If they are really tedious, use a bad platform, or require something extra like voice/webcam, people are going to be turned off by them.
I personally like the idea of knowing that I have some steady work for 5 weeks. But as was mentioned above, it depends on the HIT. If people like them and the pay is good, they will keep doing them. But if there are problems with the HIT, or they just aren't worth it, people will stop doing them.
If it pays well enough, I don't mind 5 weeks. As far as keeping people coming back, emails are crucial; it also helps to release the HITs during the day, keeping in mind that not all workers are in your time zone. As for bonuses, a bonus for each week is great, but reserve some of the bonus to be paid only if the entire 5 weeks is completed. So if it is five weeks and your budget allows for $10 in bonuses total (just keeping the math simple here), do a dollar bonus each week, then a separate $5 bonus if the entire study (or a certain %) is completed. Apologies for any typos I missed; I dropped my laptop and now the keyboard is being wacky.
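To make that split concrete, here's a rough sketch in Python using just the example numbers above; the weekly amount, completion bonus, and threshold are placeholders for whatever your budget actually allows:

```python
# Rough sketch of the bonus split above: a small weekly bonus plus a larger
# completion bonus held back until the full study (or most of it) is done.
# Numbers are just the $10 example from this thread, not a recommendation.

WEEKLY_BONUS = 1.00         # paid for each week a worker completes
COMPLETION_BONUS = 5.00     # paid only if the worker finishes enough of the study
TOTAL_WEEKS = 5
COMPLETION_THRESHOLD = 1.0  # 1.0 = all 5 weeks; could be 0.8 for "a certain %"

def total_bonus(weeks_completed: int) -> float:
    """Total bonus owed to a worker who completed `weeks_completed` weeks."""
    bonus = WEEKLY_BONUS * weeks_completed
    if weeks_completed / TOTAL_WEEKS >= COMPLETION_THRESHOLD:
        bonus += COMPLETION_BONUS
    return bonus

# A worker who sticks around all 5 weeks gets $5 + $5 = $10;
# one who drops out after week 2 gets only $2.
print(total_bonus(5))  # 10.0
print(total_bonus(2))  # 2.0
```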
Requester name/ID? Not at all, as long as you make the expectations clear. Even Cloud Research doesn't say much beyond the obvious: https://go.cloudresearch.com/knowledge/conducting-longitudinal-studies Other than that, unless the tasks are otherwise time-sensitive, use a longer timer on the HITs so workers aren't pressured to complete them as soon as they are posted.
- No, turkers can and will focus for months on end if the hourly is good.
- Maintain a consistent hourly; make sure your HITs are easy to catch for qual'd people; keep your HIT durations as high as possible.
One of the biggest turn-offs that I have run into, and I believe other workers will agree, is what feels like a bait-and-switch first task in a longitudinal series of HITs. When you get a 10 minute task that pays, say, $2.50, and you complete it in 2 minutes, you feel really good. Then the requester realizes that they "overpaid" on the hourly, and the next 10 minute task pays $0.50 because they decided to align the pay with real completion time to get back down to their $15 an hour rate, and people get really turned off. Even if the "hourly" is what was promised, workers are going to look at it like "why the hell am I going to do the same amount of work for 1/5 the pay?" However, you have to be careful that you don't swing too far in the other direction and put the first HIT up at what seems to be too low a wage because your test group is completing it faster than expected. That will only get you desperate or new workers who don't know how, or can't, get good paying tasks. For the first HIT of a series, when people don't have previous work by a requester to go off of, they only look at the estimated time on task and how much the task is going to pay. Hourly is irrelevant to most people because it's a guess by the requester at best.
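Just to put numbers on that scenario (these are only the example figures from this comment, not anything the OP has proposed):

```python
# Effective hourly rate for the "bait and switch" example above: a task
# advertised as 10 minutes but actually finished in 2.
def effective_hourly(pay_dollars, minutes_taken):
    return pay_dollars / (minutes_taken / 60)

print(effective_hourly(2.50, 2))   # 75.0 -> first HIT, feels great
print(effective_hourly(0.50, 2))   # 15.0 -> repriced HIT: same promised hourly, 1/5 the pay per task
print(effective_hourly(2.50, 10))  # 15.0 -> what the requester originally budgeted for
```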
Thank you so much everyone! The advice has been really helpful, and I appreciate everyone's input. I'll definitely be taking all these points into consideration.
I followed most of the steps mentioned here to minimize attrition, and more than 60% of the participants still did not complete the task (they returned the HIT). Now I am left with only two participants who completed and submitted the task. My case might not apply to yours, though, because my study requires a specific background.
Was yours possibly a case of people starting the HIT and realizing they didn't have the required background?
They were qualified and allowed to access the second part of the study based on their reported data. Besides, they knew the nature of the study and agreed to continue.
I'm running a pre-test for this series of HITs now, if anyone is interested in completing it for entry into the 5wk series (expected to start in 2 weeks):
Requester: Alicia Parrish
Requester ID: A2Y2BMK767GIAU
Title: Statements about short texts (~6min)
Description: Read a line from an unfamiliar text and write five sentences that relate to it (~6min)
Remuneration: $1.50
HITType Id: 3CW8JHPWN0MAY5GWY127Z21V8HI0RW
Qualifications: location in the US, HIT approval rate >= 98%, number of HITs approved > 1000
I'm releasing these in small batches. If they're gone by the time you check but you're very interested in participating, just DM me with your ID and I can release a HIT for just you.
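For any other requesters following along, here's a minimal sketch of how qualification requirements like these are typically attached when creating a HIT through the MTurk API with boto3. This is not our actual setup; the batch size, timer, lifetime, and question file below are placeholder assumptions.

```python
import boto3

# Minimal sketch (not this HIT's actual code): attaching the qualifications
# listed above when creating a HIT via the MTurk API.
mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # Location: United States (MTurk system qualification for locale)
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {   # HIT approval rate >= 98%
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [98],
    },
    {   # Number of HITs approved > 1000
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThan",
        "IntegerValues": [1000],
    },
]

response = mturk.create_hit(
    Title="Statements about short texts (~6min)",
    Description="Read a line from an unfamiliar text and write five sentences that relate to it (~6min)",
    Reward="1.50",
    MaxAssignments=9,                  # placeholder batch size
    AssignmentDurationInSeconds=3600,  # a generous timer, per the advice upthread
    LifetimeInSeconds=7 * 24 * 3600,   # placeholder lifetime
    Question=open("task.xml").read(),  # placeholder ExternalQuestion/HTMLQuestion XML
    QualificationRequirements=qualification_requirements,
)
print(response["HIT"]["HITId"])
```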
Oh, I remember this. I think my feedback was something to the effect of: "any content words repeated from the context" isn't as clear as it could be, and could be expanded upon.
Yes! This was helpful for us to know. When we start the full run, we'll include some extra explanation and examples. Really appreciate getting feedback like this.
Fellow requester here! The current CEOs of Cloud Research (Leib Litman and Jonathan Robinson) recently published a book about conducting research online using MTurk. See here (link to the book on the Amazon website; or you can find it via the Cloud Research website to avoid clicking the link): https://www.amazon.com/dp/B086QTWBNC/?tag=thtv02-20 There's a ~16-page chapter (pages 198-216) that provides guidelines for conducting longitudinal surveys using MTurk. Might be useful for you, if you can get your hands on it.
I do not think that 5 weeks would be extremely hard to get from a group of people, as long as you have filtered for the right ones. Some people like consistent work if they are consistently on the site, myself included. Perhaps take a survey of individuals' time availability first, while asking them in the same survey about their willingness to participate in the described research for the stated amount of time. This would make your pool smaller but more reliable.