As described by Perez et al., this is how a trial works:
1. A 7-consonant string is presented for 2.5 seconds.
2. It disappears for 2 seconds.
3. A new string appears, and the subject must judge same/different by hitting a key.
Stimuli are either the same (20 trials), have one letter substituted (10 trials), or have a pair of adjacent letters swapped (10 trials). So you might see, for example, a string like KXPNRMT, and two seconds later see KXPRNMT (an adjacent swap).
The logic of the test is thus quite simple, but there are a few nuggets regarding stimulus creation that I want to show here.
First, at the beginning of the experiment, I created a list containing the uppercase consonants using the following line. PEBL has a lot of functions for list manipulation, but not many for manipulating text strings, so I'm going to work with lists of characters until the end, then simply convert them into strings.
gLetters <- SubList(FileReadList("Consonants.txt"),1,21)
The file "Consonants.txt" is in PEBL's resource folder, and can be loaded directly without specifying where it is. It happens to contain both uppercase and lowercase consonants, so I pull the first 21. There are other similar text files like Uppercase.txt and Lowercase.txt.
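For readers who don't run PEBL, here is a rough Python sketch of the same step. It builds the 21-letter uppercase consonant list directly rather than reading a resource file; the name g_letters mirrors the PEBL variable but is otherwise my own.

```python
import string

# Build the 21 uppercase consonants, mirroring what
# SubList(FileReadList("Consonants.txt"), 1, 21) produces in PEBL.
VOWELS = set("AEIOU")
g_letters = [c for c in string.ascii_uppercase if c not in VOWELS]
print(len(g_letters))  # 21 uppercase consonants
```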
So there are three kinds of trials we need to make: (1) same, (2) order-different, and (3) item-different. I also want to record where the difference was, when there was one. For each condition, I want to create two letter strings: one called 'target' and one called 'compare'.
The 'same' condition is easy: I pick seven letters and glue them together:
target <- ListToString(SampleN(gLetters,7))
compare <- target
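The equivalent logic in Python (again just a sketch, with my own variable names) is one sample-without-replacement and a join:

```python
import random

# 'Same' condition: draw 7 distinct consonants and reuse the string verbatim.
g_letters = list("BCDFGHJKLMNPQRSTVWXYZ")
target = "".join(random.sample(g_letters, 7))  # like ListToString(SampleN(...))
compare = target
```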
For the 'item' condition, it is a bit trickier. I want to replace a random letter, but the replacement needs to be a letter we haven't used yet. So, to do this, I'll start by sampling 8 letters, and pulling out the last seven to use as the base:
letters <- SampleN(gLetters,8)
base <- SubList(letters,2,8)
Now, I want to replace one letter from base with the unused letter. Since PEBL uses lists and not vectors, it provides no default way to perform 'list surgery', as it is called in LISP. I'm going to use the Replace() function to do the job. Rather than picking out a letter to change from a random location, I'm going to select the first letter in the list as the to-be-changed letter (using First()), then rotate the list a random number of elements so that the letter might end up anywhere:
pos <- RandomDiscrete(7)-1 ##rotate between 0 and 6 items
target <- Rotate(base,pos)
I want to use the Replace() function to change target into the comparison. It works by giving it a list and a replacement key list. In the key list, you put pairs of characters. When the first of the pair is found in the initial list, the function replaces it with the second item. So I'll make a key list containing the first character of the original target, and the unused character:
addA <- First(letters)
addB <- Second(letters)
key <- [[addB,addA]]
Now, I just need to invoke the Replace() function, and change the list into a string:
compare <- ListToString(Replace(target,key))
target <- ListToString(target)
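Putting the whole item-change recipe together in a Python sketch (sample 8, hold one out, rotate, substitute; all names are mine, and Python's str.replace stands in for PEBL's list-based Replace):

```python
import random

# 'Item' condition: sample 8 consonants; the extra (unused) one replaces
# one letter of the 7-letter target at a random position.
g_letters = list("BCDFGHJKLMNPQRSTVWXYZ")
letters = random.sample(g_letters, 8)
unused, base = letters[0], letters[1:]   # base = the last 7 letters
pos = random.randrange(7)                # rotate 0..6, like RandomDiscrete(7)-1
rotated = base[pos:] + base[:pos]        # like Rotate(base, pos)
target = "".join(rotated)
# Replace the (now-rotated) first base letter with the unused one:
compare = target.replace(base[0], unused)
```

Because the 8 sampled letters are distinct, base[0] occurs exactly once in the target, so exactly one position changes.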
The order condition works similarly, except that I add two keys to the replacement list: one to replace A with B, and one to replace B with A:
target <- SampleN(gLetters,7)
pos <- RandomDiscrete(6)
swapA <- Nth(target,pos)
swapB <- Nth(target,pos+1)
key <- [[swapA,swapB],[swapB,swapA]]
compare <- ListToString(Replace(target,key))
target <- ListToString(target)
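In a Python sketch, the double-key replacement amounts to swapping one random adjacent pair (names again mine; a plain index swap replaces PEBL's two-key Replace trick):

```python
import random

# 'Order' condition: swap one random adjacent pair of the 7 letters.
g_letters = list("BCDFGHJKLMNPQRSTVWXYZ")
letters = random.sample(g_letters, 7)
pos = random.randrange(6)                # swap positions pos and pos+1
swapped = letters.copy()
swapped[pos], swapped[pos + 1] = swapped[pos + 1], swapped[pos]
target = "".join(letters)
compare = "".join(swapped)
```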
In each condition, pos tells us the location of the string where there is a difference, target is the study string, and compare is the test string. The basic experiment just requires setting up some basic labels and instructions, defining the conditions, and then cycling through them with a Trial function that figures out the target and comparison as above.
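The condition list itself is simple to sketch. Assuming the trial counts described earlier (20 same, 10 item, 10 order), building and shuffling the run order might look like this in Python:

```python
import random

# Build the 40-trial condition list implied above and randomize run order.
conditions = ["same"] * 20 + ["item"] * 10 + ["order"] * 10
random.shuffle(conditions)
```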
Here is a screencast of what it looks like:
The UTC test had 40 trials, which means only 10 each of the item and order conditions. That small number of trials probably won't give enough precision to look at whether there is any difference in accuracy between the error conditions. So let me boost the N up a little and run it on myself. A little back-of-the-envelope power analysis suggested that if I were lucky and there were big effects, I might be able to see them in 50 trials per condition. It took about 20 minutes to run 150 trials, which is pretty data-rich. For serial recall, you'd be lucky to get a third of that. Here is what happened:
First, I didn't make many errors. Overall:
Condition   Accuracy   RT
same        .94        2611 ms
item        1.00       1944 ms
order       .88        2348 ms
It is probably fair to say that order swaps caused me more errors than item swaps. It is hard to read much into the accuracy, though, because there were only 9 errors total out of 150 trials. More importantly, my accuracy was very high, and I might consider ways to make the test more difficult. Also, response times are tough to judge because of the differences in accuracy across conditions (maybe I tended to make slow errors, driving up the mean RT).
One thing that is interesting is to look at the RT based on the position of the difference. This can be done for both item and order trials. Dividing this way, we get pretty small Ns, so I'm not going to bother to try to create error bars. But it looks like an interaction across the list items:
It looks pretty sensible (order points were plotted between serial positions because they involved a swap). You only get an RT recency effect for item changes, not order swaps, which is interesting. It sort of suggests a serial comparison of the list from start to end, except big differences at the end can 'pop out', and a swap operator is not a big enough difference.
Look for the item-order test in Version 0.6 of the PEBL Test Battery.