Tuesday, July 8, 2014

Overview of some of the new tests in the PEBL Test Battery 0.14

I have incorporated a bunch of new tests into the latest version of the test battery.  Here is an overview of some of the new additions and major changes to previous tasks.  By my count, we have added more than 20 new tasks since the last revision, bringing the total to around 100.  A semi-complete list is available here.


Note that some of the links below to the PEBL wiki are incomplete.  If you'd like to help write this documentation, I provide the password to bypass the PEBL nag screen to anyone who will do so.





I've reorganized the scales\ directory a bit, and it is now a dumping ground for various personality scales and questionnaires.  I've now included the big-five personality questionnaire we got from Goldberg's public-domain tests at ipip.ori.org. I've had several students use this questionnaire before, and it appears fairly reliable. You can also choose to use any subset of the five scales via the parameter-setting feature, so if you are only interested in introversion, you can use just those questions. Also, we include the handedness inventory, which scores people on a scale from +1 (fully right-handed) to -1 (fully left-handed).  This is based on somewhat dated research by Oldfield (the Edinburgh inventory) from the early 1970s (it does not include questions about which hand you use a mouse with, etc.), but the research that went into that scale is pretty substantial.  Finally, we include a version of the Berlin numeracy test, a very short (3-4 question) math test that is highly reliable and has been shown to be valid across many populations.
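For the curious, that +1/-1 handedness score is just a laterality quotient in the usual Edinburgh form; here is a rough sketch in Python (the item weighting PEBL actually uses may differ in detail):

    def laterality_quotient(right, left):
        """Edinburgh-style laterality quotient: +1 = completely right-handed,
        -1 = completely left-handed.  right/left are the summed 'right hand' and
        'left hand' endorsements across the inventory items."""
        total = right + left
        if total == 0:
            return 0.0
        return (right - left) / total

    print(laterality_quotient(right=9, left=1))  # 0.8 -- strongly right-handed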

I've blogged previously about the BST, and you can read more about it here.  This is a simple test for children that is Stroop-like but non-verbal.  They need to respond to the shape while ignoring the color, but the color is consistently mapped onto left/right responses.






I've also blogged about this task before here.  The nice thing about this task is that we compute many of the randomness statistics for you.  In this task, a red circle flashes to pace the rate at which random digits should be produced by pressing keys on the keyboard's number row.
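To give a flavor of the statistics involved, here is a Python sketch of one classic index, percentage redundancy; the task itself reports a broader set of measures, so treat this as illustration only:

    import math
    from collections import Counter

    def redundancy(digits, alphabet_size=10):
        """Percentage redundancy of a produced digit sequence: 0% means every
        digit was used equally often; 100% means only one digit was used."""
        counts = Counter(digits)
        n = len(digits)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        # clip tiny negative values that can arise from floating-point error
        return max(0.0, 100.0 * (1.0 - h / math.log2(alphabet_size)))

    print(round(redundancy([1, 2, 3, 4, 5, 6, 7, 8, 9, 0]), 3))  # 0.0   -- all digits used equally
    print(round(redundancy([7, 7, 7, 7, 7]), 3))                 # 100.0 -- one digit only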








The traveling salesman problem is a classic computer science problem--find the shortest route that visits every point in a set.  People like Zyg Pizlo have popularized it as a cognitive task.  The remarkable thing about how humans solve the task is that they produce very good, near-optimal solutions in roughly linear time; computer algorithms tend to produce better solutions, but their solution times scale polynomially (or worse) with the number of targets.  I've tested this in the past, and have added my implementation to the battery.
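For comparison, here is a minimal Python sketch of the classic nearest-neighbor heuristic; it is fast but typically produces noticeably longer tours than careful human solvers do (this is just for intuition, not the scoring code used in the battery):

    import math, random

    def tour_length(points, tour):
        """Total length of the closed tour, given as a list of point indices."""
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def nearest_neighbor_tour(points):
        """Greedy heuristic: always visit the closest unvisited point next."""
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            here = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    points = [(random.random(), random.random()) for _ in range(10)]
    tour = nearest_neighbor_tour(points)
    print(tour, round(tour_length(points, tour), 3))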

Navon famously wrote about seeing the forest before the trees, and used tasks with small letters configured in a large letter pattern.  This task started as a demonstration of an algorithm that could create a Navon figure--take an image and 'print' it with characters.  Prior to the 0.14 release, I improved it to be a basic decision task.  It differs somewhat from Navon's original, and from other global-local tasks, but future revisions will probably include versions closer to the most widely used variants. Currently, you need to make a response to either the global or the local letter.
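The generation idea is easy to sketch: take a coarse bitmap of the global letter and 'print' it with copies of the local letter.  A toy Python version (much cruder than the image-based generator in the task, and not the actual PEBL code):

    # Hand-coded 5x7 bitmap of a global 'H', rendered with the local letter 'S'.
    GLOBAL_H = [
        "X...X",
        "X...X",
        "X...X",
        "XXXXX",
        "X...X",
        "X...X",
        "X...X",
    ]

    def navon(bitmap, local_letter):
        return "\n".join("".join(local_letter if cell == "X" else " " for cell in row)
                         for row in bitmap)

    print(navon(GLOBAL_H, "S"))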

A couple of different undergraduate students worked on projects in which they developed and tested this task.  The basic paradigm is somewhat like Posner's cueing task, but with a 3x3 grid.  The stimulus appears in one of 8 locations, and is preceded by a cue that is either in the same location, the same row, the same column, or neither.  We tested three different response modalities--touchscreen, mouse, and keypad.  Results are described here.
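The cue conditions are easy to state formally.  Here is a small Python sketch that classifies a cue-target pair on a 3x3 grid; the detail that targets avoid the center cell is my assumption for illustration, not necessarily how the task lays things out:

    import random

    CELLS = [(row, col) for row in range(3) for col in range(3)]
    TARGET_CELLS = [cell for cell in CELLS if cell != (1, 1)]  # assume 8 outer target locations

    def cue_relation(cue, target):
        """Classify the cue relative to the target: same location, row, column, or neither."""
        if cue == target:
            return "same location"
        if cue[0] == target[0]:
            return "same row"
        if cue[1] == target[1]:
            return "same column"
        return "neither"

    target = random.choice(TARGET_CELLS)
    cue = random.choice(CELLS)
    print(cue, target, cue_relation(cue, target))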


This simple task asks people to learn and recall pairings between words.  We include different types of word pairs, including related words, unrelated words, and name-word pairings.  







A group of students of mine wanted to use basic math skills as a stressor, and so we developed this test.  Subjects are given both simple and complicated math problems (you can edit a file to determine which problems to use), and you can test them for different durations.  By default, some of the problems can be very difficult (two-digit division problems), but it is a nice starting point for whatever math skills you want to test.
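Because the problems live in an editable file, it is easy to roll your own.  Here is a hypothetical Python generator showing the kind of easy/hard split described above; the difficulty levels and number ranges are just illustrative:

    import random

    def make_problem(hard=False):
        """Return (problem_text, answer) for a simple or a harder arithmetic item."""
        if hard:
            # two-digit division with a whole-number answer
            divisor = random.randint(11, 99)
            answer = random.randint(2, 9)
            return (f"{divisor * answer} / {divisor} = ?", answer)
        a, b = random.randint(2, 12), random.randint(2, 12)
        return (f"{a} + {b} = ?", a + b)

    print(make_problem())
    print(make_problem(hard=True))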




I quickly implemented the Weather Prediction task, and a student of mine did a small study using this task over the last month.  The task asks you to learn how well a set of cues predicts rain versus sun.  You see an array of up to four cards, the presence of which probabilistically indicates rain versus sun. It seems very difficult at first, but it does not take too long to become relatively good at the task.
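The underlying structure is probabilistic: each card, when present, nudges the outcome toward rain or sun.  A toy Python sketch with made-up cue weights (the actual probabilities used in the task differ) shows the idea:

    import random

    CUE_WEIGHTS = [0.8, 0.6, 0.4, 0.2]  # hypothetical P(rain) contribution of each card

    def trial():
        """One trial: a random pattern of cards and a probabilistically determined outcome."""
        pattern = [random.random() < 0.5 for _ in range(4)]
        present = [w for w, on in zip(CUE_WEIGHTS, pattern) if on]
        p_rain = sum(present) / len(present) if present else 0.5
        outcome = "rain" if random.random() < p_rain else "sun"
        return pattern, round(p_rain, 2), outcome

    print(trial())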





Luck and Vogel's task asks you to detect a change in a visual display containing a varying number of items.  Although there are dozens of versions of the task using different types of stimuli, we have implemented the most popular version here (colored shapes).
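A common way to summarize change-detection data is Cowan's K, a capacity estimate computed from hit and false-alarm rates at each set size.  This is a standard summary in the literature, not necessarily the one the PEBL script reports:

    def capacity_k(set_size, hit_rate, false_alarm_rate):
        """Cowan's K estimate of visual working-memory capacity for change detection."""
        return set_size * (hit_rate - false_alarm_rate)

    print(capacity_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.15))  # 2.8 items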





We have long had a popular continuous performance task modeled after Conners' task.  This is a simple go/no-go task that requires sustained attention.  Other versions of the task have existed for a long time, and we have now added one of the most popular variants--the CPT-AX.  This requires limited memory and conditional decision making: instead of responding to every X, you respond only when the X follows an A.   This may be relevant to rule-set selection, and performance may be impaired by certain types of brain injury or mental illness.
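The target rule is simple to state in code.  A tiny Python illustration of which positions in a letter stream count as targets under the AX rule:

    def ax_targets(letters):
        """Indices of AX-CPT targets: an 'X' that immediately follows an 'A'."""
        return [i for i in range(1, len(letters))
                if letters[i] == "X" and letters[i - 1] == "A"]

    print(ax_targets(list("BXAXAKXAX")))  # [3, 8]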


David Hegarty has contributed three variations on complex working memory tasks--reading span, symmetry span, and operation span.  To do this, he actually developed his own control language that PEBL interprets, so it is pretty flexible (you can easily adapt the number of training trials, etc.).


Free recall is one of the most commonly used memory tasks.  In this sort of task, you see or hear a list of words, and you later must recall as many as you can.  This typically produces a strong recency effect (words at the end are recalled first), and sometimes a small primacy effect (words at the beginning are recalled a bit better).  By default, words are taken from the Toronto word pool, but you can specify your own list as well.
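If you want to see the recency and primacy effects in your own data, the standard picture is a serial-position curve.  A rough Python sketch (not something the task produces for you):

    def serial_position_curve(study_lists, recalls):
        """Proportion of lists on which the item at each study position was recalled."""
        n_pos = len(study_lists[0])
        hits = [0] * n_pos
        for studied, recalled in zip(study_lists, recalls):
            recalled = set(recalled)
            for pos, word in enumerate(studied):
                if word in recalled:
                    hits[pos] += 1
        return [h / len(study_lists) for h in hits]

    print(serial_position_curve(
        [["cat", "pen", "sky", "cup"], ["dog", "ink", "sea", "jar"]],
        [["cup", "cat"], ["jar", "sea", "dog"]],
    ))  # [1.0, 0.0, 0.5, 1.0]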



The remote associates task is a word game in which you see three words that are all related to another word, although they tend not to be related to one another.  In the task, you must figure out what that linking word is.  (The answer to the example pictured on the left is 'card'.)






It has long been known that when we generate information, we remember it better than if we simply read it.  I've implemented a simple study showing this effect.





Before Don Norman was a usability guru, he became famous in cognitive psych for studying memory. This was one of his most famous studies, which attempted to look at decay in short-term memory.  In the task, you see a sequence of digits presented at a fast rate.  The sequence abruptly stops, and the last digit is the 'probe'.  You must remember back to the last time you heard or saw that digit, and respond with the digit that followed it.  This task has probably fallen out of favor among working memory researchers, but now that there is a version available, maybe it will be studied once again.
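The scoring rule is easy to express.  A small Python sketch of what counts as the correct answer on a probe-digit trial:

    def correct_answer(sequence):
        """The probe is the final digit; the correct answer is the digit that
        followed the probe's most recent earlier occurrence."""
        probe = sequence[-1]
        for i in range(len(sequence) - 2, -1, -1):
            if sequence[i] == probe:
                return sequence[i + 1]
        return None  # the probe never appeared earlier in the sequence

    print(correct_answer([4, 7, 1, 9, 7, 3, 2, 7]))  # 3 -- the digit after the previous 7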


This is a classic 'false memory' paradigm--you can sometimes get people to remember words they did not see.  The setup is that you give people lists of words to remember that are all related to a single stem word (if the stem is 'sleep', you show them bed, pajamas, snore, dream, etc.), but you never show the stem word itself.  You can test up to 24 of the lists Roediger & McDermott used, in a free recall response paradigm.



Herman Ebbinghaus was one of the first researchers to systematically study memory.  Unfortunately, he studied nonsense words, leading to over a century of memory researchers who studied nonsense.  But the task is interesting and only rarely used nowadays.  I've previously discussed it here. You need to learn a specific list of nonwords (we also have the option of using short words), and you try to recall it until you get it correct. Later, you learn it again, and we see how much easier it is the second time around.   In this task, we use recognition-based list reordering responses rather than straight recall: on the screen on the left, you see the list and must click the items in the order you saw them. This task may have appeared in the previous battery, but I'm not completely certain.
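"How much easier the second time around" is traditionally expressed as a savings score.  The arithmetic, in a couple of lines of Python (one common way to summarize the data, not necessarily the task's built-in output):

    def savings(trials_first, trials_second):
        """Ebbinghaus-style savings: percent of the original learning effort saved at relearning."""
        return 100.0 * (trials_first - trials_second) / trials_first

    print(savings(6, 3))  # 50.0 -- half as many trials needed the second time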


This is another important short-term memory paradigm originating in the 1950s. In the task, you are presented with a list of words, and have to recall them in any order shortly thereafter.  The presentation time and delay are varied to determine how memory decays over time.  The paradigm was originally used to try to resolve the decay-versus-interference debate, which still rages today.  I use the task as a demonstration for a learning and memory class I teach, but it could be used in laboratory settings as well.





Major revisions to tasks

In addition, several tasks have important additions or revisions:
  • Improved Iowa gambling task
Peter Bull contributed a very nice modification to the Iowa gambling task. This adds feedback and animation to make the losses and gains much more salient.








  • PAR scoring for the BCST
The notion of perseveration in the Wisconsin card sort has undergone several revisions over the past half-century.  We have added code in the task to automate the scoring used by the PAR version of the test, which is quite complicated and error-prone if you do it by hand.
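As a very rough illustration of what 'perseveration' means here (this is a deliberately simplified rule for the sake of the example; the full PAR scoring that the task automates is far more involved):

    def is_perseverative(matched_dims, previous_rule, current_rule):
        """Simplified check: the response matches the old, pre-shift sorting rule but
        not the current one.  matched_dims is the set of dimensions on which the
        chosen card matched the stimulus, e.g. {"color"} or {"color", "number"}."""
        return previous_rule in matched_dims and current_rule not in matched_dims

    print(is_perseverative({"color"}, previous_rule="color", current_rule="shape"))  # True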

  • Dot judgment task revised
The dot judgment task described here underwent some modifications to make the layout faster and better, and to provide better consistency across display sizes.








Addendum

In fact, every task in the battery was revised in some way for the current release. Some aspects include:
  • Parameter setting.  Most tasks now have the ability to set control parameters outside the task, to help tweak the task to your needs.  I tried to expose only the most useful parameters, but others could in theory be added.  If there are aspects of a task you would like to have control over, let me know.
  • Data saving.  I have regularized data saving.  Now, data will get saved in a data\ subdirectory of the task folder.  Each participant will have their own subdirectory of data\, which will contain one or more files of data.  Also, we have added automated checks for reused subject codes.  If the subject code is already in use, the task will ask whether you want to append to the current data file or select a new subject code.  This will hopefully allow better multi-session training studies.
  • Data merging tool.  To help make data analysis easier, I've added a simple tool that will look across all subject data directories and let you merge data files into one pooled file; a rough sketch of the idea appears just below.
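For those who want to roll their own, the gist of the merge is just walking the per-subject directories and concatenating files.  A rough Python sketch, under the assumption of comma-separated files that share a header row (the built-in tool is more careful about mismatched columns):

    import glob, os

    def merge_data(task_dir, pooled_file="pooled.csv"):
        """Pool every data/<subject>/*.csv file under task_dir into one file,
        keeping a single header row."""
        paths = sorted(glob.glob(os.path.join(task_dir, "data", "*", "*.csv")))
        header_written = False
        with open(pooled_file, "w") as out:
            for path in paths:
                with open(path) as f:
                    lines = f.read().splitlines()
                if not lines:
                    continue
                if not header_written:
                    out.write(lines[0] + "\n")
                    header_written = True
                for line in lines[1:]:
                    out.write(line + "\n")
        return pooled_file

    # merge_data("navon")  # e.g., pool all subjects' data for a (hypothetical) task folder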
