We have built an R package called aceR. This package reads in raw data generated by ACE Explorer and processes it to generate summary statistics for each participant. This page provides resources for using R and this package.
Our analysis scripts use a free program called R. We recommend RStudio, a free companion program, for interacting with R.
We provide a processing template for use with ACE Explorer data to help you get started.
Please read the notes included in the template to determine whether the defaults of the many cleaning and processing functions are appropriate for your research question, or whether modifications should be made.
Last updated 04/23/2021
For more instructions on using aceR and the template, please see this video walkthrough and accompanying slide deck.
- ACE trial-level column explanation – Description of the column names generated by the load_ace_bulk function for ACE modules
- ACE data column explanation – Description of the column names generated by the proc_by_module function for ACE modules
Last updated 04/21/2022
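As a minimal sketch of a typical workflow using the two functions above, something like the following could be used (the folder path is a placeholder, and the exact arguments may differ by aceR version, so consult the template for the authoritative usage):

```r
# aceR is commonly installed from GitHub rather than CRAN:
# install.packages("remotes")
# remotes::install_github("joaquinanguera/aceR")
library(aceR)

# Read all raw ACE Explorer exports in a folder into one
# trial-level data frame. Replace the path with your own.
dat <- load_ace_bulk("path/to/raw_data")

# Summarize the trial-level data into participant-level
# metrics, one set per ACE module.
summaries <- proc_by_module(dat)
```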
Which metrics should you use to analyze ACE data?
That’s up to you! We strove to include as much information as possible so that you can analyze the metric you are most interested in, including mean accuracy, mean response time, standard deviation of response time, d’, and more. Below, you can find our suggested metrics for each task, but different metrics may be appropriate depending on the research question or population.
- BRT: mean RT (dominant/non-dominant hand)
- Stroop/ Color Tricker: overall Rate Correct Score
- Flanker: overall Rate Correct Score
- Boxed: overall Rate Correct Score
- Task Switch/ Sun & Moon: overall Rate Correct Score
- Forward Span/ Gem Chaser: Max object span
- Backward Span/ Gem Chaser: Max object span
- Filter: K (per condition)
- TNT/ Triangle Trace: mean RT
- Compass: overall Rate Correct Score
- Mars UFO (Impulsive): mean RT
- Venus UFO (Sustained): mean RT
- Color Swatch: max delay time
- Face Switch: overall Rate Correct Score
*Rate Correct Score is calculated as:
- N correct trials / (mean RT × total number of trials)
This score can be interpreted as the number of correct responses per second of activity.
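As a simple illustration of this formula (the function name and arguments below are ours, not part of aceR), Rate Correct Score can be computed as:

```r
# Rate Correct Score: correct responses per second of activity.
# n_correct: number of correct trials; mean_rt: mean response time
# in seconds; n_trials: total number of trials.
rcs <- function(n_correct, mean_rt, n_trials) {
  n_correct / (mean_rt * n_trials)
}

# Example: 45 correct out of 50 trials with a mean RT of 0.6 s
# gives 45 / (0.6 * 50) = 1.5 correct responses per second.
rcs(45, 0.6, 50)  # 1.5
```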
*K is a measure of working memory capacity as described by Luck and Vogel (1997):
- K = S(H – F)
where K is the memory capacity, S is the size of the array, H is the observed hit rate, and F is the false alarm rate.
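A direct translation of this formula into R (the function and argument names here are ours, for illustration only):

```r
# Luck & Vogel (1997) working memory capacity.
# set_size: number of items in the array; hit_rate and fa_rate:
# observed hit and false-alarm rates (proportions between 0 and 1).
wm_capacity_k <- function(set_size, hit_rate, fa_rate) {
  set_size * (hit_rate - fa_rate)
}

# Example: with 4 items, a hit rate of .85, and a false-alarm
# rate of .10, K = 4 * (0.85 - 0.10) = 3.
wm_capacity_k(4, 0.85, 0.10)  # 3
```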
Cost vs Overall Task Performance
For many tasks, the RT cost between conditions is traditionally used as the metric of interest (e.g., in Stroop, the difference between the congruent and incongruent conditions). Condition cost is calculated for most tasks in the aceR processing code for analysis. However, there is some evidence that cost scores may have lower test-retest reliability than overall performance scores (see Enkavi et al., 2019). Particularly in tasks in which conditions are intermixed (vs. blocked), overall task performance (particularly Rate Correct Score) may be a better representation of an individual’s performance.
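For comparison with the overall-performance metrics above, a condition cost score is simply the difference between condition means. A minimal sketch (the function name and vectors below are illustrative, not aceR's own):

```r
# RT cost: mean RT in the harder condition minus the easier one,
# e.g., Stroop cost = mean incongruent RT - mean congruent RT.
rt_cost <- function(rt_incongruent, rt_congruent) {
  mean(rt_incongruent) - mean(rt_congruent)
}

# Example with per-trial RTs in seconds:
rt_cost(c(0.80, 0.90, 0.85), c(0.60, 0.70, 0.65))  # 0.2
```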
New to R? Swirl is a great way to learn the basics
- Click here to see the swirl website
- Click here to download a quick R script to help you get set up with swirl
Got the basics, but want to know more?
We recommend R for Data Science by Garrett Grolemund and Hadley Wickham. R for Data Science is a free online textbook that will guide you through using the tidyverse to analyze data, and it includes walkthroughs with sample data so you can follow along. A basic understanding of R is needed to follow along with this book.