2024-05-16 Meeting Notes
To discuss
- Parameter identifiability and trade-offs (more details linked here)
- The parameter-space geometry is complicated
- Best-fitting parameters differ from the values reported in the paper
- I started the optimization from 100 different starting values
- Possible overfitting?
- The argument is about best-fitting parameters, but the question is really about “reasonable” parameters (related to overfitting)
- How do we determine what range of parameter values is consistent with the data? (see the grid sketch after the models list)
- What is “too slow”? Comparison with estimates from Popov & Reder (2020) and Ma et al. (in press)
- (!) Just model it trial by trial, allowing the resource to recover however much it can in the time between trials (see the sketch at the end of this list)
- Capturing the primacy effect is to “blame”. Model primacy with multiple contributions?
- In Experiment 2, after a 6-second gap there is no difference in SP4-6 between known and random chunks, which the model can only account for with full recovery.
- Should we model the first chunk?
- If so, how?
- How much modeling is enough?
- I need to be convinced that most reasonable versions of the model require very slow rates
- Conclusions
- I finally understand the argument for freeing capacity, but it now seems one-sided: we do formal modeling to reject the resource account, and on that basis conclude that the other capacity account is more plausible without doing any modeling for it
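
On the trial-by-trial idea flagged above: a toy sketch, assuming the exponential recovery toward a maximum used in Popov & Reder (2020)-style resource models; the cost and rate values here are made up.

```python
import numpy as np

def recover(resource, gap, rate):
    # Exponential recovery toward the maximum (1.0) during the inter-trial gap
    return 1.0 - (1.0 - resource) * np.exp(-rate * gap)

def simulate(gaps, cost=0.3, rate=0.05):
    # Walk through trials: recover during the gap, then deplete at encoding
    resource, available = 1.0, []
    for gap in gaps:
        resource = recover(resource, gap, rate)
        available.append(resource)           # resource available at encoding
        resource = max(resource - cost, 0.0)
    return np.round(available, 2)

# With a slow rate, a 6 s gap is nowhere near enough for full recovery;
# with a fast rate it nearly is. The SP4-6 equivalence in Experiment 2
# is what forces the full-recovery (fast-rate) regime in this toy version.
gaps = np.full(10, 6.0)
print("slow rate:", simulate(gaps, rate=0.05))
print("fast rate:", simulate(gaps, rate=1.0))
```
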
Models I’ve tried
- Model 1a: same as in the paper (with many different starting values)
- Model 1b: same as 1a but also accounting for the first chunk
- Model 1c: putting priors on the parameters, particularly a prior on the rate parameter evaluated over a grid (see the sketch after this list)
- Model 2: including the encoding time in the recovery period
- Model 3: strength = sqrt(resource_cost)
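
Both the “what range of rates is consistent with the data” question and Model 1c come down to evaluating the fit on a grid over the rate parameter. A minimal sketch, with a placeholder objective standing in for the actual model fit; the parameter names and prior settings are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def neg_loglik(params, rate):
    # Placeholder objective standing in for the model's negative log-likelihood;
    # `rate` is held fixed while the remaining parameters are optimized
    capacity, noise = params
    return (capacity - 3.0) ** 2 + noise ** 2 + 5.0 * (np.log(rate) + 3.0) ** 2

def profile(rate, n_starts=20):
    # Multi-start refitting of the remaining parameters, to guard against
    # the complicated parameter-space geometry (cf. the 100 starting values)
    fits = [minimize(neg_loglik, rng.uniform(0.1, 10.0, size=2),
                     args=(rate,), method="Nelder-Mead").fun
            for _ in range(n_starts)]
    return min(fits)

rates = np.geomspace(0.001, 1.0, 25)          # grid over the recovery rate
prof = np.array([profile(r) for r in rates])  # profile of the best fit

# "Reasonable" range: rates whose best fit stays within a tolerance of the
# global minimum (a likelihood-ratio style cutoff; the exact value is a choice)
consistent = rates[prof <= prof.min() + 2.0]
print(f"rates consistent with the data: {consistent.min():.4f} to {consistent.max():.4f}")

# Model 1c: combine the same grid with a prior on the rate (log-normal here;
# center and spread are placeholders, could be anchored to published estimates)
logprior = -0.5 * ((np.log(rates) - np.log(0.05)) / 1.0) ** 2
logpost = -prof + logprior
weights = np.exp(logpost - logpost.max())
weights /= weights.sum()
print("posterior mean rate:", float((rates * weights).sum()))
```
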
Conclusions so far
- Despite the complicated geometry and parameter trade-offs, recovery rates do need to be slow in this family of models
Modeling left to do
- Experiment 3
- Apply the model from Ma et al. (in press), just replacing LF and HF with random and known chunks
- Analytical likelihood, so fitting should be fast; maybe even Bayesian
- Can be applied on a trial-by-trial basis
- (!) As noted above: model it trial by trial, allowing the resource to recover however much it can in the time between trials (sketch below)
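
A skeleton of what the trial-level fit could look like: if each trial contributes an analytical Bernoulli term, the likelihood needs no simulation and fitting stays fast. The resource-to-recall link (`expit`) and the parameter names are my assumptions, not the actual Ma et al. (in press) equations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(params, gaps, recalled):
    # Each trial contributes an analytical Bernoulli term:
    # recover during the gap, predict recall, deplete at encoding
    rate, cost, bias, slope = params
    resource, ll = 1.0, 0.0
    for gap, y in zip(gaps, recalled):
        resource = 1.0 - (1.0 - resource) * np.exp(-rate * gap)  # recovery
        p = expit(bias + slope * resource)   # assumed resource -> P(recall) link
        ll += y * np.log(p) + (1 - y) * np.log1p(-p)
        resource = max(resource - cost, 0.0)                     # depletion
    return -ll

# Toy data standing in for one participant's trial sequence
rng = np.random.default_rng(1)
gaps = rng.uniform(2.0, 10.0, size=200)   # actual inter-trial intervals
recalled = rng.integers(0, 2, size=200)   # binary recall outcomes

res = minimize(neg_loglik, x0=[0.1, 0.3, 0.0, 2.0],
               args=(gaps, recalled), method="Nelder-Mead")
print("estimated recovery rate:", res.x[0])
```
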
Next steps
- Bootstrap participants and trials, and fit the model to get a distribution of estimates (see the sketch at the end of these notes)
- If possible, Bayesian
- Fit the Ma et al. (in press) model to the trial-level data with actual inter-trial intervals
- What does the model predict? (the interaction, for the introduction)
- Framing: we are testing this model as a potential new comprehensive explanation. Do not embrace the alternative conclusion just because (if) this model fails to pass
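
A minimal sketch of the bootstrap step: resample participants with replacement (and trials within participants), refit, and take percentiles of the resulting estimates. `fit_model` is a stand-in for whichever fitting routine we settle on.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_model(data):
    # Stand-in for the actual fitting routine; returns one scalar estimate
    # (e.g., the recovery rate) for a (resampled) data set
    return float(np.mean([trials.mean() for trials in data.values()]))

def bootstrap(data, n_boot=1000):
    # Resample participants with replacement, then trials within each
    # participant, and refit on every resampled data set
    pids = list(data)
    estimates = []
    for _ in range(n_boot):
        sample = {
            i: rng.choice(data[pid], size=len(data[pid]))
            for i, pid in enumerate(rng.choice(pids, size=len(pids)))
        }
        estimates.append(fit_model(sample))
    return np.array(estimates)

# Toy data: participant id -> per-trial scores
data = {pid: rng.uniform(0.0, 1.0, size=40) for pid in range(30)}

est = bootstrap(data)
print("bootstrap 95% interval:", np.percentile(est, [2.5, 97.5]))
```
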