---
title: "2024-05-16 Meeting Notes"
format: html
---

## To discuss

- Parameter identifiability and trade-offs (more details linked [here](../notebooks/par_identifiability.qmd))
    - The parameter space geometry is complicated
    - Best-fitting parameters differ from the paper reported values
    - I started from 100 different values
    - Possible overfitting?
- The argument is about best-fitting parameters, but the question is really about "reasonable parameters" (related to overfitting)
    - How to determine what range of parameter values is *consistent* with the data? (see the profile-deviance sketch after this list)
- What counts as "too slow"? Compare with estimates from Popov & Reder (2020) and Ma et al. (in press)
    - (!) just model it trial by trial, allowing the resource to recover however much it can in the time between trials
- Capturing the primacy effect is to "blame". Model with multiple contributions to primacy?
    - In Experiment 2, after a 6-second gap there is no difference in SP4-6 between known and random chunks, which the model can only account for with full recovery.
- Should we model the first chunk?
    - If so, how?
- How much modeling is enough?
    - I need to be convinced that most reasonable versions of the model require very slow rates
- Conclusions
    - I finally understand the argument for freeing capacity. But now the argument seems one-sided. We do formal modeling to reject the resource account, and based on that we conclude that the other capacity account is more plausible without doing any modeling for it?
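
One way to make "what range of recovery rates is consistent with the data" concrete is a profile deviance: fix the recovery-rate parameter on a grid, refit the remaining free parameters at each grid point, and keep every rate whose deviance stays within a chi-square cutoff of the minimum. A minimal sketch in Python; `deviance(rate, other_params, data)` is a placeholder for the project's own deviance function, and its signature is an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def profile_rate(deviance, data, rate_grid, other_params0, alpha=0.05):
    """Profile the model deviance over the recovery-rate parameter.

    For each candidate rate, the remaining parameters are refit; rates whose
    profile deviance lies within a chi-square(1) cutoff of the overall minimum
    are treated as "consistent with the data".
    """
    rate_grid = np.asarray(rate_grid)
    profile = []
    for rate in rate_grid:
        # Refit the other parameters with the rate held fixed
        res = minimize(lambda p: deviance(rate, p, data), other_params0,
                       method="Nelder-Mead")
        profile.append(res.fun)
    profile = np.asarray(profile)

    cutoff = profile.min() + chi2.ppf(1 - alpha, df=1)
    consistent_rates = rate_grid[profile <= cutoff]
    return profile, consistent_rates
```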

## Models I've tried

- Model 1a: same as in the paper (with many different starting values)
- Model 1b: same as 1a but also accounting for the first chunk
- Model 1c: putting prior on parameters. Particularly on the rate parameter over a grid
- Model 2: including the encoding time for the recovery
- Model 3: strength = sqrt(resource_cost)

Conclusions so far:

- Despite the complicated geometry and the parameter trade-offs, recovery rates do need to be slow within this group of models
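
For reference, the recovery and strength assumptions behind the variants above can be written as two small functions. This is an illustrative sketch under the notes' own assumptions (linear recovery at a fixed rate, capped resource), not the project's implementation; all names and parameters here are placeholders:

```python
import numpy as np

def recovered_resource(resource, rate, free_time, enc_time=0.0,
                       include_encoding=False, cap=1.0):
    """Linear resource recovery during free time (Models 1a-1c); Model 2
    additionally counts the encoding time as recovery time."""
    t = free_time + (enc_time if include_encoding else 0.0)
    return np.minimum(cap, resource + rate * t)

def strength(resource_spent, nonlinear=False):
    """Encoding strength from the resource spent on a chunk:
    proportional in Models 1-2, compressed (sqrt) in Model 3."""
    return np.sqrt(resource_spent) if nonlinear else resource_spent
```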

## Modeling left to do

- Experiment 3
- Apply the model from Ma et al. (in press), just replacing LF and HF with random and known chunks
    - Analytical likelihood, so it should be fast; maybe even Bayesian
    - Can be applied on a trial-by-trial basis
    - (!) just model it trial by trial, allowing the resource to recover however much it can in the actual time between trials (sketched after this list)
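
A minimal sketch of the trial-by-trial idea: each trial spends its encoding cost, and the resource then recovers linearly for however long the actual inter-trial interval lasts. The function and argument names are hypothetical; only the capped linear-recovery assumption comes from the notes above:

```python
import numpy as np

def resource_by_trial(encoding_costs, inter_trial_intervals, rate,
                      start=1.0, cap=1.0):
    """Resource available at the start of each trial, given per-trial
    encoding costs, the actual inter-trial intervals (in seconds), and a
    linear recovery rate (resource units per second), capped at `cap`."""
    resource = start
    available = []
    for cost, iti in zip(encoding_costs, inter_trial_intervals):
        available.append(resource)                  # resource entering the trial
        resource = max(0.0, resource - cost)        # spent on encoding
        resource = min(cap, resource + rate * iti)  # recovers during the ITI
    return np.asarray(available)
```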

## Next steps

- Bootstrap participants and trials, refit the model on each resample, and use the fits to get a distribution of parameter estimates (see the bootstrap sketch at the end of these notes)
    - If possible, Bayesian

- Fit the Ma et al. (in press) model to the trial-level data with the actual inter-trial intervals

- What does the model predict about the interaction, for the introduction?
- Framing: we are testing this model as a potential new comprehensive explanation. Do not embrace the alternative conclusion just because (if) this model fails to pass.
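
A minimal sketch of the bootstrap step: resample subjects with replacement, resample trials within each resampled subject, and refit on each resample. `fit_model(trials)` stands in for the project's own fitting routine; its name and return value (a parameter vector) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2024)

def bootstrap_fits(data_by_subject, fit_model, n_boot=1000):
    """Resample subjects (and trials within subjects) with replacement,
    refit the model on each resample, and return the stacked parameter
    estimates as an (n_boot, n_params) array."""
    subjects = list(data_by_subject)
    boot_params = []
    for _ in range(n_boot):
        # Resample subjects with replacement
        subj_idx = rng.integers(0, len(subjects), size=len(subjects))
        resample = []
        for i in subj_idx:
            trials = data_by_subject[subjects[i]]
            # Resample that subject's trials with replacement
            trial_idx = rng.integers(0, len(trials), size=len(trials))
            resample.extend(trials[j] for j in trial_idx)
        boot_params.append(fit_model(resample))
    return np.asarray(boot_params)
```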