Do you mean the ones from your white paper? The same ones that humans possess? How do you know this?
>> The key bit isn't the data augmentation but the TTT.
I haven't had the chance to read the papers carefully. Have they done ablation studies? For instance, is the following a guess or is it an empirical result?
>> For instance, if you drop the TTT component you will see that these large models trained on millions of synthetic ARC-AGI tasks drop to <10% accuracy.