Hacker News

Education is a lot more complex than most people understand it to be. Even within the field, few people actually understand how the brain and learning interact. Everything from diet to time of instruction to setting, materials, methods, and testing matters. A classic example: how do you build a useful test? If you want to know how much learning took place, you need to test students both before and after instruction. But you also need to test people twice without any instruction, to see whether the test itself gives information away. You need a mix of problem difficulties to find out whether someone understands the basics and, at the same time, whether they grasped the finer details. Etc. You then need to look at the test not just in terms of overall score, but also in terms of how well students did on each type of problem.
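The pre/post design with a no-instruction control group can be sketched in a few lines of Python. All numbers here are hypothetical, purely for illustration: the control group's gain estimates the test-retest effect, which is then subtracted from the instructed group's gain, per problem type.

```python
# Hypothetical fraction-correct scores per problem type, for an
# instructed group and a no-instruction control group.
pre  = {"basics": 0.55, "details": 0.30}
post = {"basics": 0.85, "details": 0.50}
control_pre  = {"basics": 0.55, "details": 0.30}
control_post = {"basics": 0.60, "details": 0.32}

def gains(before, after):
    """Raw score gain per problem type."""
    return {k: round(after[k] - before[k], 2) for k in before}

instructed = gains(pre, post)
retest     = gains(control_pre, control_post)  # test-retest effect alone

# Gain attributable to instruction, broken out per problem type
attributable = {k: round(instructed[k] - retest[k], 2) for k in instructed}
print(attributable)  # {'basics': 0.25, 'details': 0.18}
```

Reporting per-type gains rather than a single overall score is what lets you see, for instance, that instruction helped with the basics more than with the minor details.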

PS: Education might seem like a flaky field, and much of the discussion is devoid of good science. At the same time, there is a lot of great research that has been done, with real and important implications.



On the contrary, I think math ed types want to keep the actual mathematicians out precisely because the mathematicians recognize how complex the field is. Every time I've been told "stay out, you don't know our field", it was because I pointed out complexity or plausible alternative explanations.

For example, a puzzle: why do SAT scores underpredict female performance in Calc 1, and overpredict grades in higher-level courses? I suggested girls are more conscientious than boys, and that a study being proposed did not control for such factors. I even suggested a test of this hypothesis: compare grades in classes that are conscientiousness-weighted (25% homework, 10% attendance, test questions are practice problems with the numbers changed) to ability-weighted coursework (50% midterm, 50% final, test questions all new).
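The two grading schemes can be made concrete with a toy calculation. The weights are the ones proposed above; the student's scores, and the assumption that the remaining 65% of the conscientiousness-weighted scheme goes to tests, are mine:

```python
# Hypothetical student record (all scores 0-100).
scores = {"homework": 90, "attendance": 100, "tests": 70,
          "midterm": 70, "final": 72}

# Conscientiousness-weighted: 25% homework, 10% attendance,
# remaining 65% tests (split assumed for illustration).
conscientious = (0.25 * scores["homework"]
                 + 0.10 * scores["attendance"]
                 + 0.65 * scores["tests"])

# Ability-weighted: 50% midterm, 50% final, all questions new.
ability = 0.50 * scores["midterm"] + 0.50 * scores["final"]

print(round(conscientious, 1), round(ability, 1))  # 78.0 71.0
```

A systematic gap between the two grades across many students, correlated with a conscientiousness measure, would be evidence for the hypothesis.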

I'm not sure my idea was a good one. But rather than explain why either a) this had been tried, and didn't work, or b) why my test wouldn't work, I was simply told to leave the field to experts.

I've come up with plenty of dumb ideas in other areas, e.g., global existence for wave equations. The closest I've ever gotten to "stay out, you aren't an expert" was "Terry tried that and failed; ask him why before spending lots of time on it."

[edit: really confused why you are being downmodded.]


Honestly, SAT scores vs. performance is the type of (mostly bad) research I am talking about. The SAT is not designed to test math ability. Its focus is on how likely a student is to finish their freshman year of college, and it does that fairly well. You can do a lot of useless research in this area, and it tells you next to nothing. If you wanted to predict a high school student's ability in advanced college math classes, you could design a test that did that. But people have easy access to SAT data, so that's what they look at.

If you want to see real and useful research, look into how long the optimal study period is. There are significant and useful studies which suggest 2 hours of nonstop instruction is less useful than two one-hour periods with a moderate break in between. Yet how many college lectures follow this approach?


"There are significant and useful study’s which suggest 2 hours of nonstop instruction is less useful than two one hour periods with a moderate break in-between."

The professor of the last class I took was very conscientious about taking a break at the 1 hour mark of an 80 minute lecture.


What makes SAT vs. performance bad research? The SAT may be worse than some specialized test, but so what? Breast self-exams suck in comparison to mammograms, but that doesn't make studies of breast self-exams bad research.

The methodology is what makes research good or bad.


The fewer the unknowns, the better the data and the more accurate the experiment.

In terms of predicting how likely someone is to finish their freshman year of college, having a test where you can increase your score significantly with a moderate amount of preparation is not a bad thing. However, the fact that you can easily game the test makes it a less accurate indicator of a student's innate capability. The fact that you can retake the test creates yet another sort of bias. Etc.

More generally, it's easy to focus on defects in the test, which are irrelevant in a larger context and subject to change.


"PS: Education might seem like a flaky field, and much of the discussion is devoid of good science. At the same time, there is a lot of great research that has been done, with real and important implications."

My wife and I were discussing this recently. She is in an unusual program where she should graduate with both a master's in history and an education certificate, with hopes of teaching history at the high school level after that.

This is both secondhand and anecdotal, but from what she tells me, the education courses she takes are less rigorous and generally easier than the history courses. But the reason for this is that the education courses she is taking are vocational. They are trying to teach her how to get teaching done in a modern classroom, not the science behind it.

It is akin to the difference between training to be a mechanic and training to be a mechanical engineer. Both are very hard and important jobs worthy of respect, but they have very different focuses. The education courses are focused on the technique of teaching effectively; it looks like most of the research on the theory of effective education is done in departments like psychology, sociology, etc.



