5 Comments
Sep 11, 2023 · Liked by David Deming

"The test is clearly very abstract and does not correspond very well to real life."

I don't think we should undersell the test. When I was a manager, two of the most important things I did were matching workers to tasks and teaching other managers and workers how to match workers to (or choose) tasks.


Software engineering involves a lot of matching workers to tasks, too. A core aspect of Agile software development, and of other ways that teams organize their efforts, is that each engineer has a to-do list of upcoming tasks, and, working with others on the team, every day they update that list by marking items that are done and then extending it, reordering it, pruning it, or trading tasks with others. Trading tasks is like your matching task, and the reordering is related too: part of what causes me to choose a task is that I think it's especially appropriate for me (i.e., it's less likely to be a good task to trade later). So there is a lot of comparative advantage at work.
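To make the comparative-advantage point concrete, here's a toy sketch (my own invented numbers, not anything from the post or the paper) of assigning engineers to tasks so that total output is maximized, using scipy's linear_sum_assignment:

```python
# Toy illustration (made-up numbers, not the paper's): match engineers to tasks
# to maximize total output, which is the comparative-advantage logic in miniature.
import numpy as np
from scipy.optimize import linear_sum_assignment

# productivity[i, j] = output of engineer i on task j (invented values)
productivity = np.array([
    [9, 4, 2],   # engineer 0 is strongest on task 0
    [8, 7, 3],   # engineer 1 is also good at task 0, but her edge is task 1
    [5, 5, 6],   # engineer 2 is relatively best on task 2
])

engineers, tasks = linear_sum_assignment(productivity, maximize=True)
for i, j in zip(engineers, tasks):
    print(f"engineer {i} -> task {j} (output {productivity[i, j]})")
print("total output:", productivity[engineers, tasks].sum())
```

Note that engineer 1 is nearly as good as engineer 0 at task 0, but the best overall assignment sends her to task 1, where her relative edge is largest.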


Great post. I often find myself spending a decent amount of time deciding what not to do or focus on, because I know that if I make a bad decision or choose to work on something that turns out to be fruitless, there are major negative consequences.


I would be really curious to see if the results from the assignment game correlate more highly with SAT/ACT scores than they do with the other measures mentioned in the paper. While the SAT/ACT do reward students for academic skills (math, reading comprehension, English conventions), they also reward them for making good decisions (and avoiding common cognitive biases). Here are just a few of the biases that affect students on these tests:

- Halo effect (the first part of the answer is good, so the student doesn't read the rest of it)

- Availability heuristic (a student substitutes an easier question, like "Do I remember this word from the passage?", for the actual question, like "What is the function of the second paragraph?")

- Sunk cost (students are loath to move on from questions they can't answer)

- Anchoring (the first plausible answer distorts their sense of the right answer as they evaluate other choices)

High-performing students often develop (or have already developed) strategies for avoiding these errors:

- Taking notes

- Going back to find information

- Thinking of their own answer first

- Delaying intuition/answer formation until they've gathered relevant information

- Writing down their steps in math

- Eliminating bad choices

- Skipping questions that have taken up too much time (based on a predetermined threshold)


My guess is that the AG happened to be a lower-variance IQ test than the one you gave to the participants in your study. That's something you should try to sort out in future research.

Try giving people the same tests a couple of months apart. If my hypothesis is correct, the variation in results on the IQ test will be higher, meaning that it is noisier.

Also, try giving people more comprehensive IQ tests with higher reliability, and see if the AG is still as powerful as in the original paper.
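To spell out why reliability matters here: if income tracks the same underlying ability that both tests measure, the noisier test will show a weaker estimated association (classic attenuation). A rough simulation of that mechanism, with all numbers invented:

```python
# Rough simulation (invented numbers): a noisier measure of the same underlying
# ability shows a smaller standardized association with income (attenuation).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)
income = ability + rng.normal(scale=2.0, size=n)        # income driven by true ability

ag_score = ability + rng.normal(scale=0.5, size=n)      # low measurement noise
iq_score = ability + rng.normal(scale=1.5, size=n)      # high measurement noise

def standardized_slope(x, y):
    """OLS slope of y on x after standardizing x, so the tests are comparable."""
    x = (x - x.mean()) / x.std()
    return np.polyfit(x, y, 1)[0]

print("AG-like test slope:", round(standardized_slope(ag_score, income), 2))  # ~0.89
print("IQ-like test slope:", round(standardized_slope(iq_score, income), 2))  # ~0.55
```

Both scores measure the same ability, but the noisier one recovers a visibly smaller slope.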

"We paid people for their performance on 4 cognitive tests, including the Assignment Game but also a traditional nonverbal IQ test, a numeracy test, ...

"People who score higher on the Assignment Game (AG) have significantly higher incomes. That by itself isn’t surprising. But it turns out that the positive relationship between the AG and income holds even after controlling for IQ, the other cognitive tests, education, demographics, and current occupation. When we put all tests on the same standardized scale, the association with income is more than twice as large for the AG than it is for IQ."
