DonorsChoose, a non-profit that matches teachers who need supplies with donors willing to fund them, has hired a data scientist. I don't know what the fuck a data scientist is--I think these people specialize in data-mining, not hypothesis testing--but that is probably a good idea. Non-profits should self-evaluate.
The problem is that DonorsChoose evidently wants to draw policy conclusions from its terrible data. For instance, they might want to know if teachers who get supplies add more "value" (read: points to test scores) than teachers who do not get supplies.
I have no idea how they did this, but my guess is that they compared teachers who went on DonorsChoose to request supplies to those who did not. Or they compared teachers who got funding to those who did not. In either case the selection problem jumps off the page. Teachers who self-select into requesting supplies are probably harder-working and better than teachers who don't. Similarly, donors probably select projects that tend to have more merit than those that don't. And, to make matters worse, better teachers are probably better able to articulate what they need and why it's valuable and, as a result, get funded more often.
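To see why the naive comparison misleads, here is a toy simulation (all numbers made up, not DonorsChoose data): funding has zero true effect on scores, yet the funded group still scores higher, because unobserved teacher ability drives both who gets funded and how students do.

```python
import math
import random

random.seed(0)

N = 10_000
funded_scores, unfunded_scores = [], []
for _ in range(N):
    ability = random.gauss(0, 1)  # unobserved teacher quality
    # Better teachers are more likely to request and win funding
    p_funded = 1 / (1 + math.exp(-2 * ability))
    funded = random.random() < p_funded
    # Funding has ZERO true effect in this world
    score = 50 + 10 * ability + 0.0 * funded + random.gauss(0, 5)
    (funded_scores if funded else unfunded_scores).append(score)

gap = (sum(funded_scores) / len(funded_scores)
       - sum(unfunded_scores) / len(unfunded_scores))
print(f"naive funded-minus-unfunded gap: {gap:.1f} points")
```

The gap comes out strongly positive even though funding does nothing here, which is exactly the selection problem: the comparison recovers the difference in who gets funded, not the effect of funding.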
So what will any of these comparisons show? That either DonorsChoose funds better teachers or that its funding helps, and the data cannot tell us which. But that isn't what we read in the news.