In a comparative study of national literacy published by Central Connecticut State University, the United States (USA! USA! USA!) ain’t doing so good.
http://www.ccsu.edu/wmln/rank.html
The United States comes out 7th overall, which doesn’t sound too bad until you actually read the report.
The U.S. is:
- 9th in money spent on education
- 12th in reading newspapers
- 12th in test scores (after “normalization” with other systems)
- 23rd in households with computers
- 30th in libraries
The amount of money thrown at education doesn’t matter; what matters is outcomes. As the study’s own Methodology section admits: “There are virtually no meaningful correlations between the input measures and the output measures [for education].” So why were input measures given such weight in the study? Similarly, test scores don’t matter: what matters is what students actually know, which is often not much. Likewise, the number of computers in a household doesn’t matter if all the family members do with them is tweet, watch porn, stream movies, and download music.

The ranking of American libraries, bad enough as it is, is actually propped up by the number of university facilities; at the community level, libraries are poorly, grudgingly, and disgracefully funded. Newspaper rankings are likewise inflated by the sheer number of local papers (not their quality) that exist solely for advertising revenue, while in smaller countries papers with national circulation are stronger and of better quality. When, for example, was the last time you saw a feuilleton section in your local newspaper, or any real international news in it? No matter how many USA Todays, New York Posts, and National Enquirers exist, Americans still can’t find Brazil on a map.
Meaningful outcomes? These were given short shrift in this “study.”
And a final question:
How do weighted rankings of [ 9, 12, 12, 23, and 30 ] amount to a composite ranking of 7?
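A back-of-the-envelope check makes the point. Assuming the composite were a weighted average of these five sub-ranks (the study may well combine underlying scores or other variables instead; this is only a sanity check on the published ranks), no choice of weights can get to 7:

```python
# Sanity check: can a weighted average of the five published U.S.
# sub-rankings ever produce a composite ranking of 7?
# Sub-ranks: education spending, newspapers, test scores, computers, libraries.
ranks = [9, 12, 12, 23, 30]

def weighted_rank(ranks, weights):
    """Weighted average of ranks; weights must be non-negative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(r * w for r, w in zip(ranks, weights))

# Equal weights give a composite of about 17.2:
print(weighted_rank(ranks, [0.2] * 5))

# Even putting ALL the weight on the best category only reaches 9,
# because a weighted average can never beat the best sub-rank:
print(weighted_rank(ranks, [1, 0, 0, 0, 0]))
```

Since every sub-rank is 9 or worse, any non-negative weighting of the ranks themselves is 9 or worse, so the 7th-place composite has to come from somewhere other than these five numbers.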