Tuesday, February 14, 2006

 

List Overload Pt. 2

When it comes to lists, I think there's more of a love/hate dichotomy among music fanatics than you're implying. I fully agree with your take on the "Maxim-ization" of much of the print media -- magazines have conditioned their readers to process information in the form of bite-sized lists (essentially nothing other than a 100+ page sequence of 30-100 word blurbs, perfect for reading in 479 sittings) and I'm not particularly thrilled about that. But a lot of music fans get burned out on lists fairly quickly. 99% of them get burned out quicker than I do, so my attitude shouldn't be taken as anything close to the norm.

Sports fans (and sportswriters!) can never agree on what an MVP is. Is it the player with the most impressive stats? The most noteworthy player on the winningest team? The player whose team would be lost without him or her (and how would you quantify that, if it can be quantified at all)? Similarly, music fans can never agree on what, for example, a "Best Albums of 2005" list should represent. For some people, composing a top ten of the year is nothing but a brief snapshot in time. It's the list of the music they liked best during the week they wrote it, with the full understanding that in the week before (or the week after), the contents of that list would (and/or *should*) surely change. I try to make my lists more permanent than that. I try to evaluate, as honestly as possible, how I felt about everything I heard, integrated over the entire year. To me, if I looked at my top ten of 2005 one year from now and realized that I hated almost everything on it, I'd feel that I had failed myself in some way. But a lot of other fans wouldn't be bothered by that at all (with respect to their own lists).

To complicate matters, music fans are fond of pooling their lists into bigass polls, in the hope of ... what, exactly? Coming up with the most objective Best Of list possible? If everybody uses different criteria in evaluating the material on their individual ballots (as discussed above), how can you attain "objectivity" by averaging all those lists into an amalgamated whole? It's like the old fable about the length of the Emperor of China's nose: you ask a large number of people how long they believe it to be and take the average of their answers. But since nobody has ever seen the emperor, nobody knows what they're actually estimating -- so once you average all those responses, have you learned anything at all about his nose?
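If you want to see why the averaging buys you nothing, here's a quick toy simulation (a sketch in Python -- the "true" nose length and the range of guesses are invented purely for illustration, not real data). The average of thousands of uninformed guesses comes out looking stable and precise, but it only describes the distribution of the guesses, not the nose:

```python
import random

# Toy simulation of the emperor's-nose fable. All numbers are made up
# for illustration. Nobody has seen the nose, so each respondent just
# answers from whatever private hunch they happen to hold.
TRUE_NOSE_LENGTH_CM = 5.2  # unknown to every respondent

def uninformed_guess():
    # Each person's "evaluation" is drawn from their own arbitrary
    # notion of a plausible nose, say 2 to 12 cm.
    return random.uniform(2.0, 12.0)

guesses = [uninformed_guess() for _ in range(10_000)]
average = sum(guesses) / len(guesses)

print(f"Averaged estimate: {average:.2f} cm")  # ~7.0 cm, very "precise"
print(f"Actual length:     {TRUE_NOSE_LENGTH_CM} cm")
# The average is stable and looks authoritative, but it only tells you
# about the distribution of the guesses -- not about the nose. The same
# goes for averaging ballots built from incompatible criteria.
```

And the more people you poll, the more stable that number gets, which is precisely the trap: statistical precision masquerading as objectivity.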

To put it another way -- suppose five people collaborate on an essay or newspaper article, going through several drafts in which each contributor gets a chance to edit what everybody else has written. The final, submitted copy won't reflect any one person's writing style or opinions. It'll be a never-before-seen hybrid of the individual contributors' ideas, and chances are, if you asked each of them in turn, they'd all say they aren't happy with the final product and would prefer to make changes expressing those ideas in a style closer to their own. Or they'd prefer to emphasize certain sections of the article more than others. And so on. In this sense, reading the five drafts written by the five different people can be more illuminating than the final, hybridized product (albeit far more time-consuming). The same can be true of the data gleaned from individual lists versus a generalized commentary on the full, averaged results of a large music poll.
