Nobody has done more to advance the cause of school library research than Keith Curry Lance.
Without a doubt, he’s the Johnny Appleseed of school library research. But Keith Curry Lance is far too modest to accept this, or any other, accolade (Dr. Data? The Prince of Predictors?). “I just happened to be in the right place at the right time,” says Lance. That place was the Colorado Department of Education, which Lance has been affiliated with since 1985, right after he received his doctorate in sociology from the University of North Texas. Back then, the library community had a fresh interest in statistics, the technology was in place to easily do large-scale research, and school librarians desperately needed to demonstrate their contributions to student achievement. “The most I can take credit for is seizing some opportunities when they presented themselves, but you couldn’t orchestrate the opportunities,” Lance says. “That part was truly lucky.” Lucky or not, Lance’s first Colorado study, published in 1993, was groundbreaking. It documented school library expenditures as a key predictor of academic achievement and identified other important predictors, like staffing levels and collection size. Suddenly, it seemed every state needed its own “Colorado study,” with 15 conducted to date, about half of which Lance has been directly involved in. (For more on this research, see School Libraries Work!, published by Scholastic and available at http://scholastic.com/librarians/printables/downloads/slw_2006.pdf.) Lance’s work has changed the school library world in ways big and small, political and personal. “A lot of school librarians come up to me after presentations and thank me for this research,” he says. “They’re grateful to have something in the way of ‘evidence’ they can use to demonstrate their impact.” Hearing that Lance was retiring from the Colorado State Library (but not, we were relieved to learn, from library research altogether), SLJ sat down with him to look back at his career and listen to his thoughts on the profession’s future.
I heard that an interview on National Public Radio led you to do the Colorado study. In 1987, Bill Bainbridge, the president of School Match, a vendor of school data, was being interviewed on National Public Radio, and he indicated that School Match had all of the data from federal and state governments about schools. The interviewer asked, “Do you have any kind of data about how well kids do in school?” He said, “Yes. We have data on National Merit Scholarship Tests.” And the interviewer asked, “What has the most to do with how well kids do on that test?” And without any prompting from anyone in the library world, Bainbridge said, “Oh, there’s no question about it. It’s how much you spend on the school library.” So, I—along with other people—called him. I said, “I understood this is a national study. Any chance you could just do this for our state?” Well, to make a long story short, it wasn’t really possible to do a state-level study with the amount of data they had. So I decided we needed to have a real study done about this. It took three attempts to get the first Colorado study funded.
Did you have an inkling of how important the study would become? I was completely clueless. It never occurred to me that anyone outside of Colorado would care about this study. I was flabbergasted by the reaction to it.
The school library community has stressed the importance of collaboration, but your research suggests that leadership is more important. What we found is that leadership doesn’t have a direct correlation with test scores, but it does have an indirect correlation. Schools where librarians spent more time in leadership activities—like meeting with their principals, going to faculty meetings, serving on committees—were likely to spend more time in collaboration. They’re more likely to plan with teachers, to co-teach, to tutor, to do in-services for the teachers themselves. That’s why I’ve said for a long time now that Information Power [ALA, 1998] got the three themes in the wrong order. It talks about collaboration, leadership, and technology, and in the order it actually happens, it’s leadership, collaboration, and technology. If you want to get to collaboration, you have to step into those leadership shoes first and establish yourself as a leader that somebody would want to collaborate with.
One of the things that’s impressed me about you is that you took your research on the road, to share with others. I’d love to take credit for that. I’d love to say that I did that with great consideration and forethought and planning, but frankly, I started to speak about these studies when people asked me to. I’m the first to admit that most of these reports read like doctoral dissertations, and that’s not the sort of thing most people get their heads around very willingly. I’m guessing that most of the states that invited me to speak felt that the research would be a lot more comprehensible to people if they could hear somebody talk about it.
Daniel Callison, of Indiana University, says library advocates often use weak or moderate correlations from studies like yours to show that media specialists make a dramatic difference. What’s your response to that? I don’t disagree with it at all. I think that any thinking person who looks at the business of public education understands how incredibly complicated the factors are that influence how well a kid does in school. It’s complicated dramatically by socioeconomic factors: how well educated the parents are; how much they support the child’s education; how well funded the school is. How do you really measure the quality of teaching? How do you measure the student’s own motivation to learn? There are an almost infinite number of factors that explain how well a kid does in school. This is my quarrel with a lot of education research, especially all the magic bullet programs that schools spend a lot of money on. No one thing could possibly exert an overwhelming influence relative to everything else on how a kid does in school. I wouldn’t believe any study that said that 75 percent of the variation in test scores could be explained by anything. So I’ve always been very reluctant to characterize the strength of our findings in terms of weak, moderate, or strong, because those terms are relative, and it doesn’t make sense to use them about something as complex as public education. When I look at the kind of experimental studies that the Department of Education promotes right now—the one that obviously comes to mind is Accelerated Reader’s, but there are lots of other brand-name interventions—my guess is that if a school took any intervention that wasn’t actually destructive and put massive resources behind it and focused the attention of the entire school on it, how could it not have a positive impact? What is so much better about having a strong school library is that every day it’s part and parcel of the school.
So if you can get that right and you can make it work well with everything else in the school, then you don’t need these brand-name interventions as much. I would add that I don’t for a second think that our findings have been misrepresented in an intentionally misleading way. It’s just the nature of the kinds of statistical analysis that we’ve done that, for people who aren’t well versed in statistics, it’s difficult to talk about without accidentally saying something that someone else takes as misleading.
Why are you concerned about using standardized tests to measure student achievement? Standards-based test scores alone are an extremely narrow definition of academic achievement. But any single test would be. I think the problem with standards-based testing is that such high stakes ride on one test. There’s a challenge for us to step up to the table, because my guess is the tests aren’t going away. The tests are now a very big business, and very big businesses tend to look out for themselves. Perhaps there’s an opportunity here for people in the school library media field to work more closely with people in the testing field to develop tests that are better indicators for the things that we’re interested in, what we feel responsible for teaching. Perhaps it’s one of those “if you can’t beat them, join them” situations. Of course, the irony is that I feel absolutely certain that [it’s because of high-stakes testing] that we have a body of research that covers at least 15 states. The only reason that happened is because everyone felt the Colorado study didn’t count. There had to be a similar study showing that kind of impact on their own state’s brand-name test scores.
Are there plans for more Colorado-like studies? When we went into Illinois back in 2004 or 2005, we announced that that was the last of the “Colorado studies” that we were going to do. I think I captured the sentiment of a lot of people in that conversation between Danny Callison and me [“Enough Already?” School Library Media Quarterly, 2005]. If 15 studies aren’t enough to convince somebody that school libraries have an impact, then it’s high time we found another approach to addressing that issue. We did that in Indiana to some extent. We did a somewhat more qualitative study there, rather than surveying libraries about numbers. We surveyed principals, teachers, and librarians and asked them very qualitative questions about their working relationships with each other. Very happily, the answers to those qualitative questions also correlate with test scores. So we’ve sort of taken off in a new direction. We’re hoping to do more studies of that sort and hoping that as other people come up with new ideas about how to approach demonstrating the impact of school libraries, those kinds of studies will be funded and replicated as well.
I’ve heard that a third Colorado study is in the works. Is that true? Well, that issue has been complicated by a number of factors, not the least of which is my retirement. We’ve experimented with collecting different kinds of data about how school librarians spend their time and how they interact with teachers. Frankly, the jury’s still out on how comfortable we are with the validity of that data. If you look at the way the standards-based testing world is focusing their research, they’re looking at a deeper level of granularity on tests. They’re not looking at overall test scores. They’re not even looking at test scores on a particular subject. They’re looking at how kids are doing on specific test items, because in this high-stakes environment, if you’re being evaluated as a principal or a teacher based on your students’ scores, you’re not going to worry about the questions they’re getting right. You’re going to worry about what they’re getting wrong. And you’re going to do everything you can to improve their performance on those specific questions. So we need to do the same thing. We need to focus on whatever items we can find in standards-based tests that reflect what we’re teaching and show the impact we’re having on that. I’m sorry to say, it seems that the ante has been upped, not lowered, on standards-based testing.
The Department of Education (DOE) sure has created a challenge for us with its emphasis on scientifically based research. After the Department of Education took its position on scientifically based research, the National Research Council quickly published a response [Scientific Research in Education, 2002]. The problem with the DOE’s approach is that it’s excessively narrow [see “The Trouble with the Gold Standard,” pp. 54–55]. The implication of its bias toward controlled randomized trials was that they were the only way to control for competing causes. First of all, that’s not true, as the National Research Council’s report indicated. One of the things you hear said about our studies is that they are “just correlational” and not cause-and-effect studies. I don’t agree with that entirely. It’s not quite fair to call them simple correlational studies. We did use appropriate statistical methodologies to control for competing causes, and we deserve credit for that. But the Department of Education doesn’t just want controlled randomized trials. They want large-scale controlled randomized trials. We looked at this in Colorado. It’s simply not possible in our state to do a large-scale, controlled, randomized trial, because we’re simply not that big a state. Once you cross a state boundary, you have to engage multiple states to come up with enough cases to call a study large scale—and what’s going to be the indicator of achievement? We’ve got to come up with our own indicator of achievement. Data on test scores on the National Assessment of Educational Progress, the testing that the federal government does regularly, are not available to researchers like us. So if we’re going to have those kinds of test results, we’re probably going to have to come up with our own test and administer those tests ourselves. That’s a very daunting, time-consuming, labor-intensive, and staggeringly expensive undertaking.
I think we have to look at it more from the standpoint of the National Research Council report which says no, controlled randomized trials are not the only game in town. There are all kinds of other methodologies. You have a lot of options. Find the one that works for you.
Where do you think school library research should be heading? This is going to make a lot of people laugh who know how I’ve spent the last 22 years, but I think one of the things we need to do very badly is start looking for alternatives to surveys. This applies not only to school library research, but to every other kind of library research. One issue is survey fatigue. A lot of times, with, I’m sure, the best of intentions, people design really bad questionnaires and administer their surveys in very ill-advised ways. As if even the best survey weren’t enough of a challenge for a respondent to complete, we throw all kinds of other obstacles in their way.
Can you give me an example of that? I could give a lot of examples, but I’d like to keep my friends. It’s very easy when you’re writing a survey to get completely wrapped up in your own enthusiasm and not remember where most people are coming from. I’ll grant you, most of the lessons I’ve learned about how to do them well I’ve learned by repeatedly doing them poorly. I’m also a big proponent of available data. There are tremendous amounts of data that are collected by all kinds of organizations about libraries. I always urge people to consider every possible source of available data before they assume they need to collect data. The other thing we need to do more of is really strong qualitative research. Qualitative research to a lot of people has a nonscientific ring to it. But qualitative research can be just as scientific [as quantitative research]; it can be just as informative; it can be just as illuminating.
Are school librarians getting the wrong message? For a couple of decades, we’ve focused pretty strongly on collaboration as a central message. I don’t disagree with that message. But going on the data that I’ve seen from the states I’ve studied, I think we have to acknowledge—actually, lots of practitioners have acknowledged this—that the kind of collaboration we envision in an ideal world simply isn’t likely to happen in a lot of settings. I don’t think we can afford to put all of our eggs in that one basket. There are a lot of things that teacher-librarians can do to make a difference, even if they don’t constitute the kind of high-level, pervasive collaboration that we wish was going on. We risk frustrating people by continually urging them to do something that they’re really not in a position to do. Those leadership activities that we looked at can exert an important influence on the school. But first and foremost, we have to stand by the need for teacher-librarians to be qualified as both teachers and librarians, because if they’re not, they’re not going to be allowed to do what we want them to do. One thing I’m pretty sure of is that if you’re in a public school and you want a teacher to collaborate with you, they’d better perceive you as a teacher or it’s just not going to happen.
What’s the current state of the profession? I think we’re probably in a better place in recent years, since the focus has shifted a bit away from collaboration to what we’re collaborating about—namely, information literacy or ICT [information and communication technology] literacy or inquiry-based learning. Perhaps that’s the sort of thing new research should start focusing on. When collaboration isn’t possible, or isn’t possible at the levels we desire, what are the alternatives? How can we ensure that students leave school having learned how to learn? Having learned how to know when they need information? Where to find it, and how to know if it’s any good or not? That should be plenty of territory to keep another generation of researchers busy.
Doug Achterman is a library media specialist at San Benito High School in Hollister, CA.