How do we define and measure whether a student is literate?


University of Georgia professor Peter Smagorinsky always gives us something to think about in his guest columns for AJC Get Schooled. Today is no exception, with this fascinating discussion of how we define and measure literacy.

He begins with a question: How can a country have a 95 percent literacy rate in one ranking, yet land at the very bottom of another? (This also speaks to the problem with international educational rankings and comparisons: Who is being measured, and how?)

This is great discussion fodder.

By Peter Smagorinsky

It is very common to hear people refer to “literacy” as a desirable human capability. “Literacy rates” are often reported to measure how advanced a society is, and literacy is often treated as something people either have or do not have. But what exactly does “literacy” refer to? How do we know literacy when we see it? On the surface—the level at which policy seems to work—literacy is a simple and unambiguous concept. Yet in reality, literacy is complex and contested.

I’ll use the Mexican context, where I contribute to a literacy program in Guadalajara, to illustrate. On the one hand, according to Statista, Mexico has a 95 percent literacy rate. Very impressive! However, UNESCO reports that Mexico ranks 107th of 108 countries in reading proficiency. How can literacy in one nation be both near-universal and ranked among the world’s lowest?

The problem follows from differences in what people mean when they refer to the term “literacy.” What does it mean to be literate? By what means is literacy measured and determined? Is a person either literate or not? Or can a person be sort of literate, or half literate, or mostly literate, or fully literate? Can a single form of measurement be uniformly applied to all of the world’s people for comparative purposes in assessing literacy rates?

Do national, ethnic, and regional contexts mean that a centrally developed means of measurement tends to position some people better than others in the evaluation, and consider everyone else to be in deficit? Is literacy something that can only be considered available via engagement with an alphabetic form of script? If so, how are character-driven scripts, such as those in Asian calligraphy symbol systems, interpreted in measures of literacy? Is it possible to develop a valid and reliable way of determining comparative literacy rates when different nationalities use different symbol systems to construct their texts?

These are all perplexing questions, complicated by the ways in which contexts shape how literacy is interpreted from place to place, situation to situation, culture to culture. A context-sensitive perspective suggests there is no way in which one test can validly compare literacy rates when nations employ different symbolic forms for representing thinking in texts, or even speak in languages that lack similar structures and meaning systems.

How, then, can international rankings be determined? What is this thing we call literacy, a construct that is employed to produce rankings of the degree to which a nation may be considered advanced and prominent on the world stage?

I will confine my attention today to the ways in which literacy rates tend to be measured. On U.S. standardized tests, reading achievement measures assume that everyone agrees on what it means to “comprehend” a written text. Comprehension is typically measured by students’ ability to answer multiple-choice questions devised by researchers or assessment specialists in response to a given passage.

This narrow means of determining comprehension is, however, problematic for many reasons. Primarily, it assumes the questions included on reading assessments are uniquely capable of producing information about what students do and do not understand about what they have read.

However, readers may find meaning in texts quite different from what a test designer, teacher, or researcher might consider to be important, a fact that has recurred in my own research and that of many others. This meaning often comes through a student’s empathy with literary characters’ emotions and experiences, something not available through multiple-choice questions posed by someone else, because such responses tend to be informed by personal experiences and are not reducible to correct answers.

Literature is written to be ambiguous and open to interpretation. What of “informational” texts of the sort prized in the Common Core State Standards? If such texts had inherent, testable meaning, then Supreme Court justices would always agree on how to interpret the very explicit text of the Constitution. Yet their ideologies inform their reading to produce different understandings of the law, as always becomes an issue in the ways in which conservative and liberal administrations make their appointments and vote on confirmations.

This standardized means of measurement further assumes every reader reads the test items in the same way. This assumption is inattentive to human variation. Yet in a world driven by the need for standardization, standardized humans are assumed in the ways in which the tests are constructed and considered valid and reliable.

Virtually any investigation into cultural differences and individual human variation, however, demonstrates the futility of accepting that assumption. Standardized test items do not map onto human diversity, instead giving an advantage to those whose cultural experiences correspond best to those of the test designers. Most other people are doomed to deficit interpretations of their ability to engage fruitfully with the written word.

Finally, each of these conceptions relies on an autonomous view of literacy, i.e., one that takes literacy out of its social and cultural context and views it as a discrete skill. Similarly, these tests assume the texts used on reading tests are autonomous in that all meaning resides in the text itself, rather than in how readers not only decode words but encode them with meaning.

This assumption is central to U.S. policies governing how students and teachers are assessed and rests on an easily disconfirmed belief that texts themselves have meaning independent of readers’ constructive activity.

President Bill Clinton helped to accelerate the current emphasis on testing when he declared, “We must do more . . . to make sure every child can read well by the end of the third grade.” This belief framed the Reading Excellence Act, which originated in his administration and has been taken up with increasing frenzy by subsequent presidencies’ Departments of Education.

This act promises to “provide professional development for teachers based on the best research and practice” and a testing apparatus to produce “accountability.” Yet reading specialists differ profoundly over what constitutes the “best research and practice,” as evidenced by the highly contentious and divisive Reading Wars over both federal funding and the stature and wealth that follow from a federal endorsement.

Given that researchers cannot agree on which evidence suggests a person can read, which research most usefully identifies this ability, and which instruction is most likely to produce it, “literacy” does not provide consensus among people considered to be experts. No wonder Mexico is both highly literate and widely illiterate, depending on the source consulted.

We are incapable as a profession or nation of agreeing on what it means to be literate. Reducing the concept of literacy to answering multiple-choice questions is, in my view, a big part of the problem, given how such tests are fraught with misconceptions and how they advantage people similar to the test developers.

I think the whole movement toward standardization is badly misguided, given its reductive tendencies and glorification of statistics, no matter how misrepresentative they are of complex phenomena. As long as citizens allow this farce to continue, students and teachers will continue to be mismeasured and punished because oversimplification is so much easier to achieve than real efforts to help students develop fluency with a multifaceted act like reading.

 

Reader Comments
Jessica F.

Professor Peter Smagorinsky touches on topics and ideas that I have questioned myself in the years that I have taught Reading. Education is focused on standardized testing; however, everything we are doing in our class isn’t standard at all. We are constantly differentiating for our students, as our professional development sessions have told us is a best practice, yet at the end of the year students are required to take the same test in the same manner. I couldn’t agree more with Professor Smagorinsky when he talks about how we judge a student’s understanding of a text through how they relate to a character; however, this isn’t something that can be measured by a multiple-choice test. For example, just today my lesson was on Perspective/Point of View. My students and I talked about how experiences impact our feelings and thoughts. Most of the students enjoy jumping on a trampoline. They explained how it was exciting jumping high and how they enjoyed being able to do flips easily, but one student in my class did not agree with everyone. He fell off the trampoline a few years back and broke his arm. He no longer has a desire to jump on a trampoline because he is fearful he will get injured again. If something as simple as jumping on a trampoline can make an impact, imagine all of the other experiences we undergo and how they impact a story that we read.

As Professor Smagorinsky states, “This standardized means of measurement further assumes every reader reads the test items in the same way.” That is not the case. Each student is too unique in their background, culture, and experiences to be tested by the same measure. In my opinion, standardized testing is not a true indication of whether a student is literate. There are too many factors that a multiple-choice test cannot take into consideration. So, how do we test a child to understand their level of literacy if a multiple-choice test cannot give us that answer?

Kathy Brown

Interestingly, I had a psych teacher make some of the same arguments regarding testing kids for “abnormal or learning disability.” He used to test kids for a school system and found some kids could not answer some questions because of differences in “cultural” exposure to the subject matter in any given question.

otherview

All that is well and good as a theoretical discussion in academia, but my thought is that Bill Clinton was concerned with a child having functional literacy when he gets out of high school. Can he read standard English? Does he have a basic understanding of what he reads?

Sure, I understand the cultural references that skew results. How many inner-city students are familiar with regattas? Ideally, they'd be exposed to a lot of vocabulary that they, or any student from any background, were unfamiliar with.

Yes, I recognize the problems with standardized tests and teaching to the tests.

On the other hand, making statements that may be accurate (you can't measure literacy, for example) seems to me to be an excuse for a 12th grader being unable to compose a simple sentence or being unable to read an on-level short story with some understanding, even acknowledging that different cultures could affect the emphasis that a student puts on different aspects of the story.

jerryeads

Nicely done as always, Peter. 

I might suggest that you could have discussed literacy not as a dichotomy - though perhaps that was a choice on your part for the purposes of this piece, given that I know you don't think that way.

Unfortunately, many others might see it that way, in no small part because of the artificial dichotomy created by the arbitrary and capricious setting of "cut scores" on minimum competency tests such as the trash dumped on Georgia by the state ed. dept. (State testing has one thing in common with the space program: low bid.) We might also note that absolutely NO data exist that demonstrate the real-world validity of such cut scores. Given the pass/fail nature of the tests, does someone who "passes" with a score of 526 really fare better in life than someone who "fails" with a score of 524?

I'm now retired, but once upon a time I did get to spend a goodly amount of time with David Weikart and his crew, who produced the Perry Preschool Project, and about a decade doing research with Steve Barnett, the economist who developed the analysis of the Perry data showing that GOOD (not just any) early childhood education returned seven bucks for every buck spent on the kids in those programs. He has long headed the National Institute for Early Education Research at Rutgers.

One of the many things I learned during those years is that we make a lot of assumptions in trying to figure out what works in education. One reason is that hardly anyone has the wherewithal to undertake long-term work like David did. The two years David's teachers spent with those kids STILL show positive differences almost five decades later in participants who are now very mature adults.

So why can't we invest in following students for a few decades to see whether high scorers do better in life (better jobs, stronger families, fewer arrests, etc. etc. etc.)? My guess is that the con artists who run the testing game (yes, I was once one of them) DO NOT WANT ANYONE TO KNOW that their junk makes no difference at all. BUT: wouldn't it be nice to find out if I'm wrong?

Thanks Pete. Keep up the great work.

BurroughstonBroch

Most of today’s academic testing uses multiple-choice questions, typically five choices per question, which are cheap and quick to grade but do not illustrate the student’s knowledge. If testers returned to essay-type testing, they would be better able to determine whether a student is literate. My grandfathers left public school at 14, as most did in 1908-1910, and both were very literate. My father left public school at 16 in 1937 and went to work and university night school; my mother graduated high school after the 11th grade and went to college; both were very literate. Contrast them with many of today’s public school graduates, who have been in school for 12 years and can hardly read, write, or solve basic sums.

Milo

LOL. Georgia can't even determine when a teacher is literate.

readcritic

@Milo Perhaps literacy could include the ability to follow directions and be self-disciplined. With that, the rest will follow. Today's students, schools, and administrators fail to expect and support discipline. Teachers are the only ones who face criticism and get disciplined.