California’s Reading Wars: A Short History Lesson
Jill Kerper Mora

A short history lesson is in order here. Remember the Reading Wars here in California back in the late 1990s and early 2000s? Remember Marion Joseph and Bill Honig? Remember the claims that low reading achievement test scores were evidence that California students weren’t getting enough phonics instruction? And remember how the only identifiable variable statistically correlated with low reading test scores was students’ classification as English language learners? In other words, no statistical analysis suggested that any variable other than English language proficiency, including methods of reading instruction, had a possible causal relationship with California students’ standardized reading achievement test scores. But who and what did Honig and Joseph blame for low test scores? Whole Language. See Lemann (1997).

Of course, Marion Joseph, much like Emily Hanford and the SoR Movement, also blamed teacher educators for low reading achievement. That’s why we got the Reading Instruction Competence Assessment (RICA). Marion Joseph really believed that she was going to embarrass the CSU’s literacy faculty with the first round of RICA test scores because she predicted that most of our teacher candidates would fail the exam, and her low opinion of literacy professors would be vindicated. She would then be able to prohibit the teaching of the Whole Language approach in our Multiple Subjects Credential Programs while simultaneously shutting down our UC and CSU bilingual credential programs through the passage of Proposition 227. Whole Language and Bilingual Education together were the chosen scapegoats of the English-only Movement.

Because of this history that we share as bilingual educators, we need to talk about the attack on Whole Language and its research base, because a renewed attack on WL is at the core of the “disproven theory” straw man argument. The history lesson is over. The SoR Movement has chosen the Whole Language approach as its scapegoat. Since scapegoating WL appeared to work before, its theoretical and research base has once again taken on the role of straw man and boogeyman in their propaganda campaign. Later I’ll give an overview of why we bilingual educators understand and utilize the research base of what became known as Whole Language.

One of the primary purposes of research is to give educators and policy makers data to identify the problem that struggling readers have so that we can match solutions to the problem. If reading comprehension is the problem reflected by test scores, we must consider whether the causal factor is English language proficiency or a lack of decoding skills (allegedly due to poor or missing phonics instruction). This is especially important in our efforts to promote biliteracy through well-implemented dual language programs for California’s linguistically diverse student population.

Science of Reading advocates frame the problem in terms of reading achievement test scores: NAEP, PISA, state-level achievement tests, and so on. This is problematic because test scores measure comprehension at a particular grade level. Test scores cannot isolate a variable of poor, ineffective, or missing phonics instruction. Most of the studies the SoR Movement relies on are experimental studies of programs or instruction, like those in the National Reading Panel Report. Often, perhaps usually, these studies set up a straw man description of the characteristics of the “opposing” instructional approach. However, it is important to keep this fact in mind: there is no empirical (scientific) research evidence that establishes a causal, or even correlational, relationship between reading achievement test scores and approaches to instruction. There are several reasons for this reality (Baumann & Stevenson, 1982).

First, standardized reading achievement tests measure how well students comprehend what they read. Consequently, reading test scores are comprehension scores. There are many reasons why students may not comprehend the passages they read when answering questions on a test. Reading achievement test scores are not diagnostic: a test score cannot determine why a student was unable to comprehend the text on the test. A lack of knowledge and skill in decoding text is not the only reason some students do not comprehend what they read, so test scores cannot be correlated with approaches to instruction that are more or less effective at teaching students to comprehend text. See, for example, Stephen Krashen’s (2002) discussion of California’s standardized reading achievement test scores from 1987-92, which were wrongly blamed on Whole Language instruction. Professor Krashen proposes that Whole Language in fact addresses the very problems that may be associated with low levels of reading comprehension among California’s linguistically diverse student population.

A second reason why standardized reading achievement test scores cannot establish a causal relationship between instructional methods and achievement is the factor of students’ demographic characteristics. Reports of reading achievement test score data may not disaggregate the test population by non-instructional student characteristics, such as classification as an English language learner, that are known to be associated with lower levels of reading achievement. In the period of the most intense battles in the reading wars in California, the state had experienced dramatic growth in the category of students classified as English Language Learners. These students, who by definition do not understand and speak English at a level equivalent to their native English-speaking age and grade level peers, made up 41% of the California student population, and the majority of them were in the early elementary grades (McQuillan, 2000). According to the Department of Education’s Language Census Report (2003), 41% of California’s students are currently classified as ELL or have been reclassified as fluent English proficient (FEP). One in every three kindergarten students is classified as ELL, and among ELL students, 85% are native Spanish speakers. Data from multiple years of language assessment using the California English Language Development Test (CELDT) indicated that it takes an average of six years for these students to reach full proficiency in academic English (California Legislative Analyst’s Office, 2004). When the reading performance scores of students classified as English Language Learners were disaggregated from those of native English-speaking students who are proficient in English, reading achievement showed a statistically normal curve distribution of scores within the native English-speaking population. No inferences about causal factors other than English language proficiency can be drawn from these data on reading achievement.

A third reason that alleged causal relationships between modes of literacy instruction and standardized reading achievement test scores must be rejected is this: it is impossible for any research methodology to identify a direct and explicit causal relationship, definable as independent and dependent variables for research purposes, between how and what teachers teach and what students learn as measured by standardized test scores. See McQuillan (2000), Harris et al. (2011), and Kim (2008) for further discussion of this reality.

Politics, Not Science, Explains the Reading Wars

Professor P. David Pearson (2004) predicted the challenges to the whole-language movement because of its focus on empowering teachers as decision makers over curriculum and instruction:

Politically, I predicted that its commitment to grassroots decision making—a commitment requiring that everything must be done to preserve as much power and prerogative for individual teachers (who must, in turn, offer genuine choices to individual students)—would doom it as a policy initiative. In an atmosphere in which accountability systems driven by externally mandated high-stakes tests lay just over the horizon, I wondered whether policy makers, or parents for that matter, would be willing to cede that level of prerogative to a profession that, in terms of its capacity to deliver achievement, seemed to be asleep at the wheel. My overarching question was whether whole language could withstand the pressure of curricular leadership, with implicit responsibility for whatever trends in achievement ensued. My suspicion was that it was better situated as a guerilla-like movement that made occasional sorties into the policy world to snipe at those in curricular power.


References

Baumann, J. F., & Stevenson, J. A. (1982). Understanding standardized reading achievement test scores. The Reading Teacher, 35(6), 648-654.

California Department of Education. (2004). Language Census Report. Sacramento, CA: Author.

California Legislative Analyst’s Office. (2004). A look at the progress of English learner students. Sacramento, CA: Author.

Harris, P., Smith, B. M., & Harris, J. (2011). The myths of standardized tests: Why they don’t tell you what you think they do. Rowman & Littlefield Publishers.

Kim, J. S. (2008). Research and the reading wars. Phi Delta Kappan, 89(5), 372-375.

Krashen, S. (2002). Whole Language and the Great Plummet of 1987-92: An urban legend from California. Phi Delta Kappan, 83(10).

Lemann, N. (1997). The reading wars. The Atlantic Monthly, (November), 128-134.

McQuillan, J. (2000). Mis-READing the data: Why California’s SAT-9 scores don’t make the case for English immersion. NABE News, 23(4), 16-17, 23.

Mora, J. K. (2002). Caught in a policy web: The impact of education reform on Latino students. Journal of Latinos and Education, 1(1), 29-44.

Pearson, P. D. (2004). The Reading Wars. Educational Policy, 18(1), 216-252.

Smith, F., & Goodman, K. S. (2008). “On the psycholinguistic method of teaching reading” revisited. Language Arts, 86(1).