Nonpartisan Education Review / Articles, Volume 2 Number 1
The 1998 reauthorization of the Higher Education Act requires states to report annually to the U.S. Department of Education the number of prospective teachers at each of their teacher training institutions who pass the state’s tests for licensure. However, the law left decisions on what licensure tests to require, what to assess on them, and their passing scores up to each state.
This paper provides an analysis of the descriptions of the subject tests assessing reading instructional knowledge that prospective elementary teachers in this country take for licensure: those offered by Educational Testing Service, a variety of those provided by National Evaluation Systems, and the one offered by the American Board for Certification of Teacher Excellence. I examined these descriptions to determine whether the tests appear to address three major components of a research-based approach to reading pedagogy (instruction in phonemic awareness, phonics, and vocabulary knowledge), the weights attached to knowledge of these three components, and the quality of the sample questions they provide. In order to estimate the percentage of test items addressing these three components on each test, I drew on information on the websites of the three major organizations that develop teacher tests as well as on the websites of the states that have contracts with NES. To judge by the topics mentioned in the profiles of the tests that states require for elementary licensure and the weights attached to the sections of the test containing these topics, most of ETS’s tests devote a tiny proportion of their content to these three components. These tests are used by over 35 states for licensure. The profiles of the tests developed by NES for its client states for elementary licensure range from some that are similar to the ETS tests to some that substantially address these three components.
I also analyzed the profiles of the tests required for licensure as a reading teacher, reading specialist, early childhood teacher, or special education teacher. This extended analysis was undertaken to determine the extent to which professional preparation programs may be accountable for teaching these four other groups of educators what they need to know to support or supplement the reading pedagogy provided by elementary classroom teachers. To judge by the online information provided by the testing companies, tests for licensing reading teachers and reading specialists range from a few NES tests that seem to assess these components fully to other NES and ETS tests that seem to address them quite meagerly. Alarmingly, the tests most states require for licensing special education and early childhood teachers do not address these components at all. In addition, ETS offers a set of pedagogical tests of “principles of teaching and learning,” required by many states for the initial license of all teachers in addition to a subject test, that, to judge from its sample questions, seems to denigrate non-constructivist approaches to pedagogy.
The findings of this study suggest that even a drastic revision of currently deficient licensure tests for prospective elementary teachers to ensure they are taught a research-based approach to reading pedagogy will not be sufficient to guarantee the use of such an approach. What is needed is systematic revision of all licensure tests for those who teach children or who supervise or supplement the work of those who do to make sure that they all promote a research-based approach to reading pedagogy.
Why American Students Do Not Learn to Read Very Well:
The Unintended Consequences of Title II and Teacher Testing
II. Title II of the Higher Education Act
III. The knowledge base underlying reading instruction
IV. Subject tests for prospective teachers
A. Licensure tests for elementary teachers assessing reading instructional knowledge
1. Results of the analysis
2. Why the indifference to the knowledge base for reading instruction?
3. The influence of elementary licensure tests
B. Licensure tests for educators who supervise or supplement the work of elementary teachers
1. Licensure tests for reading teachers
2. Licensure tests for reading specialists
3. Licensure tests for early childhood teachers
4. Licensure tests for special education teachers
V. Pedagogical tests for beginning teachers
VI. Summary and Conclusions
Appendix A: Profile of Eleven Licensure Tests for Elementary Teachers Assessing Reading Instructional Knowledge
Appendix B: Profile of Five Licensure Tests for Reading Teachers
Appendix C: Profile of Six Licensure Tests for Reading Specialists
Appendix D: Profile of Three Licensure Tests for Early Childhood Teachers
Appendix E: Why the Massachusetts Tests are Different
The 1998 reauthorization of the Higher Education Act included a new provision that required states to report annually to the U.S. Department of Education the number of prospective teachers at each of their teacher training institutions who pass the state’s tests for licensure. However, it also allowed each state to decide what licensure tests it would require, what it would assess on them, and their passing scores.
So far, there has been no systematic study of the content of the subject tests used by states for licensure. We therefore do not know whether Title II’s requirement in fact holds schools of education accountable for teaching prospective teachers of elementary children how to teach reading effectively, or whether it ensures qualified teachers of reading.
In the context of efforts to improve the reading skills of children from K to 8 through Reading First and annual testing from grades 3 to 8, and to ensure that all teachers are qualified to teach their subjects, this paper offers two analyses: (1) an analysis of the available information on the content of those tests now required of prospective elementary teachers that assess their reading instructional knowledge, and (2) an analysis of the available information on the content of current licensure tests for reading teachers, reading specialists, early childhood teachers, and special education teachers. The second analysis was undertaken in order to provide a more complete picture of the extent to which professional preparation programs may be accountable for teaching these other educators the reading instructional knowledge they need for supporting or supplementing the work of elementary teachers in self-contained classrooms.
II. Title II of the Higher Education Act
In large part, the provision requiring teacher testing was provoked by nationwide publicity about the results of the first teacher tests in Massachusetts in 1998, mandated by the Massachusetts Education Reform Act (MERA) of 1993. Close to 60% of the first cohort of test-takers failed the tests. The fallout ignited a firestorm. In response, the Massachusetts Board of Education adopted a policy that allowed its department of education to put on probation any institution whose teacher training programs had less than an 80% pass rate on the state’s teacher tests in several successive years. Even though these Massachusetts tests were supposed to have been pilot tests, the high failure rate on them appalled the nation and led the U.S. Congress later that very year to establish the first accountability measure for higher education in U.S. history. Within a few years, schools of education in every state were to be judged by their graduates’ pass rates on their own state tests.
Until 1998, many states had not required prospective teachers to take any licensure tests, even though passing a licensure test is required for admission to almost every other profession. By 2005, all states had a test of prospective teachers’ basic reading and writing skills in place and often a test of basic mathematics skills as well. Most also required a test of subject knowledge. As of 2006, for their subject tests, over 35 states use the PRAXIS II tests developed by Educational Testing Service (ETS). Over a dozen states, including such populous ones as California, Illinois, Michigan, New York, and Texas, contract with National Evaluation Systems (NES) for all or almost all of their tests; NES tests are tailored to each state’s own licensing regulations, professional teaching standards, and K-12 standards. About six states allow prospective teachers using an alternative route to take the licensure tests developed by the American Board for Certification of Teacher Excellence (ABCTE) in lieu of their required ETS or NES tests. Because each state sets its own passing score, no comparison of teacher test results across states is possible, even among states using the same test.
As basic as reading instruction is for children’s academic success, Title II did not specify where an elementary teacher’s reading instructional knowledge should be assessed (on a test of its own, for example, or as part of a more comprehensive subject test), nor what it should cover. Title II’s complete deference to the states makes a comparison of tests assessing reading instructional knowledge more difficult to carry out than, say, a comparison of tests of mathematical knowledge for high school teachers. There is much more consensus on what the latter tests might cover even if the passing scores vary across states. By contrast, there is little agreement on what should be covered in a test of reading instructional knowledge. In addition, many components of elementary reading instruction are also a “language art” in grades K-6, and the inclusion of all language arts components on a test of reading instructional knowledge depends on the extent of coverage desired by a state. As generalists, elementary teachers also teach a host of other subjects each day. As a result, the particular constellation of components comprising the reading instructional knowledge assessed on any one subject test for these teachers is arbitrary.
Not only may the content of what is assessed vary across tests, so, too, may the test format. One subject test may consist only of multiple-choice questions, another only of essay questions, and a third of a mixture of both. Moreover, one subject test may be wholly dedicated to reading instructional knowledge and stress the major components of reading in the elementary grades. Another subject test equally dedicated to reading instructional knowledge may address a larger span of grades or only the secondary grades. Sometimes the knowledge base for reading instruction is assessed on a general elementary curriculum test along with topics in mathematics, history, geography and science, and possibly arts and physical education, resulting in skimpy attention to each component of reading instruction.
III. The knowledge base underlying reading instruction
A clear body of knowledge underlies most of the areas of the elementary curriculum subsumed under reading instruction, and this knowledge is derived from a large, credible, and consistent body of research and scholarship conducted over the past 100 years. It is more than reasonable to expect prospective elementary teachers to demonstrate their understanding of what this research base supports (and does not support) on a subject test for licensure, whether or not this knowledge is reflected in the “best” practices or approaches to reading pedagogy currently taught in education schools to prospective teachers. It is important to note that reading instructional knowledge is assessed by all testing companies on subject tests, not on tests of teaching skills.
This body of knowledge begins with research findings about the role of phonemic awareness, knowledge of sound/letter relationships, fluency, vocabulary knowledge, and general comprehension of written language in the development of reading skill. And it includes how these components of reading skill should be taught. As the evidence indicates, most children must receive systematic instruction in phonemic awareness for distinguishing the sounds in words and in phonics for identifying printed words; use textbooks with vocabulary controlled by spelling patterns to practice the phonics skills they are taught from lesson to lesson, in addition to listening to and reading quality literature; regularly read aloud to demonstrate fluency; practice enough to acquire decoding skills to the point of automaticity; and receive systematic instruction through the grades to develop their knowledge of word meanings.
Table I shows the key research findings on these five components of beginning reading instruction as set forth in a monograph prepared for the U.S. Department of Education and distributed to the schools for the Reading First initiative. For further details on the research base underlying Reading First, readers might consult, among other works, key books by Jeanne Chall, one of the foremost reading researchers in this country, including Stages of Reading Development, Second Edition (1996), Learning to Read: The Great Debate, Third Edition (1996), and her last book, The Academic Achievement Challenge (2000).
Beyond the five areas of reading instructional knowledge highlighted by Reading First, there is much more for those who teach beginning reading to learn before they begin teaching. This knowledge includes an understanding of the differences between literary and informational texts in purpose, content, and structure, based chiefly on literary scholarship, as well as of the differences in the skills needed to understand each text type. It includes knowledge of the features and history of the vocabulary used in written English, especially in academic materials, based on scholarship in linguistics or philology. It includes research-based knowledge of the relationship between language development and beginning reading instruction. It includes knowledge of the research base underlying spelling instruction, the flip side of phonics instruction that may be used to reinforce it or to teach phonics when direct instruction in phonics is forbidden (unless, of course, direct instruction in spelling is also forbidden). It extends to an understanding of the research base underlying writing instruction and the kinds of writing activities that can be used to develop reading skills. It encompasses an understanding of structural complexities in English sentences and of English punctuation conventions because the latter assist in understanding English sentence structure and hence the meaning of English sentences. And it includes knowledge of the research-based differences between boys and girls in their general reading interests, as well as knowledge of the growing body of scientific research on the differences in their neurobiological development as it affects learning to read and write.
IV. Subject tests for prospective teachers
This paper examines the available information on licensure tests. As with any high-stakes test, what they cover, how they cover this content, the weights assigned to each category on the test, and what appears in the sample questions the testing company provides are likely to have a strong influence on what the test-taker studies in preparation for the test. Tests of reading instruction for licensing teachers are unlikely to be different. They tell future elementary teachers, reading teachers, reading specialists, early childhood teachers, and special education teachers what should be taught in the name of reading instruction and how.
A. Licensure tests for elementary teachers assessing reading instructional knowledge
Profiles of eleven tests assessing reading instructional knowledge required of prospective elementary teachers for an initial license are in Appendix A. The profiles indicate their major categories and their weights. Four of these tests are offered by ETS as part of the PRAXIS II series. A large majority of the states require one or more of these subject tests for an elementary license and sometimes for an early childhood license as well. Six are used by states that contracted with NES for their development. Because the most populous states as well as a few smaller states contract with NES, I include those required by California, Illinois, Massachusetts, Michigan, New York, and Oklahoma to cover a variety of NES states. I include the test for elementary teachers offered by ABCTE because some states allow it as an alternative to the required test.
Below is my estimate of the percentage of each of these tests seemingly addressing three of the basic components of beginning reading instruction—the development of phonemic awareness, phonics knowledge or decoding skills, and vocabulary knowledge—each of which is supported by a large, consistent, and credible body of research evidence. I selected these three components because of their central importance to reading instruction in the elementary school and because these areas have been neglected in the preparation of elementary teachers for decades (as suggested by their emphasis in the Reading First initiative). All three components are easy to identify if they are mentioned at all.
∙Multiple Subjects Exam (for Elementary Education), ABCTE: About 6.4% of this test addresses phonemic awareness and phonics instruction—in the first section of the first category. About 3% may address vocabulary development.
∙PRAXIS 0011 (Elementary Education: Curriculum, Instruction, and Assessment), ETS: About 2% may address phonemic awareness and phonics skills. Decoding is mentioned in one category. About 1% addresses vocabulary development. However, ETS staff informed me that 8 of the 110 items address these three components, for a total of 7%.
∙PRAXIS 0012 (Elementary Education: Content Area Exercises), ETS: To judge by the examples given, this test may not address phonemic awareness and phonics skills at all. The one example given for the language arts is on the writing process. Attention to vocabulary development appears only in a sample response.
∙PRAXIS 0014 (Elementary Education: Content Knowledge), ETS: About 2% may address phonemic awareness and phonics skills. They may be part of “reading instructional strategies.” About 1% may address vocabulary development.
∙PRAXIS 0201 (Reading across the Curriculum: Elementary), ETS: Based on the website description, about 11% may address phonemic awareness and phonics skills—in Category V, Category IV, and an exercise. About 5% may address vocabulary development. Another 17% may address these components if they are part of the focus of the constructed-response question on reading materials and instruction, for a total of 33%. However, according to ETS staff, the test was recently revised and now contains 13% on phonics, 7% on vocabulary, and 2% on phonemic awareness, for a total of 22% of the multiple-choice items, plus the percentage from a constructed-response question, for a total of 39%.
∙California RICA, NES: Almost all of Category II focuses on phonemic awareness and phonics skills, and about one-third of Category IV addresses vocabulary development. Since there is also a constructed-response question keyed to each of these two categories, it is possible that 45-50% of the test addresses the three areas.
∙Illinois 110 (Elementary/Middle), NES: About 5-6% of the test addresses these three areas, in the first of its 22 sections. I have assumed that each section receives equal weighting.
∙Michigan 83 (Elementary Education), NES: About 2% of the test (one section in Category I) addresses phonics instruction and vocabulary development. There is no mention of developing phonemic awareness.
∙Massachusetts 90 (Foundations of Reading), NES: All four sections in the first category (35% of the test) focus in some way on phonemic awareness and phonics skills. Another 9% addresses vocabulary, in the second section of the test. At least one constructed-response question addresses the first category. Thus, a total of 54% of the test addresses these three areas.
∙New York 02 (Multi-Subject Test: Grades PreK-9), NES: About 12% addresses these three areas, in the first of the eight sections in Category I and in all of Category VIII.
∙Oklahoma 50 (Elementary Education Subtest I), NES: Two of the 17 sections address the three components for 10% of the test. The constructed-response question addresses reading and is worth 15%. Although it may address more or other than these components, about 25% of the test could address these three components.
To judge by these percentages, there is a huge variation in the importance these tests attach to an understanding of the implications of an alphabetic writing system for reading instruction and the role of vocabulary knowledge in developing reading skill. The tests differ markedly in what they expect a new elementary teacher to know. The list below, based on the percentages above, provides the total percentage of the test score that may be accounted for by test items on the three Reading First components. I also note the number of states that either require the test or make it one of two or three test options. (Keep in mind that many states require more than one PRAXIS subject test for this license. If they require PRAXIS 0012, which consists of four essay questions, they require another test as well, probably because this test may not address reading instruction at all, and possibly because a test consisting only of essay questions tends to be less reliable than a test with mainly multiple-choice questions.)
PRAXIS 0011: 7% (17 states)
PRAXIS 0012: 1% (7 states)
PRAXIS 0014: 3% (22 states)
PRAXIS 0201: 39% (1 state)
California RICA: 45-50%
Illinois 110: 5-6%
Massachusetts 90: 54%
Michigan 83: 2%
New York 02: 12%
Oklahoma 50: 25%
1. Results of the analysis
As these percentages indicate, states contracting with NES tend to expect prospective elementary teachers to acquire more reading instructional knowledge in these three critical areas than do states using ETS tests. The two states trying to ensure a high level of knowledge in these areas (California and Massachusetts) do so by requiring a dedicated test of reading pedagogy in addition to a general, or multi-subject, test. Oklahoma also has a high percentage because its multi-subject test consists chiefly of reading and the language arts. On the other hand, at least two NES states (Michigan and Illinois) have no higher expectations than those states using ETS’s elementary tests. A state gets what it asks for if it uses NES.
The percentages on these tests tell us even more, given the strong possibility that passing scores are not set so high as to fail a majority of those who take the tests for the first time. Clearly, prospective teachers taking the NES test required in Michigan and Illinois, or any of ETS’s elementary tests, need not worry if they have learned little about phonemic awareness or phonics or decoding—two basic components of beginning reading. Nor do they need to be anxious about how little they may have learned about the nature of the vocabulary of the English language and the variety of approaches needed for developing vocabulary knowledge—the basic element in reading comprehension at every educational level in every subject area. These aspects of reading instruction receive such minimal attention that test-takers could fail every question on these topics and still pass the test no matter where the passing score is set. Put another way, there are no negative consequences for their professional preparation programs if education faculty have included little about these three components in their methods courses and insisted that reading vocabulary be taught only “in context” or on the basis of “prior knowledge,” injunctions that appear in ETS’s test descriptions whenever vocabulary is mentioned at all. Why could the examinees fail all the questions addressing these three areas and pass with flying colors given where a passing score is likely to be set? Compensatory scoring allows it.
This empirical evidence of professional indifference to a crucial part of the knowledge base for reading instruction on the PRAXIS elementary tests and on some NES state tests would not be so troublesome if states compensated for the minuscule attention to reading instructional knowledge on them by requiring all prospective elementary teachers to take, in addition to a multi-subject test, a test emphasizing these crucial areas. But this does not seem to be the case in almost all of the states. Based on information from state departments of education in 2004 for a review of state standards in the English language arts and reading (Stotsky, 2005), only four states (California, Massachusetts, Oklahoma, and Tennessee) require a separate reading test for licensing prospective elementary teachers. (Although Oklahoma’s test is not totally dedicated to the assessment of reading, it may be perceived that way by its department of education.) Virginia also now requires a separate test of reading instructional knowledge, for prospective elementary, early childhood, and special education teachers, but the full test description is not yet online.
2. Why the indifference to the knowledge base for reading instruction?
The most reasonable explanation for the indifference to the knowledge base for reading instruction on the ETS tests and the tests of some of the states contracting with NES, as evidenced by the low percentages, is the apparent influence of a particular philosophy of teaching and learning. Analysis of the descriptions of the topics for the ETS tests and the Michigan NES test, as well as the sample test questions offered, indicates the dominance of a whole language approach to reading instruction, a non-research-based approach to teaching and learning that not only promotes itself exclusively but often goes out of its way to demonize (always unnamed) research-based approaches, including teacher-directed learning. An alert candidate for licensure in Michigan or in a state requiring a PRAXIS elementary subject test for licensure will have few doubts about the correct answer to each question. To answer correctly most questions on matters of curriculum and instruction, test-takers need little more than a cursory understanding of the beliefs and claims in a whole language or constructivist approach to teaching and an ability to use a process of elimination. That is because most of the right answers for items on curriculum and instruction have little to do with research-based knowledge; they reflect unsupported and unsupportable statements of belief or simply plain common sense, as examples below will show.
What are the tenets of a whole language approach? In contrast to what is in Table I, which was assembled by two highly regarded reading researchers for the Reading First initiative, whole language advocates believe there are few if any skills teachers of beginning reading need to develop explicitly or systematically. Skill in reading, they claim, develops naturally as language and cognition develop, with language and cognition maturing together independently of direct instruction. They analogize learning to read and write to the natural process of learning to listen and speak, asserting that beginning readers learn to read through their effort to derive meaning from written language just as they have learned to understand oral language. In a “literacy-rich” learning environment, they claim, children will induce on their own the alphabetic principle underlying the written code in the same way they induce the syntactic structures of their native language, use a word’s context to identify it, and acquire the meaning of difficult words naturally through multiple exposures to them in varied contexts. These educators want primary grade children to read only “authentic” literature from the beginning and to spend a good part of their reading instructional time reading silently, presumably to concentrate on comprehension, as opposed to reading aloud to enable the teacher to determine how accurately and fluently they are decoding the text. That this approach is dominant in our schools of education is suggested by the analysis of 61 current course syllabi in reading by Steiner and Rozen (2004) and by the analysis of the syllabi and reading requirements for another 80 reading methods courses across the country for a report released by the National Council on Teacher Quality (2006). However, there is no large, credible, and consistent body of scientific evidence from anthropology, linguistics, or experimental psychology to support whole language educators’ beliefs and claims.
Here is one example that illustrates the unprofessional if not devious nature of many of the test items promoting whole language. On PRAXIS 0201 (and on PRAXIS 0202, a test for teachers of secondary reading), one of the current sample questions is as follows:
According to research, effective vocabulary instruction integrates new information with the familiar. Students are most likely to achieve that integration by
(A) using a dictionary
(B) developing a semantic map
(C) analyzing word structure
(D) memorizing words
The Answers page indicates that B is the best answer and goes on at length to explain why. “A semantic map is a visual representation of ideas and the relationships among them. It usually has a key word or concept at the center, with other information radiating outward. It may be used before, during, and after reading to represent what students already know about a topic, to keep ongoing notes, to reorganize information, and to review and enhance information because of new information gained.” After such a description, what new teacher would be courageous enough to indulge in any kind of focused vocabulary instruction, including dictionary use and analysis of word structure?
What the test-taker is unlikely to know is that while the first sentence in the stem may be based on research findings (and is only one of many assertions that could be made about effective vocabulary instruction), the second sentence as completed by B isn’t. There is no body of research comparing the use of semantic maps to word analysis or dictionary use, each of which could in theory integrate new information with the familiar in a particular lesson. So we don’t know which of these strategies is the best to use for that purpose. B is just a guess. What reviews of research on graphic organizers like semantic maps do tell us is that successful learning outcomes are contingent on such variables as grade level, point of implementation, instructional context, and ease of implementation. They also tell us that, across grade levels, graphic organizers are least useful for elementary students as opposed to university populations, result in little gain in student comprehension when used as a pre-reading activity, and may not be effective at all without direct and explicit instruction by the teacher (Hall & Strangman, 2004).
A prospective teacher who prepares for the Michigan test or a PRAXIS test by carefully studying the sample questions will have little doubt about what to believe about the nature of reading and the approach to use for reading instruction. To answer a sample constructed-response question on PRAXIS 0201, test-takers currently are explicitly directed as follows: “Applying your knowledge of reading as a complex process of constructing meaning, explain how your chosen teaching strategies would effectively integrate multiple areas of reading instruction.” The message is clear: any mention of strategies that focus on developing specific skills, whether or not in isolation, will receive no credit.
Two sample questions for the Massachusetts Foundations of Reading Test (90) show how a test can focus on knowledge about decoding rather than on pedagogical beliefs.
Which of the following statements best explains the difference between phonemic awareness and phonics skills?
(A) Phonemic awareness is the general understanding that spoken language can be represented by print, while phonics requires knowledge of particular letter-sound associations.
(B) Phonemic awareness is the ability to associate sounds with letters, while phonics refers to knowledge of common spelling patterns.
(C) Phonemic awareness involves a general understanding of the alphabetic principle, while phonics includes letter-blending skills.
(D) Phonemic awareness is the ability to distinguish individual speech sounds, while phonics requires knowledge of letter-sound correspondence.
Which of the following describes the most likely source of phonics difficulties for English Language Learners whose primary language is alphabetic?
(A) Other languages tend to use letter combinations to represent individual phonemes.
(B) The letters of the English alphabet may represent different phonemes in other languages’ writing systems.
(C) Other languages rely more heavily on the use of context cues in decoding than English does.
(D) English contains words that have been adopted from many other languages.
The correct answer for the first question is D, and for the second, B. Neither question attempts to promote specific pedagogical strategies or demean others. In contrast, a bias against the value of teaching decoding skills results in misleading prospective teachers about how to teach reading. Consider a sample “scenario” ETS offers for PRAXIS 0011 and the question that follows it.
A small group of second-grade students are reading a story together orally. One of the children has difficulty reading the word “sparkled.” To make sure that all the students understand the word, the teacher asks the student to read the rest of the paragraph aloud. Then, when the student has finished reading, the teacher asks the group how the character in the story felt as she spoke and what her eyes did to show her excitement.
The teacher is helping her students use which of the following word-attack strategies?
(A) Phonic clues
(B) Context clues
(C) Configuration clues
(D) Morphemic clues
The correct answer to the question, according to ETS, is B, but one arrives at this answer chiefly through a process of elimination. What we really see in this scenario is a teacher who has been completely muddled by whole language doctrine. The child who had difficulty reading the word “sparkled” may well have understood the meaning of the word (it is a word whose meaning is apt to be known to an eight-year-old child). In addition, nothing in the scenario suggests its meaning was unknown to the rest of the class. But instead of remedying the child’s deficiency in decoding, the teacher treats the child’s reading problem as a comprehension problem. Worse yet, she wastes the whole class’s time on a word everyone may have understood to begin with. Perhaps the teacher didn’t recognize that the child’s problem was a decoding deficiency. But instead of a question about the teacher’s misguided pedagogy or lack of knowledge, ETS provides a question that validates it. One can only conclude that this muddled scenario was intended to convey a message: focus on meaning even if the problem is decoding. And turn what may be individual children’s needs into class needs even if there is no evidence that the whole class has the need.
I say this because this approach to pedagogy seems to run through many of ETS’s sample items and the sample constructed-responses it provides. For example, in a sample constructed-response to a scenario-type question on PRAXIS 0012 about a poor piece of student writing, the anonymous writer of the response suggests that, to develop this student’s writing, the teacher should work on increasing his vocabulary. This could be done, she goes on to say, “by way of a brainstorming activity with the entire class.” The writer does not suggest that the teacher might also offer the class some “describing” words that go far beyond what the students may be able to generate on their own but might have heard or read. ETS has rated this particular sample response very highly; its “message” to prospective test-takers will therefore be very clear.
3. The influence of elementary licensure tests
To judge the scope of the influence of these subject tests, it is useful to review how many states require them for licensing elementary teachers. According to the requirements listed for each state on the ETS PRAXIS website, 22 states either require PRAXIS 0014 for licensure or make it one of the options. At least 17 either require PRAXIS 0011 or make it one of the options. The seven that require PRAXIS 0012 also require at least one other PRAXIS test. One state (Tennessee) requires PRAXIS 0201 as well as several other PRAXIS tests for licensure, while another state (Maryland) makes PRAXIS 0201 one of two options.
Given that ETS drew on the standards for training teachers developed by the International Reading Association and the National Council of Teachers of English and on the advice of its members for the content of its tests of reading instructional knowledge, one might easily predict serious limitations in reading instruction in the elementary grades in the states using ETS tests. When one adds the states whose NES-developed tests seem to resemble ETS’s tests (most likely because the department of education in those states drew on the members and documents of these two organizations for advice on test design and construction or asked NES to), the scope of the deficiencies in reading instruction extends beyond the over 35 states that use ETS tests for licensing elementary teachers. The basically flat scores over time on the grade 4 and grade 8 reading tests given by the National Assessment of Educational Progress (one has gone slightly up, the other slightly down) seem to confirm that prediction.
Because their faculty and their professional organizations have largely shaped the content of their state’s licensure tests, directly or indirectly, there is good reason to assume that teacher preparation programs in reading instruction in most institutions of higher education across the country reflect to a greater or lesser extent what is assessed on their state’s test for prospective elementary teachers. If most students have been taught from kindergarten through grade 4 by teachers who have deliberately not been taught in their methods courses the knowledge base for beginning reading instruction, it is miraculous that we have as many children as we do scoring at the Proficient and Advanced levels on the NAEP tests in reading by grade 4. And if students in grade 4 to grade 8 have been taught by teachers who have been told in their methods courses that any attention to vocabulary that does not draw on “prior knowledge” or context is a flagrant violation of an “integrated” or “whole language” approach to learning (which it is) and therefore something to be strenuously avoided, then it is even more miraculous that we have as many children as we do scoring at the Proficient and Advanced levels on the NAEP tests in reading by grade 8.
B. Licensure tests for educators who supervise or supplement the work of elementary teachers
This section examines the tests required of four groups of educators who are expected to supervise, support, or supplement the work of the elementary classroom teacher in a self-contained classroom: reading teachers, reading specialists, early childhood teachers, and special education teachers. An analysis of the licensure tests for these four groups of educators provides a fuller and clearer picture of reading instruction in America’s elementary classrooms.
1. Licensure tests for reading teachers
In states that license reading teachers, a reading teacher may teach reading as a subject in the same way that a science teacher teaches science, period after period in a subject-divided day. That is, she may be a subject matter specialist who teaches a group of children their regular reading lessons day after day. She may also work as a “literacy coach” in some states. A few states also use this test for licensing a reading specialist. Each state sets its own guidelines. It is important to note that not all states require a teacher who works as a classroom subject matter specialist or as a coach to be licensed. And in no state is a prospective elementary teacher required to obtain a license as a reading teacher in order to obtain an initial license as an elementary teacher. However, a reading teacher is not a reading specialist. The reading specialist (discussed in the next section) is chiefly a clinician or chiefly an administrator, depending on the test a state uses.
I examined five reading tests, none of which is required for a license as an elementary teacher. Their profiles are in Appendix B. One is offered by ETS, one is now being developed by ABCTE, and the others were developed by NES for three of its client states. As with the tests required for elementary licensure, I judged them by the same three criteria (development of phonemic awareness, phonics skills, and vocabulary knowledge), applying them to the lengthy descriptions offered for each test online.
∙ PRAXIS 0200 (Introduction to the Teaching of Reading), ETS: About 2% of this test may address instruction in phonemic awareness and phonics. The category on text structure mentions “graphophonic” cues. About 2% may address vocabulary.
∙ Reading Test for K-6, ABCTE: About 24% of this test, now under development, addresses the first two components. About 14% addresses vocabulary development.
∙ Illinois 177 (Reading), NES: About 4-5% of the test appears to address these three areas, in one of its 12 sections. I have assumed each section receives equal weighting.
∙ Michigan 05 (Reading), NES: About 5% of the test addresses these three areas, mentioned in one section in Category III and in one section in Category IV.
∙ New York 65 (Literacy), NES: About 45% of the test seems to address these three areas—in four of the five sections in Category II and all of Category VIII.
The total percentage of test items addressing the three components is as follows:
PRAXIS 0200: 4%
ABCTE Reading: 38%
Illinois 177: 4-5%
Michigan 05: 5%
New York 65: 45%
As can be seen, tests for licensing reading teachers range from those that strongly address these three components (New York and ABCTE) to those that barely address them (Illinois, Michigan, and PRAXIS 0200). This means that teachers who earn a reading license in Illinois or Michigan may have obtained little research-based reading instructional knowledge from their licensure programs, yet are able to serve as subject matter specialists or possibly as “literacy coaches” for other teachers. Sample questions for PRAXIS 0200 illustrate how the whole language philosophy shaping this test leads to items that demonize or propagandize. For example,
Of the following statements, which is the most consistent with a whole-language approach to instruction?
(A) Teaching specific hierarchical subskills in the language arts develops skills essential to learning in the content areas.
(B) Instruction that focuses on student participation in a discourse community is critical for developing good reading and writing skills.
(C) Teaching students phonetic decoding strategies facilitates student comprehension and composition of text.
(D) Direct instruction is necessary to develop the greatest proficiency in comprehension.
(E) Teacher-centered questions are an integral part of developing students’ communication processes.
Prospective test-takers studying this sample item will likely infer that the correct answer has to be B (as pretentious as it is) by a process of elimination. While one might argue that prospective teachers ought to know what whole language supporters believe, that does not justify referring to components of a research-supported approach in statements that are distorted or exaggerated in some way to render them ridiculous. Not one of the wrong choices is a reasonable statement, even though a reasonable statement about skills teaching would still be inconsistent with a whole language approach (and therefore still wrong). If prospective test-takers have any doubts about the desirability of a whole language approach, the Answers page helpfully informs them that “The whole-language approach promotes the development of reading and other communications skills in a social, communicative network. Therefore, B is the best answer.” There is, of course, no empirical basis for this statement. The whole language approach may try to promote reading development in a social network, but there’s no experimental evidence that it achieves its goal better than another approach might.
The correct answer to another sample question for PRAXIS 0200 tries to imply that children’s knowledge develops best in the absence of a specified curriculum or content provided by a teacher.
According to Kamii’s curriculum theory, children build knowledge principally through the
(A) process of trial and error
(B) generation of appropriate responses to stimuli
(C) use of their five senses to differentiate information from emotions
(D) imitation of the behaviors of adults or other children
(E) formation of mental relationships by interacting with objects, people, and events
The Answers page indicates that E is the best answer. “Building on the work of Piaget, Kamii has stressed the importance of manipulative materials and diverse experiences as parts of the early childhood curriculum in order to build students’ mental relationships and concepts.” An alert test-taker studying these sample items will note how the act of teaching seems to be characterized (actually caricatured) in B and may well conclude that children’s knowledge is not developed by direct teaching. Note that the Answers page uses Piaget’s name to lend authority to Constance Kamii, someone whose work is not in the field of reading at all. A cynic might well believe that her name is being promoted because she has co-authored, among other things, an article on mathematics education (Kamii & Dominick, 1998) in which it is seriously proposed that teaching children the standard algorithms of arithmetic is dangerous to their health.
2. Licensure tests for reading specialists
More serious problems lurk at both the supervisory level and the clinical level in states that require a specific license for those who supervise reading instruction at the building or district level or who engage in clinical or consulting work in reading. The problem can be seen in the description that ETS offers for a test called Reading Specialist (0300). The test is described as intended for both a supervisory position related to the teaching of reading in K-12 and for a position beyond a regular classroom setting as a reading clinician, consultant, or resource person. These are very different positions to hang on the same test and license in light of the vast differences in professional knowledge and skills between a clinician and an administrator. Herein lies the problem with many licensure tests for reading specialists.
The requirement of a specific license for supervising reading instruction at the school or building level is fairly new, and the use of the reading specialist test to address this position, blurring the lines between a clinician and an administrator, has been a matter of dispute in the field. In one corner of the dispute are those who see a reading specialist as a trained clinician, as someone who should be licensed for work in the schools (like a speech pathologist), and they view the use of a reading specialist test and license to address a host of matters not essential to a clinician as a way to water down the clinical knowledge and training such a specialist should have in working with students with serious reading problems, often on a one-on-one basis. In the other corner are those who ascribe greater importance to an administrative position for a reading director (and want a specific administrator program for training them in an education school, or at least some program-specific coursework), and who may be less convinced of the need for well-trained reading clinicians in a public school. At stake for the public are two concerns: whether well-trained reading clinicians will be available only in private settings, and whether directors of a specific subject should be required to hold a license for that administrative position, such as a license for science director, a license for mathematics director, etc. To my knowledge, most if not all states do not require K-12 subject directors to hold a subject-specific license. Indeed, K-12 curriculum directors in many school districts often supervise multiple disciplines and could not possibly be expected to hold a license in each one (which is possibly why some states do not offer a reading specialist license at all). The differences in content and weights across the PRAXIS 0300 test and the reading specialist tests required in five NES states illuminate these professional issues in reading.
See Appendix C for profiles of these six reading specialist tests.
Wherein lie the major differences among these tests for a reading specialist? Judging what is in the lengthy descriptions on their websites by the same criteria used for the elementary and reading tests, they lie most visibly in the percentage of the test that may address phonemic awareness, phonics skills, and vocabulary development. As the following estimates indicate, almost 30% of two of the tests designed for a reading specialist (Massachusetts and Texas) addresses these three components, while a very small percentage does so in the others. The gap is huge.
∙ PRAXIS 0300 (ETS): About 3% of the PRAXIS 0300 test may address phonemic awareness and phonics instruction. About 2% might address “deliberate vocabulary instruction.”
∙ Illinois 176 (NES): About 4% of this test seems to address these three components—in Category II, Section 7 (assuming that all sections are weighted equally).
∙ Massachusetts 08 (NES): About 20% of the test addresses phonemic awareness, phonics instruction, and vocabulary development, all in Category I. One of the constructed-response questions addresses these areas, too, for a total of about 30%.
∙ Michigan 92 (NES): About 4% or less of this test can be judged to address these three components—mentioned in one section in Category III and in Category IV.
∙ Oklahoma 15 (NES): Two of the 11 sections in Category II seem to address the three components, and the constructed-response question is anchored to this category. But given how many other items are in this category, it is unlikely that more than 5-10% of the test will address these three components.
∙ Texas 151 (NES): About 29% of the test addresses these three components—in four of the eight sections in Category I.
The total percentage addressing these three components is as follows:
PRAXIS 0300: 5%
Illinois 176: 4%
Massachusetts 08: 30%
Michigan 92: 4%
Oklahoma 15: 5-10%
Texas 151: 29%
Another telling difference can be found in references to the research base underlying professional knowledge for these graduate-level licenses (none is available for an undergraduate). The only reference to research in the PRAXIS 0300 test is in one of the 12 bulleted topics in Category IV and is quite a meager one. It reads as follows: “Demonstrate an awareness of how to access literacy research and disseminate it across grade levels.” In contrast, one of the three sections in Category IV in the Massachusetts test addresses in detail the “interpretation, evaluation, and application of reading research.” In addition, two of the four sections in Category III address “research-based instructional strategies, programs, and methodologies” for “promoting early reading and writing development” and for “consolidating and extending reading and writing skills.” The Texas test is filled with references to the research base in its various sections. And while at first blush it seems that the PRAXIS 0300 test gives a strong weight to diagnosis and assessment (27%), two areas at the heart of clinical training, only three of the six bulleted topics in this category (13.5%) deal with matters of test construction, type, use, and interpretation. The other three deal with communicating with teachers, students, and families.
What graduate students learn in programs leading to a reading specialist license will reflect in different ways the two different philosophies in the field. The differences show up strikingly in the sample questions each test provides. For example, the following question about decoding appears on the PRAXIS 0300 test:
Which of the following views about decoding is most consistent with an integrated language arts approach to teaching reading?
(A) Sound-symbol relationships should be taught or reinforced within the context of meaningful literature.
(B) Phoneme-grapheme relationships should be taught in a particular sequence.
(C) Decoding skills are not helpful to the reading process, so students should learn to read by reading.
(D) New words should be introduced in lists every day so that students will learn more words than they encounter.
(E) Stories with controlled vocabulary should be used so that students can decode easily and thus read with fluency.
What is being assessed here is not professional knowledge but a belief. The Answers page notes that the best answer is A. “Generally, teachers who demonstrate a whole language philosophy believe that strategies, including how to use phonics, can and should be modeled and taught in the context of reading real, complete texts.” The answer is not wrong. It states what whole language teachers believe. It is a “view” that goes with an integrated language arts approach to teaching reading. What the test-taker may not know is that there is no body of evidence to support the content of the belief. Although one might argue that prospective teachers should know what a whole language approach consists of, for a licensure test one should also see sample questions on what “views” about decoding are consistent with a skills approach to teaching reading and supported by research. No questions of this nature appear on any of the tests dominated by a whole language approach. There is no evenhandedness in the sample questions provided.
In addition, some sample items on tests dominated by whole language require nothing more than the ability to read English prose and common sense to answer—certainly not a graduate course in anything. Here are two examples accompanying the description of Michigan’s test for a reading specialist:
Which of the following story elements would middle school students be more likely to find in contemporary realistic fiction than in other narrative genres?
(A) Heroism and admirable behavior demonstrated by primary characters
(B) Plots that feature suspense or mystery
(C) Experiences and problems that could occur in their own lives
(D) Settings with unusual or exotic qualities
Which of the following statements best exemplifies the relationship among word identification, fluency, and reading comprehension?
(A) When students have a clear purpose for their reading, they can more readily identify the information that they need to find in a text.
(B) When students can use syllabication to determine the pronunciation of unfamiliar words, they are more likely to grasp the broader concepts of paragraphs they read.
(C) When students read from one word to the next at a slow pace, it is more difficult for them to make use of multiple cues to determine the meaning of a text.
(D) When students are not familiar with the organization of a text, they must construct meaning based on their knowledge of individual words.
A good high school student reading carefully could figure out that C is the correct answer for both questions. On the other hand, the following questions that accompany the Texas test draw on professional knowledge.
A reading specialist could informally assess a student’s phonemic awareness by asking the student to:
(A) Identify the sound he/she hears at the beginning, middle, or end of a spoken word (e.g., “What sound do you hear at the end of step?”)
(B) Listen to a tape-recorded story while looking at the book, then answer several simple questions about the story.
(C) Identify the letters in the alphabet that correspond to the initial consonant sounds of several familiar spoken words.
(D) Listen to the teacher read aloud a set of words with the same beginning sound (e.g., train, trap, trouble), then repeat the words.
A first-grade teacher regularly assesses the reading skills of students in the class. Which of the following assessment results most clearly suggests the need for an instructional intervention to strengthen word identification skills?
(A) A student uses phonetic spelling when writing many common single-syllable words.
(B) A student relies heavily on background knowledge to construct meaning from a text.
(C) A student has trouble decoding common single-syllable words with regular spellings.
(D) A student has trouble dividing spoken words into syllables or morphemes.
A is the correct answer to the first, and C for the second. Similar sample questions accompanying the description of the Massachusetts test also draw on professional knowledge, not beliefs.
Which of the following statements best explains why most young children require explicit instruction to help them distinguish the phonemes in spoken words?
(A) The correspondence between sounds and symbols in English is only partially regular.
(B) Many students have had limited exposure to environmental print before entering school.
(C) In normal speech, the phonemes in a word are co-articulated, or blended together.
(D) Written language makes use of some phonemes that rarely occur in oral language.
A first-grade teacher is helping a student decode the word stop. Which of the following teacher prompts illustrates the synthetic approach to phonics instruction?
(A) “Can you think of other words that end with the letters op? What sound do the letters make in those other words?”
(B) “Say the sounds made by the individual letters s, t, o, and p. What word do you get if you blend those sounds together?”
(C) “Look at this sentence: The stop light is red. Can you decode the second word by using clues from the rest of the sentence?”
(D) “Does the word look like any other words that you know how to read? Can you sound out this word by comparing it to those other words?”
The correct answer for the first question is C, and for the second, B.
How might the content of a licensure test for reading specialist matter practically to a school? First, if a state’s training programs are geared to a test dominated by whole language dogma, there will likely be serious deficiencies in their graduates’ understanding of research-based programs, materials, and methods for reading instruction, and the instructional programs they promote in their districts will likely exhibit those same deficiencies. Second, if their programs minimize the professional knowledge base for the work a reading clinician should do, their graduates may not know how to give appropriate help to children with reading difficulties.
The Title I Director in a large school district in Southeastern Massachusetts told a mutual friend in October 2005 that she is “outraged that she can’t find anyone to hire as reading specialist because the candidates don’t know anything about reading, and that when she interviewed and asked several what the five components of reading are, most didn’t know the answer, let alone know how they would address the five components.” Her Reading First teachers, she said, know more than the candidates for the position that would supervise the teachers.
This situation is not surprising. The reading specialist license available in Massachusetts before the revision of the licensing regulations in 2000 (on which the current test is based) was oriented to a whole language approach (though not entirely), leaving many specialists licensed by the test without a firm foundation in research-based knowledge of reading instruction, depending on the graduate program they completed. This is suggested by the sudden drop in the pass rates when the original test was replaced by a revised test. From 1998 to the last administration of the original test in July 2003, the pass rates for first-time test-takers tended to be well over 70%. In contrast, from July 2003 to August 2005, pass rates on the revised test, which contains a high percentage of items on phonemic awareness, phonics skills, and vocabulary development, hovered around 55%. The state’s licensure programs for reading specialist had clearly not prepared their students to address a good part of the research-based knowledge assessed on the revised test; these programs had been approved on the basis of earlier licensing regulations. For readers interested in why the Massachusetts tests for reading specialists, as well as for teachers of elementary children, differ from most other available tests, see Appendix E.
3. Licensure tests for early childhood teachers
What is especially startling in the test requirements for each state listed on the ETS website is the pitifully small number of states that expect prospective teachers of early childhood (typically spanning preschool, kindergarten, and the primary grades) to demonstrate reading instructional knowledge on a licensure test. Why should this group of teachers be of interest here? Appendix D contains the profiles of three tests for the early childhood license, to provide a basis for comparison and discussion.
Where does the national disaster in reading achievement begin in teacher preparation programs? Right here, in early childhood licensure programs, to judge by what these tests cover when examined against the same criteria as before.
∙ PRAXIS 0020 (ETS): This test contains nothing on the first two components of reading instruction; 1% of the test may address vocabulary development.
∙ Illinois 107 (NES): Two of the five sections in the first category address these three components. Assuming that the 14 sections of this test are equally weighted, and allowing 7% per section, this adds up to about 14% of the test.
∙ Texas 101 (NES): Three of the 11 sections in the first category address the three components of reading, or 13% of the whole test.
What states require for the early childhood license, of course, is not the responsibility of ETS, NES, or any other testing company. Many states listed on the ETS website seem to require no subject test at all for this license, and thus it is not clear if prospective teachers for this license in these states learn anything about the knowledge base for reading instruction. They may, but there’s no accountability for it on a licensure test. Only eight states listed on the ETS website require PRAXIS 0011 and/or PRAXIS 0014 (two elementary tests) for licensure in early childhood (perhaps because the tests address the academic content taught in the elementary school as well as beginning reading instruction). Many other states on the ETS website do require PRAXIS 0020 or the PRAXIS Principles of Learning and Teaching test (discussed below) for those who teach these grade spans. But ETS’s descriptions of these latter tests contain no reference to the components of beginning reading instruction. Worse yet, their content seems to reflect the philosophy of those advising ETS on the design of a subject test for teachers of elementary children, only carried to an extreme.
There is variation in what the states that contract with NES include on their content tests for the early childhood license. As described above, the Texas EC-4 test (101) does address phonemic awareness and decoding skills, which may amount to about 13% of the whole test, more than the percentage devoted to these topics on the ETS tests for elementary teachers. But this percentage is still not a great deal, especially since there seems to be nothing on developing a reading vocabulary. The Illinois test (107) addresses all three components, which may amount to about 14% of the whole test, again not a great deal for the kindergarten teacher who today needs to know much more about beginning reading instruction than ever before.
It should not go unremarked that PRAXIS 0020 (the early childhood subject test) addresses no subject area content (as well as no reading instructional content), while most of the Illinois and Texas tests for those who plan to teach kindergarten focus on subject area content (as does the subject test for early childhood teachers in Massachusetts). Although some educators do not believe that teachers of young children should have an academic focus for their curriculum, the reason that tests for an early childhood license should address subject matter knowledge as well as reading instructional knowledge, however skimpily, is the span of grades they legally cover. If an early childhood license covered no higher grade than kindergarten, teachers with this license could still be expected to have some background knowledge in arithmetic, history, and geography as well as reading instruction. But many early childhood licenses cover grade 2 or higher. Early childhood educators and their professional associations have pushed successfully for an expansion of the grade levels covered by this license, and the vast differences in what is assessed by tests for the early childhood license reflect the dilemma created by the enormous span of grades and ages it may cover in many states.
4. Licensure tests for special education teachers
The details on the tests required for a special education license suggest that yet another group of teachers is not being prepared with research-based knowledge of reading instruction. So far as I can determine, fewer than six of the states listed on the ETS website as using a PRAXIS test for their elementary license require a test for a special education license that includes an assessment of reading instructional knowledge. This, too, is not a testing company’s responsibility because each state sets its own requirements for licensure. The large number of states that rely mainly on the PRAXIS II tests for their subject tests do require a subject test for a special education license. But the PRAXIS II subject tests for special education teachers, according to their descriptions, contain little if any subject matter or reading instructional content. The situation is the same in many NES states (e.g., Illinois, Michigan, and New York).
It is possible that licensure programs for special education in all states require some coursework in reading pedagogy. One can only hope so. But prospective special education teachers are clearly not being tested on that knowledge on their licensure test if the state uses a PRAXIS II test in special education. In contrast, California requires all prospective special education as well as elementary teachers to take its state-specific reading test. Both Virginia and Massachusetts require future special education teachers and early childhood teachers, as well as elementary teachers, to take their state-specific reading tests. Since special education teachers work with children with learning disabilities who by definition have reading difficulties, the national picture on the extent to which these teachers are prepared to deal with any academic aspects of the school curriculum for these children, whether they co-teach with a regular classroom teacher or work in a resource center, seems incredibly bleak, to judge by the tests most states require for licensing them.
V. Pedagogical tests for beginning teachers
Some states now require a licensure test of teaching skills of all beginning teachers, in addition to a subject matter test. New York has developed its own test. Other states require one of the tests ETS offers as part of a relatively new PRAXIS series called Principles of Learning and Teaching. This set of tests is designed to assess “what a beginning teacher should know about teaching and learning.” According to the ETS website, 18 states require these tests, usually in addition to a subject test. While a few of the states listed on the ETS website as requiring these tests indicate that they are to be used for the second level of licensure (i.e., after a new teacher has begun teaching), the others require the grade-relevant test for initial licensure. There are four tests in this set, one for early childhood (0521), one for grades K-6 (0522), one for grades 5-9 (0523), and one for grades 7-12 (0524). Each consists of 24 multiple-choice questions and four “case histories” that are each followed by three short-answer questions scored on a scale of 0 to 2. Test content is organized in four categories:
I. Students as Learners (33%, 22% of which is based on short-answer questions)
II. Instruction and Assessment (33%, 22% of which is based on short-answer questions)
III. Teacher Professionalism (22%, 11% of which is based on short-answer questions)
IV. Communication Techniques (11%, based solely on short-answer questions).
The description of these tests and their sample questions clearly suggest that they are designed to promote a constructivist pedagogy. To begin with, nowhere is the research or knowledge base for reading instruction mentioned in the three pages describing the topics covered on these tests (not even for the tests for K-6 or 5-9). Indeed, in its description of these tests, ETS admits that the principles for learning and teaching on this test have “theoretical foundations” only.
Sample constructed-response and multiple-choice questions, along with their sample responses and answers, all promote a constructivist pedagogy, discredit alternative pedagogies, or both. For example, the following question and answer choices follow the description of PRAXIS 0522:
Which of the following kinds of instruction is frequently cited as the opposite of discovery learning?
(A) Simulation games
(B) Expository teaching
(C) Mastery learning
(D) Schema training
As ETS does on all its Answer pages for all its sample test questions for all its tests, the Answer page carefully explains why the best answer is B. “The method of teaching most often seen as the opposite of discovery teaching is expository teaching. Discovery learning allows students to explore material on their own and arrive at conclusions. In expository teaching, students are presented with subject matter organized by the teacher.” Not only is this an odd definition of an uncommon phrase (“expository teaching”), it is an indirect slap at direct instruction and leaves anyone familiar with mastery learning in the dark about why it didn’t qualify as the best answer.
A sample test item that accompanies all four test descriptions subtly discredits any approach to instruction other than a constructivist approach. It does so in the answers to the questions that follow two paragraphs, which are presented as being “taken from a debate about the advantages and disadvantages of a constructivist approach.” Here are the two passages and the two questions following them.
Why constructivist approaches are effective
The point of constructivist instruction is to have students reflect on their questions about new concepts in order to uncover their misconceptions. If a student cannot reason out the answer, this indicates a conceptual problem that the teacher needs to address. It takes more than content-related professional expertise to be a “guide on the side” in this process. Constructivist teaching focuses not on what the teacher knows, but on what and how the student learns. Expertise is focused on teaching students how to derive answers, not on giving them the answers. This means that a constructivist approach to teaching must respond to multiple different learning methods and use multiple approaches to content. It is a myth that a constructivist teacher never requires students to memorize, to drill, to listen to a teacher explain, or to watch a teacher model problem-solving of various kinds. What constructivist approaches take advantage of is a basic truth about human cognition: we all make sense of new information in terms of what we already know or think we know. And each of us must process new information in our own context and experience to make it part of what we really know.
Why constructivist approaches are misguided
The theory of constructivism is appealing for a variety of reasons—especially for its emphasis on direct student engagement in learning. However, as they are implemented, constructivist approaches to teaching often treat memorization, direct instruction, or even open expression of teacher expertise as forbidden. This demotion of the teacher to some sort of friendly facilitator is dangerous, especially in an era in which there is an unprecedented number of teachers teaching out of their fields of expertise. The focus of attention needs to be on how much teachers know about the content being taught. Students need someone to lead them through the quagmire of propaganda and misinformation that they confront daily. Students need a teacher who loves the subject and has enough knowledge to act as an intellectual authority when a little direction is needed. Students need a teacher who does not settle for minimal effort but encourages original thinking and provides substantive intellectual challenge.
The first passage suggests that reflection on which of the following after a lesson is an essential element in constructivist teaching?
(A) The extent to which the teacher’s knowledge of the content of the lesson was adequate to meet students’ curiosity about the topic.
(B) The differences between what actually took place and what the teacher planned.
(C) The variety of misconceptions and barriers to understanding revealed by students’ responses to the lesson.
(D) The range of cognitive processes activated by the activities included in the lesson design and implementation.
The author of the second passage would regard which of the following teacher behaviors as essential for supporting student learning?
(A) Avoiding lecture and memorization
(B) Allowing students to figure out complex problems without the teacher’s intervention
(C) Emphasizing process rather than content knowledge
(D) Directly guiding students’ thinking on particular topics
First, one must note the way in which the passages are titled—“Why constructivist approaches are effective” (not the “advantages” of such an approach) and “Why constructivist approaches are misguided”—implying that there is a research base supporting constructivist approaches (even if critics think they are misguided) and pre-empting any challenge to this assertion. One must also take note of the subtle way that two assertions are made to appear as a contrast to each other, implying that student achievement is the concern of the constructivists, not their critics. The constructivist has presumably said that “Constructivist teaching focuses not on what the teacher knows but on what and how the student learns,” while the critic of constructivism has presumably said that “the focus of attention needs to be on how much teachers know about the content being taught.” What an artful way to turn reality upside down. What teacher would fail to see constructivism as the clear winner in this debate, based on these two paragraphs? (Needless to say, no reference is provided for this debate, if it actually took place.)
C is the correct answer to the first question. But D, the supposedly correct answer to the second question, is the clincher that will convince prospective test-takers studying these sample questions (as well as school supervisors and those making the decision to require these tests for any level of licensure in the state) that non-constructivist teaching is undesirable on ethical and civic grounds. D is in fact not an answer to the question posed. Nowhere does the passage say or imply that critics of constructivism want teachers to directly guide “student thinking on particular topics.” If anything, it implies the exact opposite in its final sentence. But since the other choices make no sense at all, the reader will conclude by the process of elimination that D must be the right answer. And the Answers page explains why, to remove any test-taker’s doubts. “The best answer is D. The second author maintains that students require teacher guidance and a direct expression of the teacher’s expert content knowledge in order to learn most effectively. Choices A, B, and C are not consistent with this approach to teaching. Direct guidance of student’s thinking is consistent with the second author’s approach.” In other words, critics of constructivism support indoctrination.
There are probably several reasons for the dishonest way D has been worded: first, to make sure that anyone reading the second passage wouldn’t be carried away by the last sentence in the passage and come down on the side of the critics; and second, to make the test-taker recoil from any desire to be on the side of the critics. (After all, D could have been worded to reflect what the critic of constructivism does say in the last sentence.) What normal American teacher would want to be seen as an indoctrinator, if that is how this test describes a teacher who thinks students should be taught how to read carefully in order to understand what an author has written, rather than as someone who inculcates democratic values by letting students decide for themselves the meaning of what an author has written?
VI. Summary and conclusions
My overall purpose in this study was to determine the extent to which licensure tests for both those who teach elementary children and those who supervise or supplement their work assess what is known from sound research about the teaching of three major components of reading instruction: the development of phonemic awareness, phonics, and vocabulary knowledge. These three components of a research-based approach to beginning reading pedagogy, for decades ignored, devalued, or distorted in most teacher preparation programs, are easy to identify in a test description if they are mentioned at all. Table 2, a summary table, lists all the licensure tests whose descriptions were analyzed for this study, together with my estimate of the percentage of the test items on each test addressing these three components.
Table 2. Estimated Percentage of Test Addressing Research-Based Knowledge on Development of Phonemic Awareness, Phonics, and Vocabulary Skills in Test Descriptions*
Licensure Tests for Prospective Elementary Teachers Assessing Reading Instructional Knowledge (see Appendix A)
Percent in Three Areas
PRAXIS 0011 (Elementary Education: Curriculum, Instruction, and Assessment), ETS (17 states)
PRAXIS 0012 (Elementary Education: Content Area Exercises), ETS (7 states)
PRAXIS 0014 (Elementary Education: Content Knowledge), ETS (22 states)
PRAXIS 0201 (Reading across the Curriculum: Elementary), ETS (1 state)
Multiple Subjects Exam (for Elementary Education), ABCTE
California RICA, NES** (1 state)
Illinois 110 (Elementary/Middle), NES (1 state)
Michigan 83 (Elementary Education), NES (1 state)
Massachusetts 90 (Foundations of Reading), NES*** (1 state)
New York 02 (Multi-Subject Test: Grades PreK-9), NES (1 state)
Oklahoma 50 (Elementary Education Subtest I), NES (1 state)
PRAXIS 0200 (Introduction to the Teaching of Reading), ETS
Reading Test for K-6, ABCTE
Illinois 177 (Reading Teacher), NES
Michigan 05 (Reading), NES
New York 65 (Literacy), NES
PRAXIS 0300 (Reading Specialist), ETS
Illinois 176 (Reading Specialist), NES
Massachusetts 08 (Reading Specialist), NES
Michigan 92 (Reading Specialist), NES
Oklahoma 15 (Reading Specialist), NES
Texas 151 (Reading Specialist) NES
PRAXIS 0020 (Early Childhood Education), ETS
Illinois 107 (Early Childhood Education), NES
Texas 101 (Early Childhood-4), NES
PRAXIS 0353 (Education of Exceptional Students: Core Content Knowledge), ETS
PRAXIS 0351 (Special Education: Knowledge-Based Core Principles), ETS
Illinois 155 (Learning Behavior Specialist I), NES
Michigan 63 (Learning Disabled), NES
New York 60 (Students with Disabilities, CST), NES
PRAXIS 0522 (Principles of Learning and Teaching: Grades K-6), ETS (18 states)
PRAXIS 0523 (Principles of Learning and Teaching: Grades 5-9), ETS
PRAXIS 0524 (Principles of Learning and Teaching: Grades 7-12), ETS
New York 90 (Elementary Assessment of Teaching Skills), NES (1 state)
* The number of states requiring each test for prospective elementary teachers appears in parentheses after the test’s title.
** Required of all prospective elementary and special education teachers in the state.
*** Required of all prospective elementary, early childhood, and special education teachers in the state.
I first analyzed the online profiles of current tests assessing reading instructional knowledge required by the states for licensing elementary teachers. I examined the contents, section weights, and accompanying sample questions when they were available for all ETS tests assessing reading instructional knowledge, for the test offered by ABCTE, and for a sampling of tests developed by NES for individual states with which it has a contract, estimating for each test the percentage of the total test seemingly addressing these three components. To judge by the topics mentioned in the profile of each test and the weights attached to the sections of the test containing these topics, the overall picture leaves much to be desired. ETS’s tests for elementary teachers appear to devote a tiny proportion of test content to these three components (although the actual test items on reading on the revised PRAXIS 0201, required so far by only one state, are an important exception). This can be explained for the most part by the acknowledgment on its website that the design of the content of ETS’s tests assessing reading pedagogy and the pedagogical approach highlighted if not promoted in their profiles and sample questions were guided by the professional standards promulgated by the International Reading Association and the National Council of Teachers of English. These standards reflect the whole language approach advocated by many if not most of the reading and English education faculty in our education schools; they clearly do not reflect a research-based approach to reading instruction.
The profiles of the tests assessing reading instructional knowledge developed by NES for the states that use its services range from some that are similar to the ETS tests to some that clearly address a substantial amount of research-based reading instructional knowledge (as does the ABCTE reading test under development). Apparently, the reading faculty in education schools and their professional organizations have not always had a strong influence on the licensure tests prospective teachers take in a state contracting with NES for its tests.
At present, over 35 states require at least one subject test offered by ETS for licensing elementary teachers, while over 12 states require a state-specific test developed under a contract with NES for prospective elementary teachers. Thus, one clear result of the mandate in Title II of the Higher Education Act is the nationwide use, with only a few exceptions, of state tests for elementary teachers that reflect the heavy hand of two professional organizations (National Council of Teachers of English and the International Reading Association) that actively discourage a research-based pedagogy for reading and to which most reading educators belong. Most of these tests in effect encourage teacher training programs to maintain their long-standing opposition to a research-based pedagogy for reading. The tests also serve to point new teachers away from the programmatic requirements for Reading First. This grim conclusion is supported by the results of a recent report prepared for the National Council for the Accreditation of Teacher Education (Rigden, 2006). This report offered an analysis of five tests offered by ETS and three tests developed by NES for three different states. It found little alignment between four of the five ETS tests and the requirements for Reading First. Clearly, the Reading First initiative fights an uphill battle to retrain elementary teachers who are mistrained in reading pedagogy in their preparation programs and then, in most states, licensed by tests that validate their mistraining.
In order to understand the total licensure picture affecting elementary reading instruction, I also analyzed the tests for licensing those educators who supervise, support, or supplement the work of the elementary classroom teacher: reading teachers, reading specialists, early childhood teachers, and special education teachers. It seems that elementary teachers across the country (depending on the tests required by their state) may be supported or supervised by other educators who have been licensed by tests assessing little if any of the knowledge base for reading pedagogy and often discrediting the pedagogy required for sound teaching. ETS’s subject tests for early childhood and special education appear to contain no items addressing this knowledge. Its original tests for a reading teacher license and a reading specialist license appear to contain an extremely small number of items addressing this knowledge. In addition, ETS’s pedagogical tests of the principles of teaching and learning, increasingly required (by states using ETS’s tests) of all teachers for their initial license or in their first year of teaching, denigrate non-constructivist approaches to teaching and learning in all subjects and at all grade levels. Clearly, the majority of states are increasingly buying into an interlocking network of licensure tests and teacher evaluation instruments promoting an empirically unsupported educational philosophy that should have been declared a failure and abandoned in light of the massive federal, state, and private funds that have been allocated to efforts to improve students’ reading skills in the past three decades.
Since teacher licensure is a state responsibility, each state needs to undertake its own critical examination of whatever group of tests it requires for those who teach elementary children or who supervise or support those who do, regardless of the test developer. It should first determine whether and to what extent they reflect the research-based knowledge underlying sound reading instruction. Second, each state should examine the pedagogy embedded in the tests of general pedagogical knowledge that it may require along with a subject test, as well as the pedagogy promoted in the observational instruments it may also require schools to use to assess a new teacher’s classroom performance. If the tests of general pedagogical knowledge used for licensing teachers, or the observational instruments used for rehiring new teachers, coerce them into adopting a whole language or constructivist approach to teaching and learning, these tests and instruments will undermine the benefits of sound reading methods courses and soundly constructed tests assessing research-based reading instructional knowledge. Third, each state should examine its own professional teaching standards; these standards should reflect this knowledge. Finally, the tests the state uses should be revised, if necessary, to assess that knowledge.
Title II’s requirement of nothing more than a state-determined passing score on a licensure test for prospective teachers of elementary children or their coaches, supervisors, or other colleagues has clearly not made most schools of education accountable for teaching them what they need to know in order to provide a research-based approach to reading instruction. Instead, it may, by default, be undermining the efforts of Reading First, an important programmatic piece of the No Child Left Behind Act of 2001, and contributing to the basically flat scores on the grades 4 and 8 reading tests given by the National Assessment of Educational Progress for over 30 years.
While it might be argued (and has been) that a state’s licensure tests for elementary teachers, reading teachers, and reading specialists should reflect what is taught in its education schools or current practices in the schools, this is an argument that not only flies in the face of the public interest today, it also reflects a profound misunderstanding of the nature of licensure tests in most professional fields. They generally do not assess knowledge of current practices in the field; rather they assess the basic knowledge needed for sound practice as determined through research and scholarship. Knowledge of current practice is expected to be gained from clinical experiences such as apprenticeships. The licensure test for Physician’s Assistant, for example, assesses not doctor/patient relationships but such items as knowledge of anatomy and how to calculate drug dosage given patient weight, age, and other variables, all based on scientific research. The bar exam for prospective lawyers assesses legal knowledge, not courtroom practices. Subject tests for licensure for those who teach reading in our public schools should be no different. If a state’s professional training programs do not provide the knowledge base that effective practitioners need, then the state needs to close down or replace its training institutions.
Title II should be amended to provide criteria for the content of all the licensure tests taken by prospective elementary, reading, early childhood, and special education teachers as well as reading specialists to ensure that these tests assess research-based reading instructional knowledge and thereby promote, not sabotage, the requirements for Reading First. These amendments might also recommend model tests as determined by the Institute for Educational Sciences, with financial incentives for states that develop or use sound tests. Requiring all states to use sound national standards or criteria in the development or choice of tests they require prospective teachers to take for licensure might be the most useful step Congress could take to raise the academic achievement of the students in our public schools.
References

Chall, J. (1996). Stages of Reading Development, Second Edition. Fort Worth, TX: Harcourt Brace. First published in 1963 by McGraw-Hill.
Chall, J. (1996). Learning to Read: The Great Debate, Third Edition. Fort Worth, TX: Harcourt Brace. First published in 1967 and in 1983 by McGraw-Hill.
Chall, J. (2000). The Academic Achievement Challenge: What Really Works in the Classroom? NY: Guilford Press.
Hall, T. & Strangman, N. (August 2001). Graphic Organizers. CAST: National Center on Accessing the General Curriculum. http://www.cast.org/ncac/index.cfm?I=3015
Kamii, C., & Dominick, A. (1998). The Harmful Effects of Algorithms in Grades 1-4. In The Teaching and Learning of Algorithms in School Mathematics, 1998 Yearbook. Washington, DC: National Council of Teachers of Mathematics.
National Council on Teacher Quality. (May 2006). What Teacher Preparation Programs Aren't Teaching About Reading--and What Elementary Teachers Aren't Learning. Washington, D.C.: National Council on Teacher Quality.
Rigden, D. (2006). Report on Licensure Alignment with the Essential Components of Effective Reading Instruction. Prepared for the National Council for the Accreditation of Teacher Education.
Steiner, D., with Rozen, S. (2004). Preparing Tomorrow’s Teachers: An Analysis of Syllabi from a Sample of America’s Schools of Education. In F.M. Hess, A.J. Rotherham, & K. Walsh (Eds.). A Qualified Teacher in Every Classroom? Appraising Old Answers and New Ideas. Cambridge: Harvard Education Press.
Stotsky, S. (2005). The State of State English Standards. Washington, DC: Thomas B. Fordham Foundation.
Appendix A: Profile of Eleven Tests for Elementary Teachers
1. PRAXIS 0201: Reading across the Curriculum: Elementary Test (ETS)
This test is for persons “completing teacher training programs with at least two or three courses in reading who are planning to teach at the elementary level or persons who are currently teaching and have the option of taking this test in lieu of state-mandated course work.” The content is based on “categories and competencies developed by the Professional Standards and Ethics Committee of the International Reading Association.” The test consists of 60 multiple-choice questions and three constructed-response questions involving the application of ideas and practices to reading instruction. The test is organized in six categories, with 64 sections in all. The website offers ten sample questions. The six categories and their weights are as follows:
I. Theory of Reading as a Process; Language Acquisition and Early Literacy (10%)
II. Reading Materials and Instruction; Reading Environment (15%)
III. Reading Comprehension (10%)
IV. Assessment of Reading (6.5%)
V. Vocabulary, Spelling, and Word Study (8.5%)
VI. Problem Solving Exercises (50%), a category containing three constructed-response questions addressing “analysis of student work and behavior; reading materials, instruction, and environment; and reading comprehension” (17% each)
2. PRAXIS 0011: Elementary Education: Curriculum, Instruction, and Assessment Test (ETS)
This test is designed for prospective teachers of elementary students who have completed a bachelor’s degree program in elementary or middle school education or have prepared themselves through an alternative certification program. The test consists of 110 multiple-choice questions and is organized in six categories, with dozens of items under the first category alone. The website provides three sample questions for the first category. All six categories and their weights are as follows:
I. Reading and Language Arts Curriculum, Instruction, and Assessment (35%)
1. Curriculum components
2. Instruction (divided into reading and writing)
II. Mathematics Curriculum, Instruction, and Assessment (20%)
III. Science Curriculum, Instruction, and Assessment (10%)
IV. Social Studies Curriculum, Instruction, and Assessment (10%)
V. Arts and Physical Education Curriculum, Instruction, and Assessment (10%)
VI. General Information about Curriculum, Instruction, and Assessment (15%)
3. PRAXIS 0012: Elementary Education: Content Area Exercises (ETS)
This test is designed to measure how well prospective teachers of elementary school students can respond to extended exercises that “require thoughtful written responses.” For example, “an exercise might cover instructional approaches using trade books to teach reading/language arts in a first-grade classroom.” The test consists of four essay questions and is organized in four content categories:
I. Reading/language arts (25%)
II. Mathematics (25%)
III. Science or Social Studies (25%)
IV. Interdisciplinary Instruction (25%)
4. PRAXIS 0014: Elementary Education: Content Knowledge Test (ETS)
This test is designed for prospective teachers of elementary school children. It consists of 120 multiple-choice questions and is organized in four categories. The first category addresses five topics: understanding literature (7.5%); text structure (1%); reading instruction (7.5%); writing instruction (6%); and communication skills (2.5%), with each topic containing multiple items. (The weight following each topic refers to its percentage on the whole test.) The website provides six sample questions to address the first category. The four categories and their weights are as follows:
I. Language Arts (25%)
II. Mathematics (25%)
III. Social Studies (25%)
IV. Science (25%).
5. Multiple Subject Exam for Elementary Education Certification (ABCTE)
A staff member of ABCTE provided the following information on this test for prospective teachers of grades K to 6 and indicated that it will soon be on ABCTE’s website. The test consists of 125 multiple-choice questions and is organized in four categories. The first category is divided into four major sections, most containing several items. No sample questions are on the website yet. The four categories and their weights are as follows:
I. Reading and English Language Arts (32%)
1. Alphabetics (6.4%)
2. Fluency (3.2%)
3. Comprehension of Texts (11.2%)
4. Oral and Written Language Development (11.2%)
II. Mathematics (28%)
III. Science (20%)
IV. Social Studies (20%).
6. California RICA: Reading Instruction Competence Assessment (NES)
This test is required of all prospective elementary and special education teachers in California. The test consists of 70 multiple-choice questions and five constructed-response questions. It is organized in four content categories, and the constructed-response questions are keyed to these categories: four focus on educational problems and instructional tasks, and the fifth is a case study. The four categories are divided into sections, each containing multiple items. There are 14 multiple-choice questions for the first category, 21 for the second, 21 for the third, and 14 for the fourth. These 70 questions are worth 60 points in all, or 50% of the test; the five constructed-response questions are worth another 60 points, or the remaining 50%. Many sample questions are offered on the website.
I. Planning and Organizing Reading Instruction Based on Ongoing Assessment
II. Developing Phonological and Other Linguistic Processes Related to Reading
III. Developing Reading Comprehension and Promoting Independent Reading
IV. Supporting Reading Through Oral and Written Language Development
V. Focused Educational Problems and Instructional Tasks
VI. Case Study
7. Illinois 110: Elementary/Middle Grades Test (NES)
This test is required for prospective teachers of elementary school and the middle grades in Illinois. The test consists of 125 multiple-choice questions and is organized in five categories, with 22 sections in all, each containing multiple items. The first category, Language Arts and Literacy, contains five sections. No weights are provided on the website. The Study Guide provides 20 sample questions. The five categories are as follows:
I. Language Arts and Literacy
IV. Social Sciences
V. The Arts, Health, and Physical Education
8. Michigan 83: Elementary Education Test (NES)
This test is required for prospective teachers of elementary children in Michigan. It consists of 100 multiple-choice questions and is organized in six categories. The first category is divided into 12 sections, with multiple items in each section. The Study Guide provides 10 sample questions. The six categories and their weights are as follows:
I. Language Arts (24%)
II. Mathematics (20%)
III. Social Studies (15%)
IV. Science (15%)
V. The Arts (13%)
VI. Health and Physical Education (13%)
9. Massachusetts 90: Foundations of Reading Test (NES)
This test is required for prospective teachers of early childhood (preK-2), grades 1-6, and children with moderate disabilities from preK-8, in addition to a second subject test covering other major subjects taught in the elementary school (mathematics, science, history, geography, writing, grammar, and children’s literature). The test consists of 100 multiple-choice questions and two constructed-response questions, one of which addresses reading skills. It is organized in four categories, with ten sections in all, each containing multiple items. The Study Guide provides 10 sample questions.
I. Foundations of Reading Development (35%)
1. Phonological and phonemic awareness (8.75%)
2. Concepts of print and the alphabetic principle (8.75%)
3. Role of phonics (8.75%)
4. Word analysis skills and strategies (8.75%)
II. Development of Reading Comprehension (27%)
1. Vocabulary development (9%)
2. Comprehension of imaginative/literary texts (9%)
3. Comprehension of informational/expository texts (9%)
III. Reading Assessment and Instruction (18%)
1. Formal and informal assessment methods (9%)
2. Multiple approaches to reading instruction (9%)
IV. Integration of Knowledge and Understanding (20%), a category consisting of two broad essay questions, each worth 10%
10. New York 02: Multi-Subject Test: Grades PreK-9 (NES)
This test is required for all prospective teachers from PreK-9 in the state of New York. It consists of 90 multiple-choice questions and one constructed-response question. It is organized in eight categories, with many sections overall. The English Language Arts category contains eight sections, each with multiple items. Category VIII, on the Foundations of Reading, contains the one constructed-response question. The Study Guide provides 9 sample questions to address these two categories. The eight categories are as follows:
I. English Language Arts (21%)
II. Mathematics (18%)
III. Science and Technology (13%)
IV. Social Studies (15%)
V. The Fine Arts (8%)
VI. Health and Fitness (8%)
VII. Family and Consumer Science and Career Development (7%)
VIII. Foundations of Reading: Constructed-Response Assignment (10%)
11. Oklahoma 50: Elementary Education Subtest I (NES)
This test is required for all prospective elementary teachers in Oklahoma. The multiple-choice questions are worth 85% of the test, and the one constructed-response question, which addresses reading, is worth 15%. The test is organized in three categories, with 17 sections in all, each containing multiple items. The first two categories, Reading and Language Arts, contain 11 of the 17 sections. The Study Guide provides 5 sample questions for these 11 sections. The three categories, with my estimate of their weights, are as follows:
I. Reading (44%)
II. Language Arts (27%)
III. Social Studies (28%)
Appendix B: Profile of Five Tests for Reading Teachers
* PRAXIS 0200: Introduction to the Teaching of Reading Test (ETS)
The Introduction to the Teaching of Reading Test, as described by ETS, is for examinees who have completed an undergraduate elementary or secondary education program that included about three to four semester hours on the teaching of reading. PRAXIS 0200 is taken chiefly for a license or endorsement as a reading teacher.
The test consists of 100 multiple-choice questions and is organized in five categories, with 12 sections in all, each with multiple items. The first category addresses theoretical approaches to the construction of meaning and the interrelatedness of the language processes. The second addresses various “considerations” of a text ranging from its structure and syntactic complexity to story grammars and different types of “cues.” The third addresses specific pedagogical strategies such as reciprocal teaching, age-appropriate strategies (unspecified), ways to group children and organize a classroom, content area reading, study skills, and types of assessment. The fourth category addresses ways to use the arts for motivating reading and writing. The fifth focuses on knowledge of strategies and materials that are assumed to be useful in developing reading and writing skills in children with specific ethnic, socioeconomic, and cultural characteristics. The website provides 12 sample questions. The five categories and their weights are as follows:
I. Reading as a Language-Thought Process (15%)
II. Text Structure (20%)
III. Instructional Processes in the Teaching of Reading (40%)
IV. Affective Aspects (10%)
V. Environmental/Sociocultural Factors (15%)
* Test for Teachers of Reading, K-6 (ABCTE, under development)
A staff member of ABCTE provided the following information on this test and indicated that it will soon be on ABCTE’s website. Now under development, this test is for an elementary or special education teacher in grades K to 6 who works regularly in a classroom setting. It addresses more advanced knowledge of reading instruction than does the Multiple Subject Exam. The test consists of 125 multiple-choice questions and is organized in eight categories, each containing many items. No sample questions are available yet.
I. Evaluation of reading programs and recommended pedagogy (7%)
II. Phonemic awareness (12%)
III. Phonics (12%)
IV. Fluency (12%)
V. Vocabulary and concept development (14%)
VI. Understanding of informational texts (15%)
VII. Understanding of literary texts (15%)
VIII. Differentiated instruction (12%)
* Illinois 177: Reading Teacher Test (NES)
This test is not required for prospective teachers of elementary, special education, or young children in Illinois. It is required for a license or endorsement as a reading teacher. The test consists of 125 multiple-choice questions and is organized in four categories, with 21 sections in all, each section containing multiple items. No weights for the categories are provided on the website. The Study Guide provides 20 sample questions. The four categories are as follows:
I. Language, Reading, and Literacy
II. Reading Instruction
IV. Professional Roles and Responsibilities
* Michigan 05: Reading Test (NES)
This test is not required for prospective teachers of elementary, special education, or young children in Michigan. It is required for a license or endorsement as a reading teacher. The test consists of 100 multiple-choice questions and is organized in six categories, with 30 sections in all, each section containing multiple items. The Study Guide provides 20 sample questions. The six categories and their weights are as follows:
I. Meaning and Communication (19%)
II. Genres and Craft of Literature and Language (15%)
III. Skills and Processes (18%)
IV. Instruction (18%)
V. Assessment (15%)
VI. Professional, Program, and Curriculum Development (15%)
* New York 65: Literacy Test (NES)
This test is required for a license or endorsement as a teacher of reading. It consists of 90 multiple-choice questions and one constructed-response question. It is organized in four categories, with 20 sections in all, each containing multiple items. The Study Guide provides 20 sample questions. The four categories and their weights are as follows:
I. Foundations of Literacy (23%)
II. Reading Instruction and Assessment (45%)
III. The Role of the Literacy Professional (22%)
IV. Foundations of Literacy, Reading Instruction and Assessment: Constructed-Response Assignment (10%)
Appendix C: Profile of Six Tests for Reading Specialists
* PRAXIS 0300: Reading Specialist Test (ETS)
This test consists of 120 multiple-choice questions and is organized in four categories, with 45 sections in all. The website provides 12 sample questions. The four categories and their weights are as follows:
I. Theoretical and Knowledge Bases of Reading (15%)
II. Application of Theoretical and Knowledge Bases of Reading in Instruction (45%)
III. Application of Theoretical and Knowledge Bases of Reading Diagnosis and Assessment (27%)
IV. Reading Leadership (10%)
* Illinois 176: Reading Specialist Test (NES)
This test consists of 125 multiple-choice questions. It is organized in four categories, with 23 sections in all, each containing multiple items. No weights are provided on the website. The Study Guide provides 20 sample questions. The four categories are as follows:
I. Language, Reading, and Literacy
II. Reading Instruction and Assessment
III. Reading Research and Curriculum Design
IV. Professional Responsibilities and Resource Management
* Massachusetts 08: Reading Specialist Test (NES)
This test consists of 100 multiple-choice questions and two constructed-response questions (for Section V). It is organized in five categories, with 20 sections in all, each containing multiple items. The Study Guide provides 11 sample questions. The five categories and their weights are as follows:
I. Reading Processes and Development (32%)
II. Reading Assessment (16%)
III. Reading Instruction (16%)
IV. Professional Knowledge and Roles of the Reading Specialist (16%)
V. Integration of Knowledge and Understanding (20%)
* Michigan 92: Reading Specialist Test (NES)
This test consists of 100 multiple-choice questions and is organized in six categories, with 30 sections in all, each section containing multiple items. The Study Guide provides 20 sample questions. The six categories and their weights are as follows:
I. Meaning and Communication (16%)
II. Genres and Craft of Literature and Language (14%)
III. Skills and Processes (16%)
IV. Instruction (20%)
V. Assessment (14%)
VI. Professional, Program, and Curriculum Development (20%)
* Oklahoma 15: Reading Specialist Test (NES)
This test consists of multiple-choice questions and one constructed-response question. It is organized in four categories, with 24 sections in all, each containing multiple items. The multiple-choice questions are worth 85% of the test, and the constructed-response question, anchored to Category II, is worth 15%. The Study Guide provides 10 sample questions. The four categories are as follows:
I. Foundations of Reading
II. Instructional Practices
III. Assessment, Diagnosis, and Evaluation
IV. Role of the Reading Professional
* Texas 151: Reading Specialist Test (NES)
This test consists of 110 multiple-choice questions and is organized in four categories, with 14 sections in all, each containing multiple items. The Study Guide provides 17 sample questions. The categories and their weights are as follows:
I. Instruction and Assessment: Components of Literacy (57%)
II. Instruction and Assessment: Resources and Procedures (14%)
III. Meeting the Needs of Individual Students (14%)
IV. Professional Knowledge and Leadership (14%)
Appendix D: Profile of Three Tests for Early Childhood Teachers
* Praxis 0020: Early Childhood Education Test (ETS)
According to ETS, this test is intended primarily for “examinees who have completed their undergraduate preparation and are prospective teachers of preschool through primary grade students.” The test consists of 120 multiple-choice questions and is organized in six content categories:
I. Nature of the Growth, Development, and Learning of Young Children (31%)
II. Factors that Influence Individual Growth and Development (12%)
III. Applications of Developmental and Curriculum Theory (12%)
IV. Planning and Implementing Curriculum (29%)
V. Evaluating, Reporting Student Progress and Effectiveness of Instruction (12%)
VI. Understanding Professional and Legal Responsibilities (6%)
* Illinois 107: Early Childhood Education Test (NES)
This test consists of 110 multiple-choice questions and is organized in three categories, with 14 sections, each containing multiple items. The Study Guide provides 20 sample questions. The first category, Language and Literacy Development, contains five sections. The three categories are as follows:
I. Language and Literacy Development
II. Learning Across the Curriculum
III. Diversity, Collaboration and Professionalism in the Early Childhood Program
* Texas 101: Early Childhood-4 Test (NES)
This test is required of prospective teachers from kindergarten to grade 4. It contains 110 questions that are organized in five categories, with many sections in all, each containing multiple items. The first category contains 11 sections. The categories and their weights are as follows:
I. English Language Arts and Reading (40%)
II. Mathematics (15%)
III. Social Studies (15%)
IV. Science (15%)
V. Fine Arts, Health, and Physical Education (15%)
Appendix E: Why the Massachusetts Tests are Different
History of the current Massachusetts tests of reading instructional knowledge
Why are the Massachusetts tests of reading instructional knowledge so different from those offered by ETS and in some of the NES states, especially since the documents on which they were based predated passage of the No Child Left Behind Act and the requirements of Reading First?
The pass rates for the first cohort of prospective teachers to take the Massachusetts teacher tests led the board of education to ask its department of education in 1999 to revise the prevailing licensing regulations in order to increase academic requirements for prospective teachers wherever necessary. With the development of the first sets of preK-12 standards in mathematics, science, history and social science, and the English language arts and reading between 1995 and 1998, licensure regulations could now specify more clearly than before the academic background a prospective teacher needed for teaching any of these subjects.
The first set of teacher tests given in 1998 did not include a separate test of reading for future teachers of young children, elementary children, and children with moderate disabilities. Each group took only one license-specific subject matter test for licensure. The content of these tests by law had to be based on the prevailing licensing regulations as interpreted by department staff, higher education faculty, and practicing teachers licensed in these areas. Not one of these subject matter tests emphasized the development of reading skills. Only two of the 18 objectives on the first early childhood test, only three of the 31 objectives on the first elementary test, and only one of the 15 objectives on the first test for teachers of children with moderate disabilities (this is a generous interpretation) addressed reading. Given the construction of these three subject matter tests, not one test would have compelled the education faculty in institutions offering licensure programs in these three areas to concentrate on the research-based knowledge needed for teaching reading in their programs. And, even if many of their students failed the test, there was still no compelling reason to spend more time on the knowledge base needed for reading instruction in the education courses they offered. Because of the compensatory scoring system used for these teacher tests, no one section of the test needed to be mastered, and indeed prospective teachers could fail all the items in, for example, reading and mathematics and still pass, given where the cut scores had been set.
The department’s proposal in 1999 (approved in 2000 by the board of education) to require both a test of reading instructional knowledge and another subject matter test for prospective elementary and early childhood teachers was not opposed by teachers in the field or by education faculty in the state’s higher education institutions. (An amendment to the regulations in 2003 extended the requirement to prospective special education teachers.) Everyone, especially elementary school principals, agreed that all new teachers of elementary children needed much more background knowledge for reading instruction than they had received in their training programs.
In collaboration with NES, the department asked a number of well-regarded reading education faculty and nationally known reading researchers in Massachusetts, as well as licensed teachers, to assist in the development of the major categories, objectives, and weights for this test. A draft of the test’s objectives was sent for comment to all institutions of higher education in the state with elementary and early childhood programs (about 40) and discussed at many meetings with higher education faculty. The final draft was approved by a large committee that also reviewed and approved the pool of test items for the first test. (As required by NES’s planning documents for all tests, these same procedures were followed to develop the reading specialist test.) At no point did the department receive serious criticism about the requirement of a separate reading test for these two licenses, or about the outline of test objectives and their weights.
The only serious problem that has emerged with the test has been its cut score and the failure rate. The cut score set by the first standard-setting committee after the first administration of the test in September 2002 (taken by a small, atypical group of test-takers who were not yet required by the 2000 regulations to take the test for licensure) was so low that almost every test-taker passed the test on that administration and on the next three administrations of the test. Test-takers could easily fail important objectives on the test but, because of compensatory scoring, pass the test. A year later, at the commissioner’s request and upon recommendation from NES, a new standard-setting committee was formed, and it set the cut score higher, leading to a much higher failure rate for first-time test-takers on subsequent administrations of the test (from September 2003 on). In general, test-takers must answer correctly some questions on all sections of the test in order to pass. One immediate result of this higher failure rate for first-time test-takers was a dramatic increase in test preparation workshops at institutions with licensure programs in these two areas.
But some faculty members in early childhood were more concerned about the differential effects of this higher cut score: a much higher failure rate for prospective early childhood teachers than for prospective elementary teachers. How much coursework in reading instruction they were actually getting across institutions is unknown, but several early childhood educators had already expressed concern to the department in 2002 that the newly required Foundations of Reading Test expected too much of prospective teachers who could not teach beyond grade 2 under an early childhood license. After many discussions with its members, the state professional association for these early childhood faculty members agreed in 2003 that their students should be required to take the same reading test taken by elementary teachers (licensed to teach grades 1 through 6), because these faculty members had received complaints from the elementary school principals who hired their graduates about those graduates’ lack of preparation in reading.
Educators in charge of early childhood licensure programs also have other considerations in mind that are national in scope. There is currently a nationwide effort to increase both the qualifications of pre-school teachers and their numbers. The pool of individuals interested in teaching pre-school and well qualified to do so has been shrinking. One policy some educators would like to see funded (massively) by state and federal governments would require pre-school/day care staff with high school diplomas to enroll in a relevant AA degree program at a community college, and would encourage or require those who already hold an AA degree to complete two more years of undergraduate work for a BA degree in an early childhood licensure program, once transition details can be worked out. Armed with a BA degree and a teaching license (which one cannot get without a BA degree in most states), they could then, it is assumed, command a much better salary and offer an employer higher academic qualifications. A rigorous test of reading instructional knowledge may well be perceived as a roadblock by cash-starved public or small private colleges that offer an early childhood licensure program.
On the other hand, since the early childhood license in many states spans the pre-school years through grade 2 or 3, it is not unreasonable for the public to expect someone receiving a license to teach grades 1, 2, and 3 to know how to teach reading (and arithmetic) well. Many pre-school teachers who have gone on to get their BA degree and an early childhood license have transferred as quickly as they could into the public schools for the higher salaries they can get there. How the academic needs of school-age children will be safeguarded by state and federal public policies created to address the need for more and better qualified pre-school teachers remains to be seen. A test for the early childhood license that enables its holders to teach up to grade 3 and that does not assess knowledge of reading instruction or the other subjects taught through grade 3 fails as a safeguard of all children’s best interests.