By Malbert Smith Ph.D., Jason Turner and Steve Lattanzio, MetaMetrics®, July 2015
The North Carolina Center for Afterschool Programs (NC CAP) recently released their 2014 Roadmap of Need report detailing each county’s ranking by multiple essential indicators. The annual report ranks North Carolina (NC) counties on the basis of a wide range of variables— from economic factors to health and wellness indicators to educational achievement. The report offers a compelling look at NC county by county and provides an excellent starting point for considering those counties with the most need. Last year, in response to the 2013 report, we wrote ‘The NC CAP Roadmap of Need Supports the Importance of the Read to Achieve Act’. In that paper we utilized the raw data to analyze educational attainment across NC districts. In response to this year’s NC CAP report, the 2014 Roadmap of Need, we pursued a slightly different analysis, focusing on 3rd grade reading performance over the last two years as NC has raised achievement levels. We compared each district’s performance on the North Carolina annual End-of-Grade assessment (NC EOG) across three dimensions: change in percent proficient, change in EOG scale score and change in Lexile® measure.
By Jason Turner, Malbert Smith Ph.D. and Steve Lattanzio, MetaMetrics, October 2014
The latest Gallup public confidence survey indicates that confidence in public education has slid to an all-time low of 26%. Yet, U.S. student performance is actually improving across a variety of metrics. In this analysis, NAEP Long Term Trends, PIRLS, TIMSS, graduation rates, AP enrollment and post-secondary matriculation were examined. Across all of these essential markers, U.S. student performance has been trending upward for several decades. So why is public confidence so estranged from this record of empirical success?
Several factors might be at play. First, our society has grown increasingly cynical; confidence in most public institutions has dropped. Second, over recent years people have become aware that many of our international peers outrank the U.S. in terms of academic achievement. Third, the pronounced backlash against the Common Core State Standards (CCSS) and the negative media attention have helped fuel the impression that public education has taken a wrong turn. Lastly, 21 states have implemented 3rd grade reading initiatives designed to ensure that all students are reading on grade level by the end of 3rd grade. Many of those policies require retention in the event that students are not reading on grade level, which might seem punitive and draconian to parents.
By Malbert Smith Ph.D., Jason Turner and Steve Lattanzio, MetaMetrics, April 2014
The North Carolina Center for Afterschool Programs (NC CAP) recently released their 2013 "Roadmap of Need" report detailing each North Carolina county's ranking by multiple essential indicators. The report ranks NC counties on the basis of a wide range of variables - from economic factors to health and wellness indicators to educational achievement. In this paper, variables were identified within each county that were most predictive of college and career readiness. Since 3rd grade reading performance correlates so highly with college and career readiness (ACT performance), the use of periodic formative assessment could allow NC educators the opportunity to identify reading difficulties and intervene even earlier. Furthermore, due to the disproportionate amount of reading growth that occurs in the early formative years - between kindergarten and 3rd grade - policy and legislation with an early emphasis on reading development are essential. Given the critical importance of 3rd grade reading performance in preparing students for life beyond high school, NC's Read to Achieve Act represents a vitally important marker in our effort to ensure that every student in NC graduates college and career ready.
By Matt Copeland, Lexile Curriculum Specialist, MetaMetrics Inc. and David Liben, Senior Content Specialist, Student Achievement Partners (SAP), October 2013
As schools across the country continue their implementation of standards for college and career readiness, educators find themselves re-examining and reconsidering the complexity of the texts they ask students to read. And rightly so. Research “makes clear that the complexity level of the texts students read are significantly below what is required to achieve college and career readiness” (Coleman & Pimentel, 2012).
One of the questions most frequently asked is how educators and parents can find valid and reliable measures of text complexity to address this need. Luckily, there are a number of quick and easy methods an individual can use to find a Lexile® measure of a text. With this information, educators and parents can better match their students to texts with the appropriate level of complexity.
By Malbert Smith III, Jason Turner and Steve Lattanzio, August 2013
Since 1973, Gallup has conducted an annual public confidence survey in which Americans rate their confidence in sixteen public institutions. Last year’s results generated the headline, “Confidence in U.S. Public Schools at New Low” (Jones, 2012). Puzzled and concerned by this trend, we examined empirical performance measures of U.S. public schools to see if public perceptions were, in fact, tethered to reality. In our paper, ‘Restoring Faith in Public Education’ (Smith, Turner & Lattanzio, 2012), we plotted National Assessment of Educational Progress (NAEP) Long Term scores in reading and mathematics, Trends in International Math and Science Study (TIMSS) scores, Progress in International Reading Literacy Study (PIRLS) scores, and high school drop-out rates against the plummeting public school confidence trend line. Our analysis indicated that such a dismal perception was not warranted when considered against these empirical benchmarks.
On June 13, 2013, Gallup reported this year’s survey results and fortunately, education was not the headline story. The big story generated from this year’s survey was that public confidence in Congress had reached an all-time low. Just 10 percent of respondents reported having confidence in Congress (Mendes & Wilke, 2013).
Confidence in public schools, on the other hand, experienced a slight uptick with 32 percent of respondents reporting confidence in public education (Mendes & Wilke, 2013). That’s up 3 percentage points over last year’s poll results. While that may seem like an encouraging sign, it’s worth noting that the poll’s margin of error is +/- 3 percent. This means, for all intents and purposes, public confidence in education remains essentially unchanged.
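The margin-of-error reasoning above can be checked directly. As an illustrative sketch (the sample size of roughly 1,000 respondents is an assumption typical of Gallup national polls, not a figure reported here), the 95% margin of error for a sample proportion is approximately 1.96 × √(p(1−p)/n):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed sample size of ~1,000 respondents (typical for Gallup polls).
moe = margin_of_error(p=0.32, n=1000)
print(f"{moe:.1%}")  # roughly 2.9%, i.e. about +/- 3 percentage points
```

Under that assumption, a 3-point year-over-year change falls inside the poll's margin of error, which is why the uptick cannot be read as a real shift in confidence.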
By Malbert Smith III, Jason Turner and Steve Lattanzio, MetaMetrics®, September 2012
Featured in Education Week. Vol 32, No. 7. October 10, 2012.
Gallup’s July 2012 ‘Confidence in Institutions’ survey reveals a disheartening lack of confidence in U.S. public schools. While the majority of Americans continue to express confidence in institutions like the military and police, those same respondents expressed a much more dismal view of public education. Participants indicating ‘a great deal’ or ‘quite a lot’ of confidence in public K-12 education fell to an all-time low of around 29%— a five-percentage-point decrease from 2008 and a drop of 29 percentage points from 1973 when Gallup first began including public schools in its survey and public confidence was around 58%.
Unfortunately, faith in the public schools has been steadily eroding since 1973. By 1982, just 42% of respondents reported confidence in public schools; and while 1985 and 1988 saw a slight rebound in positive public perception, peaks and valleys notwithstanding, the trend has been clearly downward.
By Bethany Hudnutt, MetaMetrics, July 2012
“Swiss cheese mathematics.” This metaphor aptly describes some students’ cognitive mathematical landscapes. When students miss connections among mathematical ideas, they may seem to have some substantive grasp and intuition on a topic at hand, yet they may still have major holes in their knowledge which prevent mastery of interrelated concepts. Those holes become limitations to the development of further mathematical knowledge.
By Malbert Smith, Ph.D. and Todd Sandvik, MetaMetrics, June 2012
During the past year, we have traveled the world to meet with leaders in educational assessment, technology, publishing, and research. As they have described the challenges facing education within their countries and organizations—and the strategies for confronting them—four common trends have emerged. Generally speaking, they represent advancements in thinking that can be traced to an increasingly global orientation and growing digital capabilities. Each trend represents real opportunities to improve learning and better meet the needs of students, parents, and educators. Our work with the Lexile® Framework for Reading offers examples of benefits being achieved today in support of these trends.
By Malbert Smith, Ph. D. and Jason Turner, MetaMetrics, April 2012
As the rigorous Common Core State Standards in Mathematics move from the adoption stage into the implementation stage, it is imperative that classroom educators be given the tools and resources which will allow them to move beyond whole class instruction and begin to differentiate for math students at every level. It is likely that math classrooms will continue to present a wide range of student abilities. Harnessing the Quantile Framework, however, frees educators from the constraints of the ‘R-V’ (Repetition and Volume) Model of classroom instruction. By utilizing a common scale – along with the technology and resources that support its application – educators finally have the tools needed to differentiate for struggling math learners and to ensure that all students are provided targeted instruction that matches their current ability level. Additionally, the free tools provided through the Quantile Framework enable math educators to access resources and target students at just the right level, making meaningful differentiation not only possible, but more practical and likely.
by Malbert Smith, Ph.D., MetaMetrics, February 2012
Since the Common Core State Standards were published last year, much national attention has focused on the importance of text complexity in evaluating college- and career-readiness. Common Core authors David Coleman and Sue Pimentel have stated that understanding and measuring text complexity is a major shift in the new English Language Arts Standards and that these criteria are key to determining if students are adequately prepared for the academic and professional reading demands they will likely face after high school (The Hunt Institute, 2011). Subsequent reports, “Publishers’ Criteria for the Common Core State Standards in English Language Arts and Literacy” (for grades K–2 and grades 3–12) and the more recent, “Measures of Text Difficulty: Testing Their Predictive Values for Grade Levels and Student Performance,” have echoed these same text complexity themes.
MetaMetrics focuses on the importance of matching individual readers with targeted texts that provide the right level of challenge to support continued reading growth. Long before the Common Core movement, The Lexile® Framework for Reading played an important role in articulating the reading demands typically encountered in first grade through college and careers. In fact, MetaMetrics’ research on K-12 reading demands and ultimately those of the postsecondary world are annotated in the text complexity “staircase” in the Standards’ English Language Arts Appendix A. This staircase approach to text complexity is designed to help guide students’ reading comprehension development through their school years.
The subsequent reports noted earlier were intended to provide policy makers, educators and publishers with additional information and guidance on the value of measuring text complexity. While the “Publishers’ Criteria” reports primarily reiterate much of the research contained in the Common Core, the “Measures of Text Difficulty” report offers a detailed analysis of the text complexity landscape in terms of the tools commonly found in the marketplace. The report compares six text complexity tools—Carnegie Mellon University’s and the University of Pittsburgh’s REAder-specific Practice (REAP), Renaissance Learning’s ATOS, Questar Assessment’s Degrees of Reading Power® (DRP®), Pearson’s Reading Maturity Metric, ETS’s SourceRater and MetaMetrics’ Lexile measure—using various criterion outcomes. A seventh tool, Coh-Metrix’s Text Easability Assessor, is also briefly described. In summary, the report states that “there is no agreed upon gold standard” for evaluating text complexity. Its comparisons of the text complexity tools demonstrate that while they share some commonalities, there are also distinct differences (Nelson, Perfetti, Liben, & Liben, 2011). Building upon the report’s findings, this document provides a contextual framework for how these similarities and differences could be interpreted and used by the educational and publishing communities when selecting a text complexity tool.
by Colin Emerson, MetaMetrics, October 2011
The recent release of the English Language Syllabus 2010 (ELS2010) in Singapore has brought renewed interest in the way that the Singaporean education system prepares its students to read, write, speak, and hear the English language. As reading plays an important role in the development of literacy skills and general English language abilities, it is necessary to consider how students can best develop a strong foundation in and love of reading. While the ELS2010 sets out specific learning outcomes and guidelines for achievement, instruction, and assessment for learning, the standards are qualitative in nature. The purpose of this paper is to examine how the reading-specific standards set forth in ELS2010 can be strengthened for the STELLAR program using a tool to measure text complexity and reader ability, The Lexile Framework for Reading (LFR). The STELLAR program serves as the main literacy development program at the primary school level, and it is built on a pedagogic model that allows for analysis of both instructional benefits and policy implications of linking it to the LFR. The classroom activities of the STELLAR program are analyzed for areas where enhancements for students, teachers, parents, and policymakers can be made using the LFR. Consideration of STELLAR and the LFR shows that standards backed by a quantitative set of measures could inform not only enhancements for classroom instruction, but also policymaking at the school and national levels. In addition, the examination identifies other aspects of the ELS2010 curriculum that could be enhanced by linking to The Lexile Frameworks for Reading and Writing.
by Carl W. Swartz, Ph.D., Sean T. Hanlon, A. Jackson Stenner, Ph.D., Hal Burdick and Donald S. Burdick, Ph.D., MetaMetrics, October 2011
English is the unofficial technical and business language of the world. Estimates suggest that more than 1 billion people worldwide use English to varying degrees of understanding and expression. A common second language like English enables the internet to function as a digital passport allowing those whose first language might be Russian, Arabic, Cantonese, French, Spanish, or Hindi to cross international borders and share understanding of local, national, and international events and cultures. The purpose of this study was to investigate the text complexity of online English language newspapers sampled from around the world. The results of this study suggest that the text complexity of online English newspapers is commensurate with the complexity of text encountered by readers in two- and four-year colleges and universities and in the workplace and is slightly higher than the text demands of domestic newspapers. Text at this high level may prove to be a barrier to understanding across borders and cultures. But the level of text complexity sets an implicit aspirational goal for those who desire to be educated in or work in the United States. Our goal is not to advocate for lowering the text complexity of online English newspapers, but to enhance the reading ability of all English language learners who desire to access the information and knowledge contained in college and career texts.
by Carl W. Swartz, Ph.D., A. Jackson Stenner, Ph.D., Sean T. Hanlon, Hal Burdick, and Donald S. Burdick
Research suggests students may not maintain or attain a sufficient degree of reading expertise to achieve college and career readiness in literacy (Achieve, 2005; ACT, 2005, 2006; Alliance for Excellent Education, 2006). The new-found focus on college and career readiness provides an opportunity to further develop the instructional ingredients critical to promoting expertise such that each reader is placed on a growth trajectory predictive of college and career readiness.
Developing expertise in any field of endeavor requires immersing people in activities targeted to their abilities with opportunities to receive feedback and independent practice over long periods of time. Applying these principles in the classroom, so that each student has an opportunity to develop expertise in literacy, will require using technology that supports the teacher. Learning Oasis™ is one such technology.
by Malbert Smith, III, Ph.D., MetaMetrics, May 2009
Today, it is "in vogue" to write, talk and think about the measurement of 21st century skills. Generally, these discussions focus on what should be measured (e.g., critical thinking, cultural awareness, digital literacy), but not necessarily how these constructs should be measured.
More than 30 years ago, legendary assessment guru Oscar K. Buros reflected on the last 50 years of testing (Buros, 1977). His concern about the lack of progress made in the testing field was punctuated in the following: "If you would examine these books and the best of the achievement and intelligence tests then available, you might be surprised that so little progress has been made in the past fifty years-in fact, in some areas we are not doing as well. Except for the tremendous advances in electronic scoring, analysis, and reporting of test results, we don't have a great deal to show for fifty years of work. Essentially, achievement tests are being constructed today in the same way they were fifty years ago-the major changes being the use of more sophisticated statistical procedures for doing what we did then-mistakes and all" [p. 10].
by Malbert Smith, III, Ph.D. and Dee Brewer, M.A., M.Ed., MetaMetrics, March 2007
Every year, U.S. students go to school for an average of 180 days. During that time, most progress along a learning trajectory and grow in terms of knowledge and skills. However, when summer break comes along, the formal learning process often ends, and many students, particularly those from low-income families, begin to show learning losses. In fact, research shows that all students experience learning loss when they do not engage in educational activities during the summer.
...Scientific research over decades has confirmed that, without intervention, children who start school behind likely will stay behind and that children who cannot read at grade level by the fourth grade will likely face an ongoing struggle to learn...
by Gary L. Williamson, Ph.D., MetaMetrics, October 2006
A great deal has been written in the last few decades about the condition of public education in the United States. Many authors have argued that students are unprepared after high school for the variety of experiences they seek in life, not only those experiences related to higher education but also those related to the military, the workplace and the day-to-day responsibilities of citizenship.
...Two threads of recent research provide one possible way of developing and aligning (or at least informing the discussion about) student achievement standards for K-16...
by Gary L. Williamson, Ph.D., MetaMetrics, September 2006
When the No Child Left Behind (NCLB) Act of 2001 was signed into law in January 2002, it established sweeping requirements related to annual achievement testing for state accountability purposes. Since that time, states have re-evaluated their testing programs, and in some cases expanded them, to ensure they meet the requirements of the law. However, states and school districts use achievement tests for purposes other than accountability (e.g., for instructional or programmatic monitoring) and often supplement the annual accountability assessments required by NCLB with interim tests during the school year to gauge whether they are on track to meet the annual achievement targets required by the law. Consequently, students are more likely than ever before to be assessed multiple times during a school year.
...With increasing variety and frequency of assessments, combined with the inclination to employ a common metric for all assessments of a given construct, it is increasingly routine to have multiple, comparable assessment measures for reading or mathematics available for each student...
by Gary L. Williamson, Ph.D., MetaMetrics, July 2006
We are all familiar with children, either through knowing our own or through acquaintance with those of other people. Perhaps no other thing in life is as obvious as the dramatic way that human beings develop and grow. Our key social and political institutions devote a significant part of their resources to ensuring that children grow and learn to function as productive citizens. Growth and learning are central to the mission of our country's public schools...
Because there are a number of alternative ways to conceptualize student growth and to measure it, states face a challenge to design and implement accountability systems that address a variety of information needs and still comply with state and federal laws. In this context, there are naturally many viewpoints about how best to conceptualize and measure student growth and to set appropriate goals for growth. This makes it especially important for students, parents and educators to better understand student growth, how it is measured, and how growth expectations may be set in different contexts for different purposes...
by Gary L. Williamson, Ph.D., MetaMetrics, July 2004
Students who leave high school successfully may nevertheless be unprepared for the array of possibilities that face them in the postsecondary world. Whether the goal is further education, a job, or enlistment in the military, there are those who say that high school graduates are unprepared because of a lack of basic skills attained in the public schools. Some even question whether young adults have developed enough basic literacy skills to function effectively as citizens...
by Malbert Smith III, Ph.D., MetaMetrics, July 2004
With the passage of the No Child Left Behind Act, Congress reauthorized the Elementary and Secondary Education Act (ESEA) - the principal federal law affecting education from kindergarten through high school. In amending ESEA (commonly referred to as No Child Left Behind, or NCLB), the new law represents a sweeping overhaul of federal efforts to support elementary and secondary education in the United States...
One of the major weaknesses of reading education today is the lack of meaningful measurement systems. The key in the "hard" sciences is unification of measurement. In the case of the measurement of temperature in the 1600s, there were literally dozens of instrument makers with their own scale. However, once a theory of temperature had been developed and accepted, measurement unification was possible. Today, it is inconsequential whether a temperature is taken with a thermometer purchased at CVS or K-Mart - the scale is independent of the manufacturer of the instrument...
by Gary L. Williamson, Ph.D., MetaMetrics, April 2004
Since 1990, educational accountability systems have been widely implemented in the United States. The focus on accountability recently gained new emphasis with the reauthorization of the Elementary and Secondary Education Act (ESEA) signed into law by President Bush on January 8, 2002. The law, usually called the No Child Left Behind Act of 2001 (NCLB), put in place sweeping requirements for increased accountability in the public schools of the United States. A central feature of the new law is the requirement for annual assessments of students in reading and mathematics.
Because of the new federal requirements as well as state testing programs that were already in place in many states, the academic performance of students in the United States is perhaps more widely measured now than at any previous time in history. With more frequent measurement, parents and teachers have access to more information about their students' performance than at any previous time. With the increased availability of information, parents and teachers are better informed than in the past. Ironically, they may also find themselves having more questions about the results than at any time in the past...
by Colleen Lennon and Hal Burdick, MetaMetrics, April 2004
The Lexile Framework for Reading is an innovative approach to reading comprehension that can be implemented by educators, parents and readers of all ages. Lexile measures, as components of the Lexile scale, are the result of more than 20 years of ongoing research based on two well-established predictors of how difficult a text is to comprehend. By measuring both text difficulty and reader ability on the same scale, readers can be appropriately matched with books that will be both engaging and challenging.
Lexile measures are the most widely adopted reading measures in use today. Tens of thousands of books and tens of millions of newspaper and magazine articles have Lexile measures - more than 450 publishers Lexile their titles. In addition, all major standardized reading tests and many popular instructional reading programs can report student reading scores in Lexile measures. Implementation of the Lexile Framework has led to reading success and improved reading enjoyment at all levels of proficiency...
by Samantha S. Burg, Ph.D., MetaMetrics, April 2004
The psychometric models used in the context of many achievement tests assume a unidimensional construct is being measured. That is, in the context of measuring student achievement, most tests are considered to measure one latent trait, construct or ability (i.e., they are unidimensional). Other tests are designed to measure a combination of abilities (in which case the test is referred to as multidimensional). In either context, the dimensional structure of a test is intricately tied to the purpose and definition of the construct to be measured. However, it is sometimes the case that a test that is intended to be unidimensional may unintentionally be measuring more than one latent variable.
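A unidimensional model of this kind can be illustrated with the Rasch (one-parameter logistic) model, in which a single latent ability θ and a single item difficulty b jointly determine the probability of a correct response. The sketch below is a generic illustration of that assumption, not a description of any particular test discussed here:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model:
    P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A student whose ability exactly matches the item's difficulty
# has a 50% chance of answering correctly.
print(rasch_probability(theta=0.0, b=0.0))  # 0.5

# Ability above the item's difficulty raises that probability.
print(rasch_probability(theta=1.0, b=0.0))  # ~0.73
```

Because a single θ drives every item response, a dataset where two distinct abilities are needed to explain performance would violate this model's unidimensionality assumption.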