Standardized test

Young adults in Poland sit for their Matura exams. The Matura is standardized so that universities can easily compare results from students across the entire country.

A standardized test is a test that is administered and scored in a consistent, or "standard", manner: the questions, conditions for administering, scoring procedures, and interpretations are consistent[1] and predetermined.[2]

Any test in which the same test is given in the same manner to all test takers is a standardized test. Standardized tests need not be high-stakes tests, time-limited tests, or multiple-choice tests. The opposite of a standardized test is a non-standardized test. Non-standardized testing gives significantly different tests to different test takers, or gives the same test under significantly different conditions (e.g., one group is permitted far less time to complete the test than the next group), or evaluates them differently (e.g., the same answer is counted right for one student, but wrong for another student).

Standardized tests are perceived as being more fair than non-standardized tests. The consistency also permits more reliable comparison of outcomes across all test takers.

History

The earliest evidence of standardized testing comes from China,[3] where the imperial examinations covered the Six Arts, which included music, archery and horsemanship, arithmetic, writing, and knowledge of the rituals and ceremonies of both public and private life. Later, studies of military strategy, civil law, revenue and taxation, agriculture, and geography were added to the examinations. In this form, the examinations were institutionalized during the 6th century CE, under the Sui Dynasty.


Standardized testing was introduced into Europe in the early 19th century, modeled on the Chinese mandarin examinations,[4] through the advocacy of British colonial administrators, the most "persistent" of whom was Britain's consul in Guangzhou, China, Thomas Taylor Meadows.[4] Meadows warned that the British Empire would collapse if standardized testing were not implemented throughout the empire immediately.[4]

Prior to its adoption, standardized testing was not traditionally a part of Western pedagogy; drawing on the sceptical and open-ended tradition of debate inherited from Ancient Greece, Western academia favored non-standardized assessments using essays written by students. For this reason, the first European implementation of standardized testing occurred not in Europe proper, but in British India.[5] Inspired by the Chinese use of standardized testing, in the early 19th century, British "company managers hired and promoted employees based on competitive examinations in order to prevent corruption and favoritism."[5] This practice of standardized testing was later adopted in the late 19th century on the British mainland. The parliamentary debates that ensued made many references to the "Chinese mandarin system."[4]

It was from Britain that standardized testing spread, not only throughout the British Commonwealth, but to Europe and then America.[4] Its spread was fueled by the Industrial Revolution: as compulsory education laws increased student populations during and after that period, open-ended assessment of every student became impractical. Moreover, the lack of a standardized process introduces a substantial source of measurement error, since graders may show favoritism or disagree with each other about the relative merits of different answers.

More recently, it has been shaped in part by the ease and low cost of grading of multiple-choice tests by computer. Grading essays by computer is more difficult, but is also done. In other instances, essays and other open-ended responses are graded according to a pre-determined assessment rubric by trained graders.

United States

The use of standardized testing in the United States is a 20th-century phenomenon with its origins in World War I and the Army Alpha and Beta tests developed by Robert Yerkes and colleagues.[6]

In the United States, the federal government's need to make meaningful comparisons across a highly decentralized (locally controlled) public education system has also contributed to the debate about standardized testing, including the Elementary and Secondary Education Act of 1965, which required standardized testing in public schools. US Public Law 107-110, known as the No Child Left Behind Act of 2001, further ties public school funding to standardized testing.

Design and scoring

Some standardized testing uses multiple-choice tests, which are relatively inexpensive to score, but any form of assessment can be used.

Standardized testing can be composed of multiple-choice, true-false, or essay questions, authentic assessments, or nearly any other form of assessment. Multiple-choice and true-false items are often chosen because they can be administered and scored inexpensively and quickly, either by computer-scanned answer sheets or via computer-adaptive testing. Some standardized tests have short-answer or essay-writing components that are assigned a score by independent evaluators, who use rubrics (rules or guidelines) and benchmark papers (examples of papers for each possible score) to determine the grade to be given to a response. Most assessments, however, are not scored by people; people score only those items that cannot easily be scored by computer (e.g., essays). For example, the Graduate Record Exam is a computer-adaptive assessment that requires no scoring by people (except for the writing portion).[7]

Scoring issues

Human scoring is relatively variable, which is why computer scoring is preferred when feasible; some also argue that poorly paid scorers do a poor job of scoring tests.[8] Agreement between scorers can vary from 60 to 85 percent, depending on the test and the scoring session. Sometimes states pay to have two or more scorers read each paper; if their scores do not agree, the paper is passed to additional scorers.[8]
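
The adjudication workflow described above can be sketched in a few lines. This is a hypothetical illustration, not any state's actual procedure; the `tolerance` parameter and the averaging rule are assumptions for the example.

```python
def resolve_score(score_a, score_b, tolerance=0):
    """Combine two independent raters' scores: average them if they
    agree within `tolerance`, otherwise flag the paper for a third reader."""
    if abs(score_a - score_b) <= tolerance:
        return (score_a + score_b) / 2, False   # resolved, no extra reader
    return None, True                           # unresolved, route to another scorer

print(resolve_score(4, 4))   # raters agree: score stands
print(resolve_score(2, 5))   # disagreement: paper goes to an additional scorer
```

In practice the disagreement threshold and the resolution rule (third reader, expert adjudicator, or averaging) vary by testing program.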

Open-ended components of tests are often only a small proportion of the test. Most commonly, a major test includes both human-scored and computer-scored sections.

Score interpretation

There are two types of standardized test score interpretations: a norm-referenced score interpretation or a criterion-referenced score interpretation. Norm-referenced score interpretations compare test-takers to a sample of peers. Criterion-referenced score interpretations compare test-takers to a criterion (a formal definition of content), regardless of the scores of other examinees. These may also be described as standards-based assessments as they are aligned with the standards-based education reform movement.[9] Norm-referenced test score interpretations are associated with traditional education, which measures success by rank ordering students using a variety of metrics, including grades and test scores, while standards-based assessments are based on the belief that all students can succeed if they are assessed against standards which are required of all students regardless of ability or economic background.[citation needed]
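
The two kinds of interpretation can be illustrated with a minimal sketch; the peer scores and the cut score below are invented for the example.

```python
from bisect import bisect_left

def norm_referenced(score, norm_sample):
    """Percentile rank: the share of the norm sample scoring below `score`."""
    ranked = sorted(norm_sample)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

def criterion_referenced(score, cut_score):
    """Pass/fail against a fixed criterion, ignoring other examinees."""
    return "pass" if score >= cut_score else "fail"

peers = [52, 61, 67, 70, 74, 74, 80, 85, 91, 96]
print(norm_referenced(74, peers))     # rank relative to the peer sample
print(criterion_referenced(74, 80))   # judged against an absolute standard
```

The same raw score of 74 can thus look respectable under one interpretation (above the 40th percentile of this sample) while failing under the other (below the cut score of 80), which is why the two interpretations support different educational philosophies.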

Standards

The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any standardized test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any standardized test as a whole within a given context.

Evaluation standards

In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation[10] has published three sets of standards for evaluations. The Personnel Evaluation Standards[11] was published in 1988, The Program Evaluation Standards (2nd edition)[12] was published in 1994, and The Student Evaluation Standards[13] was published in 2003.

Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.

Testing standards

In the field of psychometrics, the Standards for Educational and Psychological Testing[14] set standards for validity and reliability, along with errors of measurement and issues related to the accommodation of individuals with disabilities. A third major topic covers standards related to testing applications, credentialing, and testing in program evaluation and public policy.

Advantages

One of the main advantages of standardized testing is that the results can be empirically documented; therefore, the test scores can be shown to have a relative degree of validity and reliability, as well as results which are generalizable and replicable.[15] This is often contrasted with grades on a school transcript, which are assigned by individual teachers. It may be difficult to account for differences in educational culture across schools, difficulty of a given teacher's curriculum, differences in teaching style, and techniques and biases that affect grading. This makes standardized tests useful for admissions purposes in higher education, where a school is trying to compare students from across the nation or across the world.

Another advantage is aggregation. A well designed standardized test provides an assessment of an individual's mastery of a domain of knowledge or skill which at some level of aggregation will provide useful information. That is, while individual assessments may not be accurate enough for practical purposes, the mean scores of classes, schools, branches of a company, or other groups may well provide useful information because of the reduction of error accomplished by increasing the sample size.

Standardized tests, which by definition give all test-takers the same test under the same (or reasonably equal) conditions, are also perceived as being more fair than assessments that use different questions or different conditions for students according to their race, socioeconomic status, or other considerations.

Disadvantages and criticism

Standardized tests are useful tools for assessing student achievement, and can be used to focus instruction on desired outcomes, such as reading and math skills.[16] However, critics feel that overuse and misuse of these tests harms teaching and learning by narrowing the curriculum. According to the group FairTest, when standardized tests are the primary factor in accountability, schools use the tests to define curriculum and focus instruction. Critics say that "teaching to the test" disfavors higher-order learning. While it is possible to use a standardized test without letting its contents determine curriculum and instruction, frequently, what is not tested is not taught, and how the subject is tested often becomes a model for how to teach the subject.

Uncritical use of standardized test scores to evaluate teacher and school performance is inappropriate, because the students' scores are influenced by three things: what students learn in school, what students learn outside of school, and the students' innate intelligence.[17] The school only has control over one of these three factors. Value-added modeling has been proposed to cope with this criticism by statistically controlling for innate ability and out-of-school contextual factors.[18] In a value-added system of interpreting test scores, analysts estimate an expected score for each student, based on factors such as the student's own previous test scores, primary language, or socioeconomic status. The difference between the student's expected score and actual score is presumed to be due primarily to the teacher's efforts.
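
The value-added idea, predicting each student's score from prior measures and crediting the residual to instruction, can be sketched with a single-predictor least-squares fit. Real value-added models use many predictors and more careful statistics; the data below are invented, and this is an illustration of the logic, not an implementation of any deployed system.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical history: last year's score -> this year's score for past students
prior = [40, 55, 60, 70, 85]
current = [45, 58, 64, 72, 88]
slope, intercept = fit_line(prior, current)

def value_added(prior_score, actual_score):
    """Residual: actual minus expected, attributed (cautiously) to instruction."""
    expected = slope * prior_score + intercept
    return actual_score - expected

print(round(value_added(65, 75), 1))   # positive: student beat the prediction
```

The criticism in the surrounding text applies directly: everything not captured by the predictors, not just the teacher's efforts, ends up in that residual.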

Supporters of standardized testing respond that these are not reasons to abandon standardized testing in favor of either non-standardized testing or of no assessment at all, but rather criticisms of poorly designed testing regimes. They argue that testing does and should focus educational resources on the most important aspects of education — imparting a pre-defined set of knowledge and skills — and that other aspects are either less important, or should be added to the testing scheme.

Scoring information loss

When tests are scored right-wrong, an important assumption has been made about learning. The number of right answers or the sum of item scores (where partial credit is given) is assumed to be the appropriate and sufficient measure of current performance status. In addition, a secondary assumption is made that there is no meaningful information in the wrong answers.

First, a correct answer can be achieved through memorization, without any profound understanding of the underlying content or the conceptual structure of the problem posed. Moreover, when more than one step is required for a solution, there is often a variety of approaches that will lead to a correct result, and a correct answer alone does not indicate which of the possible procedures was used. When the student supplies the answer (or shows the work), this information is readily available from the original documents.

Second, if the wrong answers were blind guesses, there would be no information to be found among them. If, on the other hand, wrong answers reflect departures from the expected interpretation, these answers should show an ordered relationship to whatever the overall test is measuring. This departure should depend on the psycholinguistic maturity of the student choosing or giving the answer in the language in which the test is written.

In this second case, it should be possible to extract this ordering from the responses to the test items.[19] Such extraction processes, the Rasch model for instance, are standard practice in item development among professionals. However, because the wrong answers are discarded during the scoring process, attempts to interpret these answers for the information they might contain are seldom undertaken.
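
For context, the heart of the Rasch model mentioned above is a simple item response function: the probability that a person of ability θ answers an item of difficulty b correctly is 1 / (1 + e^-(θ - b)). The sketch below shows only this function; fitting the model, that is, estimating abilities and difficulties from response data, is a separate and more involved procedure.

```python
import math

def rasch_p(ability, difficulty):
    """Rasch item response function: probability of a correct answer,
    P = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# a person exactly as able as the item is difficult: a 50% chance
print(rasch_p(0.0, 0.0))
# ability two logits above the item's difficulty: well above 50%
print(rasch_p(2.0, 0.0))
```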

Third, although topic-based subtest scores are sometimes provided, the more common practice is to report the total score or a rescaled version of it. This rescaling is intended to compare these scores to a standard of some sort. This further collapse of the test results systematically removes all the information about which particular items were missed.

Thus, scoring a test right–wrong loses 1) how students achieved their correct answers, 2) what led them astray toward unacceptable answers, and 3) where within the body of the test this departure from expectation occurred.

This commentary suggests that the current scoring procedure conceals the dynamics of the test-taking process and obscures the capabilities of the students being assessed. Current scoring practice oversimplifies these data in the initial scoring step, obscuring diagnostic information that could help teachers serve their students better. It also prevents those who prepare these tests from observing the information that would otherwise have alerted them to the presence of this error.

A solution to this problem, known as Response Spectrum Evaluation (RSE),[20] is currently being developed that appears to be capable of recovering all three of these forms of information loss, while still providing a numerical scale to establish current performance status and to track performance change.

This RSE approach provides an interpretation of the thinking processes behind every answer (both the right and the wrong ones), telling teachers how their students were thinking as they produced each answer.[21] Among other findings, this chapter reports that the recoverable information explains between two and three times more of the test variability than considering only the right answers. This massive loss of information can be explained by the fact that the "wrong" answers are removed from the collected test information during the scoring process and are no longer available to reveal the procedural error inherent in right–wrong scoring. The procedure bypasses the limitations produced by the linear dependencies inherent in test data.
Testing bias

Testing bias occurs when a test systematically favors one group over another, even though both groups are equal on the trait the test measures. Critics allege that test makers and facilitators tend to represent a middle-class, white background, and that standardized tests therefore match the values, habits, and language of the test makers[citation needed]. However, although most tests are written from a white, middle-class perspective, the highest-scoring groups are not from that background, but rather tend to come from Asian populations.[22]

Not all tests are well-written, for example, containing multiple-choice questions with ambiguous answers, or poor coverage of the desired curriculum. Some standardized tests include essay questions, and some have criticized the effectiveness of the grading methods. Recently, partial computerized grading of essays has been introduced for some tests, which is even more controversial.[23]

Educational decisions

Test scores are in some cases used as a sole, mandatory, or primary criterion for admissions or certification. For example, some U.S. states require high school graduation examinations. Adequate scores on these exit exams are required for high school graduation. The General Educational Development test is often used as an alternative to a high school diploma.

Other applications include tracking (deciding whether a student should be enrolled in the "fast" or "slow" version of a course) and awarding scholarships. In the United States, many colleges and universities automatically translate scores on Advanced Placement tests into college credit, satisfaction of graduation requirements, or placement in more advanced courses. Generalized tests such as the SAT or GRE are more often used as one measure among several, when making admissions decisions. Some public institutions have cutoff scores for the SAT, GPA, or class rank, for creating classes of applicants to automatically accept or reject.
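
A cutoff-based admissions rule of the kind just described can be sketched as follows; the thresholds are invented for illustration, and real institutions publish (or keep private) their own criteria.

```python
def auto_decision(sat, gpa, accept=(1400, 3.8), reject=(900, 2.0)):
    """Hypothetical cutoff rule: auto-accept above one threshold pair,
    auto-reject below another, and send everyone else to human review."""
    if sat >= accept[0] and gpa >= accept[1]:
        return "accept"
    if sat < reject[0] and gpa < reject[1]:
        return "reject"
    return "review"

print(auto_decision(1450, 3.9))   # clears both acceptance cutoffs
print(auto_decision(800, 1.5))    # below both rejection cutoffs
print(auto_decision(1200, 3.0))   # middle ground: human review
```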

Heavy reliance on standardized tests for decision-making is often controversial, for the reasons noted above. Critics often propose emphasizing cumulative or even non-numerical measures, such as classroom grades or brief individual assessments (written in prose) from teachers. Supporters argue that test scores provide a clear-cut, objective standard that minimizes the potential for political influence or favoritism.

The National Academy of Sciences recommends that major educational decisions not be based solely on a test score.[24] The use of minimum cut-scores for entrance or graduation does not imply a single standard, since test scores are nearly always combined with other minimal criteria such as number of credits, prerequisite courses, and attendance. Test scores are often perceived as the "sole criterion" simply because they are the most difficult requirement to fulfill, or because fulfillment of the other criteria is automatically assumed. One exception to this rule is the GED, which has allowed many people to have their skills recognized even though they did not meet traditional criteria.



  1. ^ Sylvan Learning glossary, retrieved online, source no longer available
  2. ^ Popham, W.J. (1999). Why standardized tests don’t measure educational quality. Educational Leadership, 56(6), 8–15.
  3. ^ Encyclopaedia Britannica.
  4. ^ a b c d e Mark and Boyer (1996), 9-10.
  5. ^ a b Kazin, Edwards, and Rothman (2010), 142.
  6. ^ Gould, S.J. (1982) A Nation of Morons. New Scientist (6 May 1982), 349–352.
  7. ^ ETS webpage about scoring the GRE.
  8. ^ a b Houtz, Jolayne (August 27, 2000). "Temps spend just minutes to score state test: A WASL math problem may take 20 seconds; an essay, 2½ minutes". Seattle Times. "In a matter of minutes, a $10-an-hour temp assigns a score to your child's test."
  9. ^ Where We Stand: Standards-Based Assessment and Accountability (American Federation of Teachers) [1][dead link]
  10. ^ Joint Committee on Standards for Educational Evaluation
  11. ^ Joint Committee on Standards for Educational Evaluation. (1988). The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators. Newbury Park, CA: Sage Publications.
  12. ^ Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards, 2nd Edition. Newbury Park, CA: Sage Publications.
  13. ^ Committee on Standards for Educational Evaluation. (2003). The Student Evaluation Standards: How to Improve Evaluations of Students. Newbury Park, CA: Corwin Press.
  14. ^ The Standards for Educational and Psychological Testing
  15. ^ Kuncel, N. R., & Hezlett, S. A. (2007). Science, 315, 1080-81.
  16. ^ The College Work Readiness Assessment.
  17. ^ Popham, W.J. (1999). Why Standardized Test Scores Don't Measure Educational Quality. Educational Leadership, 56(6) 8–15.
  18. ^ Hassel, B. & Rosch, J. (2008) "Ohio Value-Added Primer." Fordham Foundation.
  19. ^ Powell, J. C. and Shklov, N. (1992) The Journal of Educational and Psychological Measurement, 52, 847–865
  20. ^ "A Paradigm Shift in Test Scoring!"
  21. ^ Powell, Jay C. (2010) Testing as Feedback to Inform Teaching. Chapter 3 in; Learning and Instruction in the Digital Age: Making a Difference through Cognitive Approaches. New York: Springer. ISBN 978-1-4419-1550-1
  22. ^ Race and intelligence (test data)#IQ test score gap in the US
  23. ^ Weighing In On the Elements of Essay by Jay Mathews. Washington Post, 1 Aug 2004, p. A01.
  24. ^ "High Stakes: Testing for Tracking, Promotion, and Graduation"

Further reading

  • Ravitch, Diane, “The Uses and Misuses of Tests”, in The Schools We Deserve (New York: Basic Books, 1985), pp. 172–181.
  • Huddleston, Mark W., and Boyer, William W., The Higher Civil Service in the United States: Quest for Reform (University of Pittsburgh Press, 1996).
