What is PISA?
Issued by the Organisation for Economic Co-operation and Development (OECD), PISA tests the skills and knowledge of 15-year-old students in mathematics, reading, and science. Seventy-two countries and economies took part in the 2015 assessment, which focused on science, and the data were released by the OECD on 6th December 2016.
Additional results on well-being, financial literacy and collaborative problem solving will be released in 2017.
What does PISA assess and why?
PISA focuses on the assessment of student performance in reading, mathematics and science because these are foundational to a student's ongoing education. PISA also collects valuable information on student attitudes and motivations, formally assesses skills such as collaborative problem solving, and is investigating opportunities to assess other important competencies, such as global competence.
PISA draws on content that can be found in curricula across the world and looks at students’ ability to apply knowledge and skills and to analyse, reason and communicate effectively as they examine, interpret and solve problems. PISA does not prescribe or promote any one curriculum nor is it constrained by the need to find common denominators. For 2015 the goal of PISA was to assess science knowledge and skills that experts in the participating countries and economies consider to be most important for the future success of students in an increasingly science-based world.
Why is PISA assessed every three years and why does it test 15-year-olds?
A key objective of PISA is to inform and support education policy decision making within countries. A three-year cycle provides countries with timely information, including data and analyses, with which to consider the impact of policy decisions and related programmes. A more frequent cycle would not allow sufficient time for changes and innovations to show improvement or decline, while a less frequent one would mean that declines in performance could not be promptly addressed.
The average age of 15 was chosen because at this age young people in most OECD countries are nearing the end of compulsory education. The selection of schools and students is as inclusive as possible, so that the sample of students comes from a broad range of backgrounds and abilities.
What makes PISA unique?
PISA benefits from its worldwide scope and its regularity. More than 80 countries and economies have taken part in PISA so far and the surveys, which are carried out every three years, allow them to track their progress in meeting key learning goals. PISA is the only international education survey to measure the knowledge and skills of 15-year-olds, an age at which students in most countries are nearing the end of their compulsory time in school.
PISA is also unique in the way it looks at:
- Public policy issues: Governments, principals, teachers and parents all want answers to questions such as, "Are our schools adequately preparing young people for the challenges of adult life?", "Are some kinds of teaching and schools more effective than others?" and, "Can schools contribute to improving the futures of students from immigrant or disadvantaged backgrounds?"
- Literacy: Rather than examine mastery of specific school curricula, PISA looks at students’ ability to apply knowledge and skills in key subject areas and to analyse, reason and communicate effectively as they examine, interpret and solve problems.
- Lifelong learning: Students cannot learn everything they need to know in school. In order to be effective lifelong learners, young people need not only knowledge and skills, but also an awareness of why and how they learn. PISA both measures student performance in reading, mathematics and science literacy and also asks students about their motivations, beliefs about themselves and learning strategies.
What are the main differences between PISA and TIMSS?
The two surveys are very different. Firstly, they target different groups of students: PISA tests 15-year-olds whereas TIMSS tests students in Grades 4 and 8, and (in TIMSS Advanced) the final year of secondary school. Secondly, they test different things. TIMSS is curriculum-based whereas PISA assesses the application of skills to real-life problems. TIMSS focuses on content normally covered in class while PISA evaluates overarching content categories that go beyond curricula. PISA also emphasises the importance of the context in which students should be able to use their skills (school, home and society). These differences make comparison very difficult.
How do I find out more about the PISA assessment and who develops it?
The OECD, through its website and publications, makes available to both the public and specialists all the key information on the methods and processes associated with the PISA surveys. The following documents are available on the Data page:
- the assessment frameworks that explain what is to be assessed, why and how;
- examples of the questionnaire items;
- the data from the assessment;
- the survey implementation tools for administration and language quality assurance;
- a comprehensive technical report for every cycle, which includes detailed technical information on every aspect of assessment and analysis.
For each assessment cycle a selection of PISA test materials is also made available to the general public. In order to allow countries to follow their performance over time, many questions are used in more than one PISA survey. These questions cannot be made public as long as they are in use.
In addition to OECD staff and contractors, hundreds of experts, academics and researchers in participating PISA countries and economies are involved in PISA's development, analysis and reporting and details of these participants are provided within the PISA reports.
How is PISA governed?
PISA is developed and implemented under the responsibility of education ministries through PISA’s decision-making body, the PISA Governing Board. The Board has representatives from all member countries plus partner countries with Associate status, currently only Brazil. Countries appoint representatives to the Board who are knowledgeable about large-scale student assessments and their interface with educational policy and practice. Representatives comprise a mix of government officials and staff of research and academic institutions.
The Board determines the policy priorities for PISA and oversees adherence to these priorities during implementation. This includes the setting of priorities and standards for data development, analysis and reporting as well as the determination of the scope of work that will then form the basis for the implementation of PISA.
To ensure the technical robustness of PISA, a Technical Advisory Group (TAG) is appointed by the OECD comprising independent, world-renowned experts in the fields that underpin the PISA methodology, such as sampling, survey design, scaling and analysis. The TAG is regularly called upon to adjudicate the PISA methods and the results of individual countries to ensure that what is published from PISA is robust and internationally comparable.
Which institutions and teams are behind PISA?
The PISA Governing Board: The PISA Governing Board is composed of representatives of OECD members and PISA associates*. Countries and economies that participate in PISA but do not have associate status are welcome to participate in PGB meetings as observers. Representatives are appointed by their education ministries, and the PGB Chair is chosen by the Board itself. Guided by the OECD’s education objectives, the Board determines the policy priorities for PISA and makes sure that these are respected during the implementation of each PISA survey.
*Associates are countries or economies that are not OECD members but have membership rights and obligations in regard to specific OECD bodies and programmes.
The OECD Secretariat: The OECD Secretariat is responsible for the day-to-day management of PISA. This means that the PISA team monitors the survey’s implementation, manages administrative matters for the PISA Governing Board, builds consensus among countries and serves as an intermediary between the PISA Governing Board and the PISA Consortium.
The PISA National Project Managers: Working with the OECD Secretariat, the PISA Governing Board and the international contractors, the PISA National Project Managers oversee the implementation of PISA in each participating country/economy. The PISA National Project Managers are appointed by their governments. A list of the National Project Managers for each cycle is provided within the PISA reports.
The international contractors (the "PISA Consortium"): For each PISA survey, international contractors (usually made up of testing and assessment agencies) are responsible for the design and implementation of the surveys. The contractors are chosen by the PISA Governing Board through an international call for tender. The contractors are typically referred to as the PISA Consortium.
Education authorities: PISA would not be possible without the support and guidance of the education ministries in the participating countries and economies.
The PISA Subject Matter Expert Groups: PISA has Subject Matter Expert Groups for its three key areas of testing – reading, mathematics and science literacy – as well as for other subjects when appropriate (problem solving in PISA 2012, for example). These groups are made up of world experts in each area. They design the theoretical framework for each PISA survey.
The PISA Questionnaire Expert Group: The Questionnaire Expert Group provides leadership and guidance in the construction of the PISA context questionnaires. The members of the Questionnaire Expert Group are selected by the PISA Governing Board.
Who pays for PISA?
PISA is financed exclusively through direct contributions from the participating countries and economies’ government authorities, typically education ministries.
How much does PISA cost?
Though PISA is conducted every three years, it’s easiest to think about the cost to countries per annum. The cost to each country consists of international costs (principally international contractors and OECD staff) and national costs (national centre, translation, etc.):
• International PISA costs for OECD members vary widely by country, reflecting the original agreement with the OECD when the country joined the Organisation, with an average per-annum cost of around €150,000. For non-OECD members, international PISA costs are lower, with an average per-annum cost of around €45,000.
• National costs also vary by country, according to factors such as population size, the number of languages in use and the nature of the political system: a small country might spend around €75,000 per annum and a medium-sized country €300,000 per annum; a large country could spend up to two or three times the latter amount.
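As a rough illustration of the arithmetic above, the combined per-annum figures can be sketched as follows (the values are simply the illustrative averages quoted above, not actual budgets, and the category names are hypothetical):

```python
# Illustrative per-annum PISA cost estimates (euros), using the average
# figures quoted above; actual budgets vary widely by country.
INTERNATIONAL_COST = {"oecd_member": 150_000, "non_member": 45_000}
NATIONAL_COST = {"small": 75_000, "medium": 300_000}

def total_per_annum(membership: str, size: str) -> int:
    """Rough total annual cost: international share plus national costs."""
    return INTERNATIONAL_COST[membership] + NATIONAL_COST[size]

print(total_per_annum("oecd_member", "medium"))  # 450000
print(total_per_annum("non_member", "small"))    # 120000
```

So a medium-sized OECD member might budget roughly €450,000 per annum, while a small non-member economy might budget closer to €120,000, before any large-country multiplier.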
What types of test items are used in PISA and why?
PISA uses multiple-choice testing as the primary feature of its assessments because it is reliable, efficient, and supports robust and scientific analyses. It is also important to note that multiple-choice questions in PISA have a variety of formats, including highlighting of a word within a text, connecting pieces of information and making multiple selections from drop-down menus. In addition, typically up to one-third of questions in a PISA assessment are open-ended.
Students also answer a background questionnaire, providing information about themselves, their attitudes to learning and their homes. In addition, school principals are given a 20-minute questionnaire about their schools. Countries and economies can also choose to administer several optional PISA questionnaires: the computer familiarity questionnaire, the educational career questionnaire and the parent background questionnaire. In addition, many countries and economies choose to gather further information through national questionnaires. The information collected helps countries and economies to explore connections between how students perform in PISA and factors such as migration, gender and students’ socio-economic background, as well as students’ attitudes about school and their approaches to learning.
Who creates the test questions?
Participating PISA countries and economies are invited to submit questions that are then added to items developed by the OECD’s experts and contractors. The questions are reviewed by the international contractors and by participating countries and economies and are carefully checked for cultural bias. Only those questions that are unanimously approved are used in PISA. Further, before the main test there is a trial test run in all participating countries and economies. If any test questions prove to have been too easy or too hard in certain countries and economies, they are dropped from the main test in all countries and economies.
Why don’t all the students answer all the same test questions?
The PISA test is designed with the aim of providing an assessment of performance at the system (or country) level. It is not designed to produce individual student scores, so it is not necessary for each student to receive exactly the same set of test items. Thus, PISA adopts an efficient design in which the full set of test material is distributed among 13 different test booklets, which are randomly assigned to the randomly sampled students who participate in the test. This procedure enables the OECD to obtain a much greater coverage of the content than if all students completed the same version of the test.
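The random assignment of booklets to sampled students can be sketched minimally in Python. This is a toy version under simplifying assumptions: the real PISA rotation scheme is a balanced design that also controls which item clusters appear together and in what order, which this sketch does not implement.

```python
import random

NUM_BOOKLETS = 13  # the full set of test material is spread across 13 booklets

def assign_booklets(student_ids, seed=42):
    """Randomly assign one of the 13 test booklets to each sampled student."""
    rng = random.Random(seed)
    return {sid: rng.randrange(1, NUM_BOOKLETS + 1) for sid in student_ids}

# Each student answers only one booklet, but the sample as a whole
# covers the entire item pool.
assignments = assign_booklets(range(5000))
print(sorted(set(assignments.values())))  # all booklets 1..13 are in use
```

Because every booklet is answered by a random subsample of students, system-level performance can be estimated across far more items than any single student could complete.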
PISA 2015 was delivered as a computer-based test. What is the significance of this?
Computers and computer technology are part of our everyday lives, and it is appropriate and inevitable that PISA has progressed to a computer-based delivery mode. In PISA 2015, the overwhelming majority of countries had students complete the test on screen. The small number of countries that were not ready for computer-based delivery were able to take the tests on paper. Student performance is comparable between the computer-based and paper-based tests within PISA 2015, and also between PISA 2015 and previous paper-based cycles. More information on the comparability of computer- and paper-based tests can be found in Annex A5 of Volume I of the PISA 2015 Results.
How are schools selected in countries for participation in PISA?
PISA applies strict technical standards, including for the sampling of schools and students within schools. The sampling procedures are quality assured, and the achieved samples and corresponding response rates are subject to an adjudication process that verifies that they have complied with the standards set. If any country's response rate falls below the specified threshold, this is reported. Further information on response rates for PISA can be found on the OECD's PISA website, and specific information on participation rates for individual countries can be found in Volume I and Volume II of the PISA 2015 Results.
How does the OECD treat data anomalies?
The following are cases in which the OECD, on the basis of technical advice from the PISA Consortium, removed or annotated national data in the report because of technical anomalies or because the data did not meet the OECD technical standards for PISA.
- Austria: As noted in the PISA 2000 Technical Report (OECD, 2002), the Austrian sample for the PISA 2000 assessment did not adequately cover students enrolled in combined school and work-based vocational programmes as required by the technical standards for PISA. The published PISA 2000 estimates for Austria were therefore biased (OECD, 2001). This non-conformity was corrected in the PISA 2003 assessment. To allow reliable comparisons, adjustments and modified student weights were developed which make the PISA 2000 estimates comparable to those obtained in PISA 2003 (OECD Working Paper No. 5 “PISA 2000: Sample Weight Problems in Austria”).
For the PISA 2009 assessment, a dispute between teacher unions and the education minister in Austria led to the announcement of a boycott of PISA which was withdrawn after the first week of testing. The boycott required the OECD to remove identifiable cases from the dataset. Although the Austrian dataset met the PISA 2009 technical standards after the removal of these cases, the negative atmosphere in relation to education assessments affected the conditions under which the assessment was administered and could have adversely affected student motivation to respond to the PISA tasks. The comparability of the 2009 data with data from earlier PISA assessments could therefore not be ensured and data for Austria were therefore excluded from trend comparisons.
- The Netherlands: As noted in the PISA 2000 Technical Report (OECD, 2002), the response rate of schools for the Netherlands for PISA 2000 was insufficient to warrant inclusion in the PISA 2000 database. Therefore, the Netherlands is excluded from trend analysis relating to PISA 2000.
- Luxembourg: For Luxembourg changes were implemented in the assessment conditions between PISA 2000 and PISA 2003 with regard to organisational and linguistic aspects in order to improve compliance with OECD standards and to better reflect the national characteristics of the school system. In PISA 2000, students in Luxembourg had been given one assessment booklet, with the languages of testing chosen by each student one week prior to the assessment. In practice, however, familiarity with the language of assessment became an important barrier for a significant proportion of students in Luxembourg in PISA 2000. In PISA 2003 and PISA 2006, therefore, students were each given two assessment booklets – one in each of the two languages of instruction – and could choose their preferred language immediately prior to the assessment. This provided for assessment conditions that were more comparable with those in countries that have only one language of instruction and resulted in a fairer assessment of the performance of students in mathematics, science, reading and problem solving. As a result of this change in procedures, the assessment conditions and hence the assessment results for Luxembourg cannot be compared between PISA 2000 and PISA 2003. Assessment conditions between PISA 2003 and PISA 2006 had not been changed and therefore results can be compared.
- United Kingdom: In PISA 2000, the initial response rate for the United Kingdom fell 3.7% short of the minimum requirement. At that time, the United Kingdom provided evidence to the PISA Consortium that permitted an assessment of the expected performance of the non-participating schools and on the basis of which the PISA Consortium concluded that the response-bias was likely negligible and the results were therefore nevertheless included in the international report. In PISA 2003, the United Kingdom’s response rate was such that required sampling standards were not met and further investigation by the PISA Consortium did not confirm that the resulting response bias was negligible. Therefore, these data were not deemed internationally comparable and were not included in most types of comparisons. For PISA 2006, the more stringent standards were applied and PISA 2000 and PISA 2003 data for the United Kingdom were therefore not included in the comparisons of this chapter.
- United States: In PISA 2006, in the United States an error in printing the test booklets, in which the pagination was changed and instructions for some reading items directed students to the wrong page, may have affected student performance. The potential impact of the printing error on student performance was estimated by examining the relative performance of students in the United States on the item set that was common between PISA 2006 and PISA 2003, after controlling for performance on the items that were not likely to be affected by the printing error.
The predicted effect of the printing error and the wrong directions on student mean performance on the reading test was up to 6 score points, and thus exceeded one standard error of sampling. Reading performance data for the United States were therefore excluded from this publication and the PISA database.
The predicted effect of the printing error on student mean performance on the mathematics and science tests was one score point. Mathematics and science performance data for the United States, therefore, were retained.
Are the results and data from the PISA surveys publicly available?
All of the Volumes of the PISA 2015 Results are available online (Volume I, Volume II, Volume III, Volume IV and Volume V). Results from previous cycles are available on the Key Findings page. All the data from the PISA surveys can be found on the Data page.
How was the analysis of the PISA data improved for 2015?
A number of enhancements were made to the approach and process for analysing the data for the results of the PISA 2015 survey. These improvements were based on the experience of previous cycles and an understanding of how new techniques would increase the precision of measurement, the validity and reliability of the PISA data, and the stability of data between cycles. More information on these enhancements can be found in Annex A5 of Volume I of the 2015 PISA Results.
What do the test scores mean?
PISA scores can be located along specific scales developed for each subject area, designed to show the general competencies tested by PISA. These scales are divided into levels that represent groups of PISA test questions, beginning at Level 1 with questions that require only the most basic skills to complete and increasing in difficulty with each level.
Once a student’s test has been marked, his or her score in reading, mathematics and science (plus collaborative problem solving in PISA 2015) can be located on the appropriate scale. For example, a student who lacks the skills needed to correctly complete the easiest questions on a PISA test would be classified as below Level 1, while a student who has these skills would be placed at Level 1 or above.
In each test subject, there is theoretically no minimum or maximum score in PISA; rather, the results are scaled to fit approximately normal distributions, with means for OECD countries around 500 score points and standard deviations around 100 score points. About two-thirds of students across OECD countries score between 400 and 600 points. Less than 2% of students, on average across OECD countries, reach scores above 700, and at most a handful of students in the PISA sample for any country reach scores above 800.
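Under an exactly normal distribution with these scaling constants, the quoted proportions can be checked directly. This is a sketch: real PISA score distributions are only approximately normal, which is why the observed share above 700 (under 2%) is slightly below the exact normal-curve value of about 2.3%.

```python
from math import erf, sqrt

MEAN, SD = 500.0, 100.0  # approximate OECD scaling constants

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def proportion_between(lo: float, hi: float) -> float:
    """Share of students scoring between lo and hi under an exact normal."""
    return phi((hi - MEAN) / SD) - phi((lo - MEAN) / SD)

print(round(proportion_between(400, 600), 3))   # 0.683 -> about two-thirds
print(round(1 - phi((700 - MEAN) / SD), 3))     # 0.023 -> roughly 2% above 700
```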
How are participating countries and economies ranked in PISA?
PISA ranks participating countries and economies according to their performance in reading, mathematics and science. PISA does not give a collective score for all subjects combined; rather it gives a score for each subject area and determines rankings by the mean score of each area. However, it is not possible to assign a single exact rank in each subject to each country or economy. This is because PISA tests only a sample of students from each country or economy and this result is then adjusted to reflect their whole population of 15-year-old students. The scores thus include a small measure of statistical uncertainty and it is therefore only possible to report the range of positions (upper rank and lower rank) within which a country or economy can be placed. For example, in PISA 2003 Finland and Korea were widely reported as ranking 1st and 2nd in PISA, when in fact we can only say that, among OECD countries, Finland’s rank was between 1st and 3rd and Korea’s was between 1st and 4th.
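The rank-range logic can be sketched as follows, using hypothetical means and standard errors (the country labels and figures below are illustrative, not actual PISA results). A country's upper rank counts only the countries whose mean is statistically significantly higher than its own; its lower rank counts those significantly lower.

```python
from math import sqrt

# Hypothetical (mean score, standard error) pairs -- not real PISA data.
scores = {"A": (543, 2.0), "B": (540, 3.5), "C": (536, 3.0), "D": (510, 2.5)}

def significantly_higher(a, b, z=1.96):
    """True if a's mean exceeds b's by more than sampling error allows (95%)."""
    (ma, sa), (mb, sb) = a, b
    return (ma - mb) / sqrt(sa**2 + sb**2) > z

def rank_range(name):
    """(upper rank, lower rank) for one country among all in `scores`."""
    others = [v for k, v in scores.items() if k != name]
    above = sum(significantly_higher(o, scores[name]) for o in others)
    below = sum(significantly_higher(scores[name], o) for o in others)
    return above + 1, len(scores) - below

for country in scores:
    print(country, rank_range(country))
```

With these illustrative numbers, countries A, B and C all share the overlapping rank range 1st to 3rd, because their mean differences are within sampling error, while D is significantly below all three and can be placed exactly at rank 4.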
What steps are taken to ensure the PISA tests and the results from it are robust?
Confidence in the robustness of PISA is based on the rigour which is applied to all technical aspects of the survey design, implementation and analysis, not just on the nature of the statistical model, which has developed over time and will continue to do so. Specifically on test development, the robustness of the assessment lies in the rigour of the procedures used in item development, trialling, analysis, review and selection.
The task for the experts developing the assessment is to ensure that all these aspects are taken into account, and to use their expert judgment to select a set of test items such that there is a sufficient balance across all these aspects. In PISA this is done by assessment specialists who work with advisory groups made up of international experts. Participating countries and economies also play a key role in this item selection process.
The details of the test design and development processes are available in the PISA 2015 Technical Report.
Can students cheat on PISA?
PISA is the world's foremost international, high-quality and high-impact education assessment programme, and so it is vitally important that the PISA data are 100% accurate and authentic. The OECD therefore applies very strict conditions at all levels to make sure that student data are an accurate reflection of students' ability and performance and have not involved any form of cheating. This assurance starts with the Agreement for Participation between the OECD and each country. Article 4 of the agreement requires countries to comply with the comprehensive Technical Standards for PISA, including the secure management of test materials and secure administration of the assessment. These requirements are then reinforced through the PISA Operations Manual, the School Coordinators Manual and the Test Administrators Manual. These manuals have explicit instructions for the secure receipt, handling and storage of all test-related materials, as well as for the secure administration of the test itself. No one other than approved project staff has access to secure PISA data and embargoed material, and formal confidentiality arrangements are in place for all approved project staff.
Why doesn’t the whole of the People’s Republic of China participate in PISA?
Over recent cycles, the national ministry has been piloting PISA in several provinces in preparation for fuller participation across China. Shanghai has participated since PISA 2009, and Beijing and the provinces of Jiangsu and Guangdong participated in PISA 2015.
How have the lessons from PISA helped countries and economies improve their education systems?
In an OECD 2012 survey of PISA-participating countries and economies, the large majority of respondents said that the policies of high-performing countries or improving systems had been influential in their own policy-making processes. A similar number of countries and economies also indicated that PISA had influenced the development of new elements of a national or federal assessment strategy. In relation to curriculum setting and standards, many countries and economies cited the influence of the PISA frameworks on: comparisons of national curricula with PISA frameworks and assessments; the formation of common national standards; their reading frameworks; the incorporation of PISA-like competencies in their curricula; and the setting of national proficiency standards.
How do PISA results support education system improvement?
The OECD strives to identify what policies and practices appear to be 'working' in countries and economies that are recording high performance or show evidence of significant improvement over time on PISA. It then reports those findings and supports countries and economies who wish to investigate and explore the extent to which they would benefit from similar programs. The OECD is very aware of the different circumstances in different countries and economies (with over 80 participating in PISA 2018). There is no 'one size fits all' education model for countries and economies. It is not possible or appropriate to 'cut and paste' one country's education system into another country or economy.
Does the PISA 2015 assessment accurately measure students’ ability to collaborate with other humans?
In the PISA 2015 collaborative problem-solving assessment, the student test-taker interacted with computer agents instead of other human agents. This allowed for the consistent measurement of students’ abilities, as any pre-existing interpersonal relationships between classmates had no impact on students’ performance. A validation study concluded that students’ results in the human-computer assessment are informative of how they would perform in interactions with other humans. In particular, the performance of different groups of students can be reliably compared.
There were three components to the study. In the first, four PISA collaborative problem-solving units were re-formatted by replacing one of the computer agents with another human student who, like the primary test-taker, also saw a multiple-choice selection of possible responses; the students worked in separate locations. The second involved teachers who had observed their students over the school year and were thus expected to make accurate assessments of their students’ ability to work collaboratively. The third featured students who worked face-to-face on other re-formatted units and could freely formulate their responses.
The results of the first component showed only small differences in absolute performance scores between students who interacted with a computer agent and those who interacted with a human agent. These are not significant from a practical standpoint because of their small magnitude and because students’ scores in the human-human and human-computer assessments were highly correlated. The results of the second component showed that students’ performance scores in both the original and re-formatted collaborative problem-solving units correlated with their teachers’ opinions of their performance. The results of the third component showed that students’ performance in the original and re-formatted units, both of which took place in a virtual, computer-based setting, was a moderately good predictor of their performance in the face-to-face setting. Please see Box V.2.1 of the PISA 2015 Initial Report (Volume V) for further information.