‘The primary purpose of assessment is to support learning’, state both the Estonian Lifelong Learning Strategy 2020 and its successor strategic plan, Education 2035. These overall strategic plans, created through broad-based stakeholder engagement, have driven and continue to drive leadership and management of the entire education system.
In reference to assessment, the plans take an ecosystemic view: assessment is seen not only as a tool for students to learn, but also as a means for the system and its people to learn. Indeed, the Ministry of Education and Research is using formative assessment vehicles, including surveys, to nudge stakeholders in education to take more personal and collective responsibility for their own long-term development and school improvement, whilst investing less of its time and attention in external controls.
This chapter first provides a general introduction to innovative low-stakes tests being used in Estonia to support student and teacher learning. Second, learning-to-learn, self-management and communication tests are described.
Next, scientific, mathematical and linguistic literacy tests are explored in general, and scientific literacy tests in some detail. Other low-stakes instruments are identified, but not described in detail as the purpose of this document is to provide an introduction to some Estonian innovations in education, as opposed to offering a comprehensive overview. Finally, surveys measuring student and teacher satisfaction and well-being are detailed.
A core innovation adopted by the Ministry of Education and Research has been the national creation of a series of largely voluntary, mark-free, but feedback-laden assessment instruments. These competence-based tests, complemented by surveys, seek to make a substantial contribution to building an assessment-for-learning culture. The data arising from these assessment instruments are intended to underpin an evidence-based approach focussed on enhancing teaching and learning.
The evidence becomes the basis for the accompanying teacher and, in some cases, student feedback reports that focus on what students know and can do, and how to close the gap between that and what they still need to know and learn how to do. The ultimate purpose of the tests is formative for both students and teachers.
In principle, raising teacher awareness of student strengths and development needs, combined with related advice on teaching strategies, should lead to enhanced teaching of both individual students and groups, and ultimately to enhanced student learning. Importantly, student feedback reports have the potential to raise student awareness of their strengths and development needs. As the student reports propose concrete strategies for enhancing learning, they should allow students to take greater control of their own learning.
By law, the tests cannot be used to create marks or as a basis for making decisions about a student’s future educational path. They are instead intended to promote the use of assessment as a powerful means for supporting student and teacher learning.
The national curriculum has since 1996 stressed the importance of general competences related to the learning process, in fact the lifelong learning process. Research in Estonia has discerned that self-management and learning skills are intertwined and help students to be more effective learners and improve their sense of well-being.1, 2
Yet, despite seeking to align policy, research results, teacher pre- and in-service education, and learning materials development in reference to general competences, there was a missing piece – the separate assessment of general competences. This is pivotal, since what is measured becomes more visible and more manageable.
As of 2018, there are nationally-produced, mark-free tests to assess the general/key competences of self-management, learning to learn and communication. They consist of two types of tests: one measures learning and self-management skills; and the other measures communication and self-management skills.
The tests are administered in Grades 2, 3 and 6. A test is also under development for the third stage of basic education (Grades 7–9). Teachers choosing to use any of these tests also have the option of filling out a related survey on their teaching practices.
The tests are freely accessible to teachers and taken by students online. The teacher survey is also online. For teachers, both test and survey use is voluntary. They are aspirational tools that are meant to inspire largely autonomous Estonian teachers to further reflect on and discuss the value of teaching general competences including learning skills. They offer teachers and indirectly students a pressure-free opportunity to learn how to improve teaching and/or learning.
Feedback is automatically generated by the testing system in the form of individual reports on each student and classes as a whole.3 The summary reports identify student development needs and help teachers interpret the results. As the various subheadings of the student tests and the teacher survey are aligned, the reports help shine a light on which teacher-reported practices appear to be effective and which ones require further attention in order to better support individual students or groups of students.
Individual student or class reports are not directly accessible to school principals. In other words, the tests are intended to serve as part of the teacher’s personal toolkit for improving teaching and learning. Teachers are, however, encouraged to consult the school psychologist for support in interpreting the results, and to ask school principals for any support they might need to better help students learn.
As class teachers in Estonia usually teach the same class of students from Grade 1 through to the end of Grade 6, the tests permit teachers to track student progress over several years and to gather evidence on the effectiveness of teaching strategies and student learning strategies. This means teachers are well placed to adjust their teaching over several years to better meet student needs.
In 2019/2020 approximately 29% of Grade 3 and 6 students took the tests. As the uptake is not yet what had been hoped for, more work will be required to integrate further discussion about using the tests into teacher professional development, education conferences, school strategic planning, student-parent-teacher dialogue, and awareness-raising campaigns.
The nationally-produced scientific, mathematical and linguistic* (reading) tests (‘tasemetööd’) measure student literacy in these three domains at the end of the first stage (Grades 1–3) and the second stage of schooling (Grades 4–6). Currently, the Grade 4 and 7 scientific and mathematical literacy tests have been fully implemented. A Grade 10 scientific literacy test, as well as linguistic literacy tests for Grades 4 and 7, are slated for widespread implementation in the autumn of 2021.
Instead of testing students at the end of the academic year in Grades 3 and 6, the assessment takes place at the start of Grades 4 and 7. The timing increases the likelihood that both teachers and students use the test results and related feedback to enhance their own learning in the here and now, and that teachers use them to improve teaching.
The administering agency, the Estonian Education and Youth Board (Harno), tests a representative sample of students each time a test is revised, usually every three years. Schools, however, are able to sign up annually for the tests voluntarily, in which case their students must take the tests.
In 2018, a total of 44% of all Grade 4 students took the scientific literacy test (representative sample was 13%), and 42% of Grade 7 students took the test (representative sample was 13%). In 2019, there was no representative sample, but 62% of Grade 4 and 56% of Grade 7 students took the test.
In 2018, the tests included multiple-choice questions, with open-ended questions making up 20–38% of items. Students were asked to solve problems through research and decision-making assignments. ‘The e-test also included an Internet search task. In one task, the student must perform a virtual experiment and analyze the results of it by changing different parameters.’ 4
Multiple-choice questions are all automatically scored. Open-ended questions in the representative sample are assessed by the administering agency Harno, whilst teachers in schools voluntarily administering the tests mark the open-ended questions within the national web-based Examinations Information System following Harno guidelines.
More specifically, as a case in point, the scientific literacy test (henceforth science test) focuses on subject-related knowledge and competences, as well as general learning competences. In addition, the science test includes tasks/questions to measure deep knowledge related to underlying meanings and principles and subject-related competences. In other words, the tasks/questions mostly measure a student’s level of scientific literacy or current capacity to apply subject-specific core concepts/big ideas.
The system automatically generates feedback reports for both students and teachers. Feedback is offered in four categories: knowledge, as well as analytical, planning and interpretation skills. For Grade 4 students, the achievement is assessed under each of the four categories as being at a beginner, average or high level. In Grade 7, the three levels indicated are average, high or top. Student reports include a bar chart showing how they performed in each of the categories.
Each of the four categories has descriptors that indicate what a student who has achieved a given level can do. These have the potential of being aspirational or at least helping to further indicate what remains to be achieved. For each category the feedback to students explains what they know how to do and suggests a concrete target for improvement (e.g. when planning an experiment, you need to learn how to justify why certain conditions are necessary for conducting the experiment, in which stages and in which order…).
Teachers receive feedback on individual students and entire groups/classes. For example, if a student or an entire class has weak analytical skills, teachers may be shown the types of questions or activities that could help students to develop those skills.5, 6
Student or class reports are not directly accessible to school principals. Teachers may choose to discuss the results in general terms, in particular, when needing guidance from the school leadership team, when discussing professional needs or participating in the development of a school’s new strategic and/or work plans. In other words, the tests are intended to serve foremost as part of the student’s toolkit to improve learning and the teacher’s personal toolkit for improving teaching and learning. They can, however, provide important reference points for discussions about school improvement.
* Estonian and Russian as a first language, and Estonian as a second language
In addition, there are mark-free mathematics and linguistic competences tests that aim to determine: what level a student has achieved; why a student has not achieved an expected level; and, to a lesser extent, which cognitive processes (e.g. motivation, thinking, memory and planning) may need further attention. These are considered pivotal in helping teachers and a wide range of students to build the general capacity to read and to apply mathematical knowledge and skills. As entrepreneurship is also considered a core competence, a similar set of tests for measuring this competence is also under development.
There are also primarily voluntary student digital competence tests that target students in Grades 8 and 11. Students taking the digital competence tests receive personal feedback reports. Schools receive anonymised feedback reports. With the exception of the digital competence tests, which are discussed in the chapter on digital competence, these additional instruments are not further discussed in this document.
The primary focus in the coming years is on finding ways to help more teachers and students to use assessment as a springboard for learning. At the same time, testing instruments will continue to be enhanced. In particular, more attention will go into enhancing feedback/feedforward reports, and exploring ways of increasing student and teacher uptake of feedback/feedforward. Additional professional development will be offered in how to make best use of the tests and reports. The tests will continue to be voluntary in nature.
As a further counterweight to overemphasising school-leaving examination data, several survey instruments have been developed to measure student, teacher and parent satisfaction with education. These, too, have been called into being by the Lifelong Learning Strategy 2020, which designated education stakeholder satisfaction levels (i.e. perceived well-being) as important success indicators alongside student achievement.
The surveys help make visible those factors associated with satisfaction and well-being that influence student learning. Once these factors are made visible to stakeholders in education along with survey respondents’ assessment thereof, strengths can be better built on and areas of concern more systematically addressed. Stakeholders in education should be better placed to co-construct supportive, stimulating and safe learning environments.
As of 2018, all general education schools are offered the option of having their Grade 4, 8 and 11 students take part in the survey on an annual basis. Teachers and parents are surveyed every three years. In 2019, 92% of general education schools participated in the student survey.
More specifically, student satisfaction surveys measure well-being and the factors associated with it.
The assessment of well-being is based on self-determination theory, according to which three innate psychological needs – autonomy, self-efficacy/competence and relatedness – must be satisfied in order to yield enhanced intrinsic motivation and well-being.
Student surveys include visible and latent variables. A visible variable such as bullying is measured by having students report on the frequency of, among other points, threats experienced on the Internet, having possessions stolen or destroyed, being picked on or taunted, and being hit or pushed.
Visible variables are also used to measure the latent variables. For example, to measure the latent variables of autonomy, self-efficacy and connectedness, some corresponding visible variables are: ‘I am confident that I am able to complete my studies.’ and ‘My classmates help me when I need it.’8 Other latent variables can include meaningfulness of learning, home environment, teacher collaboration, feedback that supports development, self-management of learning, cynicism and fatigue.
Each school gets a detailed feedback report on how its school community has assessed the factors included in the survey. Anonymised results are presented for each survey question, and aggregated data is also presented to draw broad-based conclusions, for example, about the latent variables.
Each school’s results are juxtaposed with national averages. In other words, schools annually get an overview of how satisfied and pleased students are with their learning environment and processes, and how those results compare with national averages. Every three years schools get a report about teacher satisfaction with their work and student learning processes, and about parental satisfaction with their children’s learning processes.
Importantly, the surveys themselves are instruments of connectedness offering all respondents a potentially powerful vehicle for making their voice heard in the school improvement process. The reports help students, parents, teachers and school leaders to become more attuned to the perceived positive and negative aspects of their school’s learning environment.9 They also help place autonomy, self-efficacy and connectedness at the centre of discussions related to satisfaction, well-being and learning.
This is powerful data that is difficult to ignore and tends to spur substantial discussion in schools. The public is given access to each school’s general satisfaction rating, as well as attendance and bullying-related figures. The public also has access to an anonymised national overview of all results. Key stakeholders who see apparent problems in a school, or perceive them from the more detailed national overview, are likely to engage with schools in seeking solutions. Once issues are visible, they are more likely to be dealt with. Likewise, once strengths have been identified, they are more likely to be effectively employed for school improvement.
The next chapter discusses how Estonia is supporting the development of professional learning communities in schools and their communities. This builds on the big idea that it is better to engage than to impose.