How Do I Assess Inquiry Based Learning?

Submitted by Damian Cooper, CAfLN Past President and Co-founder

No teacher today questions the importance of critical thinking as a focus for teaching.  Likewise, problem-solving.  No surprise then that problem-based learning and inquiry-based learning play an increasing role in the kinds of learning tasks that teachers set for their students.  Yet it is only a small minority of the teachers I meet who feel confident about how to assess learning in these contexts.  Why is this?

The answer, I believe, is related to issues of control and confidence.  The most fruitful inquiries will stem from students’ own questions, not questions posed by their teacher.  However, this immediately raises the oft-heard question, “How can I assess their learning if all my students are inquiring into different questions?”  The answer: assess the skills, not the content.

This highlights the importance of “backward design” as the only pedagogically sound way for teachers to approach program planning.  Backward design involves 3 questions:

  1. What essential learning must students acquire?
  2. What assessment evidence do I need to gather as proof that my students have acquired this learning?
  3. What instructional sequence will lead students to success on these assessments and, thereby, acquisition of the essential learning?

From a curriculum perspective, when a teacher engages her students in an inquiry, she is deciding that the skills of inquiry represent essential learning.  These skills include:

  • formulating a rich question
  • identifying multiple sources of information representing different perspectives
  • analyzing the quality and reliability of sources
  • formulating a position
  • communicating and defending a position
  • reflecting upon their learning

There may, of course, be specific content that is also “essential learning”.  For example, a teacher may want all of his students to acquire knowledge and understanding about the interdependence of plants and animals, as well as the fragility of the environments that support them.  And so, while each student in the class has selected his or her own inquiry focus, all students are exploring this same issue.

But in addition to skills and content outcomes, inquiry involves dispositions such as curiosity, flexibility, empathy, perseverance, scepticism.  These must be modelled and demonstrated by the teacher if they are to be acquired by her students.  Which leads again to the question, “How do I assess students’ dispositions?” The answer: talk to your students.

In fact, much of the learning that occurs as students engage in their own inquiries requires the teacher to observe and to listen.  So the need for the teacher to understand deeply the concept of triangulating her assessment is crucial.  While the finished product associated with a student’s inquiry may take the form of an oral presentation accompanied by a visual display, or a typed report, the more useful evidence from a teaching and learning perspective needs to be gathered during the inquiry process.  Hence the need for the teacher to be skilled in observing students and engaging them in conversation as they are working.  Smartphones and tablets are invaluable tools to capture such learning “in the moment”.

So let’s walk through the various assessment opportunities that present themselves during an inquiry.  And for each opportunity, the teacher needs to ask the following questions:

  1. What is the purpose of this assessment?
  2. Who is the primary user of the information being gathered?
  3. What kind of information is required, given the purpose and primary user?

Jennifer is teaching science.  She wants her students to pose their own inquiry questions to guide their individual inquiries, so she employs the “INTU” model (Bereiter & Scardamalia, 1999).  Students will learn how to formulate their own questions using the stem, “I need to understand …” Jennifer provides a set of examples of INTU questions (see two of these below) and, through an informal class discussion, assesses the extent to which her students understand the criteria that must be met in order to formulate a question that is rich enough to sustain in-depth inquiry.

Sample INTU questions by topic:

  • Seasonal Change: I need to understand why it’s cold in Canada in winter, but much warmer in Florida.
  • Climate Change: I need to understand whether climate change is really happening.

 

Criteria to be met for INTU Questions:

  • Is the question open-ended, with no simple, correct answer?
  • Does the question allow for multiple perspectives?
  • Is the question relevant to the student and to the topic?

Jennifer may simply make anecdotal notes as she listens to and observes the discussion.  Or she may record the discussion using a tablet.  Either way, she is capturing vital diagnostic assessment evidence that will enable her to answer the following kinds of instructional questions:

  • Which students are ready to forge ahead, having already demonstrated their understanding of what an INTU question needs to be?
  • Which students will benefit from working with a peer to help them improve their initial INTU questions?
  • Which students do not presently understand the criteria for formulating a rich inquiry question and need to work with me in a small group?

The first “product” that Jennifer assesses will be students’ final versions of their INTU questions.  She will differentiate her instruction as necessary until all students have produced a question that meets the basic requirements since, without a workable question, students will not be able to pursue their inquiries.  In this respect, she is assessing for mastery.

Over the next few days, Jennifer introduces, models, and has students practice the skills and strategies associated with locating sources of information, deciding which sources are relevant to their own INTU, and then extracting that information in ways that avoid the plagiarism trap.  Her assessment throughout this stage of the inquiry process is formative; i.e., her purpose is to improve each student’s learning, NOT to measure it.  Hence, she provides descriptive feedback, pointing out what students are doing well, where students are having difficulty, and how to improve their work.  Depending on the maturity and skill levels demonstrated by her students, some or much of this formative assessment may be undertaken by students themselves as they work in pairs.  However, this demands sophisticated skills of classroom management, as well as plenty of “frontloading” so that students are clear about behavioural norms and expectations!

Jennifer continues working through each of the stages in the inquiry process with her students. Having initially taught them how to focus their inquiry using an INTU question, she teaches them how to locate sources, and how to decide what information is pertinent to their inquiry.  She models how to analyze information, how to communicate their conclusions, and finally, how to reflect upon their learning.  At each stage, she follows a learning cycle model:

  • teach and model
  • check for student understanding
  • practice with support
  • assess & provide feedback
  • practice independently
  • assess

Because Jennifer believes that a primary purpose of assessment is to enable students to gradually become reliable, independent monitors and assessors of their own work, she involves them directly in assessing their own learning by constantly requiring them to reflect on the work they are doing, determining where they are successful, where they are struggling, and what they need to do to improve their work.  In other words, she believes profoundly in the power of assessment for and as learning.

In my own teaching through inquiry, I always required students to keep a daily learning log in which they answered 3 questions:

  1. What work did I complete today on my INTU?
  2. What did I learn to help answer my INTU?
  3. What am I struggling with on my INTU?

In order to develop their metacognitive skills, I taught students to refer to their log each time we had a conversation about their progress.  I also collected their logs at the end of the inquiry as a “product” that detailed the learning “process”.  Notice how the 3 methods of assessing learning – observations, conversations, and products – weave seamlessly throughout the entire inquiry-learning process.

Once students have completed their INTU inquiries, they usually can’t wait to share their findings and conclusions with their teacher and classmates.  This, then, is the summative assessment component in which the “presentation” of their findings and conclusions represents the “final product”.  These may involve a student-led seminar, a more formal oral presentation, or some other mode of sharing their work.  I recommend allowing students the choice of the format they use to share their findings.  Once again, however, while the nature of the final products will vary from student to student, the criteria that are assessed remain consistent.  These include:

  • sources represent multiple perspectives
  • clarity of student’s conclusions
  • acknowledgement of counter arguments
  • ability to respond to questions
  • quality of supporting materials (slides, display, etc.)
  • conventions (spelling, grammar)

 

It is essential that summative assessment of students’ inquiries reflects the learning outcomes that were identified at the start.  The sharing and communication of findings and conclusions should not in any way penalize students who lack confidence or public speaking skills.  During my own classroom teaching, I often had students present their findings and conclusions to me, alone.

INTU projects and other forms of inquiry are engaging for students of all ages.  They present the opportunity for students to immerse themselves in work that is truly their own – not something they do for their teacher.  It is my hope that my comments have clarified some of the perplexing assessment questions that often deter teachers from using this powerful approach to teaching and learning.

 

This post originally appeared on Fresh Grade’s blog.

Grading For Learning

Guest Blogger – Ken O’Connor

CAfLN is a network that is “dedicated to nurturing and sustaining Assessment for Learning (AfL) . . . across Canada,” but given that every province and territory (unfortunately) requires % grades, at least for grades 9-12, we need not only to “assess for learning,” but we must also “grade for learning.”

Yes, yes, yes, we can grade for learning, and despite what Alfie Kohn says, “grading for learning” is not like “bombing for peace” (O’Connor, 2018, 304). Determining summary grades for standards or subjects doesn’t automatically promote a culture of grading; grading for learning can create a culture of learning in schools.

A grade is the symbol (number or letter) reported at the end of a period of time as a summary of student achievement.

A mark/score is the symbol (number or letter) recorded for any single student test or performance.

Traditional grading includes scores for (almost) everything students do regardless of purpose and whether students exhibit the behavior teachers like (compliance), with those scores averaged to several decimal places regardless of when the assessment or behavior took place during the school year. The result of this approach is that school becomes a game where students try to accumulate as many points as they can to obtain the highest possible grade, which may have little or no relationship to what a student actually learned.

So how, then, do we grade for learning? There are a number of critical aspects:

  1. Clarity and transparency about the primary purpose for grades – communicating student achievement of the learning goals at a point in time in summary format;
  2. Clarity about the primary purpose of classroom assessment – gathering information that leads to actions by students – and teachers – to improve learning and teaching;
  3. Elimination of behavior/compliance/penalties/attendance from grades;
  4. Ensuring that students understand that learning is a process in which it is acceptable to lack understanding and make errors early on, by eliminating evidence from formative assessments in the determination of grades and only including evidence from summative assessments, i.e., the assessments that take place toward the end of learning sequences;
  5. Ensuring that students understand also that learning is a process by emphasizing more recent evidence in the determination of grades; and
  6. Involving students in the assessment process through self-assessment, reflection, and goal setting so that they can always answer the learning questions – Where am I going? Where am I now? and How can I close the gap (improve)? (Chappuis, J. 2012, 27)

These are the sine qua non of grading for learning and they require the elimination of common practices such as averaging to calculate grades and the inclusion of homework in grades. There are also other actions that schools and teachers should take that emphasize learning and provide information that supports learning such as:

  1. Collecting evidence of student achievement based on learning goals not methods of assessment;
  2. Using performance standards that clearly describe proficiency and a limited number of levels above and below proficiency; and
  3. Ensuring that every assessment meets standards of quality so that the inferences that students and teachers make as a result of the assessments are most likely to be accurate.
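The difference between traditional averaging and emphasizing more recent evidence (point 5 above) can be illustrated with a small numerical sketch. The scores and weights below are entirely hypothetical, chosen only to show the contrast; this is an illustration of the idea, not a prescribed formula:

```python
# Hypothetical term scores for one student, in chronological order.
# Early summative scores are weak; later ones show growth.
scores = [55, 60, 72, 85, 90]

# Traditional grading: a straight average of everything, all term long.
traditional = sum(scores) / len(scores)

# Grading for learning: weight more recent evidence more heavily, so the
# grade reflects where the learner ended up. These weights are arbitrary,
# purely for illustration.
weights = [1, 1, 2, 3, 4]
recent_weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)

print(f"Straight average:  {traditional:.1f}")      # 72.4
print(f"Recency-weighted:  {recent_weighted:.1f}")  # 79.5
```

For the same growth trajectory, the straight average penalizes early struggles all year, while the recency-weighted figure comes closer to the student’s demonstrated achievement at the end of the learning sequence.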

A case can be made that, in a perfect world, we wouldn’t have grades at all, but in Canada they are required by provincial policies (however misguided those policies may be), so we have a responsibility to make the assessment and grading process support learning by moving away from traditional grading toward the procedures described above. If we do that, our graduates will be self-directed, independent, lifelong learners who will be likely to be successful in whatever they do after their years in K-12 schools.

References

Chappuis, J., et al. (2012). Classroom Assessment for Student Learning: Doing It Right, Using It Well. Upper Saddle River, NJ: Pearson.

O’Connor, K. (2018). How to Grade for Learning (4th ed.). Thousand Oaks, CA: Corwin.

A Message from CAfLN’s President, Lorna Earl

It is hard to believe that CAfLN is four years old, and that it has been five years since Damian, Ken and I first met to talk about a cross-Canada network that connected educators who believe in assessment for learning.  Our enthusiasm for CAfLN was fueled by the powerful practices that we were seeing and hearing about in classrooms and schools in small towns, cities, on the Prairies, on our coastlines, in kindergartens, art classes and calculus classes, in initial teacher education, and on and on.  But we realized that we had a unique vantage point because we worked in all of these places.  Although we saw the amazing assessment practices that people like you are engaged in, we were aware that you didn’t know one another, and we often heard about how lonely it was for individuals and groups as they carried on each day trying to make AfL work.

At this point, we are delighted with the progress and influence of CAfLN.  We have met in central Canada (Winnipeg), the far west (Nanaimo), eastern Ontario (Kingston) and the Prairies (Saskatoon) so far.  In May, we are coming together again, in Dartmouth, Nova Scotia.  People are connecting and sharing practices and insights about policy, practices, resources and strategies.

CAfLN is a not-for-profit organization.  All of the Board members are volunteers, and many of them fund their own travel to attend CAfLN events.  We do not pay speakers.  The presenters are generally CAfLN members whose travel and accommodation are covered for the conference.  Even more importantly, any funds generated by CAfLN are used to support CAfLN members’ attendance at annual conferences and to facilitate networking among CAfLN members across the country.

So, start planning now for our 5th Conference and Symposium, Sailing Forward with Assessment for Learning, in Dartmouth on May 4th-5th, 2018. Registration will open on November 1st. Apply for a travel scholarship and join us there. It should be a great event, with an emphasis on stories and presentations from your colleagues around the country, talking about the challenges and the innovative practices that are transforming assessment in Canadian schools.

 

But you don’t need to wait until May.  CAfLN is a network, not a club or an organization or an institute.  Why?  Because we know from research that:

networks can create the conditions to support individual and collective learning through intentionally fostering and developing the opportunities for members to examine their existing beliefs, and to challenge what they do – against new ideas, new knowledge, new skills, and even new dispositions (Stoll, Fink and Earl, 2003).

This is what we aspire to in CAfLN: networks of educators across the country who are focused on learning and on how assessment can be the catalyst and provide the support mechanisms to enhance learning for all students.  Educators who are committed to AfL are always learning, always intentionally seeking out and/or supporting activities, people and opportunities that push beyond the status quo.  Think about it.  Do you have an idea (or many ideas) to enhance CAfLN networks?  Share it.  Suggest it.  Start it.  CAfLN, as a network, is us.  Together, we can make the difference.

Reflections on My Experience with Assessment Practice

Submitted by Beate Planche, Ed.D.

In considering what the influences have been on my own understanding of sound assessment practice, I think first of my experiences years ago with YRDSB’s assessment literacy project. As a team, with members across areas and schools, we put our understandings on the table – and supported each other’s thinking as we questioned long-standing practices which were heavily influenced by percentages, grades, evaluation and school standings. We moved forward quickly, and Assessment for Learning soon became our collective learning goal and the underpinning of changed practice. It took us a while to truly unpack what Assessment as Learning really meant, and there was no getting away from the pressure to have documentation for Assessment of Learning, that sleeping giant awakened at least three times a year at report card time in the lives of educators.

Here are a few of my personal discoveries:

Structure drives behavior:

The way we organize ourselves makes a difference to our professional behavior. System leaders who model a co-learning stance build credibility and commitment (Sharratt & Planche, 2016, p. 67). This was really driven home for me as principals, teachers, curriculum consultants and SOs sat together to unpack classroom assessment practice. Good assessment practice takes time, and it becomes dynamic when we can learn together, or what we now call co-learning. Making time for educators to work together is not a simple thing, but it is a crucial ingredient in building capacity across classrooms and schools.

Some structures can drive learning:

Learning communities or learning networks can be effective when there is a clear goal that is understood and everyone is involved in defining what the criteria of success should be. The “what” needs to be followed by the “how” and the “when”.  It is the actions of the learning structure that make the difference for building capacity and improving instruction and assessment practice. Without accountable actions and monitoring, we might have wonderful professional conversations and not make a difference at all. What is hopeful is that using protocols for learning can focus the learning and mitigate the tendency of groups to spend a lot of time talking with not enough focused decision making and action.

There are pitfalls to consider:

We always have to consider the impact of any practice – instructional or assessment-based – through the eyes of those who will be impacted by it. Students need to be a part of norm setting, creating success criteria and reinforcing goals if we want them to have ownership of their learning. As educators, we often control a great deal of the learning process to the detriment of student empowerment. We have to discuss more often what assessment looks like in an environment where we are gradually but intentionally trying to release responsibility.

A case management approach can be highly effective for students who struggle (Sharratt & Fullan, 2012). It can make personalization very relevant for all the staff who are engaging with a student or group of students. This is not just a strategy for special education or ESL students. Assessment is a first step, but the next steps are the most important ones: what we will feed back to the student matters, and so does what we will feed forward for instructional purposes. A case management approach builds a team approach to serving students if everyone takes responsibility for their part of the “case”.

Moderation of student work is one of the best ways to build trust and professionalism if it is facilitated well. There are skill sets involved in moderation that we need time to develop and practice. But this is so worth it! This is one of the foundations of collaborative learning that can make a significant difference.

What I am still wondering about:

Teaching through strengths is in its infancy in assessment practice, from my experience.  We assess children and find out their gaps.  Do we talk enough about teaching through their strengths?  Do we value strengths enough to build assessments around them so that students can be as successful as possible, or are there still underlying issues of what “fairness” looks like?  If a child has a modification to allow an area of strength to lead, is this truly seen as levelling the playing field?  Lots to think about!  And thus, at this stage of my career, where I have moved from public education to graduate education, I am left with one enduring truth. Learning is the work! Assessment practice needs to be primarily about learning!

A Message from Our New President

Lorna Earl, Ph.D.

 

It has come to that time when we are all working to close out another school year successfully, and planning for the changes that will come in the new year.  For me, and I hope for all of the educators who participated, the CAfLN conference and symposium provided a respite and a time for reflection on what matters for teaching and learning.  It gave me a chance to eavesdrop on the conversations around the room and think about the challenges associated with staying true to the “spirit” of AfL, as described in the Learning How to Learn (LHTL) Project in England (James et al., 2007). In its work, the LHTL team found that teachers implementing AfL in their classrooms often reflected what they called the ‘letter’ of formative assessment, focusing on the surface techniques, rather than the ‘spirit’, based on a deep understanding of the principles underlying the practices. Even in this project that focused on AfL, only about 20 per cent of the teachers in the LHTL study were using formative assessment in ways that were designed to help students develop as learners (James et al., 2007).

In a recent (2015) article for Education Canada, the Canadian Education Association magazine, colleagues and I took up this idea and tried to unravel some of the problems.  The full article can be found at http://www.cea-ace.ca/education-canada/article/unleashing-promise-assessment-learning.  Here is an excerpt from that article:

The LHTL researchers found many teachers who were attempting to engage in AfL by adding strategies to their existing assessment repertoire without shifting the purpose towards enhanced learning. This finding echoed a finding from a Canadian study in which we used the metaphor of creating an audio recording to describe the different ways in which teachers incorporate ideas of assessment for learning into their practices.[10] For some teachers, the process of incorporating new assessment strategies was like laying new sound tracks onto an existing track. Their original approach to teaching and assessment remained intact, but some additional material was superimposed upon it. The other end of the spectrum was like working with a sophisticated digitized recording system. This was rare in our study. These teachers had a sense of the components of the work and the mood they wanted to create, but operated using an open and changeable approach, skipping to anywhere in the work, adding little flourishes, and maneuvering all the bits to keep the whole production flowing. The teachers who used this digital approach were able not only to use a variety of techniques every day but also to move beyond them to circumnavigate what other teachers had experienced as obstacles. The third and most prevalent production style was a mixed one – some of it audiotape, some digitized – where teachers played with the digitized approach but kept coming back to the original tape. The transitions back and forth weren’t always smooth, and these teachers frequently expressed frustration and uncertainty about their practice.

As a result of common misunderstandings about how AfL works, teachers often engage in practical implementation based on limited understanding and superficial adoption of the ideas.[11] Over and over, teachers incorporate the techniques associated with AfL, including peer and self-assessment and routine assessments throughout a course to track students’ progress. But just adding these bits is not AfL. Certainly the tools or techniques are useful, but teachers implementing the “letter” of AfL are in the early stages of understanding and embedding the concept into their practice; they still depend on rules and embed the new ideas as add-ons.

Becoming more proficient means developing a deep understanding of the underlying theory and learning to use the ideas to solve problems and make ongoing adaptations automatically. Teachers with this “spirit” of assessment for learning do not just add strategies to their existing assessment repertoire; they internalize the underlying principles, have a strong belief in the importance of promoting student autonomy, articulate a clear conviction that they are responsible for ensuring that this takes place, and take this empowering philosophy into the classroom and communicate it to students.[12] The LHTL project demonstrated that:

although advice on specific techniques is useful in the short term, longer-term development and sustainability depends on re-evaluating beliefs about learning, reviewing the way learning activities are structured, and rethinking classroom roles and relationships.[13]

If AfL is going to have the impact on student learning that research promises, it will be essential to move beyond the “letter” of superficial add-ons and rethink a wide range of practices.  A noble and worthy challenge.

Measuring What Matters: Phase 3 Progress Report

Submitted by David Cameron from People for Education

This report provides an update on People for Education’s Measuring What Matters (MWM) initiative, including some of the early findings coming out of the school field trials.

“… it isn’t about what [students] understand about seasonal changes in my science curriculum, it’s how they’re thinking critically and asking questions around those ideas within Science. I see [MWM] as a framework that gives greater purpose to what we are doing, and values the things we know are intrinsically important.”

Measuring What Matters envisions a public education system that supports all students to develop the competencies and skills they need to live happy, healthy, economically secure, civically engaged lives; and that strengthens Canada—our society, our economy, our environment—by graduating young people with the skills to meet the challenges our country faces.

This vision can be achieved by:

  • setting broad and balanced goals for student success that include numeracy, literacy, creativity, social-emotional learning, health, and citizenship; and
  • ensuring that these goals drive policy, practice, funding, and accountability.

The goal of MWM is to explore a broader view of student success that includes a concrete set of competencies and learning conditions in the areas of creativity, citizenship, mental and physical health, social-emotional learning, and quality learning environments.

This year, 80 educators in 26 publicly funded schools and seven school boards tested the competencies in their classrooms and schools. Each field trial team designed and implemented a set of activities that were integrated within their ongoing work.

Watch teacher Kim Stolys talk about her participation in the field trials.

Several themes emerged:

  • The work aligned with participants’ professional values as educators. It resonated with what they felt was central in learning experiences but often did not get the same attention as academic achievement.
  • Educators took a range of approaches in their use of the MWM competencies. Some took a narrower focus, addressing one or two competencies in a single domain; others explored combinations of competencies from several domain areas. The individuality in what educators focused on, and how they investigated it, demonstrates how personalized this work is, and how important it is to protect non-standardized learning contexts.
  • There appears to be an inextricable and dynamic link between learning conditions and specific competencies that students express: learning conditions frame and support the expression of specific competencies and, conversely, the focus on specific competencies in relation to teaching, learning, and assessment supports teachers in exploring a greater range of possible conditions and/or learning opportunities.
  • Strong interrelationships between the domains were evident across the study.
  • The specific lexicon or “language of learning” of the competencies helped define broad but sometimes ambiguous areas of learning. The language gave educators clear pathways into actions and planning in classrooms, and created opportunities to communicate with each other and to generate new conditions.
  • The framework supported broadening perspectives on where learning occurs in schools. A number of schools explored student experiences outside of the classroom, broadening the learning space beyond specific, situated moments in scheduled classroom times to include the whole school environment.

February Saskatoon Pop-up Highlights


After some minor technical difficulties, Veronica Saretsky started off the meeting by welcoming Damian Cooper from Ontario and Ken O’Connor from Florida. Both Damian and Ken shared information about the beginning of CAfLN and their hopes for connecting assessment for learning practice and practitioners across Canada. Each of them also shared what they are currently seeing in their work. How to capture evidence of learning in authentic ways and the importance of aligning practice with what we know and believe about assessment for learning emerged as themes throughout the discussion.

 

 


 

Participants then shared examples of what they are seeing and hearing in classrooms with respect to assessment and learning. This year’s conference co-chairs, Veronica Saretsky and Lori Jeschke spoke about the upcoming conference and symposium. It was an amazing time and there was an expressed desire to continue to establish these local connections.  Special thanks to Veronica for facilitating and for the treats!

 

Pop-up Meeting in Saskatoon

Thursday February 16, 2017

4:00 pm – 5:30 pm

Sion – GSCS Meeting Room

2010 7th St. E Saskatoon, SK

(Please use the front entrance and sign in when arriving)

Join virtual guests, CAfLN founders Damian Cooper and Ken O’Connor, along with other CAfLN members from Saskatchewan in sharing a personal, school, or district idea or question about deepening our use of assessment for learning. Light refreshments will be provided.

Questions? Contact veronica.saretsky@gmail.com

or

Reserve your spot here


VOCAL – Online Course on Capturing Evidence of Learning


VOCAL 101 is Damian Cooper’s new online professional learning course that takes K-12 educators inside classrooms to see why and how using mobile technologies to capture digital evidence through observations and conversations can be a powerful tool for assessing learning.

A concise five-section course design … provides K-12 educators with an effective and manageable professional learning experience. Key concept lectures … explore the foundational ideas and research that underpin VOCAL and best-practice assessment.

Authentic in-class videos … model how teachers and their students use everyday mobile technologies to capture and use evidence of learning through observations of performance and conversations.

How-to tips, tools and strategies … provide practical support to encourage and enable educators, regardless of their experience with instructional technology, to use more observation and conversation when assessing learning.

Professional learning activities … are differentiated to reflect educator readiness with respect to VOCAL – just committed, building capacity, or confirming results.

A flexible online format … means VOCAL can be used anywhere, anytime and on any device, individually, in professional learning communities, or as the basis for collaborative inquiry.

Assessment for Learning: Meeting the Challenge of Implementation


Springer Publications is pleased to announce the launch of a new book co-edited by CAfLN member Dany Laveault (University of Ottawa) and Linda Allal (University of Geneva) on AfL and the challenges associated with its implementation in our education systems.

“Assessment for Learning: Meeting the Challenge of Implementation” provides new perspectives on Assessment for Learning (AfL), on the challenges encountered in its implementation, and on the diverse ways of meeting these challenges. It brings together contributions from 33 researchers and authors working in a wide range of educational contexts representing Australia, Canada, England, Germany, New Zealand, Norway, Israel, the Philippines, Scotland, Spain, Sweden, Switzerland, and the United States. Several Canadian authors have contributed to this book, including Chris DeLuca, Don Klinger, Anne Davies, Louise Bourgeois, Ann Sherman, Sandra Herbst, Adelina Valiquette, and Dany Laveault. It reflects the issues, innovations, and critical reflections that are emerging in an expanding international network of researchers, professional development providers, and policy makers, all of whom work closely with classroom teachers and school leaders to improve the assessment of student learning.

The concept of Assessment for Learning, initially formulated in 1999 by the Assessment Reform Group in the United Kingdom, has inspired new ways of conceiving and practicing classroom assessment in education systems around the world. This book examines assessment for learning in a broad perspective which includes diverse approaches to formative assessment (some emphasizing teacher intervention, others student involvement in assessment), as well as some forms of summative assessment designed to support student learning. The focus is on assessment in K-12 classrooms and on the continuing professional learning of teachers and school leaders working with these classrooms.

Readers of this volume will encounter well documented accounts of AfL implementation across a large spectrum of conditions in different countries and thereby acquire better understanding of the challenges that emerge in the transition from theory and policy to classroom practice. They will also discover a wealth of ideas for implementing assessment for learning in an effective and sustainable manner. The chapters are grouped in three Parts: (1) Assessment Policy Enactment in Education Systems; (2) Professional Development and Collaborative Learning about Assessment; (3) Assessment Culture and the Co-Regulation of Learning. An introduction to each Part provides an overview and presents the suggestions and recommendations formulated in the chapters.