September 30, 2024
Training is often perceived as a soft power and receives less attention in assessment circles, typically overshadowed by technical language and political accountability pressures in the education system, and frequently reduced to mere written instructions in the workplace. Yet while training is a form of learning that is measured through assessment, the process of conducting assessments can itself be improved with targeted training. In this way, training permeates every aspect of assessment, which underscores its importance and the need to form the right attitude toward it to succeed in the assessment domain. Building an attitude toward training that contributes both to excellence in implementation across the assessment cycle and to the formative nature of assessment is an art of balance, led by the management of the organization.
Recently, Oxford, Cambridge and RSA Examinations (OCR) published a report titled “Striking the Balance”[1], discussing the need for curriculum and exam reform, focusing on balancing learning load with exam intensity, and advocating for changes in the forms that assessment takes. Balancing learning and assessment may require targeted training for teachers, so they can adjust their teaching and assessment methods, and for assessment administrators, so they can reconsider how best to measure higher-order thinking skills. Additionally, the growing use of technology in both learning and assessment heightens the need for multi-dimensional professionals and for assessment organizations to invest in training programs that upskill the workforce involved in every stage of the assessment cycle.
In this article, we first focus on explaining what it takes to build a training culture and then describe the processes that illustrate the role of training in each stage of the assessment cycle (authoring, registering, administering, marking, analyzing, and reporting), aimed at reemphasizing the importance of training in the assessment domain.
What skills require training in an organization, educational or otherwise? The question centers on identifying learning needs that training can address. Institutionally, a competency model is employed to capture various factors that drive training requirements and shape the organizational culture of learning and progression for effective performance. In short, the competency model encompasses a collection of knowledge, skills, and abilities mapped out for various core and job-specific competencies required at basic, moderate, or advanced levels for both general and technical roles within an organization.
In the context of educational assessment, such a model could facilitate the implementation of training programs embedded within the baseline of recruitment processes, later incorporated into performance management and professional development, and ultimately included in succession planning schemes within the HR policies of an assessment organization. Alternatively, tech-oriented assessment organizations, such as technology providers, may adopt an agile approach to managing personnel, identifying and providing training on the spot. In this model, the competency framework is embedded in the mindset of supervising managers, who perform ad-hoc competency gap analyses as requirements evolve. The training process is more organic, with employees gaining hands-on experience during onboarding and receiving ongoing guidance from senior staff or product managers. This ensures teams remain adaptable and equipped with the necessary skills and knowledge to meet shifting demands.
| Aspect | Corporate Approach | Agile Tech-Led Approach |
| --- | --- | --- |
| Role Requirements for Item Author | • Must hold a degree in education or a relevant field. • Must undergo formal training sessions. • Must adhere strictly to established item writing guidelines. | • Must be adaptable to fast-paced environments. • Must be willing to learn on the job. • Must be capable of collaborating closely with tech teams to adjust items based on real-time feedback. |
The tone in the recruitment policy may already indicate the type of competency model embedded in the training practices of the assessment organization.
To design an effective competency model, whether formally documented in corporate terms or implicitly understood within the dynamics of agile-led teams, an assessment organization might conduct interviews and focus groups in the former case, or foster a culture of sharing in short daily sprint meetings in the latter. These methods help identify challenging scenarios where job holders have struggled, and use those insights to define the behaviors and competencies necessary for the job and to provide targeted interventions as needed. Implementing a competency model can also reduce the costs associated with generic training programs by introducing competency gap assessments and benchmarking current performance evaluations against the targeted performance indicators from the model[2], as sketched below.
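To make the idea of a competency gap assessment concrete, here is a minimal sketch in Python. The role profile, competency names, and proficiency levels are hypothetical, invented purely for illustration; a real competency model, and the HR system behind it, would be considerably richer.

```python
# A minimal competency gap sketch. Role profiles, competency names,
# and levels are hypothetical, invented purely for illustration.
LEVELS = {"basic": 1, "moderate": 2, "advanced": 3}

# Target proficiency levels for a hypothetical "item author" role.
role_profile = {
    "item writing techniques": "advanced",
    "curriculum knowledge": "moderate",
    "platform usage": "moderate",
    "accessibility and inclusivity": "basic",
}

# Levels observed for one employee in a performance evaluation.
employee_levels = {
    "item writing techniques": "moderate",
    "curriculum knowledge": "moderate",
    "platform usage": "basic",
}

def competency_gaps(profile, observed):
    """Return competencies where the observed level falls short of target."""
    gaps = {}
    for competency, target in profile.items():
        have = LEVELS.get(observed.get(competency, ""), 0)  # 0 = no evidence yet
        need = LEVELS[target]
        if have < need:
            gaps[competency] = {"target": target, "shortfall": need - have}
    return gaps

print(competency_gaps(role_profile, employee_levels))
# flags item writing, platform usage, and accessibility for targeted training
```

In practice, the output of such a check would feed targeted interventions rather than generic, one-size-fits-all training programs.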
Recently, the Presidential Task Force Report of the National Council on Measurement in Education[3] identified a framework for foundational competencies in educational measurement that combines three domains of competencies (communication and collaboration; technical, statistical, and computational; and educational measurement) with five subdomains that group the individual competencies, illustrated visually below:
[Figure: A framework for foundational competencies in educational measurement]
Ultimately, awareness of these frameworks can support the development of training programs for the workforce of assessment organizations and can also inform the design of assessments and the training required to develop them.
Training Needs of Item Authoring Teams. Item authors design and review new questions daily, necessitating clear instructions and guidance materials to make sure items cover the curriculum and adhere to the format specified in the blueprint. If item authors are not trained in how to incorporate learning goals into the design process, the items they create may not measure what is intended, ultimately failing to enrich the item bank with content-reflective items (a sketch of such a blueprint coverage check follows below). In line with the push for greater assessment literacy, an item author also needs training in item writing techniques that explain each structural component of the item writing process. In organizations where item and content author roles are separate, with content authors drafting questions and item specialists focusing on item quality regardless of content, separate training may be necessary for each group of experts.
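As a concrete illustration of the blueprint coverage idea mentioned above, here is a minimal sketch in Python. The curriculum strands, target counts, and item tags are hypothetical; real blueprints also constrain item type, difficulty, and cognitive level.

```python
# A minimal blueprint coverage check. Strands, counts, and items are
# hypothetical; real blueprints carry many more constraints.
from collections import Counter

# Target item counts per curriculum strand from a hypothetical blueprint.
blueprint = {"number sense": 10, "algebra": 8, "geometry": 6, "data handling": 6}

# Strand tags of the items currently in the bank (one tag per item).
item_bank_strands = ["number sense"] * 10 + ["algebra"] * 5 + ["geometry"] * 7

def coverage_report(blueprint, strands):
    """Compare the item bank against the blueprint, strand by strand."""
    counts = Counter(strands)
    report = {}
    for strand, target in blueprint.items():
        have = counts.get(strand, 0)
        report[strand] = {"have": have, "target": target, "gap": max(0, target - have)}
    return report

for strand, row in coverage_report(blueprint, item_bank_strands).items():
    print(f"{strand}: {row['have']}/{row['target']} items (gap: {row['gap']})")
# flags algebra and data handling as needing more authored items
```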
In a practical context, if items are designed on a tech platform, training should also cover using the platform effectively during the item writing process; the various roles involved, including item authors, the different types of item reviewers, and the supervisor or admin overseeing quality assurance, all need training on how to improve the quality of the process in each iteration. Furthermore, item and content authors need training on the accessibility and inclusivity aspects of assessments, to make sure that elements like stems, distractors, and multimedia scenarios do not discriminate against any test takers. Embedding this focus from the start of the assessment creation process helps ensure that the final content, prior to delivery, is unbiased toward any student group, regardless of the good intentions of assessment providers. Next, as AI-enabled platforms increasingly generate items, training programs for item and content authors will need to cover prompt optimization and outcome validation, that is, reviewing generated content for consistency, quality, and alignment with assessment needs, so that item authoring teams can work efficiently and accurately alongside AI assistants.
Finally, training for the test development team should include systemic thinking that considers not only item creation but also its presentation on the platform. In end-to-end technology solutions, this systemic approach is supported through adjustments to the functionality of the unified platform.
Training Needs of Test Delivery Teams. Test delivery is the second primary function of digital assessments, with organizations investing heavily to increase the accessibility and inclusivity of the testing environment, which brings a number of considerations into training programs for test delivery teams. First, to ensure a smooth transition from item authoring to assessment delivery, it is important that the format in which assessments are presented on tech platforms does not introduce bias. Second, a pressing issue in the test delivery phase is the security of the process, particularly with the growing option of taking tests from home. The recent scam involving Pearson VUE's at-home testing in Florida[4] has made the industry more cautious about this aspect, underscoring once again the importance of thorough training for the test delivery team, especially when tests support remote proctoring.
Particularly relevant for test delivery teams is that every exam involves pre-exam, during-exam, and post-exam phases, and the instructions for each phase are repeated on exam day to ensure that every team member is aligned, regardless of instructions provided earlier; a minimal phase-checklist sketch follows below. Additionally, test delivery is typically the team's first interaction with clients, usually students, which calls for a customer care-oriented approach that places the student at the center, providing a smooth and user-friendly testing experience and setting the tone for test taker satisfaction. Finally, the test delivery phase often receives the most feedback from users, and this input drives updates and improvements to the training programs for all phases of the assessment cycle.
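As a light illustration of the phase structure described above, here is a minimal checklist sketch in Python. The individual steps are hypothetical examples, not an authoritative exam-day procedure.

```python
# A minimal exam-day checklist sketch. The phases mirror the pre-exam,
# during-exam, and post-exam structure described above; individual steps
# are hypothetical examples only.
exam_day_checklist = {
    "pre-exam": [
        "verify test taker identity",
        "confirm platform and proctoring software are running",
        "read standardized instructions aloud",
    ],
    "during-exam": [
        "monitor for technical issues and security incidents",
        "log any irregularities with timestamps",
    ],
    "post-exam": [
        "confirm all responses are submitted and synced",
        "collect test taker feedback",
        "file the administration report",
    ],
}

def run_phase(phase, completed):
    """Report which steps of a phase are still outstanding."""
    outstanding = [s for s in exam_day_checklist[phase] if s not in completed]
    status = "complete" if not outstanding else f"missing: {outstanding}"
    print(f"{phase}: {status}")

run_phase("pre-exam", completed={"verify test taker identity"})
```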
Training Needs of Student Response Markers. Recruiting and training markers is a major portion of the work within the assessment cycle, and since both the student work and the marking are conducted digitally, the digital requirements of this phase are substantial, particularly concerning the provision of training. Moving training from face-to-face to online formats, in turn, requires trainers themselves to learn how to conduct sessions remotely; a prerequisite should therefore be learning how to organize effective online training that preserves the essential elements of in-person sessions.
Additionally, training for this phase should begin earlier than for other teams, because markers face stricter deadlines tied to student result reporting and higher pressure to assess constructed responses accurately. Training programs for this group of professionals should therefore be prioritized during the overall preparation phase. Marker training can be organized into three sets for new markers: general theory and sample-based marking for certification, real-time training during live exams for calibration, and post-marking training for recertification; returning markers may begin at the second set. A sketch of a simple calibration check against pre-scored control responses follows below.
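To illustrate the calibration step, here is a minimal sketch in Python that compares a trainee marker's scores on pre-scored control responses against reference scores from senior examiners. The scores, scale, and certification thresholds are hypothetical.

```python
# A minimal marker calibration sketch: compare a trainee's scores on
# pre-scored control responses against reference scores. The 80%/95%
# thresholds and the scores themselves are hypothetical.
def calibration_check(marker_scores, reference_scores, tolerance=1):
    """Return exact and within-tolerance agreement rates for a marker."""
    assert len(marker_scores) == len(reference_scores)
    pairs = list(zip(marker_scores, reference_scores))
    exact = sum(m == r for m, r in pairs) / len(pairs)
    adjacent = sum(abs(m - r) <= tolerance for m, r in pairs) / len(pairs)
    return exact, adjacent

# Scores on ten control responses (0-5 scale), marker vs. senior examiner.
marker    = [3, 4, 2, 5, 1, 3, 4, 2, 0, 5]
reference = [3, 4, 3, 5, 1, 2, 4, 2, 1, 5]

exact, adjacent = calibration_check(marker, reference)
certified = exact >= 0.80 and adjacent >= 0.95
print(f"exact: {exact:.0%}, within 1 point: {adjacent:.0%}, certified: {certified}")
# a marker below threshold would be routed back to sample-based training
```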
A training program for this phase needs to incorporate modules on digital platform navigation, real-time problem-solving, and methods for improving marker engagement and accuracy, making sure that all participants are fully equipped to handle the complexities of the marking process. Digital training could be supported through a dashboard like the one shown below:

[Figure: marker training dashboard]
Training Needs of Data Analysts and Report Developers. The final touchpoint of the assessment cycle is the data that results from assessing students' abilities through various types of questions. Data handling has two critical aspects: formation and analysis on one side, and communication through reporting techniques on the other. First, data analysts should be trained not only in methods of analysis and research design thinking but also familiarized with the question types and the test delivery environment that underpin data gathering in the assessment cycle. The other end of the pipeline concerns how assessment data, processed by psychometric analysts, is communicated. During the reporting phase, the team needs to put themselves in the stakeholders' shoes and deliver reports in language and formats that are accessible, facilitating a shared understanding of how assessments can support overall student progress.
For this stage, training is required on interpreting assessment data, reporting results effectively to various stakeholders, and applying findings to inform educational practices and decision-making; a minimal example of the kind of analysis such training might cover follows below.
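Here is a minimal classical item analysis sketch in Python, computing item difficulty (proportion correct) and a crude discrimination index from a hypothetical 0/1 response matrix. Operational psychometric analysis would use established toolkits and far larger datasets.

```python
# A minimal classical item analysis sketch. The 0/1 response matrix is
# hypothetical; operational analysis would use dedicated psychometric tools.
def item_analysis(responses):
    """responses: one list of 0/1 item scores per student."""
    n_students = len(responses)
    totals = [sum(row) for row in responses]  # each student's total score
    report = []
    for j in range(len(responses[0])):
        scores = [row[j] for row in responses]
        p = sum(scores) / n_students  # difficulty: proportion answering correctly
        # Crude discrimination: mean total score of students who got the item
        # right minus mean total score of those who got it wrong.
        right = [t for s, t in zip(scores, totals) if s == 1]
        wrong = [t for s, t in zip(scores, totals) if s == 0]
        disc = (sum(right) / len(right) - sum(wrong) / len(wrong)
                if right and wrong else 0.0)
        report.append({"item": j + 1, "difficulty": round(p, 2),
                       "discrimination": round(disc, 2)})
    return report

# Six students x four items, scored 0/1 (hypothetical data).
data = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1],
        [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 0, 1]]
for row in item_analysis(data):
    print(row)
```

Reports built on such statistics should then be translated into plain language for teachers and policymakers, which is the report developers' side of the training need.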
In summary, training programs in the assessment cycle play an important role in building a culture of enlightenment and a growth mindset in every phase of the assessment process, even though these programs are at times taken for granted and merely documented. By integrating a training element into every assessment process, organizations can close gaps arising from various industry-specific or institutional challenges. Different organizations may adopt a more corporate or a more agile approach to training; regardless of the approach, having a clear list of the competencies expected for performance helps set definitive expectations for efficient assessment delivery.
Vali Huseyn is an educational assessment specialist, recognized for his expertise in development projects across various aspects of the assessment cycle. His ability to advise on the improvement of assessment delivery models, the administration of assessments at different levels, innovation within data analytics, and the creation of quick, secure reporting techniques sets him apart in the field. His work, expanded by collaborations with leading assessment technology firms and certification bodies, has greatly advanced his community's assessment practices. At The State Examination Centre of Azerbaijan, Vali contributed significantly to the transformation of local assessments and led key regional projects, such as a unified registration and tracking platform for international testing programs, reviews of CEFR-aligned language assessments, PISA-supported assessment literacy trainings, and an institutional audit project, all aimed at improving the assessment culture across the country and the former USSR region.
Vali has received two prestigious scholarships for his studies: he completed an MA in Education Policy Planning and Administration at Boston University on a Fulbright Scholarship and also studied Educational Assessment at Durham University on a Chevening Scholarship.
Discover guided practices in modernizing assessments and gain insights into the future of educational assessments by connecting with Vali on LinkedIn.