Communication in High-Stakes Assessments: Are we being heard?

June 30, 2024

Communication in High-Stakes Assessments: Are we being heard?


Clear, transparent, and targeted communication is essential to ensuring that assessments are experienced as positive events rather than negative ones. In educational assessment, validity and reliability are so heavily prioritized as quality assurance measures that the importance of communication has been somewhat overlooked. Moreover, many specialists in the field work in technical language that lacks a strong communication focus, a limitation often tied to the technical nature of their work, which makes it challenging to explain concepts that are reliable and valid yet hard to articulate in simple terms. Below, we present cases in which communication breakdowns have led to significant issues, from regulatory uncertainty to confidentiality dilemmas.

Inspired by discussions at the 7th Annual FLIP+ Event, themed "The Role of AI in National and Classroom Assessments," held in Portugal in June 2024, I decided to write about how failing to communicate the benefits of technological integrations to stakeholders in the assessment domain, combined with an excessive focus on the technological and quality developments of the assessments themselves, can invite public scrutiny. By engaging stakeholders in communication earlier, we could reduce public criticism and foster a spirit of collaboration, ultimately improving public engagement with and acceptance of technology-driven solutions in high-stakes assessments.

This article explores the critical role of communication in high-stakes assessments, presenting four practical cases of miscommunication and suggesting reactive and preventive strategies.

Frequent Miscommunications: Scenarios from High-Stakes Assessments

Each case below illustrates the critical importance of maintaining clear, transparent, and timely communication among all parties involved in the assessment process. Whether it involves internal team coordination, trust or mistrust in AI-enabled tools, or external client interactions, these examples highlight the consequences of communication failures and the strategies employed to mitigate them for better outcomes in educational and corporate settings.

Regulatory Uncertainty in High-Stakes Assessments

We've all encountered situations in managing high-stakes assessments where communication gaps arose from uncertainty, specifically a lack of information from regulators about how the assessments would evolve over the years. This uncertainty makes it difficult to prepare strategically for a full assessment cycle and, in the long run, affects the candidates taking the assessments.

Reactive Strategies

When facing regulatory uncertainty in high-stakes assessments, the immediate action should be to establish direct communication with regulators to seek clarification and updates on expected changes. You may also want to engage in ongoing dialogue to better understand the regulatory landscape and advocate for timely information sharing. It is also prudent to communicate transparently with all stakeholders, including candidates, about what is known and what may change, explaining how such changes could impact the assessments. Additionally, implement flexible assessment designs that can adapt to regulatory changes with minimal disruption. Finally, set up a contingency plan that includes scenario planning so you can respond quickly to sudden regulatory changes, ensuring that both preparation and response strategies are in place to minimize the impact on the assessment cycle and candidates.

Preventive Strategies

The first general approach to mitigating the future impact of regulatory changes is to track regulatory trends and potential shifts that could affect high-stakes assessments, including reviewing updates and actively participating in industry forums where these changes are discussed. Additionally, building strong relationships with regulatory bodies can facilitate better communication and provide early warning of changes, helping you anticipate and adapt to expected shifts. You can then expand this awareness by training your teams to manage regulatory uncertainty, creating adaptable frameworks, and fostering organizational agility to adapt quickly to regulatory changes. Finally, engaging with other industry stakeholders can help form a collective voice that influences regulatory developments, promoting standards and practices that provide stability and clarity in assessment procedures.

Transparency vs. Confidentiality in High-Stakes Assessments

There is always a dilemma regarding how open we can be with the public while preserving confidentiality. Let's consider a case involving high-stakes assessments where pre-established legal cut scores dictate that about 70% of candidates will fail. This level of failure could significantly increase the client’s workload due to a surge in formal appeals, which are part of the assessment's reporting policy. 

One strategy to manage this overload is to publicize the assessment questions, increasing transparency and potentially reducing the number of appeals by 50%. However, this approach also means sacrificing the confidentiality of high-quality assessment items. Ultimately, such a decision depends on the candidates' awareness of and familiarity with the item types and their scoring practices. A positive aspect of this transparent policy is that it encourages past and prospective candidates to discuss and clarify the correctness of questions on social media, which can support understanding and public acceptance.

Reactive Strategies

In response to an increased number of appeals and dissatisfaction among candidates, the first step is to improve communication with all stakeholders, including detailed explanatory sessions, webinars, or public Q&A forums where candidates can ask questions and receive clarification directly from experts. Simultaneously, it is very important to set up a dedicated response team to handle appeals, ensuring that each case is reviewed promptly and thoroughly to maintain trust. Additionally, consider a controlled release of certain exam items or partial content to balance transparency with the need to protect the integrity of the assessment, addressing specific concerns raised by candidates while keeping most items confidential.

Preventive Strategies

The initial preventive step here is to establish a clear communication plan outlining the assessment's objectives, scoring methodology, and the rationale behind the cut scores well in advance of the exam date. This plan should include informational materials, workshops, and simulations that help candidates understand the assessment's structure and scoring system. Additionally, implementing a unified communication policy with preemptive feedback mechanisms, where candidates can raise concerns about exam content or scoring before filing formal appeals (through preliminary surveys or informal feedback channels immediately after the exam), could help prevent public pressure post-exam. Furthermore, regular reviews and adjustments of assessment items and processes, based on candidate feedback and changing educational standards, demonstrate alignment with public needs, supporting ongoing relevance and fairness and thereby reducing the grounds for appeals. Finally, engage educational communities and stakeholders at public events to promote a broader understanding of assessment standards and practices, using social media and professional forums to facilitate open dialogue while preserving the confidentiality of high-stakes content.

Miscommunication in Assessment Implementation and Its Impact on Scoring and Timelines

In our third case, consider a scenario in which the assessment design team of an assessment agency worked closely with a client, such as a ministry or a corporate institution, and agreed to design the item types the client requested. However, the design team forgot to inform the test delivery team about the new item type, leaving the delivery team unaware that it had been included in the test format. Consequently, the delivery platform was not adjusted to accommodate the change, and the test was administered without the updates needed for the new item type, which in turn affected the scoring team.

As a result, the exam was conducted, but the delivery platform, not adjusted for the new item type, was unable to score candidates' responses as the client required. This lack of communication led to delays and additional time spent resolving the issue. It was eventually resolved amid high tension between the test design and test delivery teams, but the scoring team's work was delayed, and candidates had to wait longer than the initially announced timeline for their exam results.

Reactive Strategies

Once an error is identified, the test delivery team should immediately adjust the platform to support the new item type, if possible, and communicate these changes and any expected delays transparently to all stakeholders. Additionally, form a resolution team composed of members from the design, delivery, and scoring teams to address and fix the scoring discrepancies quickly. Lastly, adjust the results processing system to handle the new item type accurately and issue an official apology to the candidates, detailing the reasons for the delay and the corrective actions taken.

Preventive Strategies

To prevent such cases, it is better to develop and implement a comprehensive communication plan that ensures all teams are regularly updated about any changes through shared digital platforms and mandatory cross-team briefings. Additionally, it helps to establish clear change management protocols with a checklist for notifying all impacted departments, requiring sign-offs from key stakeholders before any changes are implemented. Finally, organizing regular training sessions for all teams improves understanding of the assessment process and its interdependencies.
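Beyond process fixes, a simple technical safeguard can catch what communication misses. As an illustration only, here is a minimal sketch of how a delivery platform might validate a test package against the item types it is configured to score, failing fast at import time instead of on exam day. All names in this sketch (SUPPORTED_ITEM_TYPES, validate_test_package, the drag_and_drop item type) are hypothetical and not taken from any actual platform.

```python
# Hypothetical pre-administration check: reject a test package containing
# item types the delivery platform has not been configured to score.
# All identifiers below are illustrative, not a real vendor API.

SUPPORTED_ITEM_TYPES = {"multiple_choice", "short_answer", "essay"}

def validate_test_package(items: list[dict]) -> None:
    """Raise before administration if any item type is unsupported."""
    unsupported = {item["type"] for item in items} - SUPPORTED_ITEM_TYPES
    if unsupported:
        raise ValueError(
            f"Platform not configured for item types: {sorted(unsupported)}. "
            "Notify the design, delivery, and scoring teams before the exam."
        )

# A package with a newly introduced 'drag_and_drop' item fails at import
# time, surfacing the communication gap before the test is administered.
try:
    validate_test_package([
        {"id": "q1", "type": "multiple_choice"},
        {"id": "q2", "type": "drag_and_drop"},  # new type, never communicated
    ])
except ValueError as err:
    print(err)
```

A check of this kind effectively turns the change management sign-off into an enforced contract between the design, delivery, and scoring teams, rather than a step that can be silently skipped.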

Public Trust or Mistrust in AI-Supported Solutions for High-Stakes Assessments

A case from recent discussions in assessment circles, both at the eAA's International Conference and Awards 2024 in London and the 7th Annual FLIP+ Event in Portugal, focuses on public perception of AI-enabled solutions, particularly whether parents are comfortable with their children being live-proctored in remote testing environments for high-stakes assessments. This concern extends to AI-enabled authoring and marking, which might introduce bias. Although human oversight of final decisions can mitigate the associated risks, the public may still lack a full understanding of AI's role in the process and whether it is safe to trust AI with high-stakes decisions affecting children's futures. Regulators are now grappling with how to balance the efficiency AI brings to the assessment process against concerns about privacy, bias, and associated risks.

Reactive Strategies

In the case described above, immediate actions should focus on public engagement through student-centered forums, detailed FAQs, and targeted informational sessions that clarify for stakeholders, especially parents and teachers, where and how AI is involved. To further alleviate concerns, it helps to present case studies and evidence of AI's accuracy in marking and proctoring, and to audit and adjust AI algorithms promptly when biases are detected or reported in high-stakes assessments. Finally, for a more systematic response, regulators and assessment bodies should collaborate to ensure that all data handling complies with strict privacy laws and best practices.

Preventive Strategies

As with the reactive strategies above, but more systematically, engaging stakeholders, including educators, parents, and regulators, from the initial stages of AI solution development helps prevent future mistrust and regulatory challenges. Institutionally, establishing detailed guidelines for AI use in assessments, focused on ethical standards, transparency, and accountability, sets a clear framework for daily operations. In parallel, investing in ongoing AI literacy training for both the creators and the users of AI assessment tools promotes deeper understanding and more responsible use on both ends in assessment environments.

Tech-Driven Solutions: Closing Communication Gaps in Assessment

The scenarios we've examined so far call for systemic solutions spanning assessment and accountability teams, solutions that technology-driven approaches can effectively deliver, particularly around how information is communicated across the stages of the assessment cycle, whether it involves assessment data, test design materials, or client requests. The concept of viewing assessment as both a culture and a structure for accountability, echoed in the presentation themed Assessment to Accountability at the National Conference on Student Assessment 2024, serves as a useful frame for this solution. A well-thought-out flow of information is facilitated by intuitive systems that minimize technology overlap and integration complexity across operational functions. These systems are built on a culture of data visualization, a modernized data pipeline infrastructure, and evolved data management systems, with the latter being our primary focus because of its systematic impact on the assessment cycle.

Evolution of Data Management

Traditional data warehouses are becoming obsolete due to high maintenance costs, limited support for Business Intelligence (BI) and reporting, and weak support for modern machine learning use cases. They also lack the scalability and flexibility needed to handle diverse data types, particularly as AI generates unstructured data such as text, images, video, and audio.

Transition to Data Lake and Lakehouse Systems

A data lake offers the capability to process large-scale data and analyze unstructured data swiftly. However, it faces challenges in enforcing data quality and consistency, which can make appending and modifying data difficult.

The lakehouse emerges as a solution that combines the benefits of data lakes and warehouses. It unifies data teams, prevents data from becoming stale, and breaks down data silos, effectively integrating assessment and data teams. This hybrid model supports predictions, real-time data processing, flexible scalability, and management of raw data in any format, paving the way for a more integrated and efficient assessment environment.
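To make this concrete, here is a minimal sketch, assuming the open-source Delta Lake table format with PySpark (the delta-spark package), of how assessment responses might land in a single governed lakehouse table. The storage path, table layout, and column names are illustrative assumptions, not a reference to any specific assessment platform.

```python
# A minimal lakehouse sketch using Delta Lake with PySpark.
# Requires the delta-spark package; paths and columns are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("assessment-lakehouse")
    # Standard Delta Lake configuration for open-source Spark.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Initial load: raw responses land in one governed Delta table that both
# the assessment team and the data team can query.
responses = spark.createDataFrame(
    [("cand-001", "q1", "B", 1)],
    ["candidate_id", "item_id", "response", "score"],
)
responses.write.format("delta").mode("overwrite").save("/tmp/lakehouse/responses")

# Appends are ACID transactions, and Delta rejects writes whose schema does
# not match the table, addressing the raw data lake's consistency problem.
more = spark.createDataFrame(
    [("cand-002", "q1", "C", 0)],
    ["candidate_id", "item_id", "response", "score"],
)
more.write.format("delta").mode("append").save("/tmp/lakehouse/responses")

spark.read.format("delta").load("/tmp/lakehouse/responses").show()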

Finally, by leveraging these tech-driven solutions, organizations can close communication gaps within the assessment process, promoting a more collaborative and effective environment for all stakeholders involved.

Future of Communication Lines in Assessment Context

In conclusion, communication lines, that is, the strategies and plans surrounding the reporting of assessment results and the cultivation of a communication culture around that reporting, have become more important than they were five years ago, when public trust was less unsettled because technology played a smaller role in assessment processes. Since technological advancements challenge traditional assessment methods, we need to apply technology to respond to those challenges. Integrating advanced data management systems such as data lakes and lakehouses that support comprehensive data handling throughout the assessment cycle, together with modernized data pipelines and intuitive, tech-driven solutions, can directly contribute to clear, timely, and effective communication within the assessment cycle. Such solutions not only streamline processes, bring transparency, and foster collaboration but also enable continuous adaptation to educational and technological change, overcoming traditional barriers to create a more integrated, efficient, and transparent assessment system that benefits all stakeholders.


About the Author

Vali Huseyn is an educational assessment specialist recognized for his expertise in development projects across various aspects of the assessment cycle. His ability to advise on improving assessment delivery models, administering assessments at different levels, innovating in data analytics, and creating fast, secure reporting techniques sets him apart in the field. His work, enriched by collaborations with leading assessment technology firms and certification bodies, has greatly advanced his community's assessment practices. At The State Examination Centre of Azerbaijan, Vali contributed significantly to the transformation of local assessments and led key regional projects, such as reviews of CEFR-aligned language assessments, PISA-supported assessment literacy trainings, and an institutional audit project, all aimed at improving the assessment culture across the country and the former USSR region.

Discover guided practices in modernizing assessments and gain insights into the future of educational assessments by connecting with Vali on LinkedIn.

