Universal Screening

Overview

Universal screening is a critical element of any MTSS model; in fact, it has been suggested that without universal screening, MTSS cannot function as intended (Gersten, Dimino, & Haymond, 2011). Although the importance of universal screening within MTSS is well established, school personnel continue to ask many pertinent questions about how universal screening should be applied.

The ten frequently asked questions below help explain the practice of universal screening and the critical features of a screening tool, particularly as they relate to implementation of an integrated model of MTSS for academics and behavior.

Universal Screening Frequently Asked Questions

1. What are the four purposes of student assessment?

A coordinated assessment of student results includes measures that address four distinct decision-making purposes: summative, diagnostic, progress monitoring, and screening (Hosp, Hosp, & Howell, 2007). Summative assessments, such as the Michigan Student Test of Educational Progress (M-STEP), are used to gather information about student performance compared to grade level standards, and are required by the state of Michigan and/or local districts. Diagnostic assessments provide more in-depth information about an individual student’s specific skills for the purpose of guiding future instructional supports. Progress monitoring assessments are administered more frequently and assist in determining whether students are making adequate progress and in evaluating the effectiveness of instructional/behavioral supports. Screening, often referred to as universal screening, is at the “frontline” of an assessment plan within MIBLSI’s MTSS model, and thus plays a critical role.

Universal screening is the systematic assessment of all students on academic and/or social-emotional indicators for the purpose of identifying students who are at risk and may require support that varies in level, intensity, and duration. Universal screening is typically conducted three times per school year and consists of brief assessments that measure critical skills. Because screening assessments provide some diagnostic and progress monitoring information, they can be used to assist in determining the effectiveness of the curriculum, instruction, and interventions provided to students.

2. Why does MIBLSI use an integrated screening process for both academics and behavior?

Students must have both academic and behavioral skills for school success. Within an integrated model of MTSS, academic and behavioral systems, practices, and data are interwoven and aligned. Models of integrated behavior and reading supports have been shown to produce larger gains in literacy skills than reading-only models (Stewart, Benner, Martella, & Marchand-Martella, 2007). Therefore, it is important to integrate a universal screening process for both reading and behavior. An integrated screening process allows for early identification of students who exhibit risk on one or more academic or social/behavioral indicators. Screening data also provide a starting point from which to design instruction and intervention for students who are identified as needing additional support. “By not considering academic and behavioral needs together, critical information that can more fully inform intervention efforts and patterns of responsiveness may be overlooked” (Lane, Menzies, Oakes, & Kalberg, 2012, p. 4).

3. What are the critical features of screening measures that align with MIBLSI’s model of MTSS?

Universal screening measures provide information about individual students within a system and also act as global indicators of the overall health of a system (Ikeda, Neessen, & Witt, 2008). MIBLSI is funded to support the use of universal screening measures that are brief, repeatable, and technically adequate; that measure critical skills; that have empirically derived cut scores; and that are easy to administer and score (VanDerHeyden & Tilly, 2010). The academic universal screening measures supported by MIBLSI are predominantly administered in a format where an adult interacts with one student at a time to directly assess critical academic skills (e.g., phonemic awareness, alphabetic principle, fluency). These measures are also designed at an approximately equal level of difficulty for each of the three benchmark assessments conducted during the school year, and have corresponding progress monitoring assessments at the same levels of difficulty. In addition, there is a progression of subtests to be used at each grade level, which corresponds with expected skill development and standards.

The behavioral screening measures use one or more of the following formats in a single- or multi-gated procedure: teacher rating, student nomination, rank ordering, direct observation, and parent rating. Some behavioral screening measures are designed for teachers to use a form for each individual student in their class, while other behavior screeners are designed for the teacher to rate the whole class on a single form. These measures should be completed three times a year, typically 6 to 8 weeks after the start of the school year, prior to winter break, and 6 to 8 weeks before the end of the school year (Lane, Menzies, Oakes, & Kalberg, 2012).

Universal screening measures with the characteristics described above have allowed districts and schools implementing MIBLSI’s integrated MTSS model to accurately identify students who are at risk for future academic and/or behavioral difficulties, and to establish the need for further instructional support of those students. Additionally, these measures have been shown to predict which students are very likely (i.e., 80 to 90 percent certainty) to reach future benchmark goals and/or reach a desired level of performance on outcome measures. Because these measures are brief, easy to administer and score, and provide research-based cut scores, it is possible to put the resulting data in the hands of teachers on the same day their students are assessed. The recommended universal screening measures for literacy have online data systems that allow for efficient entry of data and running of reports. The reports produced by these data systems visually display results in several formats at the district, school, grade, class, and individual student levels.
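
To make the role of empirically derived cut scores concrete, the minimal Python sketch below classifies composite screening scores into risk categories. The measure, cut scores, category labels, and student data are invented for illustration; actual cut scores and decision rules come from each screener’s technical documentation.

```python
# Hypothetical sketch: applying benchmark cut scores to screening results.
# The cut scores and student data are illustrative placeholders, not values
# from any published screener.

CUT_SCORES = {"at_or_above_benchmark": 140, "some_risk": 110}

students = [
    {"name": "Student A", "composite": 152},
    {"name": "Student B", "composite": 118},
    {"name": "Student C", "composite": 96},
]

def risk_category(score, cuts):
    """Translate a raw screening score into a support-level category."""
    if score >= cuts["at_or_above_benchmark"]:
        return "At/Above Benchmark (core support)"
    if score >= cuts["some_risk"]:
        return "Some Risk (strategic support)"
    return "At Risk (intensive support)"

for student in students:
    print(student["name"], "->", risk_category(student["composite"], CUT_SCORES))
```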

The critical features of universal screening measures assist in providing districts and schools with the ability to answer important questions within the data review/decision-making process. Along with other sources of data (e.g., teacher input, M-STEP), the results help schools determine the effectiveness of core instructional supports in the areas of academics and behavior. These results can also be used, along with appropriate progress monitoring data, to examine the effectiveness of both strategic and intensive instructional supports.

The publishers/developers of the universal screening measures supported by MIBLSI also have substantial training resources available, including a training of trainers model, manuals, presentations, and other handouts for training school staff, videos for practice scoring, and a network for trainers/users to receive information about research and exemplar assessment practices. All of these available resources have assisted MIBLSI schools in implementing universal screening with a high degree of fidelity.

4. What universal screening measures have been successfully used within MIBLSI’s MTSS framework?

Districts participating in MIBLSI are asked to use curriculum-based measures for reading (e.g., Acadience Reading) three times a year to screen their students in the area of reading, as well as the Student Risk Screening Scale (SRSS; Drummond, 1994) three times a year to screen their students in the area of behavior. Schools are also provided with additional analysis tools (i.e., initial grouping worksheets which include places for Acadience Reading and SRSS scores) that can be used to integrate the analysis of reading and behavior screening data, which can result in a more comprehensive view of the school system and individual student needs.
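
As a rough illustration of how an initial grouping worksheet can integrate the two data sources, the sketch below pairs a reading composite with an SRSS total and assigns a starting group. The cut point and risk band shown are placeholders, not the published values for Acadience Reading or the SRSS, and the grouping labels are hypothetical.

```python
# Illustrative sketch of an integrated initial grouping worksheet.
# The reading cut point and SRSS risk band are placeholders for the
# publisher-provided values a school would actually use.

def reading_risk(composite):
    """Hypothetical reading risk flag based on a benchmark cut point."""
    return "low" if composite >= 140 else "elevated"

def behavior_risk(srss_total):
    """Hypothetical behavior risk flag based on an SRSS total band."""
    return "low" if srss_total <= 3 else "elevated"

roster = [
    {"name": "Student A", "reading": 152, "srss": 2},
    {"name": "Student B", "reading": 118, "srss": 1},
    {"name": "Student C", "reading": 112, "srss": 9},
]

for student in roster:
    profile = (reading_risk(student["reading"]), behavior_risk(student["srss"]))
    if profile == ("elevated", "elevated"):
        group = "integrated reading and behavior support"
    elif profile[0] == "elevated":
        group = "reading support"
    elif profile[1] == "elevated":
        group = "behavior support"
    else:
        group = "core support only"
    print(f'{student["name"]}: {group}')
```

Looking at both flags together, rather than either one alone, is what produces the more comprehensive view of student needs described above.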

5. How can a district determine if it is ready to adopt the practice of using a universal screening measure?

The decision to adopt a universal screening tool must include careful consideration of a district’s readiness to conduct universal screening and effectively use the resulting data. The National Implementation Research Network offers six critical considerations when exploring the adoption of a new practice: need, evidence, fit, readiness for replication, resources required, and capacity for implementation (Hexagon Tool; Blase, Kiser, & Van Dyke, 2013). Using these six considerations to drive conversation regarding the selection of a tool, practice, or intervention can result in improved buy-in and improved ability to effectively install the necessary support structures for universal screening.

6. What is meant by “fidelity of implementation” pertaining to the use of universal screening?

As with any assessment, it is important to focus on the fidelity of implementation to ensure the accuracy of the data collected and thus the integrity of the data-driven decision-making process that follows. Because universal screening plays such a critical role within MIBLSI’s MTSS model, it must be implemented with a high degree of fidelity so that the most accurate data possible are used within the decision-making process. Ways to support high implementation fidelity from initial training through data analysis and decision making include: 1) ensuring that the district has fully considered its readiness to adopt and support universal screening measures; 2) accessing high quality trainers who provide thorough and accurate initial training on test administration, scoring, data analysis, and decision making; 3) accessing ongoing coaching support for anyone who will be part of the universal screening process from start to finish; and 4) ensuring access to data systems that are easy to use and allow for the timely and accurate reporting of screening results. Periodic “refresher” trainings, inter-rater reliability scoring, and data entry checks will also help to ensure the accuracy of screening data.
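
One simple form of the inter-rater reliability scoring mentioned above is a percent-agreement check between two staff members who independently score the same administration. The sketch below shows the arithmetic; the item scores and any minimum agreement criterion a school sets are assumptions for illustration.

```python
# Illustrative inter-rater agreement check for one screening administration.
# Item-level scores (1 = correct, 0 = incorrect) are invented for the example.

scorer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
scorer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

agreements = sum(a == b for a, b in zip(scorer_a, scorer_b))
percent_agreement = 100 * agreements / len(scorer_a)

# A school might, for example, schedule a refresher if agreement falls below
# a locally chosen criterion such as 90 percent (an assumption, not a rule).
print(f"Inter-rater agreement: {percent_agreement:.0f}%")
```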

7. What are some important considerations when evaluating the efficiency of a universal screening measure?

With increasingly limited resources available to schools, it is critical that universal screening measures provide useful data in an efficient manner, without compromising the validity and reliability of the results. Consideration must be given to the amount of staff time devoted to administering the assessment, as well as the ease of collecting, scoring, and entering the results into a database. The resources that will be required to provide adequate training to school staff in all of these areas should be an additional consideration.

Time is a commodity that schools need to value and guard closely: the time teachers and other professional staff spend testing students is time they are not instructing students or performing other school responsibilities. Testing time should not be considered lost time, however, given that screening and other assessment data can be used to better guide instruction.

When choosing assessments, districts need to consider which ones will give them the “biggest bang for their buck,” that is, those that will provide the greatest amount of useful information, while requiring the least amount of financial and other resources. Schools should avoid screening measures that take longer than 20 minutes to administer to an individual student and select the more efficient measure if two have approximately equal levels of reliability and predictability (Gersten, Dimino, & Haymond, 2011).
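
As a back-of-the-envelope illustration of the time considerations above, the sketch below estimates the staff hours required for one benchmark window of an individually administered screener. Every number is an assumption to be replaced with local values.

```python
# Rough illustration of the time cost of one screening window.
# All values are assumptions; substitute local enrollment, staffing,
# and per-student administration time.

students = 400            # students to be screened in the building
minutes_per_student = 8   # individually administered measure (well under 20 minutes)
assessors = 6             # trained staff administering the screener

total_minutes = students * minutes_per_student
hours_per_assessor = total_minutes / assessors / 60

print(f"Total assessment time: {total_minutes / 60:.1f} staff hours")
print(f"Per assessor: {hours_per_assessor:.1f} hours per benchmark window")
```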

8. How are universal screening data used within MIBLSI’s model of MTSS?

Information obtained from universal screening assists schools in using a continuous school improvement process within an MTSS model to determine whether the core curriculum and instructional practices are effective in meeting the academic and behavioral needs of at least 80 percent of their students. Furthermore, universal screening measures assist schools in identifying those students who are not making adequate progress in the core curriculum and who will need additional instructional supports.

Information gathered from universal screening provides a portrait of students’ skills and needs for instructional support at the district, school, class, and individual levels. As an example, a district’s universal screening data may consistently indicate that the core level of instructional support in the area of reading for first graders is resulting in positive outcomes for only 60 percent of the students. As part of the data collection and decision-making process, the district would need to examine which critical factors are contributing to these poor results. These factors could include, but are not limited to, the core reading curriculum being used, time allocation, instructional practices, and the amount of instructional support being provided.
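
A minimal sketch of this grade-level analysis, assuming invented counts: it computes the percentage of screened students at or above benchmark in each grade and compares it to the 80 percent criterion described above.

```python
# Illustrative core-effectiveness check by grade.
# Counts are invented; the 80 percent criterion comes from the text above.

CORE_CRITERION = 0.80

grade_counts = {        # grade -> (students at/above benchmark, students screened)
    "1": (72, 120),     # 60 percent, mirroring the example in the text
    "2": (101, 115),
}

for grade, (at_benchmark, screened) in grade_counts.items():
    proportion = at_benchmark / screened
    verdict = ("core appears effective" if proportion >= CORE_CRITERION
               else "examine core instructional supports")
    print(f"Grade {grade}: {proportion:.0%} at/above benchmark -> {verdict}")
```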

9. What are the pros and cons of using a single universal screening measure to serve multiple purposes?

Assessments that serve multiple purposes (e.g., screening, progress monitoring, diagnostic, outcome) can assist schools in collecting data for the district’s entire assessment plan as efficiently as possible. Intentional efficiency can reduce the amount of time that adults and students must spend on assessment. A common concern is that any time spent assessing is time lost from instruction; educators must also consider, however, how assessment can make instructional time more productive when the resulting data are used effectively to guide instructional priorities. At the same time, assessments being used for multiple purposes must demonstrate an acceptable level of technical adequacy across all of the purposes for which they are being used. For example, a reading test that is being used three times a year for universal screening, and also to monitor student progress in between those benchmarks, should demonstrate the needed validity and reliability both as a screening measure and as a progress monitoring assessment. Districts must also consider these factors at each grade level, as technical adequacy can vary from grade to grade. Thus, an assessment may demonstrate required levels of validity and reliability as a screening tool for grades K-6, but only do so as a progress monitoring tool for grades 3-6.
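
The sketch below illustrates one way a district might organize this review, checking reported technical adequacy for each grade and purpose against a minimum threshold. The coefficients and the 0.80 threshold are placeholders; real values come from the tool’s technical manual and independent reviews such as the Screening Tools Chart.

```python
# Hypothetical sketch: does a measure's technical adequacy hold for every
# grade and purpose it will serve? All coefficients below are invented.

ADEQUACY_THRESHOLD = 0.80  # illustrative minimum reliability/validity value

# (grade, purpose) -> reported coefficient from the technical manual (invented)
evidence = {
    ("K", "screening"): 0.88,
    ("K", "progress_monitoring"): 0.72,
    ("3", "screening"): 0.90,
    ("3", "progress_monitoring"): 0.85,
}

for (grade, purpose), coefficient in sorted(evidence.items()):
    status = "adequate" if coefficient >= ADEQUACY_THRESHOLD else "NOT adequate"
    print(f"Grade {grade}, {purpose}: {coefficient:.2f} -> {status}")
```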

10. What are the similarities and differences of universal screening at the elementary and secondary levels?

Implementation of a universal screening process is as valuable at the secondary level as it is at the elementary level. What can differ are the specific measures and the process of triangulating multiple sets of information. By the time students reach middle and high school, they have accumulated a great deal of data that can be used to identify students who are likely at risk for continued academic and behavioral challenges. Historical records can be combined with other early warning signs of school dropout, such as grade point average, attendance, core course completion/failure, and behavioral screening, to efficiently put together risk profiles for all students. When additional testing may be warranted as part of universal screening, researchers are finding benefits of gated screening procedures, which involve testing all students on one measure/scale and testing students in additional areas only if they are flagged as at risk at the first “gate.” Secondary schools participating in MIBLSI start their gated screening procedures with Early Warning Indicators (Kennelly & Monrad, 2007; Vaughn & Fletcher, 2012).
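
To illustrate how a first “gate” built on early warning indicators might work, the sketch below flags students for additional screening based on grade point average, attendance, and core course failures. The thresholds are hypothetical; districts set their own decision rules based on local data and the early warning literature.

```python
# Hypothetical first gate for a secondary-level gated screening process.
# GPA, attendance, and course-failure thresholds are placeholders.

def first_gate_flag(gpa, attendance_rate, core_courses_failed):
    """Return True if any early warning indicator suggests risk."""
    return (
        gpa < 2.0                    # illustrative GPA threshold
        or attendance_rate < 0.90    # illustrative attendance threshold
        or core_courses_failed >= 1  # any core course failure
    )

students = [
    {"name": "Student A", "gpa": 3.4, "attendance": 0.97, "failed": 0},
    {"name": "Student B", "gpa": 1.8, "attendance": 0.92, "failed": 1},
]

for s in students:
    if first_gate_flag(s["gpa"], s["attendance"], s["failed"]):
        print(f'{s["name"]}: flagged at gate 1 -- administer additional screening')
    else:
        print(f'{s["name"]}: no additional screening indicated at this time')
```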

References

Blase, K., Kiser, L., & Van Dyke, M. (2013). The Hexagon Tool: Exploring Context. Chapel Hill, NC: National Implementation Research Network, FPG Child Development Institute, University of North Carolina at Chapel Hill.

Drummond, T. (1994). The Student Risk Screening Scale (SRSS). Grants Pass, OR: Josephine County Mental Health Program.

Gersten, R., Dimino, J. A., & Haymond, K. (2011). Universal Screening for Students in Mathematics for the Primary Grades. In R. Gersten & R. Newman-Gonchar (Eds.), Understanding RTI in Mathematics (pp. 17-33). Baltimore, MD: Paul H. Brookes Publishing Company.

Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM. New York, NY: The Guilford Press.

Ikeda, M. J., Neessen, E., & Witt, J. C. (2008). Best Practices in Universal Screening. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 103-114). Bethesda, MD: National Association of School Psychologists.

Kennelly, L. & Monrad, M. (2007). Approaches to Dropout Prevention: Heeding the Early Warning Signs With Appropriate Interventions. Washington, DC: National High School Center.

Lane, K. L., Menzies, H. M., Oakes, W. P., & Kalberg, J. R. (2012). Systematic Screenings of Behavior to Support Instruction: From Preschool to High School. New York, NY: The Guilford Press.

National Center on Response to Intervention. (n.d.). Screening Tools Chart. Retrieved from http://www.rti4success.org/screeningTools.

National Center on Response to Intervention. (n.d.). Webinars. Retrieved from http://www.rti4success.org/subcategorycontents/webinars.

Stewart, R. M., Benner, G. J., Martella, R. C., & Marchand-Martella, N. E. (2007). Three-tier models of reading and behavior: A research review. Journal of Positive Behavior Interventions, 9, 239-253.

VanDerHeyden, A. M., & Tilly, D. W. (2010). Keeping RtI on Track: How to Identify, Repair, and Prevent Mistakes That Derail Implementation. Horsham, PA: LRP Publications.

Vaughn, S., & Fletcher, J. M. (2012). Prologue: Response to intervention with secondary students: Why the issues are different than with elementary students. In D. K. Reed, J. Wexler, & S. Vaughn (Eds.), RTI for Reading at the Secondary Level: Recommended Literacy Practices and Remaining Questions. New York, NY: The Guilford Press.
