Video: What Indiana’s New A-F School Rating System Might Look Like

Concerned that the model used to issue A-F grades to Indiana schools was too complex for anyone to comprehend, state lawmakers have thrown out the grading formula and told education officials to rewrite the system by fall.

So, what now? How will the state give your school its letter grade rating in the future?

There are several possibilities. We’ve put together a video explaining how ideas floated by the state superintendent, a top state lawmaker and an economist could fit into a new grading system.

What’s clearer, for now, is what can’t fit into the new grading system: State lawmakers’ directive forbids the new A-F rating system from deriving an academic “growth” score by comparing students to their peers, as the current system does.

Per state lawmakers’ orders, education officials will have to find a way to reflect both (A) a school’s “proficiency,” or passing, rate on statewide tests and (B) individual students’ growth in the letter grades without comparing students to each other.

What could the new grading system include?

Two Letter Grades?

Sen. Luke Kenley, R-Noblesville, pushed for a re-write. He has also argued the state should issue schools two separate letter grades. The first grade would reflect the school’s proficiency, and state officials would then base the second grade on a measure of academic growth.

Such a system would be a corrective for the current model, he tells StateImpact:

I don’t mind it being complex in the detail, but whoever your audience is needs to understand what the result is. If the result is such that they know instinctively that the grade is not representative of what’s going on in a particular school, then it’s failed to provide the transparency and the information that it wants. When you’re measuring two different elements — one is growth, the other is performance and then you blend those together — you’re getting a mix that tends to dilute anybody’s understanding of what’s really occurring.

I don’t mind the complexity on each of those points — at how they arrive at the growth grade and at the performance grade. But I think you want this to be a serviceable product that not only inside teachers can use, but those people in the public can look at and make these determinations. After all, these are public schools and this is public information.

Kenley’s idea isn’t part of the new state law mandating a re-write. But it could be a viable framework for a new grading formula.

If transparency is the legislature’s primary concern, says David Dresslar, executive director of the Center of Excellence in Leadership of Learning (CELL), then a “two-grade” system could be easier to explain than the current system.

“[A two-grade system] also requires an explanation,” Dresslar says, “but the explanation is more simple than the explanation of bonus points and high growth and low growth that the present system has.”

Compare To Cut Score, Not Each Other?

But Kenley’s two-grade idea doesn’t address how state officials would rewire the portion of the grading system that determines students’ academic growth scores.

State superintendent Glenda Ritz floated an idea to solve that problem at April’s State Board of Education meeting.

She suggested that, instead of comparing students to each other, state officials could compare students to the cut score on the exam — it’s the same for all students and is scaled so that anyone can compare the students’ scores from grade to grade.

Ritz’s model would measure the difference between the student’s ISTEP+ score and the cut score. Theoretically — and she told board members the idea is still very preliminary — state officials could see how the difference between the student’s score and the cut score changes from year to year to determine whether that student is making adequate growth.
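To make the arithmetic concrete, here is a minimal sketch of how a cut-score-based growth measure could work. The function names and all of the numbers are hypothetical; Ritz has not proposed a specific formula, only the general approach of tracking how far each student sits from the passing score from year to year.

```python
# Illustrative sketch only -- the state has not published a formula for this approach.
# Idea: measure how far a student's scale score sits from the passing (cut) score,
# then track how that gap changes from one year to the next.

def gap_from_cut(scale_score, cut_score):
    """Distance from the passing score; negative means below passing."""
    return scale_score - cut_score

def growth_vs_cut(prior_score, prior_cut, current_score, current_cut):
    """Change in a student's distance from the cut score, year over year.

    Positive: the student gained ground relative to passing.
    Negative: the student lost ground relative to passing.
    """
    return gap_from_cut(current_score, current_cut) - gap_from_cut(prior_score, prior_cut)

# Hypothetical numbers: a student scored 430 against a 468 cut score one year,
# then 492 against a 479 cut score the next year.
print(growth_vs_cut(430, 468, 492, 479))  # 51 points of growth relative to the cut score
```

A real system would still have to decide how much year-over-year movement counts as “adequate” growth and how to roll student-level results up into a school’s letter grade.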

What About The State Board?

Ultimately, lawmakers have tasked the State Board of Education with signing off on whatever new system state education officials draw up.

At April’s meeting, board member Neil Pickett said he rejected the idea that the current A-F system is opaque. During his comments, he suggested the new model should maintain its focus on students passing statewide tests:

The current model uses growth as a bonus. It adds to or takes away from the basic score. But the fundamental grade initially is based on achievement. I think it’s very important that we continue to focus on achievement of the minimum of passing of ISTEP. Growth is important. Growth is how you get to achieving the ultimate levels you need to pass the test. But the ultimate measure has got to be whether you can pass the test. You can grow for 16 years, and if you still can’t pass the test, I’m not rewarding you for that growth. You have to pass the test… ISTEP is a floor, not a ceiling… You’ve got to be able to pass ISTEP, and you’ve got to hold schools accountable for increasing the number of kids, at a minimum, who can pass that test.

Are We Measuring Growth The Right Way?

While some critics have focused on what they see as the A-F grading system’s complexity, other critics say the current model doesn’t do enough to cancel out the negative impacts poverty can have on a school’s rating.

To remedy this, University of Missouri economist Cory Koedel and his colleagues advocate for using a so-called “Value-Added Model” to measure student growth. The complex statistical model tries to compare, for instance, teachers in poor schools with teachers in other poor schools.

“Even among the highest-poverty schools,” Koedel explains, “the model is still going to identify some high-performing teachers — teachers that are doing better than other teachers in similar circumstances.”

The result, Koedel argues, is a growth measure that helps educators gauge the impact their teaching has on their students relative to other students in similar schools. He tells StateImpact:

For evaluating the adults and helping them do a better job of teaching the kids, you want to tell the adults that are working in the toughest circumstances that there are some of them who are doing a really good job. Maybe none of their kids are quite making proficiency, but they’re on the right track, there are other adults that can learn from them and they can also improve internally on what they’re doing. None of that information is coming out of these systems right now…

The schools that are getting really high growth out of their kids, you probably want to give them that signal that they’re doing that so they don’t, you know, not realize they’re doing well and just blow up and start over.

Koedel adds state officials could report value-added data alongside students’ pass rates on statewide tests.
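For readers wondering what a value-added calculation actually involves, here is a bare-bones sketch of the underlying idea. It is not Koedel’s model, which controls for far more factors, but it shows the core move: predict each student’s score from prior performance and a school characteristic such as its poverty rate, then treat whatever is left over as growth the school added beyond expectations.

```python
# Bare-bones illustration of a value-added calculation -- not Koedel's actual model.
# Assumes we have, for each student: last year's score, this year's score, and the
# poverty rate of the student's school. All numbers below are made up.
import numpy as np

def value_added(prior, current, poverty):
    """Residual growth after accounting for prior score and school poverty.

    Fits an ordinary least-squares model predicting this year's score from last
    year's score and school poverty, then returns actual minus predicted. A positive
    residual means the student grew more than comparable students typically do.
    """
    X = np.column_stack([np.ones_like(prior), prior, poverty])  # intercept, prior score, poverty
    coefs, *_ = np.linalg.lstsq(X, current, rcond=None)         # least-squares fit
    return current - X @ coefs

# Hypothetical data for six students
prior   = np.array([410.0, 455.0, 470.0, 430.0, 500.0, 445.0])
current = np.array([440.0, 460.0, 500.0, 445.0, 510.0, 480.0])
poverty = np.array([0.80, 0.75, 0.20, 0.85, 0.15, 0.50])

print(value_added(prior, current, poverty))  # residuals; average by school or teacher to compare
```

In a real system those residuals would be averaged up to the teacher or school level before any comparisons were made.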

But, if anything, a value-added model would force even closer comparisons of individual students with their peers. That’s something the legislature directed state officials to avoid.

Whatever State Officials Do …

“It’s more important that we explain [the re-written model],” says CELL’s David Dresslar. “Whatever results from this re-consideration of A-F, it needs to be defensible — but all of these ideas are defensible. More importantly, it needs to be explained in terms that people can understand how a school is performing based on these letter grades. That’s the challenge.”

Comments

  • Karynb9

    The problem with comparing a student’s growth to the cut-score is that all of these tests have a ceiling. If a student gets a perfect score on ISTEP in year 1, there’s nowhere else to go in year 2.

    • kystokes

      Thanks for the comment, good point Karynb9. Counterargument on that front (for devil’s advocate’s sake): Perfect scores, or scores near the top of the grading scale, are so rare (I don’t know that’s true, but I’m guessing it’s so) that even if one were labeled low growth, there wouldn’t be enough low growth ratings to harm a school’s final letter grade…?

      • Karynb9

        But those high-level, almost-perfect-scoring students aren’t evenly distributed among schools (just as scores at the other-end-of-the-bell-curve aren’t evenly distributed among schools), which is a problem when letter grades are used to compare one school with another. What about a magnet school that is designed to teach gifted kids? What about many schools in the suburbs that achieve high test scores? You have elementary schools in the suburbs that have 60-70% of students reaching Pass+ (let alone “Pass”) status on ISTEP. Much less room for those kids to grow. Make sure student test scores in every single school in the state fit the bell curve perfectly, and you’re absolutely correct that the “benefits” of growing the lowest-of-the-low kids in your school will cancel out the “penalties” of not being able to grow your highest kids. That is not, of course, how students are placed in schools. Schools with high numbers of high-achieving students will be penalized based on nothing more than a typical regression toward the mean.

        If a growth model impacts the letter grade that a school receives, I can guarantee that it will be worked into individual teachers’ evaluations (much like the current growth model used in school letter grades is now wrapped into the RISE model for individual teacher evaluations, where one factor includes the amount of growth an individual teacher’s students make). If I teach a classroom full of high ability students in a suburban setting and THIS becomes the new “growth model” and starts popping up as data I’m judged on for my individual evaluation, I’m in trouble.

        The old “growth model” was far from perfect in that it was complex and hard to explain. However, you were at least comparing apples-to-apples for the most part. In this case, simplifying the growth model so it’s easier to explain and “makes sense” for people who think that the primary focus of testing in this state should be on bringing up the scores of our lowest achieving students (which certainly is an admirable goal) has some very tough unintended consequences for those teachers who have chosen to spend their careers working with the brightest-of-the-bright and lifting those kids up to new levels (that aren’t measured on a minimum-proficiency on-grade-level test).

  • dreeves

    While I am not against accountability, we are still missing a fundamental and foundational problem: Data cannot measure good teaching. I saw some glimmer of sense in comparing how students in similar communities and schools (demographically speaking) are doing, but that does not address the fundamental fact that the tests do not measure some of the most important things that teachers do. I am all about academic rigor–even though I teach “regular” kids–but these tests and the RISE rubric and all of it miss the fact that high school, where students switch teachers at semester in many courses, is not able to be measured like the elementary/middle school grades. And the art of teaching begins with skills, but goes so far beyond them–applying skills in the pursuit of solving society’s problems and critically evaluating societal values and ills–that the tests are useless in determining the actual capabilities of our students.

    I want to capture the passions and talents of my students, which are the motivators behind learning, and then they will learn the skills as tools to reach valid goals that they care about. This testing mentality fosters hoop-jumping, not learning. And lest you think I am just the ivory tower type, check out the way they do teacher education, and then public education, in places like Finland, which has been a world leader of late on international measures of education quality. Testing, unless used in a diagnostic sense (which ISTEP and ECAs clearly are not right now), is terribly punitive. And it is being used as a whip to drive people, which only works so long.

    Last year was the most pressure-packed and miserable year I have ever spent with my students, and I am determined not to allow that to happen to them or me ever again. (The students I had all year long–only about 21 out of the 65 or so sophs that I had–did well on their ECA, btw, and my RISE evaluation was “highly effective,” so I am not just making excuses for poor job performance. I had a large load overall–165-170 students.) My sophs run the gamut from the lowest GPAs in their class to fairly good students. We have two accelerated sections out of a total of seven sections per grade level, and I do not have any representatives from those top two sections. The challenge for me is to differentiate enough to meet the needs and desires of all students, and I find it disturbing that all we care about are minimums. I want all of my students to fly as high as they can go, and setting a minimum bar has unfortunately inculcated a “hoop-jumping culture” that is maddening to deal with.

  • Jo Blacketor

    Kyle – good video… now to explain the cut score to the normal person. A next good video would be to clarify why use the cut score vs. “raw numbers?” And what would that look like (using raw numbers) and adding variables into the equation as a measurement (meaning not just use ISTEP but add others like Scantron, NWEA, Acuity, Wireless, etc.). Just food for thought.

    • kystokes

      It’s a good thought. Now that this panel has been formed, we’ll have something more concrete to do our next video about!
