
Posted in Assessment


Today was our second of four days testing kids.  Keep in mind that this is only the ninth day of the new school year. 

I know that teachers must have an idea of the abilities and skills of their students in order to proceed with appropriate instruction during the year.  But, with the number of times the computers froze and crashed, with the rising levels of student frustration and burn-out, and with the disruption to the regular school day, exactly how reliable are the results of this round of assessment truly going to be? 

This will probably be viewed as heresy by those teachers and administrators groomed in an era proud of the art of “drilling down” through data, but I’m going to be honest: Most large-scale assessments do not accurately measure what our children can do. 

One would think that a person who has been in the profession for more than three decades – fifteen of those years as a school administrator – would be joyfully embracing the overabundant piles of data points gleaned from each onslaught of testing.  But, at the risk of sounding like an old codger, I proudly assert that there are much better ways to know what our students are capable of doing. 

Posted in Assessment


Don't show Mama Our Nation's Report Card. Not so good. 

Tonight I'm sharing my opinions, not a major statistical treatise, but I will toss some information into the bowl, like Strega Nona, and let's mix it up, and put a little honey on top. 

Tonight I offer heartfelt, plain talk about yesterday's shocking headlines, or not so, really, that our kids have failed. Or at least, didn't show any growth in fourth and eighth grade reading. Goodness. Yet here we are in America, right in the middle of endless standardized testing.

Now this. Drat. Flat scores. The sideways. Up scores, like Florida. Down, like second language learners and special needs labeled students.


Posted in Assessment


Here's another analogy to help understand why test-centered accountability doesn't work well.

All the heat in my house is run by a single thermostat. My house has three stories and a basement. The thermostat is on the first floor. Furnace runs reach only two of the four rooms on the second floor. There are no furnace runs to the third floor (a converted attic space).

The thermostat is supposed to turn the furnace off and on based on the temperature in the house. But it only measures the temperature in one room. In a second-floor bedroom, the temperature may be uncomfortably cold, but the thermostat doesn't measure that. In the attic room, a space heater may have the room super-warm, but the thermostat doesn't know that. The thermostat is by the front door-- if that door opens and cold air comes pouring in, the thermostat thinks the whole house is cold.

In short, the thermostat is an inaccurate measure of the temperature in my home because it only measures the temp in one place.


Posted in Assessment

So about that actionable data...

One of the frequently-offered reasons for the Big Standardized Tests is that they are supposed to provide information that will allow classroom teachers to "inform instruction"-- to tweak our instruction to better prepare for the test, er, better educate our students. Let me show you what that really means in Pennsylvania.

Our BS Tests are called the Keystones (we're the Keystone State-- get it?). They are not a state requirement yet-- the legislature has blinked a couple of times now and kicked that can down the road. Because these tests are norm-referenced, aka graded on a curve, using them as a graduation requirement is guaranteed to result in the denial of diplomas for some huge number of Pennsylvania students. However, many local districts, like my own, make them a local graduation requirement in anticipation of the day when the legislature has the nerve to pull the trigger (right now 2019 is the year it all happens). The big difference with a local requirement is that we can offer an alternative assessment; our students who never pass the Keystones must complete the Binder of Doom-- a huge collection of exercises and assessment activities that allow them to demonstrate mastery. It's no fun, but it beats not getting a diploma because you passed all your classes but failed one bad standardized test.

Why do local districts attach stakes to the Keystones? Because our school rating and our individual teacher ratings depend upon those test results.

So it is with a combination of curiosity and professional concern that I try to find real, actionable data in the Keystone results, to see if there are things I can do, compromises I can make, even insights I can glean from breaking that data down.

The short answer is no. Let me walk you through the long answer. (We're just going to stick to the ELA results here).

The results come back to the schools from the state in the form of an enormous Excel document. It has as many lines as there are students who took the test, and the column designations go from A to FB. The results come with a key to identify what each column includes; to create a document that you can easily read requires a lot of column hiding (the columns with the answer to "Did this student pass the test?" are BP, BQ, and BR).
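To give a sense of just how wide that spreadsheet is, here's a throwaway sketch that converts Excel-style column labels to positions. The labels BP, BQ, BR, and FB come from the state's results file as described above; everything else here is purely illustrative.

```python
def column_number(letters: str) -> int:
    """Convert an Excel-style column label (A, B, ..., Z, AA, AB, ...)
    to its 1-based column position, treating the label as a base-26 numeral."""
    n = 0
    for ch in letters.upper():
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

# The pass/fail columns named in the results file:
for label in ("BP", "BQ", "BR"):
    print(label, column_number(label))   # BP 68, BQ 69, BR 70

# The last column, FB, shows how wide the sheet really is:
print("FB", column_number("FB"))         # FB 158
```

In other words, the answer to the one question everyone actually cares about is buried 68 columns deep in a 158-column spreadsheet.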

Many of the columns are administrivia-- did this student use braille, did the student use paper or computer, that sort of thing. But buried in the columns are raw scores and administrative scores for each section of the test. There are two "modules" and each "module" includes two anchor standards segments. The Key gives an explanation of these:




Posted in Assessment

Last Sunday afternoon the set was struck and the stage swept clean. We've come to the end of this year's spring musical. As always it was one of the highlights of my year, and as always, it reminded me of how inadequate so many of our educational models are.

There are weeks of rehearsal, learning music, learning choreography, working on blocking and lines and the underlying character work that goes with all of that. We have a cast of students in grades 7-12 with many different levels of skill and experience.

That means that in the course of assembling the show, each student learns a different set of lessons that depend a great deal on what roles they receive and what skills they bring to the table, as well as their ambition and adventurousness of spirit.

So this educational experience is extremely personalized, and that means far more than "I have twelve lessons to choose from and a computer picks the next one based on how the last one turned out." My lead actor may need to learn about comedic timing, while one of my chorus folks may need to learn about the importance of the chorus in a show. My leading actress may need to learn about how to flesh out a character when the writers haven't given you much to work with. But the list of lessons will be different for every different role and every different cast member.

The lessons also vary with directors. This program is a co-op that allows my school to join in with a school just across town, and I split directing duties with an old friend who heads up the other school's program. We've divided up duties many different ways over the years, and it works because we work well together. Every theater production is a collaboration of some sort, and that collaboration is always shaped by the approaches of the people involved. Some directors have a very specific vision for the actors to bring to life, while others like to leave spaces for the actors to fill in with their own choices. We tend toward the latter, but some actors are more comfortable with the former and all sorts of combinations can get good results (and the requirements of the script itself also make a difference). All of which means that if you showed up with a specific program for exactly how a director should put together a show, I would laugh at you. Here we are with a performance based task that literally comes with a script-- and yet only a fool would claim that the script is all you need to produce a great show.

Likewise, putting on a show is the very definition of a performance-based learning experience. Yet if we were to follow the PBL model currently favored, we would break the show down into a checklist. Does the actor know the lines? Check. Does the actor know the blocking? Check. Can the actor put on her costume? Check. And on and on. But even if I have checked off every micro-credential on the list, that is not the same thing as actually performing the show. Nor do we build toward that performance capability by working down the list one separate performance task at a time, because they are all part of a greater whole.

And those tasks would be performed for an evaluator, an assessor of some sort, which is not the ultimate goal. Our show was performed in front of an audience, and because it was a comedy, the audience reaction was a critical part of the performance (in fact, on our second night, I saw something I've never seen in school or community theater before-- the show was stopped by audience laughter). This is unlike competency-based education, which presumes that competencies can be approached as separate, discrete skills measured through proxies-- tasks that aren't the real thing. There is no checklist that would have substituted for dress rehearsal, no assessment more valuable than audience reactions in performance.

And speaking of assessments-- at no point in the eight-week process of preparing the show would a multiple-choice standardized test have been useful. At no point in the process did anyone think, "Hey, we need to do some assessments here to make sure that everyone is on track for a good performance." It would have been a pointless, useless waste of time.

In fact, standardization of any type is useless in this process. I have no idea how many productions of The Addams Family have been put on in community and school theaters at this point, but I will bet you the farm, the rent money, and a full box of donuts that not one of those productions looks exactly like any other. It's true that nobody who saw our production would have mistaken it for Hamlet or Oh Calcutta, but every production exists at the intersection of a specific cast, director, school, community, and stage (ours has no fly gallery, so that affects set design considerably). School theater in particular has to make adjustments for things as simple as language and as substantial as character gender (I can tell you, for instance, that interesting things happen to the subtext of Disney's Beauty and the Beast when Belle's crazy father Maurice is replaced by Belle's crazy mother Marie). It is those specific variations that most often give the special flavor and quality to the local production; the deviations from the standard are a source of excellence, not flaws demanding treatment.

I love working with students and theater (despite the giant chunks of my life that it demands) because it is an experience that, in an absolutely authentic manner, helps each student grow and learn and discover new greatness in herself. It is an absolutely real learning and growth experience, which is why I'm always struck by how completely it does not match any of the assumptions about real learning made by the forces of ed reform. This is what real learning and growth look like, and they don't resemble the whole standard-driven test-centered punishment-fueled system that has been forced on us for the past fifteen years.
