A story is told of a comedic teacher who challenged his colleagues to adopt the latest innovation in education. During his conference period, while in the teachers' lounge, he claimed that some Ivy League schools in the Northeast had discovered that students' IQs grew exponentially when teachers wore funny hats.
A couple of trend-setting teachers decided to try it with their problem classes. Lo and behold, those classes seemed better behaved.
Through word of mouth a few more teachers bought into the new system. Reports of improved test scores resounded throughout the building. Teachers wore pointed hats, hats in shocking colors, and hats with plumes sticking out the side. They tested which hat produced the best results. Next, the principal shared the success story with other principals. Entire districts adopted the new system. Colleges began to embrace it and produce “How To” manuals. After two years, it became evident that the method was not all it was cracked up to be. So, one by one, schools abandoned the “funny hat” method.
Such lunacy, although somewhat exaggerated, is all too reflective of the way school reform actually occurs. Too often changes are made in schools based on spurious research, transitory test scores, the “publish or perish” pressure on college professors, or the millions of dollars to be made by creating a popular teaching method. Meanwhile, children are subjected to these countless buffooneries.
Myron Lieberman, in his profound work, Public Education: An Autopsy, rightly delivered the postmortem: “Like individuals, social institutions die, and their death forces us to face an uncertain future… we cannot always wait until rigor mortis sets in to consider what should be done to meet the new situation.” The present climate combines the pungent odor of that rotting corpse with a steady stream of techno-reforms.
One of the maladies of education is the infernal cycling of fads that leaves seasoned veterans with a bitter taste toward research, rendering quality school improvement nearly impossible. The latest and greatest method, which promises utopian results in learning, often produces little more than a placebo effect. After a few of these iterations, jaundiced faculty view the expert touting the latest be-all, end-all method as little more than a snake-oil salesman.
This, in turn, provokes administrators, pressured by the high-stakes testing environment, to view faculty as entrenched, unwilling to change, and "old school." So they heap up research on how to change corporate culture in order to break the logjam of resistance. They mobilize, organize, and implement the latest research to move the school forward.
Just as with the original research that was never fully implemented, change may or may not happen. When it fails, faculty remain passive-aggressive, and administrators begin to ask themselves, "What is wrong with us?" Then they, too, begin drawing up a list of scapegoats to slaughter.
Now, because of the plethora of pop-research in the field, meta-research has become a field of its own. However, the crumbling foundation of shoddy research renders the superstructure of that discipline, in many cases, a mere house of cards.
These "fidgets," supposedly verified by research, often end in disappoint after countless hours, energy, and precious funds have been spent. Or, the results are so minimal that schools are forced to defend the amount of resources allocated to the project.
Next, educators begin guilt-ridden reflection. "Why doesn't this work in our district?" they ask. "What is wrong with us?"
Next, a parade of scapegoats is considered. It didn't work in our school because of our low-SES students, our lack of quality teachers, our unsupportive parents, or our incompetent administrators. On and on the victims list goes. However, such self-flagellation to assuage false guilt is both unproductive and unnecessary.
At the heart of the malady is faulty research. We are too ready to believe whatever marches under the banner of research, too ready to swallow whatever pill is handed to us if the person has Ph.D. after their name or works at some prestigious research facility or university.
The pressure to publish at the university or research level is intense. Professors must publish to be considered for tenure or lose their positions. Researchers must concoct results in order to land that large federal grant. Keynote speakers must find, understand, and tout research for the next gig, or opportunities will dry up. Administrators must grab on to the latest method that has worked elsewhere to justify to the board that they are proactively addressing problems.
The result is a systemic culture that fuels quick, shoddy research practices, leaving the field of education rife with published gyrations pretending to be research.
Educators no longer value critiquing research and researchers. Beyond this, they have lazily accepted the word of research salesmen, who interpret and cite research to validate whatever they're selling.
Recently I was given an article citing various sources of "research" from an organization that touts effective instruction for the LD child. Lo and behold, the article claimed the research was clear that students behave better with individualized behavior plans, an affirming tone, and positive rewards.
Although the research and the article may or may not be true, I would have been much more impressed with the research if it had come from the Americans to Restore Corporal Punishment. Better yet, if the research had shown that the LD child can be taught with the same modalities as any other child. It is far too convenient to be selective in what one deems "research" in order to confirm one's own prejudices and biases.
The critical questions for the classroom practitioner, as well as the administrator, should be: "Is this quality research? Were the results in fact valid and reliable? Is there a direct correlation between those results and my students?"
Anyone who has worked in carpentry quickly learns the value of "measure twice, cut once." What this means for the current discussion is this: if we refuse to genuflect to everything claiming to be research, if we no longer purchase the products of pop-research, if we insist that only the soundest research be published and marketed, and if we embarrass and humiliate those who try to pass off shoddy research as fact, then we will begin to restore the confidence of war-weary faculty and staff who have tired of running on a pinwheel.