Posts Tagged ‘Educational Research’

In my last post, I speculated that there were three reasons I read educational research.

  1. I encounter it (via Twitter, blogs, or in journals) and I’m curious, so I read it.
  2. I deliberately seek it out to confirm a bias. (Don’t judge me. We all do this.)
  3. I’m genuinely interested in what the research has to say on a certain topic, so I search for it.

Since biases are fun, let’s look at an article I dug up for the second reason. I’ve made my views on homework pretty clear on this blog in a couple of posts. Here’s a study I found on the subject of homework. Unfortunately, it failed to confirm my bias.

Are We Wasting Our Children’s Time By Giving Them More Homework?

The study, by Daniel J. Henderson of New York, was published as IZA Discussion Paper No. 5547 in March 2011.

The abstract reads:

Following an identification strategy that allows us to largely eliminate unobserved student and teacher traits, we examine the effect of homework on math, science, English and history test scores for eighth grade students in the United States. Noting that failure to control for these effects yields selection biases on the estimated effect of homework, we find that math homework has a large and statistically meaningful effect on math test scores throughout our sample. However, additional homework in science, English and history are shown to have little to no impact on their respective test scores.

Yikes. Math homework has a large and statistically meaningful effect on math test scores throughout our sample? Uh oh. I guess I’d better read more than just the abstract and see if I can figure out what is going on. The math used in the study is complicated, which might make it tricky to read.

Here’s something I wonder about, from page 9:

…higher able students benefit more from additional homework.

Perhaps higher-ability students are the only ones who actually do the homework, because they’re the only ones who are capable of doing it.

Later, on page 17:

Taking the Peabody Individual Achievement Test in math as our benchmark, the gain from math homework (1.77 points) corresponds to one-fourth of the raw black-white test score gap between the ages of 6 and 13

My question would be: Can we be sure the gain on that test is solely attributable to homework? Maybe we can. I’ll admit to not fully understanding the tables in the study.

Here’s a finding on page 19 that I am glad to hear. At least one of my biases was confirmed by this study.

The teacher’s treatment of the homework (whether it is being recorded and/or graded) does not appear to affect the returns to math homework.


I’m going to write some posts over the next little while about educational research. Just before Christmas, Michael Pershan and Chris Robinson were going back and forth on Twitter about research vs. blog posts.

“If teachers can rely on blog posts, where does that leave ed research?”

That question got me thinking a whole lot about how and why I use educational research and how and why I use blog posts.

I read research for several reasons.

  1. I encounter it (via Twitter, blogs, or in journals) and I’m curious, so I read it.
  2. I deliberately seek it out to confirm a bias. (Don’t judge me. We all do this.)
  3. I’m genuinely interested in what the research has to say on a certain topic, so I search for it.

My recent blog post about delayed feedback falls into the first category. A colleague showed it to me and I was curious, so I read it.

I tend to mine blogs for ideas that I can use immediately in classrooms and workshops. Those ideas don’t have to be research-based, in my opinion. The fact that a colleague tried something already and it worked for her is sufficient for me to try it out. That endorsement is worth one class period or one unit of study of my time. I see these shared ideas the same way I saw lunchroom conversations in the 1990s. “I did this cool thing in my class today. You should try it out.”

If I were contemplating a major shift in my practice, I’d probably go to research in addition to listening to colleagues. Standards-based grading (SBG) would be an example of something I’d research before changing my whole practice. A blog might inspire me to try it, and the research would confirm that it’s worth doing. One year in Math 8, I ran the entire course through cooperative learning groups and activities. That’s a big commitment and a big shift, and research supported and justified the change.

In the next few blog posts, I’m going to look at some of the research I’ve read over the past few years. I’ll explain how I happened across it, and how I use it now.


It’s been an interesting enough week in the assessment world that I’m compelled to blog for the first time in a long time.

Early last week, I encountered this “Focus on Formative Feedback” literature review by Valerie Shute.

Table 4, near the end, on page 179 lists “formative feedback guidelines in relation to timing issues.” Shute recommends using immediate feedback for difficult tasks, and delayed feedback for simple tasks. She says that to promote transfer of learning, teachers should consider using delayed feedback. To support that claim, she says,

According to some researchers (e.g., Kulhavy et al., 1985; Schroth, 1992), delayed may be better than immediate feedback for transfer task performance, although initial learning time may be depressed. This needs more research.

Then, just yesterday, Dan Meyer jumped in with a post on delayed feedback.

My gut says that the timing of the feedback is far less important than the quality of the feedback. Dylan Wiliam has entire chapters dedicated to providing feedback that moves learners forward. Next steps are useful to all students. Evaluative feedback that evokes emotion isn’t particularly useful to anyone.

I’m not sure this does need more research.
