Homework Research

In my last post, I suggested three reasons I read educational research.

  1. I encounter it (via Twitter, blogs, or in journals) and I’m curious, so I read it.
  2. I deliberately seek it out to confirm a bias. (Don’t judge me. We all do this.)
  3. I’m genuinely interested in what the research has to say on a certain topic, so I search for it.

Since biases are fun, let’s look at an article I dug up for the second reason. I’ve made my views on homework pretty clear on this blog in a couple of posts. Here’s a study I found on the subject of homework. Unfortunately, it failed to confirm my bias.

Are We Wasting Our Children’s Time By Giving Them More Homework?

The study is by Ozkan Eren and Daniel J. Henderson and was published as IZA Discussion Paper No. 5547 in March 2011.

The abstract reads:

Following an identification strategy that allows us to largely eliminate unobserved student and teacher traits, we examine the effect of homework on math, science, English and history test scores for eighth grade students in the United States. Noting that failure to control for these effects yields selection biases on the estimated effect of homework, we find that math homework has a large and statistically meaningful effect on math test scores throughout our sample. However, additional homework in science, English and history are shown to have little to no impact on their respective test scores.

Yikes. Math homework has a large and statistically meaningful effect on math test scores throughout our sample? Uh oh. I guess I’d better read more than just the abstract and see if I can figure out what is going on. The math used in the study is complicated, which might make it tricky to read.
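The core idea, though, is approachable. As I understand it, the strategy leans on each student taking multiple subjects: compare a student’s subjects against one another, and anything constant about the student (ability, motivation, home life) cancels out. Here’s a toy sketch of that within-student idea, with entirely made-up data and a deliberately simplified estimator, not the paper’s actual model:

```python
# A toy illustration (my own, with made-up data) of the within-student idea:
# traits that are constant across a student's subjects cancel out
# when you demean within student.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_students = 200
subjects = ["math", "science", "english", "history"]

# Unobserved ability: constant within each student, correlated with homework.
ability = np.repeat(rng.normal(0, 5, n_students), len(subjects))
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), len(subjects)),
    "subject": subjects * n_students,
    "homework_hrs": rng.uniform(0, 5, n_students * len(subjects)) + 0.2 * ability,
})
df["score"] = 50 + 1.5 * df["homework_hrs"] + ability + rng.normal(0, 2, len(df))

# Naive OLS slope: biased upward, because ability drives both variables.
naive_slope = np.polyfit(df["homework_hrs"], df["score"], 1)[0]

# Within-student slope: demeaning by student removes the constant trait.
hw_dm = df["homework_hrs"] - df.groupby("student")["homework_hrs"].transform("mean")
sc_dm = df["score"] - df.groupby("student")["score"].transform("mean")
fe_slope = np.polyfit(hw_dm, sc_dm, 1)[0]

print(f"naive: {naive_slope:.2f}, within-student: {fe_slope:.2f}, true effect: 1.50")
```

In the fake data, the naive slope overstates the true effect because the students who do more homework were already stronger; the within-student slope recovers it.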

Here’s something I wonder about. Page 9.

…higher able students benefit more from additional homework.

Perhaps the “higher able” students are the only ones who actually do the homework, because they’re the only ones who are capable of doing it.

Later, on page 17.

Taking the Peabody Individual Achievement Test in math as our benchmark, the gain from math homework (1.77 points) corresponds to one-fourth of the raw black-white test score gap between the ages of 6 and 13.

My question would be: Can we be sure the gain on that test is solely attributable to homework? Maybe we can. I’ll admit to not fully understanding the tables in the study.

Here’s a finding on page 19 that I am glad to hear. At least one of my biases was confirmed by this study.

The teacher’s treatment of the homework (whether it is being recorded and/or graded) does not appear to affect the returns to math homework.

 

Educational Research

I’m going to write some posts over the next little while about educational research. Just before Christmas, Michael Pershan and Chris Robinson were going back and forth on Twitter about research vs. blog posts.


“If teachers can rely on blog posts, where does that leave ed research?”

That question got me thinking a whole lot about how and why I use educational research and how and why I use blog posts.

I read research for several reasons.

  1. I encounter it (via Twitter, blogs, or in journals) and I’m curious, so I read it.
  2. I deliberately seek it out to confirm a bias. (Don’t judge me. We all do this.)
  3. I’m genuinely interested in what the research has to say on a certain topic, so I search for it.

My recent blog post about delayed feedback falls into the first category. A colleague showed it to me and I was curious, so I read it.

I tend to mine blogs for ideas that I can use immediately in classrooms and workshops. Those ideas don’t have to be research-based, in my opinion. The fact that a colleague tried something already and it worked for her is sufficient for me to try it out. That endorsement is worth one class period or one unit of study of my time. I see these shared ideas the same way I saw lunchroom conversations in the 1990s: “I did this cool thing in my class today. You should try it out.”

If I were contemplating a major shift in my practice, I’d probably go to research in addition to listening to colleagues. Standards-based grading (SBG) would be an example of something I’d research before changing my whole practice. A blog might inspire me to try it, and the research would confirm that it’s worth doing. One year in Math 8, I did the entire course in cooperative learning groups and activities. That’s a big commitment. That’s a big shift. Research supported and justified my change.

In the next few blog posts, I’m going to look at some of the research I’ve read over the past few years. I’ll explain how I happened across it, and how I use it now.

 

Delayed Feedback

It’s been an interesting enough week in the assessment world that I’m compelled to blog for the first time in a long time.

Early last week, I encountered this “Focus on Formative Feedback” literature review by Valerie Shute.

Table 4, near the end on page 179, lists “formative feedback guidelines in relation to timing issues.” Shute recommends using immediate feedback for difficult tasks, and delayed feedback for simple tasks. She says that to promote transfer of learning, teachers should consider using delayed feedback. To support that claim, she writes,

According to some researchers (e.g., Kulhavy et al., 1985; Schroth, 1992), delayed may be better than immediate feedback for transfer task performance, although initial learning time may be depressed. This needs more research.

Then, just yesterday, Dan Meyer jumped in with a post on delayed feedback.

My gut says that the timing of the feedback is far less important than the quality of the feedback. Dylan Wiliam has entire chapters dedicated to providing feedback that moves learners forward. Next steps are useful to all students. Evaluative feedback that evokes emotion isn’t particularly useful to anyone.

I’m not sure this does need more research.

Homework – Again

I did a bad thing. I was snarky to a pre-service teacher on Twitter. At the time, I didn’t know she was a pre-service teacher, and she asked a loaded question. That question was about homework, which is something I have strong opinions about. None of that excuses my snark.

I’ll elaborate on my Twitter responses in a more respectful tone. If she’s not still mad at me, maybe she’ll read this.

The question was:

When HW isn’t graded most students don’t do it. Wondering how this affects the development of their study habits for college – any research?

Her question is one that I hear frequently when I do workshops on formative assessment. Many teachers more seasoned than she is assume that if we don’t grade it, students won’t do it. I have strong opinions on homework of any kind, and even stronger ones when we talk about grading it. A long time ago I explained why I hate homework.

In most classrooms the purpose of homework is practice. This article seems to suggest that practice is useful in math (and not particularly useful in other subjects). Practicing in math is like practicing in basketball. Some players are so good they don’t need to practice. Others could use more practice.

After 18 years in the classroom, I’ve observed that the students who don’t need the practice are the ones who diligently do every single question I assign. The students who could really use the practice rarely do the homework.

To address the fact that many students weren’t doing homework, I came up with numerous elaborate grading systems. None of them worked. Some really good math students still didn’t do the homework and ended up with lower grades than they deserved. Some weaker students got their parents, their tutors, or their friends to do their homework and ended up with higher grades than they deserved. I became highly reluctant to grade anything I didn’t see them do in front of me.

As I learned more about assessment, I began to question the appropriateness of grading practice, whether it was done in front of me or not. Practice is just that. It is to prepare for the big game. We don’t assess athletes or musicians on their practice, only their final performance. In math class, that means assessing students after they have completed the learning, not during.

Given the three things I’ve addressed here – that practice in math is useful, that I am reluctant to grade things I don’t see them doing, and that I am reluctant to grade practice at all – where does that leave me? It leaves me with a very different kind of classroom than I used to have.

I used to “teach” for 80 minutes and then assign 60 minutes worth of homework that most of my students didn’t do. Now I talk less and build more time for practice (in the form of formative assessment strategies) into my lessons. At one time I was collecting some of my favorite strategies to do just that. Check out the embedded formative assessment category on this blog for some practical ideas.

The Sky Is Falling!

There’s been a lot of Twitter and media buzz about a new app that scans math questions and gives answers.


Dan Meyer has compiled some thoughts on the app over on his blog, and he has been commenting on Twitter as well.

[Screenshot: Dan Meyer’s tweet about the app]

 

I decided to test Dan’s comment with (what else?) a test. I gave the app one exam from each of grades 7, 8, 9, 10, 11 and 12. My conclusion is that it doesn’t solve them anywhere near as well as most kids would.

Full disclosure: The grades 10, 11 and 12 exams were ones I created, and I’m always conscious of trying to avoid having questions that can be answered with a calculator alone. The grades 7, 8 and 9 exams were from a publisher.

I tried to pick topics the app had a shot at solving: mostly numbers and equations.

Grade 10 – Algebra and Number

The app got 0/30 on my exam. On the questions I thought it should be able to answer, it got 0/10.

This one was its most blatant error. I did have it centered properly prior to snapping a screenshot. It registered the 2, and ignored it.

[Photo: the app’s misreading of the question]

 

These were its first steps. It had trouble recognizing that square bracket.


[Photo: the app’s first steps]


Grade 11 – Radicals and Absolute Value

The app got 3/32 on my exam. On the questions I thought it should be able to answer, it got 3/13. I was a little surprised. Clearly I need to tweak some questions. Here’s one it got right.

[Photo: the question it got right]

 

Grade 12 – Exponents and Logarithms

The app can’t recognize logs, or manipulate anything but the most rudimentary equations. It got 0/30 overall and 0/9 on the ones I thought it should get.

Math 7 – Integers

The app struggles with brackets. I hovered over expressions like (-2) + (4) - (-7), endlessly waiting for an answer of any kind (right or wrong), and never got anything. It got 0/20 overall and 0/9 on the ones I thought it would get.
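For what it’s worth, the expression itself is as simple as arithmetic gets. Here’s a throwaway check of the answer the app never produced (just Python, nothing to do with the app’s internals):

```python
# The Math 7 expression the app never answered.
print((-2) + (4) - (-7))  # prints 9
```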

Math 8 – Fractions

The app nails fraction calculations. It got 7/20 overall and 7/7 on the ones I would have expected it to get. Here’s one it got right.

[Photos: the app’s steps for a fraction calculation]

Math 9 – Equations 

The app got 6/20 overall and 6/6 on the ones I would have expected it to get. It solves basic equations (no logs, no powers, no quadratics, few brackets) correctly every time I try it. Some of the steps seem convoluted to me.

[Photos: the app’s step-by-step solution of an equation]
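Before wrapping up, here’s a quick tally. The per-exam scores come straight from this post; the script is just my convenience for adding them up:

```python
# Tallying the scores reported above: (correct, total) overall, then
# (correct, total) on the questions I thought the app should get.
results = {
    "Grade 10": (0, 30, 0, 10),
    "Grade 11": (3, 32, 3, 13),
    "Grade 12": (0, 30, 0, 9),
    "Math 7":   (0, 20, 0, 9),
    "Math 8":   (7, 20, 7, 7),
    "Math 9":   (6, 20, 6, 6),
}
oc, ot, tc, tt = (sum(r[i] for r in results.values()) for i in range(4))
print(f"Overall: {oc}/{ot} ({oc/ot:.0%}); should-get questions: {tc}/{tt} ({tc/tt:.0%})")
```

That works out to roughly one question in ten overall, and under a third even on the questions I expected it to handle.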

 

I’m not sure this is the game changer some people fear it is. It’s a calculator, and not a particularly accurate one. As long as we’re asking the right questions, let them use this app. Just have them check their answers on a calculator.

Max Ray and I communicated 3 years ago after I was in Philadelphia for a conference. We never actually met while I was there. Since then, he has published a book on problem solving. I’ll confess I bought it, but haven’t read it yet.

We were both at Twitter Math Camp last week. Early on the first morning, I nervously introduced myself to him and said hello. I didn’t want to take up too much of his time, since he was getting ready for his morning session. I intended to go to his problem-solving session later in the camp, but chose differently in that time slot only because there was another session I thought might be slightly more relevant to my work next year. Because of that, I didn’t get to interact with him again, which I regretted.

Then Max ended up sitting beside me during Eli Luberoff’s keynote and Desmos demonstration. Desmos is more than just an awesome and free online graphing calculator. Eli had us work through the Desmos Function Carnival activity, part of their set of classroom activities billed as “Hand-crafted classroom activities. Designed by teachers. Built with love by Desmos.”

I had my laptop open, so Max slid over and we worked together. I’m the type of kid who needs to get everything right, so I was trying my hardest to get stuff done well.

Max said, “Get the next one wrong.” After I got over my initial shock, I remembered where I was and what we were doing. We were evaluating an online classroom activity. In order for an online classroom activity to be useful to students, it has to be built to provide them with useful feedback when they are wrong. I saw what Max was doing and why he wanted to be wrong. We got a few wrong, and we got feedback that would be useful to students. Thanks, Max, for that brief interaction, and for reminding me what I should have been doing in that moment. It made me think I should have gone to his problem solving session.

For the record, the Desmos Function Carnival appears to be a remarkably well-built activity. Check out David Cox’s video showing what the teacher can see over time as a class works on it. I intend to spend some time this summer working through some of the others.

 

Golf and TMC

Last week I attended Twitter Math Camp (yes, that’s a real thing). In my conversations with other attendees, we frequently ended up talking about how intimidating it felt being in the company of the other TMCers. This was a passionate and committed group of professionals sharing their best stuff. It was easy to be in awe of some of the presentations. This feeling of awe caused some people to experience angst.

This blog post from Mr. Kent really got people talking. If you haven’t read it yet, head on over there and check it out. I’ll wait. It’s way shorter than this post. Among other things, he says,

To be surrounded by this many people that are this far above me in every area of teaching, learning, growing, intellect, honesty, humor, and kindness, hit me like a stake through my heart. I truly feel as I am nothing compared to those I met here.

I respect how he was feeling. I had moments where I had similar thoughts myself. I’m writing this post to try to make him feel better, at the risk of offending other attendees. I suspect he’s a good teacher.

In addressing this post on her own blog, Kate Nowak says,

I’d just like to say, everybody chill the &^%$ out. We are all good at some things and suck at other things. One thing we all share is the recognition that we all have work to do, and that we can all get better, and that focusing on that is worth our time.

In the comments on Mr. Kent’s blog, Jen weighs in strongly with,

The truth is, many of the teachers at TMC14 have also admitted feeling inferior (look through this morning’s #tmc14 thread). It’s hard not to when so many great ideas are being shared – but remember, these people are there sharing a few great ideas, they can’t all be that awesome all of the time. What makes it harder still is the celebrity reception some of the veterans get from those newer to the mtbos. That’s not reality, and I wish it would stop.

I want to take her last comment a step further. We need to get over ourselves, as a group. Underlying feelings like Mr. Kent’s, at least when I have them, is the assumption that everyone ELSE at TMC is a superstar. I suspect most of the attendees felt that way at one point or another. Is it possible that all TMC attendees are superstars? We (the attendees) sure act like it is.

I readily accept that the teachers who attend TMC are:

  • committed – They do it on their own time, and 69% pay their own way.
  • passionate – They talk about math teaching almost all the time, inside and outside the conference. Informal sessions went on in the hotel each day after the conference was supposed to be over.
  • learners – These are people who want to get better.

Indulge me as I present a golf analogy.

The local sports radio station is organizing a golf trip to Mexico. The people who go on trips like that are committed (they do it on their own time and pay their own way), passionate (they love that silly game enough to go all the way to Mexico, which is a long way from Edmonton), and, I assume, eager to improve their game. I have never been on a trip like that, but I’m fairly confident that some of those committed and passionate players stink at golf.

I’m 22 years into this education game. I’ve worked with hundreds of teachers. In my past five years of consulting, I’ve been in dozens of classrooms and observed the teachers working in them. I am fairly confident that most of our teachers are good. I have a theory, based purely on observation and my gut, that a small percentage of teachers are superstars and a small percentage really struggle. I sometimes put numbers on those small percentages, ranging from 2 to 10% depending on who I’m talking to. I picture a bell curve with a smallish standard deviation. I have absolutely no scientific data to back up my hypothesis. Personally, there are days when I feel like I’m a little bit to the right of the mean and days when I feel like I’m a little bit to the left of it. I rarely feel like I’m off at the far end on either side. I’m good with that.

I don’t know why TMC attendees would be any different in pure teaching ability than any other set of 150 educators.

At TMC, committed and passionate educators share the best of what they do. Of course it looks good. They don’t share the lessons that tanked. (Well, I did, but maybe I’m the only one it ever happens to. See, that self-doubt keeps popping up.) They don’t share the practices they tried that failed. They share their successes.

Let’s go back to that golf trip for a minute. Even the terrible golfers who go to Mexico probably have some elements of the game that they are good at. Maybe one of them knows a good tip for improving putting. Another knows how to correct a slice. Each of them could probably find a thing or two to share that benefits other players. Those things alone, though, aren’t enough for any of our terrible golfers to join the pro tour and make millions. Only a small percentage of golfers are that good.

There are superstar teachers at TMC, certainly. But not exclusively. There are a whole lot of good teachers sharing the best of what they do. Even the best ones there would tell you honestly that they have things to learn. They don’t go to TMC to be in the spotlight. They go to learn, and that’s what it’s about. It’s not about comparing our skills. It’s about growing together.