I have two questions for behavior analysts smarter than me.
If you have an answer or comment, please share it.
Why do behavior analysts continue to use the terms "positive reinforcement," "negative reinforcement," "positive punishment," and "negative punishment" when the distinction between positive and negative (presentation and withdrawal) cannot be made? For example, John walks from his dark living room to the sunny outdoors. He looks, then puts his sunglasses on. Was the reinforcer for putting on sunglasses the presentation of clear vision or the withdrawal of glare? Think about it. Forget what you studied for the BCBA exam. It's not either/or; it's both. It can't be determined, because whenever you withdraw something you simultaneously present something else. If you remove a coaster from the table, you simultaneously present whatever was under the coaster. So behavioral scientists and practitioners should use the terms reinforcement/reinforcer and punishment/punisher instead of positive reinforcement, positive reinforcer, negative reinforcer, and so on. But they don't. Why not? Jack Michael raised this issue in the 1970s. There was an issue of The Behavior Analyst years ago that discussed it, with some authors advocating for keeping the positive/negative terminology. Anyway, if you have an answer (inertia?), please advise.
When I first learned about behavior analysis, I learned about the law of least effort: if there were two responses that provided the identical reinforcer, an organism would emit the response that required the least effort over the one that required greater effort. Then I learned that effort was confounded with time. If a researcher could separate the effects of time and effort on responding, they would find that time trumped effort: the organism would emit the response that saved time even if it required more effort. However, I have looked for a research study that examined this issue and have not found one. I haven't found it discussed in any of my textbooks. I'm not affiliated with a university, so I'm limited to Google Scholar. If you are knowledgeable about this issue, please advise.
Summary of previous posts
Check them out at:
The first post tied Keller's Personalized System of Instruction to today's Competency-Based Instruction. I also discussed reading curricula, good and bad.
The second post revolves around spiral math curricula and discusses why they are inferior.
The third discusses an amazing classroom management system, LearnBall.
A video of Mr. Verga, who discusses LearnBall.
Sit and Watch, amended slightly here, is a simple yet effective procedure to help teach children to participate in classroom activities and follow instructions at home.
This post is about the design of schools. Watch the YouTube videos if you haven’t already.
More about the design of schools.
Eat, Eat, you deserve it. Pasta Puttanesca, a dish for any man or woman. Plus a list of a few decent songs to pass the time away.
Thank you for commenting on my two questions. I will read the articles you mentioned. While I was at WMU, Jack Michael talked about "the state of the organism" and used state diagrams to explain S-R-C relations. The organism is in a certain state of being, it responds, and the state of the organism changes. If response frequency increases, it's a reinforcing condition; if it decreases, it's a punishing condition. It's the change in the organism's state that matters, not the notion of presentation or removal. Presentation and removal are concepts that are not needed to describe response-consequence relations. Then he'd talk about the sunglasses example, and then about how we could use the terms avoidance, escape, response cost, and so on without using the terms positive and negative reinforcer/punisher. Later, when I was at ASU, I took coursework from Peter Killeen, who talked about the same issue and agreed with Michael (although he didn't mention state diagrams). So here I had two pillars of behavior analysis explain away +/- reinforcement. I thought the field would eventually discard the terms, but it did not.
When I taught a parenting class, I used the law of least effort to explain the reduction of tantrums in toddlers once they began to talk, rather than tantrum, to make their desires known. Later I listened to Killeen at ABAI, who mentioned that the law of least effort was confounded with time. He said a JEAB researcher teased apart time and effort and concluded that organisms would save time over effort for the same reinforcer: a law of least time. I wondered if there was any corresponding applied research. So I went to the Cooper ABA book and could not find anything in it about the law of least effort (or least time). That surprised me. I've looked in other books and research articles from time to time; that's why I asked that question.
Anyway, thanks again for reading my post and commenting. I appreciate it.
Paul
Hi Paul, as to your first point (why does the distinction between positive/negative reinforcement persist): Catania's chapter in the Handbook of Applied Behavior Analysis has great thoughts about this. For example, a rat in a chamber can press a lever to turn on a heat lamp. If it does, does it gain heat or escape cold? The answer, according to Catania, is not a matter of physics. One potential answer that Catania gives: does the preceding behavior interfere with the operant response? In the case of the rat, the cold air leads to huddling close to the wall of the chamber, shivering, etc. These are behaviors that make a lever press less likely, and therefore turning on the heat lamp negatively reinforces a lever press by allowing the rat to escape cold. The utility of the language is questionable, but it's obviously widespread and easy to understand. I sometimes compare it to something like teaching the concept of gravity: is gravity actually a force that pulls us down? Probably not if you're a physicist.
As to your second point (delay discounting vs. response effort), here is a full text article that summarizes some of the issues: https://www.sciencedirect.com/science/article/pii/S0376635721001947
Confounds include: delays with smaller and larger rewards are discounted differently; loss aversion has different effects depending on various factors; and "effort" could probably be defined in multiple ways (after all, if by "effort" you mean a task that takes longer, that is tied inextricably to delay; this could be addressed with a yoked condition, perhaps).
Further confusion: based on this recently published article in JEAB (https://onlinelibrary.wiley.com/doi/abs/10.1002/jeab.882), real rewards are discounted *less* than hypothetical rewards. Most delay discounting research has relied on hypothetical rewards, so it is possible that the literature base overestimates delay discounting.