Assalamu'alaikum (peace be upon you), how are you?
On the 2nd of April 2013, we learned about schedules of
reinforcement in Skinner's operant conditioning. There are two types of
reinforcement schedules:
Continuous reinforcement -
the desired behavior is reinforced every single time it occurs. Generally, this
schedule is best used during the initial stages of learning in order to create
a strong association between the behavior and the response. Once the response
is firmly established, reinforcement is usually switched to a partial
reinforcement schedule.
Partial reinforcement - the response is reinforced only part of the time.
Learned behaviors are acquired more slowly with partial reinforcement, but the
response is more resistant to extinction. There are four schedules of partial
reinforcement:
1. Fixed-ratio schedules
are those where a response is reinforced only after a specified number of
responses. This schedule produces a high, steady rate of responding with only a
brief pause after the delivery of the reinforcer. For example, production line
work: workers at a widget factory are paid for every 15 widgets they make. This
results in a high production rate, and workers tend to take few breaks. It can,
however, lead to burnout and lower-quality work.
2. Variable-ratio schedules
occur when a response is reinforced after an unpredictable number of responses.
This schedule creates a high, steady rate of responding. Gambling and lottery
games are good examples of a reward based on a variable-ratio schedule. Another
example given by Madam is a pop quiz. That's right, Madam, we stay prepared
before coming to your class.
3. Fixed-interval schedules
are those where the first response is rewarded only after a specified amount of
time has elapsed. This schedule causes high amounts of responding near the end
of the interval, but much slower responding immediately after the delivery of
the reinforcer. For example, in a lab setting: imagine that you are training a
rat to press a lever, but you only reinforce the first response after a
ten-minute interval. The rat does not press the bar much during the first 5
minutes after reinforcement, but begins to press the lever more and more often
the closer you get to the ten-minute mark. In the real world, a weekly paycheck
is a good example of a fixed-interval schedule. The employee receives
reinforcement every seven days, which may result in a higher response rate as
payday approaches.
4. Variable-interval schedules
occur when a response is rewarded after an unpredictable amount of time has
passed. This schedule produces a slow, steady rate of response. Madam gave us
an example: waiting for a bus or train when we do not know exactly when it
will arrive. So we keep waiting, even if it means an hour or two at the bus
stop.
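Since each schedule is really just a rule for deciding when a reinforcer is delivered, the four schedules above (plus continuous reinforcement) can be sketched as simple Python predicates. This is only an illustration: the function names and parameters are my own, and the numbers (15 widgets, ten minutes) come from the examples in the post.

```python
import random

# Illustrative sketch: each function returns True when the schedule
# says a reinforcer should be delivered. Names and parameters are
# hypothetical, chosen to match the examples above.

def fixed_ratio(response_count, n=15):
    """Reinforce after every n-th response (e.g. every 15 widgets)."""
    return response_count > 0 and response_count % n == 0

def variable_ratio(mean_n=15):
    """Reinforce after an unpredictable number of responses,
    averaging roughly one reinforcer per mean_n responses
    (like a lottery ticket)."""
    return random.random() < 1 / mean_n

def fixed_interval(seconds_since_last, interval=600):
    """Reinforce the first response after a fixed interval
    (e.g. the rat's ten-minute lever schedule)."""
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, mean_interval=600):
    """Reinforce the first response after an unpredictable interval
    (like a bus with no timetable), drawn here from an
    exponential distribution for illustration."""
    return seconds_since_last >= random.expovariate(1 / mean_interval)

# Continuous reinforcement is the special case fixed_ratio(count, n=1):
# every single response is reinforced.
```

Note how the ratio rules depend on counting responses while the interval rules depend on elapsed time, which is exactly the distinction the four schedules draw.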
Choosing a Schedule
Deciding when to reinforce a behavior can depend upon a
number of factors. In cases where you are specifically trying to teach a new
behavior, a continuous schedule is often a good choice. Once the behavior has
been learned, switching to a partial schedule is often preferable. Realistically,
reinforcing a behavior every single time it occurs can be difficult and
requires a great deal of attention and resources. Partial schedules not only
tend to lead to behaviors that are more resistant to extinction, they also
reduce the risk that the subject will become satiated. If the reinforcer being
used is no longer desired or rewarding, the subject may stop performing the
desired behavior.
At the end, we got a pop quiz. Time for the variable-ratio
schedule!